forked from VimPlug/jedi

Compare commits


2245 Commits

Author SHA1 Message Date
Dave Halter
02f238ce08 Add the executable bit to deploy-master.sh 2017-12-14 22:51:02 +01:00
Dave Halter
e526cb1ae3 Don't run Python 2.6 in tox by default
Python 2.6 seems to be harder and harder to run in tox if setuptools is not properly configured for it.
It's still possible to run it and it still runs on travis.
2017-12-14 22:50:13 +01:00
Daniel Hahler
e621e8590c Improve IntegrationTestCase.__repr__
Having the path (and not just the line) makes it easy to jump to the
actual test.
2017-12-14 22:44:24 +01:00
Dave Halter
62915686af Don't use pytest 3.3+ because it removed support for Python 3.3 2017-12-14 22:29:13 +01:00
Dave Halter
6ee361864c Merge branch 'master' of github.com:davidhalter/jedi 2017-11-06 19:32:31 +01:00
Thomas A Caswell
22c97b0917 FIX: install on python 3.7 (#971)
* FIX: install on python 3.7

https://github.com/python/cpython/pull/46 /
https://bugs.python.org/issue29463 moved the module docstring into an
AST node field and hence out of the tree, which means the 2nd entry in the
tree is now the import rather than the `__version__` string.

Adds nightly on travis.

* BLD: update python tags in setup.py

* CI: switch to 3.7-dev

* CI: allow failure on 3.7 dev
2017-11-06 19:29:06 +01:00
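For context, the breakage comes from reading `__version__` out of the AST by position. A position-independent sketch (hypothetical helper, not the code in the PR) walks the top-level statements instead:

    import ast

    def find_version(path):
        # Scan every top-level statement instead of assuming that
        # __version__ sits at a fixed index in tree.body, which is
        # exactly what the docstring change in bpo-29463 invalidated.
        with open(path) as f:
            tree = ast.parse(f.read())
        for node in tree.body:
            if isinstance(node, ast.Assign):
                for target in node.targets:
                    if isinstance(target, ast.Name) and target.id == '__version__':
                        return node.value.s  # the version string literal
        raise RuntimeError('__version__ not found')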
Dave Halter
6c355a0ac2 Update version 2017-11-05 15:07:06 +01:00
Dave Halter
e783611030 Merge branch 'master' of github.com:davidhalter/jedi 2017-11-05 15:05:22 +01:00
Dave Halter
fc0397732e Update the parso dependency 2017-11-05 15:05:09 +01:00
Samuel Bishop
9cbcf00aa2 Fix dead link.
Hopefully this is close enough to the original video.
2017-11-03 17:52:22 +01:00
Dave Halter
baafea4a90 Remove unused code 2017-11-01 19:14:54 +01:00
langsamer
1428b67c4d Replace the TODO with an explanation of the parameter 'cut_own_trailer'
(it cannot be dropped!) (#978)
2017-10-29 02:00:28 +02:00
Robin Roth
f13b4e800a Install docopt for dev setup 2017-10-28 14:36:16 +02:00
Robin Roth
88cf592c95 Make goto work with await
Created together with @langsamer and @davidhalter
2017-10-28 14:10:05 +02:00
Dave Halter
752b7d8d49 One more usages test. 2017-10-15 21:11:49 +02:00
Dave Halter
2b138b3150 Usages fix for more complex situations 2017-10-09 21:09:04 +02:00
Dave Halter
06004ad2f5 Some minor refactorings. 2017-10-09 20:32:28 +02:00
Dave Halter
8658ac5c28 Using additional_dynamic_modules sometimes led to the weird behavior of modules being used twice. 2017-10-09 20:28:39 +02:00
Dave Halter
bedff46735 Simplify usages. It should also work way better now. 2017-10-08 20:13:24 +02:00
Dave Halter
4ddf7bf56d Remove the disabling of dynamic flow information
We should be able to handle this in completions anyway. Don't hide issues here.
2017-10-07 10:52:43 +02:00
Dave Halter
8dba08eeb2 Small sys path refactoring. 2017-10-06 09:01:15 +02:00
Dave Halter
21531abd1e Fix a small test error 2017-10-05 20:43:31 +02:00
Dave Halter
7019ca643e Remove a possible security issue
sys paths are not executed anymore and use static analysis now.
2017-10-05 19:57:50 +02:00
Dave Halter
aa8a6d2482 Move a function around 2017-10-05 18:49:12 +02:00
Dave Halter
28dea46bed Better English 2017-10-05 18:35:17 +02:00
Dave Halter
2b30c6fee4 Remove documentation about caveats that are not really 100% true anymore. 2017-10-05 18:33:02 +02:00
Dave Halter
51d2ffb078 Use sys path mostly from project and move some sys path stuff around. 2017-10-05 10:06:28 +02:00
Dave Halter
383f749026 Move the initial sys path generation into a new project class. 2017-10-02 20:19:55 +02:00
Dave Halter
0762c9218c Move arguments to a separate module. 2017-10-01 13:29:28 +02:00
Dave Halter
b6bb251c96 Common instance objects are now directly accessible 2017-09-30 18:19:25 +02:00
Dave Halter
604ca65a9b Directly importing FunctionContext. 2017-09-30 18:11:15 +02:00
Dave Halter
39b24ff2df Move lazy contexts to a separate module not in contexts 2017-09-30 18:02:02 +02:00
Dave Halter
16011a91af Move iterable to context/iterable. 2017-09-30 17:41:21 +02:00
Dave Halter
06b2857974 Import simplification. 2017-09-30 17:26:20 +02:00
Dave Halter
f733e07045 AbstractSequence -> AbstractIterable. 2017-09-30 17:23:15 +02:00
Dave Halter
3bfff846ed Move the special method filter from iterable to filters. 2017-09-30 17:15:23 +02:00
Dave Halter
2c81bd919e ClassContext is now importable from context. 2017-09-30 16:57:28 +02:00
Dave Halter
3c75f27376 Move the base Context stuff to another module to keep context free for imports. 2017-09-30 16:46:07 +02:00
Dave Halter
3c2221ec2d Don't use a star import. 2017-09-29 15:47:36 +02:00
Dave Halter
c8cae2140f Move the lazy contexts to a separate module. 2017-09-29 15:44:47 +02:00
Dave Halter
8c601a1c65 Also move the class to the context package. 2017-09-29 15:39:20 +02:00
Dave Halter
5f613ece28 Move the namespace to a separate module. 2017-09-29 15:31:26 +02:00
Dave Halter
32917d5565 Remove the function context to a separate module. 2017-09-29 15:28:17 +02:00
Dave Halter
8a9e1cd914 Move an import of a function. 2017-09-29 15:17:19 +02:00
Dave Halter
95930d293c Move instance module to the context package. 2017-09-29 15:14:56 +02:00
Dave Halter
8f177eea07 Move the ModuleContext to a separate module. 2017-09-29 13:24:48 +02:00
Dave Halter
41cfbe2382 Move context to base.py 2017-09-29 13:06:03 +02:00
Dave Halter
20a462597d Move context.py to a separate package. 2017-09-28 21:10:19 +02:00
Dave Halter
b70cef735a Find packages differently in setup.py 2017-09-28 21:03:56 +02:00
Dave Halter
d656ccd833 Move a BaseContext to jedi.common.context. 2017-09-28 17:06:58 +02:00
Dave Halter
d99d4deebf Merge branch 'values' 2017-09-28 16:19:38 +02:00
Dave Halter
3734d52c8b Move all the remaining imports out of the syntax tree functions 2017-09-28 14:44:58 +02:00
Dave Halter
18bab194c0 Move a few imports out of functions. 2017-09-28 14:38:11 +02:00
Dave Halter
e62d89bb03 Move the is_string etc functions to the helpers module. 2017-09-28 14:28:07 +02:00
Dave Halter
6b76e37673 Make some functions private in evaluate/iterable. 2017-09-28 14:19:11 +02:00
Dave Halter
612ad2f491 Move eval_subscript_list to the syntax_tree module. 2017-09-28 14:17:37 +02:00
Dave Halter
65ef6a3166 Move py__getitem__ to the context module. 2017-09-28 14:10:32 +02:00
Dave Halter
30df79e234 Rename py__iter__types to iterate_contexts. 2017-09-28 13:19:33 +02:00
Dave Halter
8c0845cf0c Move iterate logic to the context. 2017-09-28 13:13:09 +02:00
Dave Halter
47c249957d Make BuiltinMethod a Context object. 2017-09-28 12:04:44 +02:00
Dave Halter
b08300813e Fix an issue surrounding namedtuples where I didn't see the tests failing. 2017-09-28 10:39:54 +02:00
Dave Halter
1c9060ebc5 Remove evaluator as param from apply_decorators. 2017-09-28 09:18:12 +02:00
Dave Halter
d9d3aeb5bc Move more functions to the syntax tree module. 2017-09-28 09:16:43 +02:00
Dave Halter
0782a80cef Move all the search to py__getattribute__ and remove find_types. 2017-09-27 19:22:50 +02:00
Dave Halter
9073f0debc Use the typical ordering of arguments for ClassContext. 2017-09-27 19:16:05 +02:00
Dave Halter
a7a66024d4 Make a lot more functions private. 2017-09-27 19:13:19 +02:00
Dave Halter
ed43a68c03 Remove the precedence module in favor of the syntax tree module. 2017-09-27 19:09:30 +02:00
Dave Halter
d0939f0449 Move eval_or_test away from precedence module. 2017-09-27 18:51:53 +02:00
Dave Halter
08a48672bc A minor rename. 2017-09-27 18:15:12 +02:00
Dave Halter
d584b698b7 Move eval_element and eval_stmt to the syntax tree module. 2017-09-27 18:14:04 +02:00
Dave Halter
b997b538a7 Move eval_atom to the syntax tree module. 2017-09-27 16:27:37 +02:00
Dave Halter
5415a6164f Starting to try to move some functions away from Evaluator.
This time eval_trailer.
2017-09-27 16:21:02 +02:00
Dave Halter
313e1b3875 Use a different way of executing functions. 2017-09-27 16:07:24 +02:00
Dave Halter
025951089a Some conversions of eval_element -> eval_node. 2017-09-27 15:17:11 +02:00
Dave Halter
b1ed0c7d22 Add py__class__ to ContextSet. 2017-09-27 14:09:09 +02:00
Dave Halter
b74c8cb033 To be able to customize ContextSet, move a subclass to evaluate.context 2017-09-27 09:20:58 +02:00
Dave Halter
faa2d01593 The memoize decorator doesn't need to magically cache generators as lists.
This makes no sense at all. Explicit is better than implicit.
2017-09-26 18:36:10 +02:00
Dave Halter
a0a438fe6f Forgot an iterator in context sets. 2017-09-26 18:32:42 +02:00
Dave Halter
e4090910f6 Remove the ParamListener, it was not used anymore. 2017-09-26 18:24:42 +02:00
Dave Halter
00f2f9a90c Fix the final issues with the ContextSet refactoring. 2017-09-26 18:17:19 +02:00
Dave Halter
ee52cc7501 Fix most dynamic array issues. 2017-09-26 17:26:33 +02:00
Dave Halter
592f2dac95 A lot more fixes for tests. 2017-09-26 16:29:07 +02:00
Dave Halter
174eff5875 Replace a lot more of empty sets and unite calls. 2017-09-25 23:08:59 +02:00
Dave Halter
921d1008f2 First tests are now passing. 2017-09-25 11:10:09 +02:00
Dave Halter
5328d1e700 Add a ContextSet.
This is not bug free yet, but it's going to be a good abstraction for a lot of small things.
2017-09-25 11:04:09 +02:00
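A minimal sketch of what such an abstraction can look like (an assumed shape, not Jedi's actual class): a thin frozenset wrapper so that unions of and iteration over inferred contexts go through one consistent type instead of ad-hoc set() and unite() calls.

    class ContextSet(object):
        # Hypothetical minimal version: wrap a frozenset of contexts.
        def __init__(self, *contexts):
            self._set = frozenset(contexts)

        def __or__(self, other):
            # Union of two context sets, used when merging inference results.
            return ContextSet(*(self._set | other._set))

        def __iter__(self):
            return iter(self._set)

        def __bool__(self):
            return bool(self._set)
        __nonzero__ = __bool__  # Python 2 compatibility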
Dave Halter
dd924a287d Deployment script forgot to push the tags to github. 2017-09-21 00:05:52 +02:00
Dave Halter
a06af3d989 Remove the old deploy script. 2017-09-20 22:23:50 +02:00
Dave Halter
f2855ebb11 Change the date of the change log. 2017-09-20 20:33:52 +02:00
Dave Halter
a433ee7a7e Move common to evaluate.utils. 2017-09-20 20:33:01 +02:00
Dave Halter
55d7c2acff A bit of a different import. 2017-09-20 18:32:16 +02:00
Dave Halter
84ec5eda4c Remove two internal deprecations that don't seem to matter. 2017-09-20 18:28:46 +02:00
Dave Halter
d6a04b2928 Remove the deprecated attributes from Jedi. 2017-09-20 18:27:29 +02:00
Dave Halter
0c01a3b823 The sys.modules implementation did not work properly with newly created files.
Fixes #886.
2017-09-20 10:06:02 +02:00
Dave Halter
03584ff3f3 Imports can be executed twice without this. 2017-09-19 18:17:07 +02:00
Dave Halter
260aef943a Increase Python's recursion limit
Currently there is still the possibility that Jedi fails with a recursion error
because the stack is too small (see #861). By increasing it we improve the
situation.

Probably we should just switch away from this extreme amount of recursion
and move to queueing, which would also allow us to use other algorithms such as
breadth-first search.
2017-09-18 10:26:42 +02:00
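A rough sketch of the mechanism (the numbers are illustrative, not Jedi's actual settings):

    import sys

    # CPython's default limit is about 1000 frames; deeply recursive
    # inference can exhaust that before Jedi's own recursion checker
    # kicks in, so raise it while respecting any higher existing limit.
    sys.setrecursionlimit(max(sys.getrecursionlimit(), 3000))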
Dave Halter
c7dbf95344 Fix recursion issues.
Completely refactored the recursion checker and changed some settings.

Fixes #861.
2017-09-17 21:54:09 +02:00
Dave Halter
0516a8bd35 Get rid of the settings module in recursions. 2017-09-17 14:18:56 +02:00
Dave Halter
2eb715dae8 Mention in the changelog that the recursion settings have been moved. 2017-09-17 14:10:26 +02:00
Dave Halter
f4ba71f6a3 Move the recursion limit settings to the recursion module. 2017-09-17 14:08:39 +02:00
Dave Halter
f2d24f0259 Remove inspecting the stack in the debugger.
This feature wasn't used anymore and it made debugging slower by a factor of 10-10000.
2017-09-17 03:03:12 +02:00
Dave Halter
c51634b8d4 dict_values should be accessible for CompiledObjects. 2017-09-17 02:48:09 +02:00
Dave Halter
96ad254fcc Typo. 2017-09-17 02:15:49 +02:00
Dave Halter
4b4b2c2122 Fix a small issue surrounding old school classes in Python 2. 2017-09-17 02:09:39 +02:00
Dave Halter
8fcb468539 Jedi was able to go crazy and loop endlessly in certain if/self assignment combinations.
Here we limit type inference per tree scope. I'm still not sure this is the way
to go, but it looks okay for now and we can still go another way in the future.
Tests are there.

Fixes #929.
2017-09-17 02:04:42 +02:00
Dave Halter
9dd2027299 Way better support for instantiated classes in REPL
Fixes several issues:

- It was not possible to correctly trace where instances were coming from in a
  REPL. This led to them being pretty much ignored.
- Instances were then just treated as classes and not as actual instances in
  MixedObjects. (However since they were ignored in the first place this
  wasn't really an issue).
- Avoiding the repr bug https://github.com/python/cpython/pull/2132/files in
  Jedi is working a bit differently. We're just never accessing Objects
  directly. This should work around 99.99% of the cases where people are using
  this stuff.

Fixes #872
2017-09-15 01:55:18 +02:00
Dave Halter
63edbdcc5b Better context completions for finally/except/else/elif
Fixes #837
2017-09-15 00:48:56 +02:00
Dave Halter
e389c61377 Relative imports with more than one level did not work
Fixes #784.
2017-09-14 22:06:08 +02:00
Dave Halter
ab84030ad2 full_name was buggy when used on import error names
Fixes #873.
2017-09-14 20:41:25 +02:00
Dave Halter
2210b11778 Fix some issues with import completion
Fixes #759
2017-09-14 20:09:13 +02:00
Dave Halter
4c2d1ea7e7 Understand context managers correctly
Fixes #812.
2017-09-13 11:00:34 +02:00
Dave Halter
5ff7e3dbbe Actually do goto when follow_imports is used
Fixes #945.
2017-09-13 00:28:49 +02:00
Dave Halter
5a8b9541a7 Add operator.itemgetter support for Python <= 3.3.
Also fixes namedtuple support for these versions.
2017-09-12 23:18:32 +02:00
Dave Halter
a8a15114ac Fix namedtuple support
There were a couple issues:
 - namedtuple with one member didn't work
 - namedtuple content access was never possible
 - operator.itemgetter didn't work properly. Corrected py__bool__ for FakeSequence

Fixes #730.
2017-09-12 11:06:39 +02:00
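A quick way to exercise the fix (a sketch using the `jedi.Script(source, line, column).completions()` API of that era):

    import jedi

    source = ("import collections\n"
              "Point = collections.namedtuple('Point', ['x', 'y'])\n"
              "p = Point(1, 2)\n"
              "p.")
    # Member access on a namedtuple instance should now offer the
    # field names 'x' and 'y' among the completions.
    completions = jedi.Script(source, 4, 2).completions()
    print([c.name for c in completions])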
Dave Halter
4a544c29ea Fix a follow_imports (goto) issue. 2017-09-11 23:32:10 +02:00
Dave Halter
619acbd2ca Goto didn't work well on imports in __init__.py files.
Fixes #956.
2017-09-11 21:48:37 +02:00
Dave Halter
c05f1d3ccc Completion after as in imports should not be possible.
Fixes #841.
2017-09-10 11:27:57 +02:00
Dave Halter
c25a4a00df readlines should be completable.
Fixes #921.
2017-09-10 01:54:50 +02:00
Dave Halter
80284fb14b Gracefully fail in 2.7 because inspect.signature is not available. 2017-09-10 01:36:32 +02:00
Dave Halter
5c6f8bda01 Fix inspect.signature for Python3.4. 2017-09-10 01:34:15 +02:00
Dave Halter
d1c85191a0 Start using inspect.signature for CompiledObject params.
Fixes #917 and #924.
2017-09-09 22:29:00 +02:00
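Roughly the mechanism: on Python 3.4+, many builtins expose a text signature that `inspect.signature` can turn into real parameter objects, which beats parsing docstrings. A minimal illustration (not the Jedi internals):

    import inspect

    # Works for many C-level callables that define __text_signature__.
    sig = inspect.signature(open)
    for param in sig.parameters.values():
        print(param.name, param.kind, param.default)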
Dave Halter
c7f225439d Comprehensions can also define self variables.
Also related to #932.
2017-09-09 20:20:05 +02:00
Dave Halter
40f4f032c6 Fix class/def/class nesting definitions
Fixes #932.
2017-09-09 20:13:03 +02:00
Dave Halter
236b860cc7 Add the numpy docstring changes to the changelog. 2017-09-09 19:27:11 +02:00
Dave Halter
d47804edef Don't use literal_eval
Using it without control over the input leads to various possible exceptions.
Refs #868.
2017-09-09 19:23:06 +02:00
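The point about exceptions, made concrete (a hedged sketch, not Jedi's code): `ast.literal_eval` never executes code, but it raises freely on arbitrary input, so a caller without control over the input has to catch broadly.

    import ast

    for source in ('1 + ', '[1, 2', 'object()'):
        try:
            ast.literal_eval(source)
        except (SyntaxError, ValueError) as e:
            # Broken syntax raises SyntaxError; syntactically valid
            # non-literals raise ValueError. Deeply nested input can
            # additionally exhaust the stack.
            print(repr(source), '->', type(e).__name__)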
Dave Halter
3bceef075a Merge branch 'numpydoc' of https://github.com/bcolsen/jedi 2017-09-09 18:50:19 +02:00
Dave Halter
381fedddb4 Fix get_line_code().
Fixes #948.
2017-09-09 18:28:05 +02:00
Dave Halter
ef6a1ca10f Fix an issue with choosing the right lines in get_line_code. Refs #948. 2017-09-09 18:10:53 +02:00
Dave Halter
46f306aa11 Add a TODO. 2017-09-09 17:59:53 +02:00
Dave Halter
078b5802d2 Remove unused code. 2017-09-09 17:58:06 +02:00
Dave Halter
077bccadc7 Remove AnonymousFunctionExecution and simplify everything. 2017-09-09 17:58:06 +02:00
Dave Halter
37ec79241c Remove the only param for AnonymousArguments. 2017-09-09 17:58:06 +02:00
Dave Halter
04c4313dc7 Start refactoring arguments. 2017-09-09 17:58:06 +02:00
Dave Halter
2f213f89e5 Remove code that was scheduled for removal. 2017-09-09 17:58:06 +02:00
Guglielmo Saggiorato
12a6a388cd removed reference to autocomplete-python
kept only ref to autocomplete-python-jedi
2017-09-07 10:58:13 +02:00
Guglielmo Saggiorato
06fac596d9 corrected typo in docs/docs/usage.rst 2017-09-07 10:58:13 +02:00
Guglielmo Saggiorato
7c4a96fbfa Citing autocomplete-python-jedi alongside to autocomplete-python 2017-09-07 10:58:13 +02:00
Dave Halter
4841b8d491 Merge branch 'master' of github.com:davidhalter/jedi 2017-09-07 10:46:15 +02:00
Dave Halter
794880b8a8 Prepare for version 0.11.0. 2017-09-07 10:43:40 +02:00
Dave Halter
c4601b835f Don't go crazy with big lists. 2017-09-07 01:26:53 +02:00
Dave Halter
a0bf465aee Fix an issue in stdlib path checking. 2017-09-07 01:10:54 +02:00
Dave Halter
d2b4e0511f Ignore stdlib paths for dynamic param inference. 2017-09-07 00:09:14 +02:00
Dave Halter
8d06e9f9c9 Do some parser tree caching. This might be important for recursions. 2017-09-05 19:00:49 +02:00
Dave Halter
16ad43922f Also change CachedMetaClass a bit to use the same memoize decorator. 2017-09-05 18:52:12 +02:00
Dave Halter
e85000b798 Replace memoize_default with two nicer functions. 2017-09-05 18:46:16 +02:00
Dave Halter
e81486894f Prepare for eventual cache changes. 2017-09-05 18:38:32 +02:00
Dave Halter
2aa5da8682 Parso was finally released. 2017-09-05 18:19:10 +02:00
Jakub Wilk
6c85ec1a6d Fix typos. 2017-09-05 00:34:27 +02:00
Dave Halter
882f8029ea Use split_lines and python_bytes_to_unicode directly. 2017-09-03 18:38:00 +02:00
Dave Halter
ef89593896 Disable more tests in Python2.6, because of set literals that don't exist there. 2017-09-03 02:01:43 +02:00
Dave Halter
957f2cedf4 Disable some tests that don't run in 2.6, because its syntax doesn't support it. 2017-09-03 01:23:54 +02:00
Dave Halter
245ad9d581 Bump parso version. 2017-09-03 01:10:22 +02:00
Dave Halter
65c02a2332 A bit of shuffling code around get_definition. 2017-09-03 01:05:53 +02:00
Dave Halter
f69d8f1f29 _get_definition -> get_definition in parso. 2017-09-03 00:50:52 +02:00
Dave Halter
4795ed9071 More refactoring. 2017-09-03 00:39:15 +02:00
Dave Halter
6fb2f73f88 Some more refactorings. 2017-09-03 00:37:20 +02:00
Dave Halter
b64690afb8 Param defaults were not correctly followed when goto was used on them. 2017-09-03 00:22:59 +02:00
Dave Halter
e85816cc85 Simplify getting code for completions. 2017-09-03 00:11:23 +02:00
Dave Halter
fc8326bca1 Finally get rid of the last get_definition. 2017-09-03 00:07:14 +02:00
Dave Halter
333babea39 get_definition now has a new option. 2017-09-02 23:56:00 +02:00
Dave Halter
747e0aa7c4 Remove a get_definition usage. 2017-09-02 23:23:09 +02:00
Dave Halter
4a04bf78c7 Move some code around. 2017-09-02 22:45:23 +02:00
Dave Halter
9663e343c2 Almost the last switch to _get_definition. 2017-09-02 22:42:01 +02:00
Dave Halter
03da6b5655 get_definition change in finder. 2017-09-02 21:46:03 +02:00
Dave Halter
6419534417 Some more _get_definition fixes 2017-09-02 21:37:59 +02:00
Dave Halter
ee6d68c3a8 Remove a get_definition usage. 2017-09-02 17:59:09 +02:00
Dave Halter
7e19e49200 Start replacing get_definitions. 2017-09-02 17:48:01 +02:00
Dave Halter
9cac7462d6 Return statements should be handled correctly if the return_stmt is only a bare return without an expression after it. 2017-09-02 14:03:54 +02:00
Dave Halter
c47f5ca68c Fix issues with yield. 2017-09-01 18:38:19 +02:00
Dave Halter
e2d53f51b0 test for yields in expressions. 2017-09-01 18:08:52 +02:00
Dave Halter
16f1eb417a One more parso rename. 2017-09-01 18:05:19 +02:00
Dave Halter
2b08c0ac88 Bump parso to 0.0.3 2017-08-31 22:54:09 +02:00
Dave Halter
3789709ec0 Add the deployment script from parso. 2017-08-31 22:45:27 +02:00
Dave Halter
fe9be9fe09 source_to_unicode -> python_bytes_to_unicode. 2017-08-15 20:09:48 +02:00
Dave Halter
f9e31dc941 Refactor splitlines -> split_lines. 2017-08-15 19:55:50 +02:00
Dave Halter
e3ca1e87ff Simplify Contributing.md. 2017-08-13 22:31:05 +02:00
Dave Halter
a37201bc1d Finally fixing the Python 2 issues with static_getattr. 2017-08-13 22:24:50 +02:00
Dave Halter
13a0d63091 Add Python 2 compatibility. 2017-08-12 23:15:16 +02:00
Dave Halter
88cfb2cb91 Remove side effects when accessing jedi from the interpreter.
Note that there is http://bugs.python.org/issue31184.
Fixes #925.
2017-08-12 22:49:05 +02:00
Dave Halter
b26b8a1749 Merge branch 'dev' 2017-08-12 22:46:24 +02:00
Dave Halter
997cb2d366 Merge branch 'dev' of github.com:davidhalter/jedi into dev 2017-08-12 22:45:47 +02:00
Yariv Kenan
9a43c35a4d fix old_files method
It was returning new file info instead of old files,
which is probably not what it was meant for.
2017-08-10 00:14:07 +02:00
bcolsen
3422b21c62 Added Yields test 2017-08-09 00:37:29 -06:00
bcolsen
38a690b4e4 add numpydoc to cov in tox.ini 2017-08-08 23:41:08 -06:00
bcolsen
77d6de0ae5 fix test skip and py3.6 2017-08-08 23:30:02 -06:00
bcolsen
4f96cdb3b0 Numpydocs doesn't support 2.6 or 3.3 2017-08-08 23:13:16 -06:00
bcolsen
d19a97f53a Numpydocs and compiled objects return types 2017-08-08 22:46:33 -06:00
Dave Halter
ff001e07a6 In parso, params is now get_params(). 2017-08-06 17:35:05 +02:00
Dave Halter
39cbd003c0 A small change in parso changed the normalize API. 2017-08-06 16:43:47 +02:00
Dave Halter
8d6732c28c Remove a print statement. 2017-07-16 22:16:13 +02:00
Dave Halter
7e4504efbd Fix ellipsis issues of python2. 2017-07-16 20:07:49 +02:00
Dave Halter
54490be1b2 parso.load_grammar now needs version as a keyword argument. 2017-07-16 17:16:37 +02:00
Dave Halter
2fcd2f8f89 Fix some more stuff because of newer parso changes. 2017-07-14 18:21:52 +02:00
micbou
175e57214e Fix instance docstring 2017-07-14 00:59:55 +02:00
micbou
f5248250d8 Fix keyword docstring 2017-07-14 00:22:27 +02:00
Dave Halter
945a2ba405 Dedent some code to avoid issues with parso. 2017-07-09 00:27:23 +02:00
Dave Halter
72b4c8bd9f The normalize function is private for now. 2017-07-08 18:56:42 +02:00
denfromufa
270f70ea7e more precise SO link 2017-06-25 21:39:11 +02:00
Dave Halter
e0485b032e Fix some stuff to make parso work again. 2017-06-02 00:00:31 +02:00
Dave Halter
af26cc9f05 Remove the license parts that are not needed anymore, because they are now part of parso. 2017-06-01 23:59:04 +02:00
Dave Halter
5d657500d1 Use the new normalize function instead of get_code(normalize=True) that was removed in parso. 2017-05-27 13:12:11 -04:00
Dave Halter
b9271cf5a5 Use the parser_cache correctly. 2017-05-26 13:43:18 -04:00
Dave Halter
76529ca34d The parser_cache contents have changed. Therefore adapt. 2017-05-26 12:52:52 -04:00
Dave Halter
35e248091e Some more parso API changes. 2017-05-26 12:02:39 -04:00
Dave Halter
3015e7e60f Remove use_exact_op_types because the default changed. 2017-05-26 11:35:47 -04:00
Dave Halter
24cd603fcf Some more parso adaptations. 2017-05-26 09:08:34 -04:00
Dave Halter
f94ef63ff2 Remove load_python_grammar for tests as well. 2017-05-25 13:36:40 -04:00
Dave Halter
3f36824a94 Parso changed load_python_grammar to load_grammar. 2017-05-25 12:41:19 -04:00
Dave Halter
d0127a7f61 Fix a warning that happened if there was no valid Python function in a place. 2017-05-25 12:26:07 -04:00
Dave Halter
ef2e2f343e Fix some warnings. 2017-05-25 12:24:21 -04:00
Dave Halter
6a320147ac Catch an importlib warning. 2017-05-24 23:43:27 -04:00
Dave Halter
4b0b07791e Bump parso version. 2017-05-24 00:42:06 -04:00
Dave Halter
7173559182 Move a test to parso. 2017-05-24 00:41:55 -04:00
Dave Halter
cd8932fbfc Add a latest grammar to the evaluator and use it to avoid 'from parso import parse' imports. 2017-05-24 00:37:36 -04:00
Dave Halter
b90589b62e Some changes because parso has changed. 2017-05-22 15:42:42 -04:00
Dave Halter
91e753e07a The deploy script should create versions prefixed with v. 2017-05-20 18:01:33 -04:00
Dave Halter
d6f695b3bb Use the ast module instead of a jedi import to get the jedi version.
With dependencies it's not possible to do this by importing jedi anymore. It's now just a bit more complicated. Gosh, I hate setup.py.
2017-05-20 17:53:11 -04:00
Dave Halter
c7984c0710 Add a requirements.txt.
Also use it within setup.py. It doesn't seem possible to define dependencies for tox with install_requires.
2017-05-20 17:22:34 -04:00
Dave Halter
fdff9396dd Move an import. 2017-05-20 16:08:43 -04:00
Dave Halter
aec86c6c80 distutils doesn't support install_requires. 2017-05-20 16:07:38 -04:00
Dave Halter
f35f1b9676 Add the cache_path parameter to parso calls. 2017-05-20 10:08:48 -04:00
Dave Halter
50c7137437 splitlines and source_to_unicode are utils of parso. 2017-05-20 09:55:16 -04:00
Dave Halter
0f4b7db56a Move jedi parser cache tests to parso. 2017-05-19 15:04:28 -04:00
Dave Halter
3c2b10a2a0 Remove a test that wasn't used for a long time. 2017-05-19 14:45:36 -04:00
Dave Halter
576c8cb433 Remove a star import cache test (the star import cache doesn't exist anymore). 2017-05-19 14:24:48 -04:00
Dave Halter
9bca3d39f5 Actually use parso now instead of Jedi. 2017-05-19 14:20:14 -04:00
Dave Halter
ccbaa12143 Add parso as a dependency in setup.py. 2017-05-19 10:29:32 -04:00
Dave Halter
32432b1cd1 Remove the parser packages from setup.py. 2017-05-19 10:27:26 -04:00
Dave Halter
f92e675400 Remove the whole parser. 2017-05-19 10:26:24 -04:00
Dave Halter
fb1c208985 Remove the tests that have been moved to parso. 2017-05-19 10:23:56 -04:00
Dave Halter
3c57f781dd Move another few tests. 2017-05-15 15:18:42 -04:00
Dave Halter
f4548d127c Some simplifications for the parsers. 2017-05-15 15:02:45 -04:00
Dave Halter
882ddbf8ac Move some more parser tests. 2017-05-15 15:00:34 -04:00
Dave Halter
0a8c96cd22 Remove a test that is really not necessary anymore, because the issues that it was covering back then are not issues anymore with the new infrastructure. 2017-05-15 14:53:50 -04:00
Dave Halter
6848762f7c Move some more tests. 2017-05-15 14:51:25 -04:00
Dave Halter
f8b5aab6f4 Move some parser tests. 2017-05-15 13:57:26 -04:00
Dave Halter
90b531a3b3 Correcting a sentence. 2017-05-15 11:10:22 -04:00
Dave Halter
0da875281b Remove an unused compatibility function that was overridden by one of the same name lower in the same file. 2017-05-11 16:22:11 -04:00
Dave Halter
0b3590ce20 Python 3.6 was not tested in the default configuration of tox. 2017-05-08 19:55:35 +02:00
Dave Halter
9fb7fb66da Move another test to delete a file. 2017-05-07 16:39:32 +02:00
Dave Halter
3b033bb276 Remove two tests that are not necessary anymore because the code that made them necessary was removed (some import hacks). 2017-05-07 16:33:24 +02:00
Dave Halter
ab71c943ee Move a parser test to the correct place. 2017-05-07 16:29:48 +02:00
Dave Halter
d717c3bf40 Merge some import tests. 2017-05-07 16:20:49 +02:00
Dave Halter
f9f60177bf Move an analysis test. 2017-05-07 16:14:21 +02:00
Dave Halter
6b7376bc5d Move some stdlib tests. 2017-05-07 16:06:01 +02:00
Dave Halter
6c95f73d77 Remove a function that was not really needed. 2017-05-07 16:00:08 +02:00
Dave Halter
84d8279089 Import.paths -> Import.get_paths. 2017-05-07 15:47:34 +02:00
Dave Halter
9bf66b6149 Make Import.aliases private. 2017-05-07 15:38:03 +02:00
Dave Halter
66b28ca840 Small cleanup. 2017-05-07 15:22:45 +02:00
Dave Halter
fe49fc9b99 Add slots to the PythonMixin. 2017-05-07 15:06:34 +02:00
Dave Halter
536e62e67d Move is_scope and get_parent_scope out of the parser. 2017-05-07 14:58:53 +02:00
Dave Halter
0882849e65 Don't do a simple_stmt error recovery in the parser, because it makes it more complicated. 2017-05-07 14:52:46 +02:00
Dave Halter
30a02587a7 Merge branch 'dev' of github.com:davidhalter/jedi into dev 2017-05-07 14:46:47 +02:00
Élie Gouzien
80fbdec1da Corrected test class name. 2017-05-06 19:40:36 +02:00
Élie Gouzien
405a339719 Add author Élie Gouzien. 2017-05-06 19:40:36 +02:00
Élie Gouzien
9d5cc0be06 Test that no repr() can slow down completion.
Was reported with issue #919.
2017-05-06 19:40:36 +02:00
Élie Gouzien
a78769954d Check whether inspect.getfile will raise a TypeError and anticipate it.
Anticipate the TypeError from inspect.getfile to prevent the computation of repr() for an error message which is not used.
Useful for some big pandas arrays.
Tentative fix for #919.
2017-05-06 19:40:36 +02:00
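A sketch of the described pattern (the helper name is assumed): check up front whether the object is one of the kinds inspect.getfile supports, so the TypeError, whose message embeds repr(obj), is never even constructed.

    import inspect

    def cheap_getfile(obj):
        # inspect.getfile() formats repr(obj) into its TypeError
        # message, which is expensive for e.g. big pandas arrays.
        # Testing the supported kinds first avoids raising it at all.
        if (inspect.ismodule(obj) or inspect.isclass(obj)
                or inspect.ismethod(obj) or inspect.isfunction(obj)
                or inspect.istraceback(obj) or inspect.isframe(obj)
                or inspect.iscode(obj)):
            return inspect.getfile(obj)
        return None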
Dave Halter
14eeb60240 Remove is_scope from CompiledObject. It's not needed according to tests. 2017-05-05 09:23:50 +02:00
Dave Halter
f916b9b054 More docstrings. 2017-05-05 09:21:42 +02:00
Dave Halter
336b8a46d0 search_ancestor now takes *node_types as a parameter instead of a mix of tuples and plain strings like isinstance. 2017-05-02 19:19:07 +02:00
Dave Halter
6ea06fdfd8 Even if static analysis is not working well, we can at least write it correctly. 2017-05-02 08:59:07 +02:00
Dave Halter
5c836a72b6 Lambda and Function docstrings render better. 2017-05-02 08:57:03 +02:00
Dave Halter
fc7cc1c814 Docstrings for get_defined_names. 2017-05-02 08:50:52 +02:00
Dave Halter
e96bb29d18 Param docstring. 2017-05-02 08:43:46 +02:00
Dave Halter
c1c3f35e08 Docstring for Param.get_code(). 2017-05-01 02:26:24 +02:00
Dave Halter
63679aabd9 Replace Param.get_description with get_code and a parameter include_coma. 2017-05-01 02:19:42 +02:00
Dave Halter
e0b0343a78 Remove expanduser from the parser path. Not sure if that makes sense so I'd rather remove it. 2017-04-30 15:23:43 +02:00
Dave Halter
e2f88db3c2 Trying to make coveralls work again. 2017-04-30 14:19:53 +02:00
Dave Halter
0f1570f682 position_nr -> position_index 2017-04-30 14:12:30 +02:00
Dave Halter
2383f5c0a0 docstrings for the parser tree. 2017-04-30 14:06:57 +02:00
Dave Halter
a1454e3e69 Fix a docstring test. 2017-04-30 03:11:09 +02:00
Dave Halter
78fd3ad861 is_generator is not needed in lambdas. 2017-04-30 03:07:48 +02:00
Dave Halter
1295d73efd path_for_name -> get_path_for_name 2017-04-30 03:03:58 +02:00
Dave Halter
e2d6c39ede Remove yields from lambda. It was previously removed from Function. 2017-04-30 02:59:09 +02:00
Dave Halter
076eea12bd Some minor refactoring of the python tree. 2017-04-30 02:56:44 +02:00
Dave Halter
8165e1a27f Add Module.iter_future_import_names to make checking for future imports easier. 2017-04-30 02:44:02 +02:00
Dave Halter
f2a77e58d8 Merge branch 'dev' of github.com:davidhalter/jedi into dev 2017-04-30 02:34:38 +02:00
Dave Halter
d8761e6310 Use names instead of the isinstance checks in _search_in_scope 2017-04-30 02:33:51 +02:00
Dave Halter
6e9911daa3 Scope.imports -> iter_imports. 2017-04-30 02:31:30 +02:00
Dave Halter
42fe1aeaa1 Move yields -> iter_yield_exprs. 2017-04-30 02:13:25 +02:00
Dave Halter
606871eb62 returns -> iter_return_stmts 2017-04-30 01:45:59 +02:00
Dave Halter
b4039872bd Replace Scope.subscopes with iter_funcdefs and iter_classdefs. 2017-04-30 01:36:17 +02:00
Matthias Bussonnier
6f1ee0cfa8 Use stacklevel in warnings or filters don't work.
In particular with the right stacklevel IPython will display the warning
if code is directly entered by the user. Without this info it does not.

Use the opportunity to state in the warning since when things have been
deprecated. This saves the user one lookup of information.
2017-04-29 20:13:19 +02:00
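What that looks like at a call site (illustrative, not the exact Jedi code):

    import warnings

    def old_api():
        # stacklevel=2 attributes the warning to the *caller* of
        # old_api(), so frontends like IPython treat it as user code
        # and display it; naming the version answers "since when?".
        warnings.warn(
            "old_api() is deprecated since 0.10.0; use new_api().",
            DeprecationWarning,
            stacklevel=2,
        )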
Dave Halter
3e05061f3b Remove old unused code. 2017-04-28 18:34:02 +02:00
Dave Halter
ad536a837c A small change. 2017-04-28 18:29:35 +02:00
Dave Halter
b328e727ea Remove Scope.walk, because it was never used. 2017-04-28 18:20:07 +02:00
Dave Halter
eaa5100372 Removed Scope.statements from the parser tree. 2017-04-28 18:18:58 +02:00
Dave Halter
307adc2026 Scope.flows is never used so remove it. 2017-04-28 00:23:47 +02:00
Dave Halter
3cf4c66112 Change some more docstring stuff. 2017-04-28 00:23:28 +02:00
Dave Halter
bc4c5fafb7 Start creating documentation for the parser. 2017-04-27 21:50:31 +02:00
Dave Halter
02a8443541 search_ancestor docstring 2017-04-27 21:47:39 +02:00
Dave Halter
a846e687c3 Move search_ancestor to jedi.parser.tree. 2017-04-27 21:41:24 +02:00
Simon Ruggier
338ea42ed9 docstrings: fix "Sphinx param with type" pattern (#807)
* docstrings: fix "Sphinx param with type" pattern

Previously, the pattern only matched if the parameter description
followed on the same line, like so:

    :param type foo: A param named foo.

However, it's also valid for the parameter description to be wrapped
onto the next line, like so:

    :param type foo:
        A param named foo.

This change updates the pattern to match the second example as well, and
adds a test to verify this behaviour.

Fixes #806.

* Add Simon Ruggier to the AUTHORS file
2017-04-27 20:05:48 +02:00
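A hedged sketch of such a pattern (an illustrative regex, not the one in the PR): making the same-line description optional is what lets the wrapped form match too.

    import re

    # ':param <type> <name>:' with an optional same-line description;
    # when the description wraps to the next line, the part after the
    # colon is simply empty and the pattern still matches.
    SPHINX_PARAM = re.compile(
        r'^\s*:param\s+(?P<type>\w+)\s+(?P<name>\w+):(?:\s+.*)?$',
        re.MULTILINE)

    doc = ':param type foo:\n    A param named foo.\n'
    m = SPHINX_PARAM.search(doc)
    print(m.group('type'), m.group('name'))  # -> type foo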
Dave Halter
800bf4bbe2 _NodeOrLeaf -> NodeOrLeaf. 2017-04-27 19:59:30 +02:00
Dave Halter
e8cfb99ada Fix a docs issue. 2017-04-27 19:59:09 +02:00
Dave Halter
8bd41ee887 Better documentation of get_code. 2017-04-27 19:48:00 +02:00
Dave Halter
e8718c6ce5 Docs for IPython completion which depends now on Jedi. 2017-04-27 19:31:50 +02:00
Dave Halter
0474854037 More docstrings of a few _BaseOrLeaf methods/properties. 2017-04-27 17:39:46 +02:00
Dave Halter
e998a18d8e More docstrings. 2017-04-27 09:14:23 +02:00
Dave Halter
819e9f607e Move get_following_comment_same_line out of the parser tree. 2017-04-27 08:56:11 +02:00
Dave Halter
cc4681ec54 Merge branch 'dev' of github.com:davidhalter/jedi into dev 2017-04-26 18:45:33 +02:00
Dave Halter
e8b32e358b Remove 'move' from the parser tree. 2017-04-26 18:45:18 +02:00
Dave Halter
dea09b096d Some docstrings for the parser. 2017-04-26 18:16:50 +02:00
Dave Halter
c124fc91ca Remove further clean_scope_docstring usages. 2017-04-26 09:52:18 +02:00
Dave Halter
bea28fd33f Give ExecutionParams a better way of knowing what called them. 2017-04-26 09:32:47 +02:00
Matthias Bussonnier
b0f10081d4 Fix: Jedi does not complete numpy arrays in dictionaries
Fix ipython/ipython#10468
2017-04-21 13:14:07 +02:00
Dave Halter
f136745a8a follow_param -> infer_param. 2017-04-20 18:09:00 +02:00
Dave Halter
ea1905f121 Refactor the docstring input. 2017-04-20 18:06:40 +02:00
Dave Halter
fbde21166b find_return_types -> infer_return_types. 2017-04-20 09:56:16 +02:00
Dave Halter
ac8ed62a77 Remove FakeName since it's not actually used anymore. 2017-04-20 09:52:31 +02:00
Dave Halter
db683acfc1 One more docstring test. 2017-04-20 09:47:30 +02:00
Dave Halter
7ca62578e1 Add py__doc__ as a better approach to docstrings. 2017-04-20 09:45:15 +02:00
Dave Halter
b4631d6dd4 Progress in removing the docstring/call signature logic from the parser. 2017-04-18 18:48:05 +02:00
Dave Halter
deb028c3fb Move get_statement_of_position out of the parser tree. 2017-04-15 02:23:08 +02:00
Dave Halter
1cfe5c2945 Python3Method is not needed anymore in the parser. 2017-04-15 01:53:58 +02:00
Dave Halter
4bd3c91622 Fix Python 2 tests. 2017-04-15 01:49:20 +02:00
Dave Halter
c4e51f9969 Use object for Python 2 classes. 2017-04-15 01:47:48 +02:00
Dave Halter
d6d25db9a2 Remove __str__ from name. 2017-04-12 23:06:11 +02:00
Dave Halter
73a38267cf Simplify the Operator/Keyword string comparison. 2017-04-12 19:11:14 +02:00
Dave Halter
a0b65b52c6 used_names -> get_used_names(). 2017-04-12 08:56:11 +02:00
Dave Halter
b0ac07228b Restructure/Refactor has_absolute_import a bit. 2017-04-12 08:47:30 +02:00
Dave Halter
c056105502 get_except_clauses -> get_except_clause_tests 2017-04-12 08:40:27 +02:00
Dave Halter
7e560bffe8 Move in_which_test_node -> get_corresponding_test_node. 2017-04-12 08:35:48 +02:00
Dave Halter
6190a65f23 The Lambda type should be lambdef, not lambda. Use the grammar types. 2017-04-11 18:28:25 +02:00
Dave Halter
685e630c03 Simple refactoring. 2017-04-11 18:20:44 +02:00
Dave Halter
afa6427861 Fix the remaining lambda issue. 2017-04-11 18:18:31 +02:00
Dave Halter
5cd26615e8 Removed the name attribute from lambda. It doesn't exist so don't fake it. 2017-04-11 18:10:35 +02:00
Dave Halter
e675715357 Rename a few IfStmt methods. 2017-04-10 22:46:06 +02:00
Dave Halter
797953df39 More Flow cleanups. 2017-04-10 10:05:21 +02:00
Dave Halter
218e715553 Make some names more concise in the parser tree. 2017-04-10 09:44:08 +02:00
Dave Halter
769cc80d6b Cleanup with_stmt. 2017-04-09 21:20:33 +02:00
Dave Halter
f855c2bb70 More parser tree simplifications. 2017-04-09 13:24:17 +02:00
Dave Halter
ff82763e6b get_annotation -> annotation (property). 2017-04-08 15:29:29 +02:00
Dave Halter
545cb26f78 stars -> star_count. 2017-04-08 15:26:57 +02:00
Dave Halter
1625834f81 Move get_comp_fors out of the parser. 2017-04-08 14:16:00 +02:00
Dave Halter
4cd7f40e3b Simplify get_executable_nodes. 2017-04-08 14:05:18 +02:00
Dave Halter
65a6c61dc6 Remove nodes_to_execute in favor of a function in parser_utils. 2017-04-08 12:59:49 +02:00
Dave Halter
8542047e5c Adding a tag should be part of the deployment script. 2017-04-06 18:21:20 +02:00
Dave Halter
053020c449 Bump version to 0.10.3. 2017-04-06 18:13:06 +02:00
Dave Halter
8bdf7e32ef Remove the dist folder before deploying. 2017-04-05 20:54:22 +02:00
Dave Halter
5427b02712 Merge branch 'dev' 2017-04-05 20:00:16 +02:00
Felipe Lacerda
aa2dfa9446 Fix path for grammar files in MANIFEST 2017-04-05 19:59:00 +02:00
Dave Halter
3cc97f4b73 Changelog for 0.10.2. 2017-04-05 18:04:41 +02:00
Dave Halter
536ad4c5f1 Added the forgotten changelog for 0.10.1. 2017-04-05 18:03:45 +02:00
Dave Halter
54242049d2 Bump version to 0.10.2. 2017-04-05 01:42:02 +02:00
Dave Halter
eb37f82411 Add memoization where it needs to be. Fixes #894. 2017-04-05 01:06:48 +02:00
Dave Halter
4b841370e4 Test full name for os.path imports. Fixes #873. 2017-04-05 01:00:20 +02:00
Dave Halter
fe5eaaf56c Add a better debugging message for import fails. 2017-04-04 23:27:45 +02:00
Dave Halter
fb8ed61b87 Add a way to cwd into a tmpdir. 2017-04-04 21:03:45 +02:00
Dave Halter
0117f83809 Forgot to include a test for #844. 2017-04-04 20:35:32 +02:00
Dave Halter
e660a5a703 Forgot to include the test for #884. 2017-04-04 20:31:27 +02:00
Dave Halter
947d91f792 Refactor the ClassName to allow inheritance in different modules. Fixes #884. 2017-04-04 20:11:07 +02:00
Dave Halter
d41e036427 Keyword-only arguments were not usable. Fixes #883 and #856. 2017-04-03 18:18:21 +02:00
Dave Halter
632072000e Fix the builtin docstring issue that we've had. Fixes #859. 2017-04-03 00:27:31 +02:00
Dave Halter
47c1b8fa07 Fix bug #844. 2017-04-02 22:21:57 +02:00
Dave Halter
9f1dda04c0 Remove print statements that are not needed. 2017-04-02 21:43:30 +02:00
Dave Halter
7ecaf19b59 Fix _remove_last_newline. Fixes #863. 2017-04-02 21:29:48 +02:00
Dave Halter
3a6d815e9e Another conversion. 2017-04-01 18:12:53 +02:00
Dave Halter
ed8370fa68 isinstance to type conversions. 2017-04-01 18:08:59 +02:00
Dave Halter
bd779655ae Merge branch 'dev' of github.com:davidhalter/jedi into dev 2017-04-01 17:51:36 +02:00
Dave Halter
d6d1a39bf2 Remove some print statements. 2017-04-01 17:50:47 +02:00
Dave Halter
4cc467123c Use PythonNode and not Node in the evaluator. 2017-04-01 17:39:52 +02:00
Andy Lee
1624f6945e Fix api.usages so it finds cross-module usages 2017-04-01 15:52:22 +02:00
Andy Lee
3e36238da3 Add test for cross-module usages 2017-04-01 15:52:22 +02:00
Dave Halter
281d6a87a0 Remove a few print statements. 2017-04-01 12:43:57 +02:00
Dave Halter
1fd10d978d Replace a few isinstance calls with the type. 2017-04-01 00:26:22 +02:00
Dave Halter
a6829ca546 Use the cache variables in a more straightforward fashion. 2017-03-31 23:10:39 +02:00
Dave Halter
b708b7f07d Remove other crap code in the diff parser. 2017-03-31 21:48:30 +02:00
Dave Halter
b15aa197fd Remove CachedFastParser, not needed anymore. 2017-03-31 21:46:57 +02:00
Dave Halter
1bb0c89f46 Remove source parameter from parser. 2017-03-31 21:44:43 +02:00
Dave Halter
a687910368 Also remove _parsed from all parsers. 2017-03-31 21:42:11 +02:00
Dave Halter
d2d165267d Remove unused get_root_node on the parser. 2017-03-31 21:41:36 +02:00
Dave Halter
7a51dbea08 Fix an issue in Python 2. 2017-03-31 21:03:15 +02:00
Dave Halter
f6f2765ab9 Rename the profile script to profile_output to avoid name clashes with the stdlib's profile.py. 2017-03-31 17:27:16 +02:00
Dave Halter
36b2fce030 The profile script's default had to be changed. 2017-03-31 17:26:24 +02:00
Dave Halter
7e45ee3096 Refactor our parser caching a bit more. 2017-03-30 18:41:51 +02:00
Dave Halter
35fd1c70bd Rename parser.utils to parser.cache. 2017-03-30 01:57:48 +02:00
Dave Halter
db364bc44d Move underscore memoization. 2017-03-30 01:53:18 +02:00
Dave Halter
54d69fb9f4 Remove the ParserPickling class. 2017-03-30 01:50:50 +02:00
Dave Halter
8059c3c2c8 Save a module instead of a parser when pickling. 2017-03-30 00:55:04 +02:00
Dave Halter
932703f04a Remove an import that is not needed anymore. 2017-03-28 02:09:38 +02:00
Dave Halter
ee47be0140 Merge Parser and ParserWithRecovery. 2017-03-28 02:08:16 +02:00
Dave Halter
1d0796ac07 Remove a usage of the old module path. 2017-03-28 01:43:40 +02:00
Dave Halter
6a9c2f8795 Start using ContextualizedNode for py__iter__. 2017-03-28 01:34:07 +02:00
Dave Halter
bb9ea54402 Remove ImplicitTuple. 2017-03-27 23:18:06 +02:00
Dave Halter
8a35a04439 Remove the module path from the parser tree.
Some static analysis tests are still failing.
2017-03-27 18:13:32 +02:00
Dave Halter
b60ec024fa Remove start_parsing completely from the Parsers. 2017-03-26 12:52:37 +02:00
Dave Halter
63cafeaa87 Remove all usages of start_parsing=True in the fast parser. 2017-03-26 12:49:40 +02:00
Dave Halter
3d27d06781 Use the new parse method instead of a Parser. 2017-03-26 11:49:17 +02:00
Dave Halter
5c54650216 Code to source. 2017-03-26 01:50:19 +01:00
Dave Halter
aff0cbd68c Remove the last usage of save/load_parser in jedi. 2017-03-26 01:48:45 +01:00
Dave Halter
fb8ffde32e Fix an issue in the parse call. 2017-03-26 01:26:01 +01:00
Dave Halter
7874026ee5 A lot of work toward a better diff parser API. 2017-03-25 01:51:03 +01:00
Dave Halter
ac0d0869c9 Start using the parse function for caching as well. 2017-03-24 01:52:55 +01:00
Dave Halter
fb4cff8ef9 A small buildout script refactoring. 2017-03-23 14:22:27 -07:00
Dave Halter
5aa379945e Merge the FileNotFoundError cache. 2017-03-23 14:22:19 -07:00
Andy Lee
eb9af19559 Add test for loading deleted cache file 2017-03-23 08:17:11 -07:00
Andy Lee
3a851aac8c Catch FileNotFoundError when opening file cache 2017-03-23 08:16:51 -07:00
Dave Halter
6fef385774 Clean the path in pickling. 2017-03-23 08:52:25 +01:00
Dave Halter
26cce4d078 Add the grammar as an argument when saving the parser.
This makes collisions between different grammars when loading from the cache impossible.
2017-03-22 18:32:49 +01:00
Dave Halter
c41bee4253 Trying out ideas to reshape the diff parser APIs. 2017-03-22 09:38:06 +01:00
Dave Halter
2cb565561d Replace the diff parser imports with the modified path. 2017-03-21 22:10:01 +01:00
Dave Halter
3a2811fbe8 Move the diff parser. 2017-03-21 22:03:58 +01:00
Dave Halter
6f01264ed3 Restructure import's module loading. 2017-03-21 17:20:10 +01:00
Dave Halter
ff90beca6b Remove some documentation that was not necessary. 2017-03-20 21:10:49 +01:00
Dave Halter
d218acee6b Create a default implementation of leafs. 2017-03-20 19:34:48 +01:00
Dave Halter
c6811675b6 Rename ast_mapping to node_map. 2017-03-20 08:55:18 +01:00
Dave Halter
2d7fd30111 Remove _remove_last_newline from the parser. 2017-03-20 08:49:30 +01:00
Dave Halter
9dedb9ff68 Don't start parsing in our own API. 2017-03-20 08:44:52 +01:00
Dave Halter
53b4e78a9b Make some stuff private in the pgen parser API. 2017-03-20 08:36:11 +01:00
Dave Halter
689af9fc4e Remove tokens initialization parameter from the parser api. 2017-03-20 08:34:07 +01:00
Dave Halter
42e8861798 Simplify the parse method. 2017-03-19 22:15:01 +01:00
Dave Halter
b4af42ddb3 Some minor parser changes to get ready for bigger stuff. 2017-03-19 21:30:41 +01:00
Dave Halter
3163f4d821 Trying to start moving more stuff to the BaseParser. 2017-03-19 21:06:51 +01:00
Dave Halter
dad40597c5 Start moving stuff to the parser. 2017-03-18 15:01:34 +01:00
Dave Halter
52d855118a Remove get_parsed_node from the parser as well. 2017-03-18 03:55:23 +01:00
Dave Halter
0f66a3c7a8 Remove the module attribute from the parser. 2017-03-18 03:53:34 +01:00
Dave Halter
d0b6d41e99 Remove the old star import cache, because it's not even used. 2017-03-18 03:30:23 +01:00
Dave Halter
aaf6c61e69 Make remove_last_newline private. 2017-03-18 03:07:01 +01:00
Dave Halter
519fa9cfb5 Remove complicated merging of used names from the parser.
It's a lot of complicated code and a lot can go wrong. It also didn't speed up anything. If anything it made things like 5% slower. I have tested this with:

./scripts/diff_parser_profile.py wx._core.py

wx._core.py is not part of Jedi.
2017-03-16 22:00:01 +01:00
Dave Halter
ce41119051 Fix some stuff in a diff profile test script. 2017-03-16 21:45:51 +01:00
Dave Halter
8156a6b8a2 Remove used_names from the parser step. It's a separate iteration, now. 2017-03-16 21:28:42 +01:00
Dave Halter
fd50146f92 Simple cleanup. 2017-03-16 20:20:58 +01:00
Dave Halter
96c67cee26 Simplify the leaf with newlines stuff. 2017-03-16 20:18:30 +01:00
Dave Halter
4573ab19f4 Separate the python syntax tree stuff from the non python stuff. 2017-03-16 19:54:08 +01:00
Dave Halter
448bfd0992 Move the python parser tree. 2017-03-16 17:20:32 +01:00
Dave Halter
b136800cfc Don't use as in imports when not needed. 2017-03-16 08:45:12 +01:00
Dave Halter
06702d2a40 Move the python parser. 2017-03-16 08:40:19 +01:00
Dave Halter
a83b43ccfd Merge branch 'dev' of github.com:davidhalter/jedi into dev 2017-03-15 19:12:51 +01:00
Dave Halter
93f14157a6 Cleanup the ParseError stuff. 2017-03-15 18:41:58 +01:00
Dave Halter
0effd348e8 Add a note about the grammar creation. 2017-03-15 18:18:06 +01:00
Dave Halter
c332fba488 Fix a namespace packages related issue. 2017-03-15 08:59:24 +01:00
Dave Halter
375749c5c3 Small restructuring. 2017-03-15 08:56:49 +01:00
Dave Halter
55c9fd3227 Fix an issue in the fake parser 2017-03-15 08:44:49 +01:00
Matthias Bussonnier
9a851165ad Lookup optimisation
Avoid a couple of dynamic lookups in the core of the parsing loop. The
performance improvement will be minimal, but many small gains add up.
2017-03-14 23:55:03 +01:00
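The classic shape of this optimisation, sketched (not the actual tokenizer loop): bind attribute and global lookups to local names before the hot loop.

    def collect(tokens):
        result = []
        append = result.append      # hoist the attribute lookup
        for token in tokens:
            append(token)           # LOAD_FAST instead of LOAD_ATTR
        return result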
Matthias Bussonnier
bb8fe0b24c A missing docstring. 2017-03-14 23:54:38 +01:00
Dave Halter
68a7365a0a A few docstring improvements. 2017-03-14 21:03:03 +01:00
Dave Halter
9efb3f0af2 More direct parser usage removals. 2017-03-14 19:31:54 +01:00
Dave Halter
717bfeb574 Remove an occurrence of the complicated parser creation. 2017-03-14 19:27:03 +01:00
Dave Halter
97fc3bc23c Refactored the parser calls. Now it's possible to use jedi.parser.python.parse to quickly parse something. 2017-03-14 00:38:58 +01:00
Dave Halter
9b5e6d16da Improved grammar loading API. 2017-03-13 20:33:29 +01:00
Dave Halter
595ffc24d4 Move some more stuff to a python directory in the parser. 2017-03-13 00:54:39 +01:00
Dave Halter
922c480e2e Moved the parser to a new file. 2017-03-12 21:33:41 +01:00
Dave Halter
a635b6839a Remove unused code. 2017-03-12 21:28:32 +01:00
Dave Halter
af9b0ba8d6 Merge branch 'master' into dev 2017-03-12 20:51:17 +01:00
Alex Wiltschko
82d165a723 Missing paren 2017-03-12 20:41:17 +01:00
Dave Halter
a7b1e3fe70 Fixed another diff parser error. 2017-03-12 15:58:14 +01:00
Dave Halter
6e3b00802c Another endless while loop issue, add an assert. 2017-03-11 14:54:44 +01:00
Dave Halter
818fb4f60c Fix a bug that might have caused an endless while loop a while ago. Fixes #878. 2017-03-09 21:47:16 +01:00
Dave Halter
ccef008376 Python 2 compatibility. 2017-03-09 21:06:20 +01:00
Dave Halter
c7a74e6d1c Make the tokenizer a generator. 2017-03-09 18:53:09 +01:00
Dave Halter
989e4bac89 Speed up splitlines.
We use the python function again with the modifications we need.
I ran it with:

    python3 -m timeit -n 10000 -s 'from jedi.common import splitlines; x = open("test_regression.py").read()' 'splitlines(x)'

The speed differences are quite remarkable, it's ~3 times faster:

    10000 loops, best of 3: 52.1 usec per loop

vs. the old:

    10000 loops, best of 3: 148 usec per loop

We might need to speed up splitlines with  as well. It's probably
also a factor 2-3 slower than it should be.
2017-03-09 08:58:57 +01:00
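A hedged sketch of the approach (simplified; the real function handles more edge cases): lean on the C-level str.splitlines and re-add the trailing empty line it drops, so positions after a final newline still map to a line.

    def split_lines(string):
        lines = string.splitlines()
        # str.splitlines() drops the empty line after a trailing
        # newline; an editor-oriented splitter needs to keep it.
        if not string or string[-1] in '\r\n':
            lines.append('')
        return lines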
Dave Halter
b814a91f29 Avoid endless loop with an assert in the diff parser. 2017-03-08 18:33:38 +01:00
Dave Halter
5c9769c5a3 Merge remote-tracking branch 'origin/master' into dev 2017-03-07 19:01:53 +01:00
Dave Halter
ee98eab64c Merge branch 'dev' of github.com:davidhalter/jedi into dev 2017-03-07 19:01:42 +01:00
Dave Halter
05e05252fa Make python -m jedi.parser.tokenize possible for debugging purposes. 2017-03-07 18:31:12 +01:00
micbou
a859add6d7 Only resolve names for actual modules
A name can be part of an import statement without being a module.
2017-03-01 21:06:21 +01:00
Matthias Bussonnier
fc27ca1b6a Fix a couple of error locations 2017-02-24 13:03:03 +01:00
Matthias Bussonnier
784de85b36 Add a test for handling of newlines in multiline strings 2017-02-24 00:05:38 +01:00
Matthias Bussonnier
0fb386d7e2 Make sure error tokens set the new_line flag when necessary
Should solve #855
2017-02-24 00:05:38 +01:00
daniel
5513f72987 added support for implicit ns packages and added tests 2017-02-23 23:53:14 +01:00
Matthias Bussonnier
ef1b1f41e4 Use start and end position in repr for simpler debugging. 2017-02-14 23:53:53 +01:00
Daniel M. Capella
68c6f8dd03 readme: Add maralla/completor.vim
https://github.com/maralla/completor.vim
2017-02-14 22:51:17 +01:00
Matthias Bussonnier
b72aa41019 Missing assert 2017-02-08 23:40:23 +01:00
Thomas Kluyver
adc08785b6 Build universal wheels
This tells wheel that the packages are compatible with Python 2 and 3.

The wheel currently on PyPI is only tagged 'py2', so installing on
Python 3 uses the sdist.
2017-02-04 18:13:18 +01:00
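The standard way to request this from the wheel tool is a setup.cfg entry (the generic form is shown here, not necessarily the PR's exact change):

    [bdist_wheel]
    universal = 1

Equivalently, `python setup.py bdist_wheel --universal` builds a py2.py3-tagged wheel for a single release.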
Dave Halter
8131f19751 Correct an issue in the tests with the last commit. 2017-02-04 18:11:54 +01:00
Dave Halter
b6e61133d8 Move the tests for the last PR #848. 2017-02-04 18:11:14 +01:00
Mathias Rav
37d7b85ed1 Add tests for decorator completion 2017-02-04 18:05:15 +01:00
Mathias Rav
c6cd18802b Add myself as contributor 2017-02-04 18:05:15 +01:00
Mathias Rav
c809aad67f Complete dotted names properly in decorators
Outside decorators, a dotted name is parsed by the grammar as
stmt -> test -> power -> atom trailer -> (NAME) ('.' NAME)
where the first NAME is an 'atom' and the second NAME is a 'trailer'.

Thus, testing if the most recent variable is a 'trailer' and the most
recent node is a '.' is almost always enough for Jedi to properly
complete dotted names.

However, the grammar for decorators is more restricted and doesn't allow
arbitrary atoms and trailers; instead, a dotted name in a decorator is
decorator -> '@' dotted_name -> '@' (NAME '.' NAME),
meaning the most recent variable will be 'dotted_name', not 'trailer'.

Besides in decorators, the 'dotted_name' variable is only used in import
statements which are handled previously in _get_context_completions,
so checking for 'dotted_name' in this arm of the if only covers decorators
and not inadvertently anything else.
2017-02-04 18:05:15 +01:00
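A concrete case the fix covers (a sketch using the era's jedi.Script API):

    import jedi

    source = ("import functools\n"
              "\n"
              "@functools.")
    # Before the fix, the 'dotted_name' node produced by the decorator
    # grammar was not recognized, so completing after '@functools.'
    # yielded nothing; now it should list functools members.
    completions = jedi.Script(source, 3, len('@functools.')).completions()
    print([c.name for c in completions][:5])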
Dave Halter
1d64a5caa1 Replace first_leaf and last_leaf with get_first_leaf and get_last_leaf. 2017-02-03 17:35:53 +01:00
Dave Halter
90fffd883e Clean up the function docstring. 2017-02-03 17:30:58 +01:00
Dave Halter
647aec11a6 Return None instead of raising an IndexError in get_next_leaf. 2017-02-03 17:26:02 +01:00
Dave Halter
c5071f9f49 Change get_previous_leaf to return None if there is no leaf anymore (at the start of the file). 2017-02-03 17:23:15 +01:00
Dave Halter
445bf6c419 Get rid of get_parent_until. 2017-02-03 09:59:32 +01:00
Dave Halter
b3cb7b5490 Remove the def isinstance from the parser. It was a really bad pattern. 2017-02-03 09:37:59 +01:00
Dave Halter
6ccac94162 Add a deploy script. 2017-02-03 00:40:19 +01:00
Dave Halter
f2b41b1752 Update already the version number so we don't forget it. 2017-02-03 00:38:08 +01:00
Dave Halter
7020086166 Changelog for 0.10.0 2017-02-03 00:08:51 +01:00
Dave Halter
8027aeebd8 Fix a small bug that was introduced two commits ago (in the test suite). 2017-02-02 23:50:29 +01:00
Dave Halter
f627d541b8 Trying to fix the docs. 2017-02-02 23:39:10 +01:00
Dave Halter
a5a54fbc85 Fix a call signature issue. 2017-02-01 19:21:07 +01:00
Dave Halter
68a3a9cf41 Don't do anything with the diff parser if nothing changes. 2017-01-29 22:12:24 +01:00
Dave Halter
e5af996829 Remove old debug code from the diff parser. 2017-01-29 21:57:09 +01:00
Dave Halter
e0c8b3dd3b Fix a context issue in completions. 2017-01-29 19:09:35 +01:00
Dave Halter
3f1e658e1d Set the limit for debugging failed diff parsers a bit higher. 2017-01-29 14:34:20 +01:00
Dave Halter
40b6079ebd If an INDENT is the next expected token, we should still be able to complete. 2017-01-29 14:06:22 +01:00
Dave Halter
b0f340748c So much work for one simple diff fail. 2017-01-29 00:42:09 +01:00
Dave Halter
b779677bf9 Merge branch 'dev' of github.com:davidhalter/jedi into dev 2017-01-25 23:21:45 +01:00
Dave Halter
b18239f9dd Add a way to profile the diff parser. 2017-01-25 23:00:33 +01:00
Dave Halter
9982975ad2 Another small performance improvement. 2017-01-25 22:54:08 +01:00
Dave Halter
4918fb49f5 Implement binary search for get_leaf_for_position. This makes it a lot faster. 2017-01-25 22:27:36 +01:00
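A hedged sketch of the idea (a simplified node shape, not parso's actual classes): descend the tree, picking each child by bisecting on start positions instead of scanning node.children linearly.

    from bisect import bisect_right

    def get_leaf_for_position(node, position):
        # position is a (line, column) tuple; tuples compare
        # lexicographically, so bisect finds the rightmost child
        # whose start_pos is <= position in O(log n) per level.
        while hasattr(node, 'children'):
            starts = [child.start_pos for child in node.children]
            index = bisect_right(starts, position) - 1
            node = node.children[max(index, 0)]
        return node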
Dave Halter
f2db0dceb4 A few small performance improvements on the diff parser. 2017-01-25 21:12:13 +01:00
Dave Halter
dfced86730 Merge pull request #834 from Carreau/fix-keyword
Fix keywords detected as modules
2017-01-24 13:27:49 +01:00
Dave Halter
551c122cf8 Fix an issue in sith where we accessed a removed Jedi property. 2017-01-24 09:57:43 +01:00
Dave Halter
f4b8a02d37 Add a few speed debugging times to the diff parser. 2017-01-24 09:51:23 +01:00
Dave Halter
09779c88aa Fix a nasty issue in the tokenizer. Fixes #836.
At the same time there was a related issue of not cleaning up newlines properly.
2017-01-24 00:50:37 +01:00
Dave Halter
741993a738 Merge branch 'dev' of github.com:davidhalter/jedi into dev 2017-01-23 20:37:34 +01:00
Dave Halter
e7fcc21863 Remove both MergedNodes and AlreadyEvaluated, they are unused. 2017-01-23 20:36:26 +01:00
Dave Halter
7623b1e350 Removed tree.is_node.
It's not needed anymore, because we have Node/Leaf.type now.
2017-01-23 20:34:30 +01:00
Dave Halter
64abe401ed The position modifier is not used anymore. 2017-01-23 20:12:17 +01:00
Dave Halter
d85ceb9222 More cleanups in the parser. 2017-01-23 20:10:02 +01:00
Dave Halter
645841d98c Remove more unused code. 2017-01-23 19:51:30 +01:00
Dave Halter
b286f3aef0 Merge pull request #832 from Carreau/more-docs
Improve some documentation about name_with_symbols, name and completion.
2017-01-23 15:00:45 +01:00
Dave Halter
01b25efea1 Use the same function to detect newlines in the diff parser. 2017-01-23 09:56:38 +01:00
Matthias Bussonnier
0f865a17ef Improve some documentation about name with symbols, name and completion. 2017-01-22 18:41:35 -08:00
Matthias Bussonnier
d3e8a9bd52 Resolve keyword types as keywords in completions
Closes #833
2017-01-22 18:39:32 -08:00
Dave Halter
1caa2ceafa Cannot use the sys.version_info major and minor attribute names, because in
Python 2.6 it's not a namedtuple.
2017-01-23 01:09:01 +01:00
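The portable form indexes the tuple instead (plain tuple indexing works everywhere, including 2.6):

    import sys

    # sys.version_info became a namedtuple with .major/.minor in
    # Python 2.7/3.1; on 2.6 only indexing is available.
    major, minor = sys.version_info[0], sys.version_info[1]
    is_py3 = major == 3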
Dave Halter
8d2ec6556e Fix a Python 2.7 issue. 2017-01-23 00:36:57 +01:00
Dave Halter
1ff7ecc7af Remove jedi.settings.add_dot_after_module that was removed a while ago from documentation. 2017-01-23 00:12:02 +01:00
Dave Halter
194295066a Fix one more issue in the diff parser. 2017-01-22 23:44:10 +01:00
Dave Halter
08c66207ec Fix the last diff parser test. 2017-01-22 20:27:11 +01:00
Dave Halter
dca35393d5 Remove old code from the diff parser. 2017-01-22 20:22:20 +01:00
Dave Halter
8f4b862892 Fix most diff tests. 2017-01-22 20:13:18 +01:00
Dave Halter
005b24ed54 Better handling of the stack. 2017-01-21 18:43:54 +01:00
Dave Halter
21cd10cefd Get a few diff tests passing. 2017-01-20 20:46:30 +01:00
Dave Halter
73b2287fb4 Fix some tests. 2017-01-20 18:12:09 +01:00
Dave Halter
ebfae050a8 Delete a lot of duplicate code. 2017-01-19 18:31:53 +01:00
Dave Halter
ef31c3d1f4 Some asserts pass now in the tests. 2017-01-19 18:26:50 +01:00
Dave Halter
3bf5f93edd Progress in using a stack in the diff parser. 2017-01-19 09:44:07 +01:00
Dave Halter
fe44458ec0 Start implementing the node stack. 2017-01-16 16:32:49 +01:00
Dave Halter
98185d530e Simplify deep_ast_parser a lot. 2017-01-12 21:51:02 +01:00
Dave Halter
ad1222e6d7 Fix another parser bug. 2017-01-12 08:46:58 +01:00
Dave Halter
0141711af8 Diff parser docstring. 2017-01-10 19:17:37 +01:00
Dave Halter
425fba5e95 Move the parser.fast module to parser.diff. 2017-01-10 19:15:47 +01:00
Dave Halter
1edccbe2c3 Improve literal tests. 2017-01-08 19:52:21 +01:00
Dave Halter
7300f3e7ef Fix issues with Python 3.6's f strings and underscores in numbers. 2017-01-08 19:39:14 +01:00
Dave Halter
00a9f1ec0a Update the tokenizer to include f literals and underscores. Need tests still. 2017-01-08 16:03:45 +01:00
Dave Halter
3f09f3a304 Add support for PEP 0526.
This makes it possible to annotate variable assignments like

    asdf: typing.List[int] = []
2017-01-08 03:57:35 +01:00
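A quick check of what the annotation buys (a sketch with the old jedi.Script call form; assumes the annotation is picked up even though the assigned value is empty):

    import jedi

    source = ("import typing\n"
              "asdf: typing.List[int] = []\n"
              "asdf.")
    # The PEP 526 annotation marks asdf as a typing.List[int], so
    # list methods should be completed here despite the empty literal.
    completions = jedi.Script(source, 3, len('asdf.')).completions()
    print([c.name for c in completions][:5])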
Dave Halter
6d00a5702f If newer Python versions are using Jedi (e.g. at the moment Python 3.7), it shouldn't just result in a grammar issue because that grammar doesn't exist yet. Just take the Python 3.6 grammar instead. 2017-01-07 15:54:04 +01:00
Dave Halter
aff3950085 Better async testing. 2017-01-07 15:40:55 +01:00
Dave Halter
6a18ddf8cb Fix await issues. At the moment Jedi is just ignoring await statements. 2017-01-07 15:27:32 +01:00
Dave Halter
d3c437e891 Restructure yield code to make it less error prone. 2017-01-07 12:43:15 +01:00
Dave Halter
1f15ee8bc7 Fix an issue with contexts. 2017-01-06 00:08:01 +01:00
Dave Halter
ae8e43d3c7 Move get_node() to tree_node and replace all the custom classdefs/funcdefs. 2017-01-05 23:43:12 +01:00
Dave Halter
b44f0aae5d Remove the origin_scope from filters that don't need it. 2017-01-05 21:57:06 +01:00
Dave Halter
89ec207f49 Add a failing test for an inheritance context completion issue. 2017-01-05 21:50:15 +01:00
Dave Halter
9fb2644f03 Fix an issue with creating contexts. 2017-01-05 18:05:24 +01:00
Dave Halter
12a9ef48f7 Move the completion tests. 2017-01-04 22:34:43 +01:00
Dave Halter
9341df11bf Fix the issues caused by removing start_pos from the api classes. 2017-01-04 22:24:25 +01:00
Dave Halter
e96fd32588 Fix an issue with param completion signatures. 2017-01-04 22:09:08 +01:00
Dave Halter
55ec47f15f Test module attributes. 2017-01-04 18:32:16 +01:00
Dave Halter
0caeef2589 Remove the deprecated start_pos. 2017-01-04 18:23:41 +01:00
Dave Halter
01099ce5a9 Create a name for the generators. 2017-01-04 18:12:33 +01:00
Dave Halter
cd23499fbe Fix param issues in goto definition. 2017-01-04 08:58:29 +01:00
Dave Halter
24457bfe2e Fix some usage cases of comprehensions. 2017-01-03 02:15:04 +01:00
Dave Halter
306fd5b95b Fix a recursion issue. 2017-01-02 23:57:59 +01:00
Dave Halter
c7241068e8 Fix an issue with call signatures in empty files. 2017-01-02 19:39:48 +01:00
Dave Halter
5d071ede8c Fix the typing module issues in Python 3.6. 2017-01-02 15:01:12 +01:00
Dave Halter
5b9e5f96aa Merge with master. 2017-01-02 13:05:45 +01:00
Dave Halter
96bb9e3c1a LazyContext.infer() should return a set. 2017-01-02 12:15:09 +01:00
Dave Halter
f05c0714c7 Merge pull request #823 from fsouza/grammar-3.6
parser: add grammar3.6
2017-01-02 12:13:56 +01:00
Francisco Souza
65371ca59a travis: add support for Python 3.6 and use tox-travis 2016-12-31 09:54:26 -05:00
Francisco Souza
14e722ff13 parser: add grammar3.6
I manually replicated the changes that were applied to grammar3.5.
2016-12-30 22:18:03 -05:00
Dave Halter
375fcd9e66 Fix an issue with nested flows in the diff parser. 2016-12-31 03:12:56 +01:00
Dave Halter
29e286488b Fix the last remaining fail in the diff parser. 2016-12-30 21:13:44 +01:00
Dave Halter
c4eec88fc9 Reenable some tests that were somehow disabled. 2016-12-30 20:47:40 +01:00
Dave Halter
61c7444185 Fix some more issues in the diff parser that caused it to completely crash. 2016-12-30 20:13:05 +01:00
Dave Halter
b3f9b9eed3 Trying to refactor divide_nodes into a more general state. 2016-12-23 18:20:09 +01:00
Dave Halter
f437ce5ae7 Some diff parser refactorings. 2016-12-22 09:13:14 +01:00
Dave Halter
464968aed7 Fix an issue where compiled object api types raised an error. 2016-12-21 00:23:50 +01:00
Dave Halter
90b76ee3ec Fix an issue in the diff parser. 2016-12-20 23:32:51 +01:00
Dave Halter
68fabc3048 Increase the parser pickling version. 2016-12-20 09:51:13 +01:00
Dave Halter
fda0f80573 Remove global_names from modules. 2016-12-18 22:21:06 +01:00
Dave Halter
7385bd0716 Add the latest changes to the change log. 2016-12-18 21:45:03 +01:00
Dave Halter
4e62e98539 Fix an issue with displaying attribute errors. 2016-12-18 21:37:15 +01:00
Dave Halter
7fdcdbbd5c Temporarily execute a string, because currently not everything is working properly. 2016-12-18 19:17:25 +01:00
Dave Halter
b7ae8a746c Simplify recursion issues. 2016-12-18 17:24:20 +01:00
Dave Halter
0daf3e4e9f Remove an unused recursion decorator. 2016-12-17 18:13:53 +01:00
Dave Halter
81e9403aef Delete more unused code. 2016-12-17 18:00:54 +01:00
Dave Halter
9cbfb76eb5 Fix getting the names for specific scopes in jedi.names. 2016-12-17 17:50:41 +01:00
Dave Halter
5c52c7fb45 Completely remove names_dicts from the diff parser. 2016-12-17 17:15:44 +01:00
Dave Halter
589e1906e4 Cleanup the finder. 2016-12-17 16:59:21 +01:00
Dave Halter
437f915f35 Delete a lot of names dict related stuff but also other things that were not used anymore. 2016-12-17 16:51:28 +01:00
Dave Halter
07b58bc549 Cleanup function executions. 2016-12-17 16:44:26 +01:00
Dave Halter
1ef507e5e6 Cleanup some code and a small Python 2 fix. 2016-12-17 16:42:52 +01:00
Dave Halter
75e09baee9 Some Python 2 fixes. 2016-12-17 16:19:01 +01:00
Dave Halter
880aa152fb Fix an import test and with this finally the whole 3.3 test suite is working again. 2016-12-17 16:15:23 +01:00
Dave Halter
57857b6332 Remove the ImportWrapper and replace it with something simpler. 2016-12-17 16:08:37 +01:00
Dave Halter
173c939956 Add a comment. 2016-12-17 14:29:53 +01:00
Dave Halter
6bccbb562a Fix some utils completions. 2016-12-17 14:25:52 +01:00
Dave Halter
ce0a02f6c1 Fix an issue with executed python objects. 2016-12-17 14:08:49 +01:00
Dave Halter
e0b3ec1829 Use different Foo classes to avoid confusion around which class is used where. 2016-12-17 13:04:19 +01:00
Dave Halter
d93f6815fc Refactor test_interpreter. 2016-12-16 18:55:21 +01:00
Dave Halter
5fb5580259 Fix a few things that were broken by the mixed object refactoring. 2016-12-16 17:29:37 +01:00
Dave Halter
9ac301d0c3 Refactor the mixed objects a bit to make at least some interpreter tests pass. 2016-12-16 17:17:03 +01:00
Dave Halter
575352d4b6 Start cleaning up the interpreter module. 2016-12-15 09:49:34 +01:00
Dave Halter
a4fdc716b0 Improve a doctest. 2016-12-15 01:07:44 +01:00
Dave Halter
6fe9971122 Remove a debugging statement. 2016-12-15 00:34:50 +01:00
Dave Halter
edf1c319c6 Fix all remaining static analysis tests. This time we have just hacked around and added proper contexts to the iterables. It's not as clean as it could be. 2016-12-15 00:34:14 +01:00
Dave Halter
3a84e04df7 Remove unused code. 2016-12-15 00:26:47 +01:00
Dave Halter
7084d9ab89 Fix param/argument static analysis. 2016-12-15 00:25:10 +01:00
Dave Halter
6c4abcc84c Fix some more issues with imports and attribute warnings of static analysis. 2016-12-14 01:35:55 +01:00
Dave Halter
4074ca1e84 Fix some static analysis tests like attribute errors and normal arguments. 2016-12-14 01:04:57 +01:00
Dave Halter
03525bbef5 Renamed a static analysis test case to be able to better execute that one explicitly. 2016-12-13 18:13:33 +01:00
Dave Halter
59a432bddd Fix flow analysis a bit in static analysis cases. 2016-12-13 18:07:41 +01:00
Dave Halter
eaf0100446 Some analysis improvements. 2016-12-11 15:03:19 +01:00
Dave Halter
2be5da3f85 Fix some old regression tests. 2016-12-07 01:30:30 +01:00
Dave Halter
97ccb74ebb Api classes test fixes. 2016-12-07 01:00:03 +01:00
Dave Halter
c6248ae169 Some testing fixes that were broken with the few previous commits. 2016-12-06 18:18:53 +01:00
Dave Halter
becf1027c0 Refactor our create_context constructs. 2016-12-06 09:51:57 +01:00
Dave Halter
cb4f405f7d Fix some internal name handling. 2016-12-05 22:27:18 +01:00
Dave Halter
fe64df2e42 Fix displaying param names for classes. 2016-12-05 18:12:23 +01:00
Dave Halter
6736b1a5ce Fix some docstring stuff. 2016-12-05 09:42:51 +01:00
Dave Halter
641ecedcd2 Improve a few anonymous function execution context goto issues. 2016-12-04 22:35:23 +01:00
Dave Halter
6f4cd7e6d3 Improve api class tests. 2016-12-04 20:04:54 +01:00
Dave Halter
439e394535 Fix call signatures. 2016-12-04 03:52:33 +01:00
Dave Halter
6940900c58 A lot more fixes - fix all evaluate integration tests. 2016-12-03 22:17:38 +01:00
Dave Halter
ee1f077014 Some test refactorings. 2016-12-03 14:32:00 +01:00
Dave Halter
69c23ac113 Fix yield from in python 3. 2016-12-03 13:47:49 +01:00
Dave Halter
760f900560 Fix a python3 issue with py__file__. 2016-12-03 13:41:55 +01:00
Dave Halter
f355c04cae Finally fixed all black box tests in python 2. 2016-12-03 13:37:51 +01:00
Dave Halter
2edbe44d64 Fix some next() stuff. 2016-12-03 03:45:27 +01:00
Dave Halter
7607db801f Rewrite the next function. 2016-12-03 02:54:09 +01:00
Dave Halter
da1a163da7 Fix python 2 string iterators. 2016-12-03 02:08:40 +01:00
Dave Halter
ba8a3215f2 Fix some issues with usages and imports. 2016-12-02 23:51:01 +01:00
Dave Halter
9d4786ddcb Refactor the descriptor logic. 2016-12-02 22:13:45 +01:00
Dave Halter
565989cf07 More small bug fixes. 2016-12-02 15:21:50 +01:00
Dave Halter
dfc06dfe83 A lot of small bug fixes. 2016-12-02 15:08:54 +01:00
Dave Halter
16a48a7a45 Fix a lot of list comprehensions. 2016-12-02 11:17:55 +01:00
Dave Halter
dac372405e Fix some isinstance checks. 2016-11-30 00:26:12 +01:00
Dave Halter
d74e48dae2 Fix context completions and super calls. 2016-11-29 18:38:04 +01:00
Dave Halter
60234e68ca Fixed sys path scanning again. 2016-11-29 18:28:28 +01:00
Dave Halter
ba52ecd0df Some isinstance/flow analysis improvements. 2016-11-29 18:19:15 +01:00
Dave Halter
5b81a2375d More tests and a better understanding of name resolution in if/try branches. 2016-11-29 10:21:50 +01:00
Dave Halter
481a917ada Remove the wrapper from the class. 2016-11-28 09:49:37 +01:00
Dave Halter
f6070496ad Fixes to the isinstance tests. 2016-11-28 09:34:59 +01:00
Dave Halter
c3f6172ca2 Fix dynamic arrays. 2016-11-27 21:51:05 +01:00
Dave Halter
558e8add49 Fix complex instance name resolving. 2016-11-27 11:51:12 +01:00
Dave Halter
4ad0179b72 Fix named param completion and add a few tests. 2016-11-27 11:27:30 +01:00
Dave Halter
0b5f8ccc21 Remove memoize_default from api. 2016-11-26 17:50:21 +01:00
Dave Halter
898fefcb17 Fix dynamic params. 2016-11-26 16:53:44 +01:00
Dave Halter
c1b7acc9ac Finally get rid of context.type. 2016-11-26 10:32:44 +01:00
Dave Halter
2161be2dcb Fix side effect issues with predefined names and lazy contexts. 2016-11-26 10:16:26 +01:00
Dave Halter
fe54285311 Only 125 fails left in the integration tests. 2016-11-26 00:25:31 +01:00
Dave Halter
bad1f85f8f Improvements towards arrays / predefined names logic. 2016-11-25 23:31:45 +01:00
Dave Halter
8fd08c86b7 Fix some mostly iterable related stuff. 2016-11-24 21:06:55 +01:00
Dave Halter
75e412dbc5 Remove the old instance. 2016-11-24 19:59:26 +01:00
Dave Halter
7ed1c95737 Fix dynamic param tests. 2016-11-24 00:11:26 +01:00
Dave Halter
06efc8fb8c Fixing lambdas. 2016-11-20 22:09:45 +01:00
Dave Halter
cbd6713b5e Fix a lot of the import completion issues. 2016-11-20 16:37:02 +01:00
Dave Halter
e79ebe3ee7 Usage fixes for imports. 2016-11-19 03:05:10 +01:00
Dave Halter
b77fa58058 Fix most usage tests. 2016-11-19 02:24:34 +01:00
Dave Halter
05581714d9 Fix goto tests. 2016-11-17 23:28:47 +01:00
Dave Halter
d15016c5c1 Fix the whole test suite of descriptors. 2016-11-16 22:57:07 +01:00
Dave Halter
ba03f1dcb9 Fix properties. 2016-11-16 21:16:12 +01:00
Dave Halter
21e17b7762 Fix an issue with dicts. 2016-11-16 09:49:00 +01:00
Dave Halter
af7c13d2e6 List comprehensions now at least don't cause errors anymore. 2016-11-16 09:43:45 +01:00
Dave Halter
f672b367da Fix most of PEP 484. 2016-11-15 00:05:08 +01:00
Dave Halter
31514dfe76 Fix a lot on PEP 484. 2016-11-14 23:29:09 +01:00
Dave Halter
03aa630932 Fixed all generator tests. 2016-11-14 20:57:46 +01:00
Dave Halter
65d3e29146 Docstring fixes. 2016-11-13 20:23:17 +01:00
Dave Halter
b2bdfe4a28 Array fixes except for conversions. 2016-11-13 16:18:46 +01:00
Dave Halter
21cfe4fc21 Fix the decorator issues. 2016-11-13 04:07:23 +01:00
Dave Halter
cf00c83ed8 Fix parameter issues in functions that are defined on self. 2016-11-13 03:29:36 +01:00
Dave Halter
cb86cdec68 Small fixes to the origin_scopes. 2016-11-12 17:01:26 +01:00
Dave Halter
853f9fdb08 Some more fixes towards basic stuff. 2016-11-12 14:23:44 +01:00
Dave Halter
469d6940a7 Fix global statements. 2016-11-12 13:11:54 +01:00
Dave Halter
7848be97ab Some improvements towards iterators. 2016-11-12 03:53:25 +01:00
Dave Halter
9369d264d4 Fix all imports inference tests. 2016-11-12 03:26:30 +01:00
Dave Halter
806a43d085 Implement submodule name completion. 2016-11-12 03:01:07 +01:00
Dave Halter
e4ea9bb630 Some import fixes. 2016-11-12 01:01:47 +01:00
Dave Halter
2ba78ab725 Merge pull request #799 from Cologler/master
fix source code read issue.
2016-11-11 15:04:41 +01:00
Dave Halter
052f6bf9e7 Fix some small import stuff and the whole ordering tests. 2016-11-11 00:45:16 +01:00
Dave Halter
14a8377817 Fix all class issues. 2016-11-10 23:43:07 +01:00
Dave Halter
cfc988b786 Fix super calls. 2016-11-10 09:00:24 +01:00
Dave Halter
6899c8f646 Fix __getattribute__ and __getattr__. 2016-11-09 22:23:04 +01:00
cologler
a51475d265 Update __init__.py
fix the source file read mode issue.
if 'r' mode is used, this may raise a `UnicodeDecodeError`.
2016-11-09 07:48:45 +08:00
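A minimal sketch of the pattern the fix points at, assuming a UTF-8 default with lenient error handling; the commit's actual diff may differ.

```python
def read_source(path):
    # Reading in binary and decoding explicitly avoids the
    # UnicodeDecodeError that text ('r') mode can raise when the
    # locale encoding doesn't match the file's actual encoding.
    with open(path, 'rb') as f:
        return f.read().decode('utf-8', errors='replace')
```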
Dave Halter
20380e80b0 Remove type(self).__name__ occurrences, because Python 2 will always output 'instance'. 2016-11-08 19:18:03 +01:00
Dave Halter
4a8fd73601 Fixing getattr tests. 2016-11-07 20:15:58 +01:00
Dave Halter
40f599c3b6 Progress in creating anonymous instances. 2016-11-07 01:11:55 +01:00
Dave Halter
81ccedc353 Fix an issue with lookups. 2016-11-07 00:35:04 +01:00
Dave Halter
7f95495ca5 More instance improvements. 2016-11-06 23:50:29 +01:00
Dave Halter
cd4a7a9fc3 Implementation of BoundMethod. 2016-11-06 22:33:22 +01:00
Dave Halter
afac66d82c Working on __init__. 2016-11-06 18:28:04 +01:00
Dave Halter
5ef874796a Merge pull request #790 from blueyed/fix-goto_definitions-for-derived-class
Fix goto_definitions being invoked on a parent class
2016-11-06 13:55:03 +01:00
Dave Halter
2b753b642d Merge pull request #788 from blueyed/tests-tox-posargs
tox: use posargs and test by default for py.test command
2016-11-05 10:43:30 +01:00
Dave Halter
eb558e0e09 Merge pull request #794 from blueyed/doc-fixes
Improve documentation in test/run.py
2016-11-05 10:42:42 +01:00
Daniel Hahler
94dc563d8a tox: use posargs ("jedi test") for py.test command
Without this it would collect tests from other dirs also by default.
2016-11-04 14:31:43 +01:00
Daniel Hahler
a4aabc2b65 Improve documentation in test/run.py 2016-11-04 14:28:27 +01:00
Daniel Hahler
78573b8fa2 Fix goto_definitions being invoked on a parent class
When invoking `goto_definitions` on `RequestFactory` in line 5, it would
jump to `Client` after 27f05de:

```python
class RequestFactory(object):
    pass

class Client(RequestFactory):
    pass
```

Fixes https://github.com/davidhalter/jedi/issues/761.
2016-11-04 13:00:23 +01:00
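An illustrative reproduction with the API of that era; the file name and the line/column values are only examples for the snippet above.

```python
import jedi

source = '''class RequestFactory(object):
    pass

class Client(RequestFactory):
    pass
'''
# goto_definitions on "RequestFactory" in Client's base list
# (line 4, column 13) should point back at line 1, not at Client.
script = jedi.Script(source, 4, 13, 'example.py')
print(script.goto_definitions())
```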
Dave Halter
82667b85b9 Publicize the _evaluator in contexts. 2016-11-03 09:54:47 +01:00
Dave Halter
63b6fa1416 All function tests are passing, yay! 2016-11-03 09:43:24 +01:00
Dave Halter
7291413696 More fixes for arrays. 2016-11-02 16:35:14 +01:00
Dave Halter
694a2e0769 Cleanup even more param magic. 2016-11-02 16:29:32 +01:00
Dave Halter
cd874cb052 Trying to get dynamic params working. 2016-11-02 11:11:21 +01:00
Dave Halter
aaaa3c24a5 Listeners should not be part of the parser tree. This is logic that belongs to the evaluation. 2016-11-02 09:22:19 +01:00
Dave Halter
f57455f0ad Deprecate Evaluator.wrap. 2016-11-01 23:38:06 +01:00
Dave Halter
2eb701d2d2 Some class fixes. 2016-11-01 18:28:47 +01:00
Dave Halter
9a55c9cf50 Most function calls seem to work now. 2016-11-01 00:44:57 +01:00
Dave Halter
4aec9cadd7 Function **kwargs fixes. 2016-11-01 00:23:44 +01:00
Dave Halter
6d8ff9293d Fixes to decorators and *arg functions. 2016-10-31 09:19:58 +01:00
Dave Halter
c537d360f3 More fixes to *args type inference. 2016-10-30 01:35:36 +02:00
Dave Halter
3cce530ef4 Taking a stab at simple *args and generators. 2016-10-29 02:11:04 +02:00
Dave Halter
bbb1d1e04c Better working flow scopes. 2016-10-28 00:36:17 +02:00
Dave Halter
a620c7dbdb Try to get star arguments working just a little bit. 2016-10-27 18:14:20 +02:00
Dave Halter
bcaf06399f Fix another execute issue. 2016-10-25 18:17:07 +02:00
Dave Halter
90af0c36e0 Function -> FunctionContext and fakes use the FunctionContext, too. 2016-10-25 09:59:42 +02:00
Dave Halter
64b6396d19 Fix one array usage. 2016-10-24 09:58:40 +02:00
Dave Halter
5b1d62a11e Fix the recursion detection. 2016-10-24 01:03:17 +02:00
Dave Halter
e34246eb00 Fix __call__. 2016-10-24 00:39:59 +02:00
Dave Halter
7a347898dd Merge pull request #787 from blueyed/egg-link-before
sys_path: prepend/prefer egg-link files
2016-10-23 18:07:44 +02:00
Dave Halter
0392524dfc Merge pull request #789 from blueyed/parser.utils.clear_cache-clear-self.__index
parser.utils.clear_cache: clear self.__index
2016-10-23 18:03:43 +02:00
Dave Halter
0475bb5fd0 First function execution that is working. 2016-10-23 03:02:57 +02:00
Dave Halter
75b67af000 Starting to improve function calls. 2016-10-22 21:02:15 +02:00
Dave Halter
2e6603cc2e A lot of small improvements. 2016-10-22 17:40:42 +02:00
Dave Halter
4ccfbb4962 Use super().__getattribute__ instead of custom AttributeErrors in __getattr__. 2016-10-21 02:12:54 +02:00
Dave Halter
cad9ae8ab1 Start implementing contexts with specialized TreeNameDefinitions to avoid gambling with the parser. 2016-10-20 19:36:44 +02:00
Dave Halter
3654de97b0 Better filter for arrays. 2016-10-16 14:57:08 +02:00
Dave Halter
0a0cb2a722 Fix generators. 2016-10-16 04:17:11 +02:00
Dave Halter
4ca3556c3b Fix the fundamentally wrong cache. 2016-10-16 04:04:31 +02:00
Dave Halter
129c669bc0 Fix private variables in filters. 2016-10-15 19:12:46 +02:00
Dave Halter
5c0b2d7aae Fixed a lot of class tests. 2016-10-14 18:09:29 +02:00
Daniel Hahler
a5480c054d parser.utils.clear_cache: clear self.__index
This fixes a potential FileNotFoundError when clearing the cache
manually, using the method from
https://github.com/davidhalter/jedi-vim/pull/625.

Traceback:

```
  File "…/jedi/evaluate/imports.py", line 342, in _do_import
    module = _load_module(self._evaluator, module_path, source, sys_path, parent_module)
  File "…/jedi/evaluate/imports.py", line 457, in _load_module
    cached = load_parser(path)
  File "…/jedi/parser/utils.py", line 72, in load_parser
    return ParserPickling.load_parser(path, p_time)
  File "…/jedi/parser/utils.py", line 126, in load_parser
    with open(self._get_hashed_path(path), 'rb') as f:
FileNotFoundError: [Errno 2] No such file or directory: '…/.cache/jedi/cpython-35/759d60e96c76f41ffd882d9b8d844899.pkl'
```
2016-10-14 14:25:18 +02:00
Daniel Hahler
f7f966805f sys_path: prepend/prefer egg-link files
With `pip install -e` the generated .egg-link file gets preferred over
any normally installed distribution, and `pip uninstall` will first
remove the egg-link before the normal package.
2016-10-14 14:18:09 +02:00
Dave Halter
1752598353 A small class name lookup improvement. 2016-10-12 15:54:54 +02:00
Dave Halter
fce0eff18a Get rid of all names_dicts in the completion api. 2016-10-12 03:40:24 +02:00
Dave Halter
482103e796 Replace names_dicts with filters in trailer completion. 2016-10-12 02:34:50 +02:00
Dave Halter
862e4a6176 Add filters for the sub module dicts and module attributes dicts. 2016-10-12 02:19:52 +02:00
Dave Halter
5f46b48433 Add a filter for global names. 2016-10-11 16:01:26 +02:00
Dave Halter
37ba971787 Trying to start implementing instance filters. 2016-10-06 16:35:53 +02:00
Dave Halter
2f1e9d634f FunctionExecution improvement. 2016-10-02 19:54:03 +02:00
Dave Halter
c2873792eb Filters for compiled objects and also FunctionExecution. 2016-10-02 15:36:24 +02:00
Dave Halter
249049b10c Start using filters for name resolution. 2016-09-30 13:29:20 +02:00
Dave Halter
a96eec8058 Fix an issue with mixed objects. 2016-09-27 01:28:42 +02:00
Dave Halter
9b85d5517f Fix more issues in the diff parser. 2016-09-27 00:29:11 +02:00
Dave Halter
09a5f27068 Add a test for wrong whitespace. 2016-09-26 23:56:37 +02:00
Dave Halter
c728148ece Fix an issue with dividing suites and remove a lot of print statements. 2016-09-23 17:31:29 +02:00
Dave Halter
e371b670f5 Remove a comprehension hack in the parser that would have made the diff parser's world hell. 2016-09-22 18:26:09 +02:00
Dave Halter
c161e33119 Fix one more issue with the fast parser. 2016-09-21 20:36:54 +02:00
Dave Halter
6eb3b15e9b Make a test a bit more testable (more flexible). 2016-09-21 18:13:18 +02:00
Dave Halter
37e3e79faa Check in the diff tests that the graph is valid. 2016-09-19 05:41:59 +02:00
Dave Halter
ccc325616a Temporarily fix an issue with list comprehensions. 2016-09-19 05:28:35 +02:00
Dave Halter
8aeeaec9c3 Remove some print statements 2016-09-18 20:43:52 +02:00
Dave Halter
b594a7d861 Merge pull request #776 from Alexey-T/patch-1
For Py3.5 embeddable, which misses pydoc_data module
2016-09-18 18:58:34 +02:00
Dave Halter
959f7b5e00 We don't need to reset the last failed start_pos anymore, because this was only necessary with the broken old parser. 2016-09-18 00:52:22 +02:00
Dave Halter
885cf62a12 Remove the position_modifier from the parser. 2016-09-18 00:50:31 +02:00
Uvview
e08209f35e For Py3.5 embeddable, which misses pydoc_data module 2016-09-17 03:47:56 +04:00
Dave Halter
ed71d05ed7 Small test changes. 2016-09-15 09:37:35 +02:00
Dave Halter
74058fbf28 Finally passing all diff parser tests. 2016-09-15 01:26:28 +02:00
Dave Halter
8132055428 Fix an issue with parser endings and therefore adapt a few tests. 2016-09-14 17:23:49 +02:00
Dave Halter
47028c947a Better debugging and solving a test with for stmts. 2016-09-13 20:34:02 +02:00
Dave Halter
f1a45ee4e6 Some error leaf handling. 2016-09-13 09:37:59 +02:00
Dave Halter
70e3719fb9 Small bug fixes. 2016-09-12 02:26:45 +02:00
Dave Halter
2eeafe23f8 Use differ for all diff tests. 2016-09-12 02:26:29 +02:00
Dave Halter
994e6615b1 Ifs in two directions. 2016-09-11 22:42:47 +02:00
Dave Halter
dfdda4a2f1 Copying an if (and other flows) is now working. 2016-09-11 21:51:44 +02:00
Dave Halter
c764976ef2 Merge branch 'remove_names_dicts' into diff 2016-09-11 13:24:11 +02:00
Dave Halter
7667cba17e Remove old indent/dedent usages. Now they are not needed anymore. 2016-09-11 13:20:24 +02:00
Dave Halter
1226962922 Remove dedents from the parser tree. No need for them. 2016-09-11 13:03:29 +02:00
Dave Halter
cc5a2cd219 Small changes. 2016-09-09 17:38:07 +02:00
Dave Halter
5923765369 get_parsed_node should return the right thing. 2016-09-08 18:14:13 +02:00
Dave Halter
024a97e59c Better end positions. 2016-09-08 09:52:42 +02:00
Dave Halter
91ed1da6f4 Better testing. 2016-09-08 00:17:54 +02:00
Dave Halter
20b4f6c363 Rework the parents when dividing nodes. 2016-09-05 18:04:53 +02:00
Dave Halter
f353c79528 Some passing tests for the new diff parser (the old fast parser tests). 2016-09-05 00:42:41 +02:00
Dave Halter
00a8b3e4f1 Some more tests are passing. 2016-09-03 03:06:38 +02:00
Dave Halter
2f6ba2a7ae Split the old fast parser tests. 2016-09-02 13:49:44 +02:00
Dave Halter
24605a750e Finally a fast parser test passing. 2016-09-01 00:42:38 +02:00
Dave Halter
79c2d017db A simplification. 2016-08-31 09:51:51 +02:00
Dave Halter
d505c764de First time a test partially passes of the new fast parser. 2016-08-30 23:12:24 +02:00
Dave Halter
42e5777620 Some progress and bugfixes. 2016-08-26 12:47:02 +02:00
Dave Halter
be2a97cd36 Merge pull request #764 from DonJayamanne/patch-1
update usage.rst
2016-08-25 14:16:03 +02:00
Don Jayamanne
85970d25f9 updated as per code review comments 2016-08-25 20:57:35 +10:00
Don Jayamanne
b82687642d update usage.rst
Updated to make reference to VS Code using this library.
2016-08-24 20:27:40 +10:00
Dave Halter
1e5ad467d3 Start debugging the beast. 2016-08-23 18:24:58 +02:00
Dave Halter
1126a6c871 Merge branch 'dev' into diff 2016-08-22 18:03:35 +02:00
Dave Halter
d748f6fad6 Forgot to add the splitlines test. 2016-08-22 18:03:19 +02:00
Dave Halter
16feea9daf used_names copying. 2016-08-22 09:26:12 +02:00
Dave Halter
70220171fa names_dict merging progress. 2016-08-20 14:21:37 +02:00
Dave Halter
37712ace9c Care about more detailed issues in the diff parser. 2016-08-18 01:21:16 +02:00
Dave Halter
54297cc4a5 Most of the new diff parser's functionality should be working now. There are a few TODOs to solve, though. 2016-08-16 18:58:28 +02:00
Dave Halter
b9040870c0 Some ideas for a diff parser. 2016-08-14 00:23:40 +02:00
Dave Halter
8a34481e8c Merge pull request #758 from blueyed/doc-goto_assignments
doc: fix goto_assignments, which can follow imports now
2016-08-13 23:07:49 +02:00
Daniel Hahler
171873761b doc: fix goto_assignments, which can follow imports now 2016-08-13 09:04:57 +02:00
Dave Halter
721195157a Add the keepends parameter to common.splitlines. 2016-08-07 16:57:53 +02:00
Dave Halter
2ae3aee7d0 Increase parser pickling version to reduce bugtracker issues in the future with people upgrading git commits. 2016-08-07 13:05:14 +02:00
Dave Halter
ebd080a0fd Implement goto_assignments(follow_imports=True). Fixes #382. 2016-08-03 18:05:08 +02:00
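An illustrative call; the flag name comes straight from the commit message, while the source and positions are made up for the example.

```python
import jedi

source = 'from json import loads\nloads'
script = jedi.Script(source, 2, 3, 'example.py')
# With follow_imports=True, goto_assignments jumps through the
# import to where `loads` is actually defined.
print(script.goto_assignments(follow_imports=True))
```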
Dave Halter
c1bef454f5 Restructure namedtuple tests a bit. 2016-08-03 09:21:51 +02:00
Dave Halter
7c5e75f31b Make it possible to debug the REPL. 2016-08-02 23:21:53 +02:00
Dave Halter
05ad8c6608 Start working on param autocompletion for the REPL. 2016-08-01 23:59:49 +02:00
Dave Halter
9acb5cf1b3 Make it possible to do class context completions even for non functions. Fixes #639. 2016-08-01 23:13:28 +02:00
Dave Halter
abaa9732eb Merge branch 'dev' of https://github.com/bstaint/jedi into dev 2016-08-01 15:17:52 +02:00
Dave Halter
51802e9784 Fix a test that was actually wrong in Python 2.7 (not working). 2016-08-01 14:57:58 +02:00
Dave Halter
add5b68269 Try to get travis working again. 2016-08-01 14:47:03 +02:00
bstaint
e35a9ff389 Replace multiple slashes. 2016-08-01 10:42:36 +08:00
Dave Halter
6440e33512 Fix an issue with magic methods on classes. Fixes #461. 2016-07-31 23:42:16 +02:00
Dave Halter
647a4db326 Autocomplete inherited methods when overriding in child class. Fixes #458. 2016-07-31 23:09:50 +02:00
Dave Halter
62e184134b Fix __call__ param completion. Fixes #613. 2016-07-31 21:37:01 +02:00
Dave Halter
5f064a2a0a Add a way to get the line in a BaseDefinition. Fixes #518. 2016-07-31 20:37:48 +02:00
bstaint
b31b456dd4 Fixed Windows slashes problem. 2016-08-01 01:13:39 +08:00
Dave Halter
6f598b1157 Use the memoize function for faked arguments only when needed.
It's important to note that memoizing every object would mean that
theoretically all objects passed through get_faked would get memoized. This
would have been a possible memory leak, which we should avoid.
Obviously the previous solution proposed in #649 was still better, but this
issue was a new one. Also using str() around keys was not a good idea.

Refs #649.
2016-07-31 15:02:30 +02:00
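A toy sketch of the leak being avoided, not the commit's code: caching on the objects themselves pins them in memory, while weak references let cache entries die with their keys.

```python
import weakref

class Fake(object):
    """Hypothetical stand-in for a faked-argument wrapper."""

_cache = weakref.WeakKeyDictionary()  # entries vanish with their keys

def get_faked(obj):
    # A plain dict here would keep alive every object ever passed
    # in, which is the "possible memory leak" mentioned above.
    try:
        return _cache[obj]
    except KeyError:
        result = _cache[obj] = Fake()
        return result
```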
Dave Halter
7b58ffcfd1 Merge branch 'bugfix/performances_degradation' of https://github.com/ColinDuquesnoy/jedi into dev 2016-07-31 13:51:24 +02:00
Dave Halter
524a13ba26 Proof that docstring inference is working even on renamed imports. Fixes #507. 2016-07-31 12:14:44 +02:00
Dave Halter
a4edf5d5d1 Test lambdas better. 2016-07-31 11:41:39 +02:00
ColinDuquesnoy
600a087446 Merge branch 'dev' into bugfix/performances_degradation
# Conflicts:
#	test/test_regression.py
2016-07-30 16:52:17 +02:00
Dave Halter
2b4b5f069b Docstring should also be evaluated in class definitions. Fixes #631. 2016-07-30 14:18:20 +02:00
Dave Halter
15221bc8f5 Make sure that the encoding parameters are always right. 2016-07-30 03:34:24 +02:00
Dave Halter
454c8de7b1 Merge pull request #726 from nakamuray/fix-source_to_unicode_py3_compatibility
fix source_to_unicode py3 compatibility
2016-07-30 03:15:04 +02:00
Dave Halter
320f0dc920 Added @scribu as an author. 2016-07-30 02:45:04 +02:00
Dave Halter
ec51891bb2 Fix nested namespace packages. At least now there's no error anymore. Fixes #743. 2016-07-30 02:44:09 +02:00
scribu
4fbde0001a add test for namespaced packages 2016-07-29 08:31:21 +02:00
Dave Halter
1fa16337b7 Fix an issue with named args goto. 2016-07-29 00:22:24 +02:00
Dave Halter
77fa2928ee Add some completion tests. 2016-07-28 23:16:37 +02:00
Dave Halter
142f6652b5 Move toward ParserWithRecovery for the completion context.
It was simply not possible to do it with the normal parser, because of dedents.
2016-07-28 23:14:24 +02:00
Dave Halter
f605359c16 More comprehension issues. 2016-07-28 18:12:41 +02:00
Dave Halter
1903b31b9a Merge branch 'dev' of github.com:davidhalter/jedi into dev 2016-07-27 23:48:19 +02:00
Dave Halter
230a7bc024 Remove the recursion detection in imports, because it's not needed there anymore. 2016-07-27 23:48:04 +02:00
Dave Halter
092299f537 Fix a recursion issue with nested for loops. 2016-07-27 23:36:44 +02:00
Dave Halter
01e577be8b Move some recursion issues to the recursion module. 2016-07-27 22:23:30 +02:00
Dave Halter
4c6669e081 Fix another issue. 2016-07-27 21:17:12 +02:00
Dave Halter
0a4e858d88 Fix a recursion issue and add a test. 2016-07-27 19:15:03 +02:00
Dave Halter
a6dd7bf822 Fix an issue with inherited lists. 2016-07-26 09:24:51 +02:00
Dave Halter
92dd6df804 Merge pull request #745 from sadovnychyi/dev
Try to correctly import ZIP and EGG files
2016-07-25 20:28:33 +02:00
Dave Halter
fc7fd9d989 Fix some more fast parser issues. 2016-07-25 18:29:01 +02:00
Dmitry Sadovnychyi
60484707a0 Add support for ZIP and EGG packages in imports 2016-07-25 20:29:02 +08:00
Dave Halter
2d544c51c6 Better completions in comments. 2016-07-25 09:58:04 +02:00
Dave Halter
6ed864f032 Merge branch 'dev' of github.com:davidhalter/jedi into dev 2016-07-25 00:16:12 +02:00
Dave Halter
aeb734564c Finally fix all tests. 2016-07-25 00:15:58 +02:00
Dave Halter
ebbaaf7ad2 Fix some more problems with the fast parser. 2016-07-24 23:44:26 +02:00
Dave Halter
7ec957e918 Forgot to include this file in previous commits. 2016-07-24 17:17:03 +02:00
Dave Halter
ff47fab62a Remove Whitespace class and replace it with Newline and Endmarker. 2016-07-24 17:16:36 +02:00
Dave Halter
7f2f66f011 Trying to refactor the completion stack finding. 2016-07-24 17:06:54 +02:00
ColinDuquesnoy
2ea31df5c4 Merge branch 'dev' into bugfix/performances_degradation 2016-07-24 15:54:54 +02:00
Dave Halter
536424159e Merge pull request #747 from ColinDuquesnoy/bump_version_dev
Bump version to 0.10.0.dev0
2016-07-21 17:17:51 +02:00
ColinDuquesnoy
98cd1cccd6 Remove .dev suffix 2016-07-21 11:06:09 +02:00
ColinDuquesnoy
7c8aa51381 Bump version to 0.10.0.dev0 2016-07-21 10:54:29 +02:00
ColinDuquesnoy
07f76a1703 Merge remote-tracking branch 'upstream/dev' into bugfix/performances_degradation
# Conflicts:
#	jedi/evaluate/compiled/fake.py
2016-07-21 10:41:11 +02:00
Dave Halter
cd9a8705a2 Fix a potential issue with the loading of settings. 2016-07-21 00:48:17 +02:00
Dave Halter
42bf193af8 Fix for some small issues with the equals. 2016-07-20 23:24:29 +02:00
Dave Halter
f20df95074 Fix the issues with added equals after params in the wrong places. Fixes #643. 2016-07-20 23:19:05 +02:00
Dave Halter
a2d66579d7 Test for the equals that is added to params sometimes. Refs #582. 2016-07-20 09:27:28 +02:00
Dave Halter
7ee08d01fd Add a TODO. 2016-07-20 09:10:31 +02:00
Dave Halter
b5bd8496b0 Fix the errors for the old octal tests. 2016-07-18 19:28:01 +02:00
Dave Halter
2776af3db5 Fix an issue with global stmts. They caused RecursionErrors when used wrongly. Fixes #610. 2016-07-18 19:23:08 +02:00
Dave Halter
9eee0d6635 Remove misleading/wrong TODO. 2016-07-18 00:02:47 +02:00
Dave Halter
20529d3405 Fix decorator issues with nested decorators and class combinations. Fixes #642. 2016-07-17 23:55:59 +02:00
Dave Halter
4b0e164d91 Add the long forgotten tests for test_usages.py 2016-07-17 22:45:12 +02:00
Dave Halter
2563746810 Fix issues with octals in Python 2 (and possibly 3). Fixes #559. 2016-07-17 22:36:26 +02:00
Dave Halter
68ff520cf8 Limit dynamic param searches to not go crazy in a lot of occasions. Refs #574. 2016-07-17 19:49:43 +02:00
Dave Halter
becbbb2e64 Refactor the dynamic params functionality. 2016-07-17 19:05:47 +02:00
Dave Halter
75c1ebc2fe Add a max_dynamic_params_depth setting to limit recursive searching for those params. It shouldn't be too crazy. 2016-07-17 13:59:19 +02:00
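Illustrative use of the new knob; the setting name is taken from the commit message and the value is arbitrary.

```python
from jedi import settings

# Cap how deep the dynamic param search may recurse; a smaller
# value trades completion quality for speed on large codebases.
settings.max_dynamic_params_depth = 10
```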
Dave Halter
218278af8d Fix an issue with slice indexing. 2016-07-14 18:28:24 +02:00
Dave Halter
3a0008ea80 Simplification. 2016-07-14 08:40:32 +02:00
Dave Halter
cc953ffff0 Goto on trailers wasn't correct. Fixes #571. 2016-07-13 19:15:28 +02:00
Dave Halter
927534a8d5 Strange unicode characters are error tokens. Fixes #587. 2016-07-13 08:53:08 +02:00
Dave Halter
c26f740dcd The import path can be a list as well as a tuple. Deal with it. 2016-07-13 08:33:57 +02:00
Dave Halter
45941a7006 Fix usage tests. 2016-07-12 23:32:33 +02:00
Dave Halter
1d8b71ba56 Add an isinstance test. 2016-07-12 19:31:28 +02:00
Dave Halter
e18c8200dd Fixed an issue with error nodes and completion in more complex settings. 2016-07-11 17:32:00 +02:00
Dave Halter
b1fbc512d8 xfail for a fast parser test that I'm not sure what to do with. 2016-07-11 17:05:59 +02:00
Dave Halter
72634a94b8 Try to use line numbers instead of offsets in the fast parser. 2016-07-11 08:56:30 +02:00
Dave Halter
3ad67a4ec7 Jedi raised an error when defined_names was called on empty functions, fixes #697. 2016-07-10 18:15:06 +02:00
Dave Halter
1c0aa06c7d PEP 3132 unpacking should not raise an error (it may yield wrong results though at the moment), fixes #707. 2016-07-10 17:51:01 +02:00
Dave Halter
7d64069780 An empty path given to Jedi should not raise errors. Fixes #577. 2016-07-09 17:27:57 +02:00
Dave Halter
690241332d Add a changelog for 0.10.0. 2016-07-09 02:40:27 +02:00
Dave Halter
e0cb1346e1 Add basic yield from type inference. References #647. 2016-07-09 02:33:56 +02:00
Dave Halter
5280f567f9 The docstring of import completions was wrong.
This is fixed now. However, since this might massively decrease performance,
it's not enabled by default. You can enable it with `docstring(fast=False)`
(see test changes), but I wouldn't recommend it at this point.

Fixes #656.
2016-07-09 01:04:15 +02:00
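Illustrative usage of the flag named in the commit (off by default because of the performance cost); source and positions are made up.

```python
import jedi

script = jedi.Script('import json\njson', 2, 4, '')
definition = script.goto_definitions()[0]
# fast=False follows the import so the real module docstring shows up.
print(definition.docstring(fast=False))
```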
Dave Halter
baa745a6ac A minor issue for getting the stack at a position, fixes #590. 2016-07-08 08:39:36 +02:00
Dave Halter
e5f09e1c7d Fix an issue with end_pos of a module. 2016-07-08 00:03:52 +02:00
Dave Halter
c499696b60 Fix python 2.7 tests. 2016-07-07 19:16:01 +02:00
Dave Halter
adcc1c2b51 Don't delete ErrorNode names. They are part of the parser now.
Fixes #594 and possibly also #590 and #579.
2016-07-07 18:33:45 +02:00
Dave Halter
4a19376187 Fix issue https://github.com/DamnWidget/anaconda/issues/449. Using self should not cause side effects in completion. 2016-07-06 18:31:47 +02:00
Dave Halter
3ad159b0aa The import logic cannot assume that a file is always importable, fixes #716. 2016-07-06 08:52:23 +02:00
Dave Halter
4243adf54b Add param splitting test. 2016-07-06 08:30:27 +02:00
Dave Halter
074a154af3 Fix a small issue that could happen e.g. in stdin. 2016-07-06 08:05:50 +02:00
Dave Halter
24cddda8e7 Remove the old interpreter logic. 2016-07-04 08:35:22 +02:00
Dave Halter
67ea9ab9f3 Add Eric IDE to the list of supported plugins. 2016-07-03 16:22:23 +02:00
Dave Halter
447656fd14 Removed SynWrite from the list of Jedi implementations. From now on we only add major editors and projects that are using it. 2016-07-03 16:16:54 +02:00
Dave Halter
a02b07bcb8 Merge remote-tracking branch 'origin/master' into away 2016-07-03 15:46:55 +02:00
Dave Halter
6a8138d185 Improve the compiled object generation caching, which was very wrong and is ok now, but still needs improvements. 2016-07-03 15:32:08 +02:00
Dave Halter
0223471237 Increase the parser pickling version. 2016-07-03 12:31:57 +02:00
Dave Halter
1ba226d4a2 Typing after all cannot be used in Python 2.6, therefore remove it again and disable the tests for 2.6 that need it. 2016-07-03 12:10:19 +02:00
Dave Halter
2d2b22ba69 Dependency updates for tox, add typing for Python 2.6 and remove it for 3.5, because that version includes it natively. 2016-07-03 12:07:23 +02:00
Dave Halter
9af2fe6f0d Remove the Python 3.2 test from travis. 2016-07-03 11:57:58 +02:00
Dave Halter
c4ec5caf40 3.2 is not supported anymore. Mention this in docs as well. 2016-07-03 11:38:38 +02:00
Dave Halter
45dde12429 Pip compatibility with Python 3.2 is gone. Therefore we also remove it. 2016-07-03 11:36:19 +02:00
Dave Halter
62786158da Some more Python compatibility improvements. 2016-07-03 11:35:07 +02:00
Dave Halter
10b8936b11 More python2.7 fixes. 2016-07-03 02:57:43 +02:00
Dave Halter
9245181a8c Some python 2.7 (and 3.3) compatibility improvements. 2016-07-03 02:54:21 +02:00
Dave Halter
609965d07c Finally fix all python 3.4 tests again. 2016-07-01 20:59:24 +02:00
Dave Halter
67a0f604a7 Fix an issue with interpreter exceptions in certain cases. 2016-07-01 19:32:05 +02:00
Dave Halter
2652666080 Remove the logic to not use getattr on instances in CompiledObjects. 2016-07-01 18:11:44 +02:00
Dave Halter
056ad1b8a8 Fix a few more tests that were not correctly written a while ago. 2016-07-01 08:42:05 +02:00
Dave Halter
d5098ef096 Forgot to add parser/utils.py. 2016-06-30 19:36:48 +02:00
Dave Halter
f7278f5bf1 Some more bug fixes for MixedObject. 2016-06-30 19:36:21 +02:00
Dave Halter
6b41db96bf Refactor something to use .type instead of isinstance. 2016-06-30 09:55:21 +02:00
Dave Halter
689284c615 Refactor Evaluator.wrap to use the types in a more consistent way. 2016-06-29 21:06:35 +02:00
Dave Halter
a3b263a599 REPL completion is working again partially. There's some progress at least. 2016-06-29 08:49:20 +02:00
Dave Halter
52c42c3392 Reenable call signature caching and move a lot of parser specific caching to the parser itself. 2016-06-28 08:46:29 +02:00
Dave Halter
746a513ef9 Return a TokenInfo object in the tokenizer instead of a tuple (it's a namedtuple now). 2016-06-27 09:14:11 +02:00
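A sketch of what the change buys, with hypothetical field names: a namedtuple stays tuple-compatible, so existing unpacking code keeps working while the fields gain readable names.

```python
from collections import namedtuple

# Field names are illustrative, not necessarily the real ones.
TokenInfo = namedtuple('TokenInfo', ['type', 'string', 'start_pos', 'prefix'])

tok = TokenInfo(1, 'foo', (1, 0), '')
typ, string, start_pos, prefix = tok  # old tuple-style unpacking still works
print(tok.string, tok.start_pos)      # new attribute access also works
```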
Dave Halter
9b85080fb8 Finally removed the user_context. Goodbye old hacks... 2016-06-27 08:49:27 +02:00
Dave Halter
969100e471 Move the parsing away from user_context to api.py. 2016-06-27 08:48:36 +02:00
Dave Halter
0445d51d34 Remove the user_scope from the user_context module. 2016-06-27 08:35:24 +02:00
Dave Halter
bb4ab45131 Don't use UserContextParser.user_stmt anymore, since we can access it directly. 2016-06-23 16:36:12 +02:00
Dave Halter
73e71b3c1a Finally able to remove the user_context. This is awesome!
Now we only use the syntax tree to understand where the user is doing something.
2016-06-23 16:26:28 +02:00
Dave Halter
672bf9908c Remove path argument from completions, because it's really not needed anymore. 2016-06-23 16:11:23 +02:00
Dave Halter
9225db084a user_context is not needed anymore for completions. yay! 2016-06-23 09:19:20 +02:00
Dave Halter
8f39a6e89d 'source' should not be public in the API. Move it to _source. 2016-06-23 08:53:34 +02:00
Dave Halter
77e66e01e3 Remove the inference module, it's unused code. 2016-06-23 08:49:26 +02:00
Dave Halter
1ab4eb3696 Exchange the completion trailer evaluation logic. It's way more consistent now. 2016-06-23 08:47:43 +02:00
Dave Halter
cbef4235ff Remove needs_dot and settings.add_dot_after_module. Both are not really used anymore with context completions.
Also the setting doesn't seem to be used anywhere as far as I can tell.
2016-06-22 22:52:10 +02:00
Dave Halter
070ee0cc2f Merge pull request #734 from kiryph/patch-1
Add vim editor plugin deoplete-jedi
2016-06-22 22:37:02 +02:00
Dave Halter
1355ea01b3 Simplify completions further to eventually get rid of user_context. 2016-06-22 09:15:32 +02:00
Dave Halter
80aa9ad079 A small python 2 bugfix. 2016-06-22 00:45:14 +02:00
Dave Halter
57b1fdaa26 Remove code that's not used anymore. 2016-06-22 00:31:36 +02:00
Dave Halter
6fec29d778 All tests except the Interpreter tests are working again. 2016-06-22 00:27:21 +02:00
Dave Halter
8e67facecc Refactoring: call_of_name -> call_of_leaf. 2016-06-21 18:42:20 +02:00
Dave Halter
d0eb8137e2 Remove old unused call_of_name madness. 2016-06-21 18:39:35 +02:00
Dave Halter
0a3bc34d6b Fix some more issues with the call_of_name function. 2016-06-21 18:39:02 +02:00
Dave Halter
b941e36f04 Another call_of_name fix (breaking tests.) 2016-06-21 09:49:12 +02:00
Dave Halter
5212849780 Fix the last known case of call signatures. Yay! 2016-06-20 18:32:44 +02:00
Dave Halter
e0631cfda2 Add new tests and certain fixes for some new call signature issues. 2016-06-20 18:20:35 +02:00
kiryph
a9b0af3167 Add vim editor plugin deoplete-jedi 2016-06-20 11:26:25 +02:00
Dave Halter
b9d3371f39 Small refactoring of call signatures. 2016-06-20 08:44:56 +02:00
Dave Halter
fa13889e70 Fix the latest call signature tests. 2016-06-20 08:26:18 +02:00
Dave Halter
389885c285 Fix some of the newer call_signature tests. 2016-06-18 00:47:53 +02:00
Dave Halter
7ddc9c9c78 Fix all call signature tests. 2016-06-17 17:03:34 +02:00
Dave Halter
32346c6da8 A lot of call signature refactorings. Note that this commit is totally broken. 2016-06-17 00:20:13 +02:00
Dave Halter
6f366e2d77 Rename next_sibling and prev_sibling. 2016-06-14 23:22:33 +02:00
Dave Halter
78d25541bb The parser tree doesn't need to care about error statements anymore. 2016-06-14 18:12:19 +02:00
Dave Halter
6853bd70f4 Adding a token in pgen should have the same signature that the tokenizer uses. 2016-06-14 18:09:31 +02:00
Dave Halter
2ce66a9508 Imports belong to the top of the module. 2016-06-14 18:09:08 +02:00
Dave Halter
01ddacfec4 Generalize the tuple order of tokens. 2016-06-14 08:57:38 +02:00
Dave Halter
118ba7d833 A lot of stuff is not needed anymore, because of the recent refactorings. 2016-06-14 08:31:36 +02:00
Dave Halter
6cf8ca03b2 Goto refactoring. Everything is now so much less complicated. 2016-06-14 00:10:14 +02:00
Dave Halter
653f247a42 Fix a goto_definitions test. 2016-06-13 18:27:39 +02:00
Dave Halter
27f05de3b7 Fix a few more issues, mostly with the fast parser. 2016-06-13 18:21:17 +02:00
Dave Halter
a485412af0 Rename goto_definition to goto_definitions. 2016-06-11 23:16:44 +02:00
Dave Halter
c82691a12b Make goto_definitions a lot simpler. 2016-06-11 23:13:04 +02:00
Dave Halter
9930ab5056 Small fixes to make the tests pass again. 2016-06-11 16:50:05 +02:00
Dave Halter
4c711339dd The integration test runner is now using a different way of getting the expected results.
This is needed, because goto_definition will not work in comments anymore.
2016-06-11 16:33:56 +02:00
Dave Halter
99a03da8de .gitignore: Ignore all vim backups. 2016-06-07 22:17:16 +02:00
Dave Halter
82c76fa689 Merge with the linter branch (especially the changes of pep484). 2016-06-07 13:51:25 +02:00
Dave Halter
ed152e440e Merge master into the away branch. The merge conflict that was resolved was to get 'debug.py' working again. 2016-06-07 08:55:27 +02:00
Dave Halter
23ff395754 Merge dev and the away branch. 2016-06-07 08:45:26 +02:00
Dave Halter
6b9f96ce13 Keyword completion is now possible in a semantic way. This includes better testing and documentation. 2016-06-06 18:32:00 +02:00
Dave Halter
028d0a2509 After `as`, no completions should follow. 2016-06-06 18:08:45 +02:00
Dave Halter
a74d3d6e9a Remove more unused code. 2016-06-06 09:29:05 +02:00
Dave Halter
87a75fdfe4 Remove old unused code. 2016-06-06 09:27:43 +02:00
Dave Halter
7a170532d9 Now finally all tests are running again (except the repl completion, which is a separate issue). 2016-06-06 09:22:50 +02:00
Dave Halter
4ec72d8f24 Finally fix the last remaining fast parser issue. 2016-06-06 08:55:10 +02:00
Dave Halter
436f7dffe0 Fix another very annoying fast parser issue. 2016-06-06 08:37:40 +02:00
Dave Halter
dd85fc6ffd Add error token in a normal way to the syntax tree as ErrorLeaf. 2016-06-05 14:49:57 +02:00
Dave Halter
aa97e4e714 Fix the fast parser issue #589. 2016-06-05 14:42:32 +02:00
Dave Halter
0b31614025 Merge pull request #728 from sadovnychyi/master
Encoding isn't set for imported modules
2016-06-04 12:02:55 +02:00
Dmitry Sadovnychyi
1404b7bcea Update AUTHORS.txt 2016-06-04 17:49:28 +08:00
Dmitry Sadovnychyi
5cb969d211 Use utf-8 instead of iso-8859-1 as a default encoding 2016-06-04 17:44:29 +08:00
Dave Halter
12f878a4f7 Test for the issue #589. 2016-06-04 01:06:13 +02:00
Dave Halter
6b63b9cf54 Further fixes for failing tests. 2016-06-04 00:59:17 +02:00
Dave Halter
c3ffaab8af The fast parser had some splitting issues. 2016-06-04 00:50:36 +02:00
Dave Halter
e60c06b691 In every case of a new line the tokenize position should be correct. 2016-06-04 00:34:00 +02:00
Dave Halter
5edcf47512 Break Interpreter completion even more in favor of a better solution in the future. 2016-06-03 19:31:42 +02:00
Dave Halter
0c7894b3e6 Fix a few fast parser tests. 2016-06-02 08:24:52 +02:00
Dave Halter
ad8d730a57 More test fixes. 2016-05-31 01:12:07 +02:00
Dave Halter
c12dbe0b9e Fix a few tests that failed because they were not correct Python (the context was wrong). 2016-05-30 20:10:17 +02:00
NAKAMURA Yoshitaka
127da66ae2 fix source_to_unicode py3 compatibility 2016-05-30 23:31:18 +09:00
Dave Halter
4f6368e7eb Now ErrorLeaf and ErrorNode are part of the syntax tree. This probably makes sense. The documentation will follow once it's clear how they turn out. 2016-05-30 00:34:58 +02:00
Dave Halter
daa68b66ad Fix a few issues caused by the refactoring. 2016-05-29 19:49:35 +02:00
Dave Halter
feef45f4bb Fixed all on_import tests. 2016-05-29 12:08:53 +02:00
Dave Halter
2700c2cca4 Make it possible to import Jedi in Python 2 again. 2016-05-28 20:20:45 +02:00
Dave Halter
4714b464a6 Further import completion improvements. 2016-05-28 02:08:43 +02:00
Dave Halter
e4fe2a6d09 Now some other tests are working again. 2016-05-26 00:35:56 +02:00
Dave Halter
cbba314286 Progress and actually passing a few tests. 2016-05-26 00:10:54 +02:00
Dave Halter
d4a10929e2 Starting to create a way to make context-sensitive completions.
This involves playing heavily with the pgen2 parser. We use its stack to check for all possible tokens/keywords.
2016-05-23 18:11:44 +02:00
Dave Halter
36a135c347 pgen2: Don't overwrite type 2016-05-21 16:08:12 +02:00
Dave Halter
97c51cb812 Merge pull request #725 from Carreau/lazy-colorama
Initialize colorama lazily
2016-05-21 15:44:14 +02:00
Dave Halter
479b3cfab2 More completion refactoring. Getting the structure for filtering names right. 2016-05-19 12:41:59 +02:00
Dave Halter
5059febed4 Fix an issue with a wrongly refactored name. 2016-05-19 11:49:59 +02:00
Dave Halter
055ff8be23 Readability for completion parts. 2016-05-19 11:33:17 +02:00
Dave Halter
323581e253 Refactor the completion module. 2016-05-19 11:13:42 +02:00
Dave Halter
cfa65a22fa Move the completion specific parts to api/completion.py. 2016-05-19 11:08:37 +02:00
Dave Halter
2e33394adb Move the type inference call of the api to inference.py 2016-05-19 10:40:22 +02:00
Dave Halter
da6486dc6d Start moving api stuff to the inference module. 2016-05-19 10:25:36 +02:00
Matthias Bussonnier
e96cccb81c Document a bit more 2016-05-18 16:48:08 -07:00
Dave Halter
a08ad2d53d Further improvements to the interpreter refactoring. 2016-05-19 01:41:06 +02:00
Dave Halter
1bb8d32084 Improve interpreter tests. 2016-05-18 11:56:33 +02:00
Dave Halter
d93d31feb8 Make a first test working with mixed objects. 2016-05-18 11:49:50 +02:00
Matthias Bussonnier
4ba6000f92 Initialize colorama lazily
Will prevent colorama from wrapping stderr/out if not in debug mode,
which leads to some errors.
2016-05-17 16:49:57 -07:00
Dave Halter
ef314a5c38 Complete writing the full mixed objects module. 2016-05-17 17:44:22 +02:00
Dave Halter
5595fb3e2f Start adding a module that mixes compiled and parser objects. 2016-05-16 13:12:45 +02:00
Dave Halter
33accb3dc6 It should be possible to pass class.__dict__ to Jedi. 2016-05-16 09:52:45 +02:00
Dave Halter
e21b3024e0 Writing tests for the upcoming side effect fixes in the interpreter completion. 2016-05-16 09:36:54 +02:00
Dave Halter
818730d6ea Fix descriptions for REPL. 2016-05-15 23:32:16 +02:00
Dave Halter
50f6bb0299 When we are working with CompiledObjects and instances there should never be a case where class values are returned. 2016-05-15 23:06:07 +02:00
Dave Halter
cc331d62e0 Get closer to fixing a lot of issues with the completion for repl. 2016-05-15 14:26:22 +02:00
Dave Halter
6266678064 Merge pull request #719 from isamert/master
Add gedit to supported editors list
2016-04-15 14:06:25 +02:00
İsa Mert Gürbüz
53ca0217dd Add gedit to supported editors list 2016-04-15 14:34:24 +03:00
İsa Mert Gürbüz
07c7a69306 Add gedit to editors that use jedi 2016-04-15 14:32:13 +03:00
Dave Halter
da657fb132 Merge pull request #711 from WoLpH/python35
Enabled python 3.5 support
2016-03-31 10:47:31 +02:00
Rick van Hattem
4afc74f1c7 fixing python 3.2 and 3.5 issues 2016-03-31 04:56:17 +02:00
Rick van Hattem
97178dd92b enabled python 3.5 support 2016-03-31 03:27:43 +02:00
Dave Halter
935aec7b71 Merge pull request #701 from mfussenegger/vscode
Add vscode to editors using jedi
2016-03-22 18:58:50 +01:00
Dave Halter
ea099e514b Merge pull request #709 from davidhalter/debug-docs
Enhance docs about debugging
2016-03-21 11:30:05 +01:00
Danilo Bargen
f5e594970a Enhance docs about debugging 2016-03-21 10:40:58 +01:00
Dave Halter
c0242cf5be Merge pull request #705 from tbradshaw/patch-2
Update unsupported features list with "implicit namespace packages"
2016-03-10 12:12:26 +01:00
Travis Bradshaw
ac9133aafc Update unsupported features list with "implicit namespace packages" 2016-03-09 22:37:21 -06:00
Mathias Fussenegger
a18574246e Add vscode to editors using jedi
The python extension for Visual Studio Code uses Jedi.

As seen here:

    https://github.com/DonJayamanne/pythonVSCode/tree/master/pythonFiles/jedi
2016-03-03 23:02:54 +01:00
Dave Halter
9c0f3f9535 Merge pull request #699 from mfussenegger/py35
add python 3.5 classifier to setup.py
2016-03-01 00:44:23 +01:00
Mathias Fussenegger
09310dae7d add python 3.5 classifier to setup.py
Since 5a555076bb, 3.5 is listed as a
supported Python version in the docs
2016-02-29 22:19:02 +01:00
Dave Halter
e4c959944f Merge pull request #696 from reinhrst/documentation
update documentation
2016-02-16 23:34:49 +01:00
Claude
5a555076bb update documentation: add Python 3.5 and PEP 0484 2016-02-16 10:04:06 +01:00
Dave Halter
4a6c2f8de7 Merge pull request #664 from reinhrst/typehints-in-comments
simple typehints in comments
2016-02-16 09:34:15 +01:00
Dave Halter
a0ac1e6390 Merge pull request #691 from reinhrst/python35
python 3.5 support status
2016-02-16 00:01:57 +01:00
Claude
795a72a866 add typing.cast() to the todo list 2016-02-15 18:13:47 +01:00
Claude
4fe710d4d3 more tests 2016-02-15 18:02:11 +01:00
Claude
71ab855802 update list of things that are completed 2016-02-15 17:56:14 +01:00
Claude
a9ebe71c64 add some tests to show that type-hints on the next line don't work 2016-02-15 17:54:20 +01:00
Claude
641fb80773 add support for 'with-assignment' hints 2016-02-15 17:52:21 +01:00
Claude
3a1b2e7104 add support for 'for-assignment' hints 2016-02-15 17:37:03 +01:00
Claude
8b28678d19 support tuple-assignment 2016-02-15 17:04:19 +01:00
Claude
a658f7940c typehints for variables in comments 2016-02-15 16:03:23 +01:00
Claude
daeee4ba0c simple typehints in comments 2016-02-15 15:12:07 +01:00
Claude
ca08b8270b combine power-or-atom_expr statements into one statement 2016-02-15 10:20:25 +01:00
Dave Halter
3ac1427242 Merge pull request #693 from chergert/patch-1
Add GNOME Builder to README.rst
2016-02-12 13:18:44 +01:00
Christian Hergert
81e07b866f Add GNOME Builder to list of supported editors
Support is provided out of the box as long as python3-jedi is installed.
Additionally, Builder's implementation includes support for GObject
Introspection (and therefore Gtk).
2016-02-11 13:02:27 -08:00
Claude
d5f08f8bdd opt to skip the PEP 492 backwards-compatibility magic and instead make await and async keywords directly
See discussion at
https://github.com/davidhalter/jedi/pull/691#issuecomment-182815864
2016-02-11 19:30:01 +01:00
Claude
7077d0b762 Using python 3.7-like parser, instead of python 3.5 magic.
See https://github.com/davidhalter/jedi/pull/691#issuecomment-182815864
Revert "Update tokenizer to adhere to PEP492 magic"

This reverts commit 65187930bd.
2016-02-11 19:14:31 +01:00
Claude
3a36bb3a36 Seems necessary to explicitly specify python3.5: https://github.com/travis-ci/travis-ci/issues/4794 2016-02-10 17:46:56 +01:00
Claude
04524cd63c make travis test python3.5 as well 2016-02-10 13:06:27 +01:00
Claude
4249563eb2 tests can now also run on python 3.5 2016-02-09 21:08:47 +01:00
Claude
65187930bd Update tokenizer to adhere to PEP492 magic 2016-02-09 21:07:18 +01:00
Claude
bf5acb4c7a once more: python 3.5 uses atom_expr node in many places where previous python would use power node 2016-02-09 19:34:44 +01:00
Claude
8819b2133a further fix for *-arguments in arglist 2016-02-09 18:23:24 +01:00
Claude
a09611197b add ATEQUAL token for python < 3.5 2016-02-09 18:17:31 +01:00
Claude
0ed149070a add python 3.5 '@' operator to tokenizer 2016-02-09 17:13:25 +01:00
Claude
de98cda2d7 python3.5 uses 'argument' node type, not 'arglist' for * and ** arguments 2016-02-09 17:12:26 +01:00
Claude
3b0dcb3fcb move file_input to top of file, as mentioned in 19acdd32b7 2016-02-09 11:47:01 +01:00
Claude
241abe9cf3 python 3.5 uses atom_expr node in many places where previous python would use power node 2016-02-09 11:42:53 +01:00
Claude
3fb5fe8c77 allow empty bodies for better autocompletion 2016-02-09 11:22:17 +01:00
Claude
bc0486f723 python 3.5 uses atom_expr node in many places where previous python would use power node 2016-02-09 11:21:26 +01:00
Claude
1f4c95918f Add @= ATEQUAL token 2016-02-09 11:14:56 +01:00
Dave Halter
17a1a0ebfd Colorama 0.3.6 is buggy, so just don't import it if it's not there. 2016-01-28 14:39:18 -02:00
Dave Halter
7fe5280bda Forgot to include all tox dependencies. 2016-01-27 19:01:33 -02:00
Dave Halter
c09a916ab5 Didn't load grammar for Python 2.6 correctly. 2016-01-27 17:52:42 -02:00
Dave Halter
a307bc384c Merge pull request #689 from sergeyglazyrindev/master
fix for #122
2016-01-26 22:53:54 -02:00
Sergey Glazyrin
c2874ff84a fix for #122 2016-01-26 22:54:15 +01:00
Dave Halter
257009d238 Skip pep0484 tests when using Python 2.6. 2016-01-26 15:59:27 -02:00
Dave Halter
633e5aa76f The typing library only works in Python >= 2.7. 2016-01-26 15:05:58 -02:00
Dave Halter
f9a64fd637 Fix some issues in Python 2.7 2016-01-26 14:59:40 -02:00
Dave Halter
3816f28dfa Merge pull request #663 from reinhrst/typing
PEP 484 typing library
2016-01-26 11:30:47 -02:00
Claude
079e3bbd28 use Ellipsis instead of ..., for python 2.7 compatibility 2016-01-23 23:09:45 +01:00
Claude
244c9976e5 cache the parsed jedi_typing module 2016-01-23 23:06:28 +01:00
Claude
e267f63657 python 2.7 compatibility, typing module tested with docstring, so that it can also be tested in python 2.7 2016-01-23 22:53:48 +01:00
Claude
c9bf521efd remove renaming of class based on parameters 2016-01-23 22:10:52 +01:00
Claude
442d948e32 I don't need the __len__ for __iter__ to work (eventually), so leaving it out for now 2016-01-17 18:04:59 +01:00
Claude
941da773f6 temporary fix for typing.Mapping[...].items(), can be removed after #683 is fixed 2016-01-17 17:05:31 +01:00
Claude
b316fb94c4 enable tests for the value type in tuple assignment from typing.Mapping[].items() 2016-01-17 17:05:29 +01:00
Claude
885f7cb068 fix for iterators -- should start working when py__iter__ gets fixed: https://github.com/davidhalter/jedi/pull/663\#issuecomment-172317854 2016-01-17 16:53:09 +01:00
Claude
b499906398 Reverted 10f5e1 --- needed some more work to get it working again 2016-01-17 16:12:43 +01:00
Claude
ae701b2f9a Support for typing.Tuple[type, ...] 2016-01-17 12:43:23 +01:00
Claude
a5fc149f9d use jedi.common.unite in flatten array of sets 2016-01-17 10:57:38 +01:00
Claude
59161c0b5d fix FakeSequence type 2016-01-17 10:51:06 +01:00
Claude
9d7e1ce81b add the typing module for testing 2016-01-17 10:41:41 +01:00
Claude
1b787e2a11 add test to check instantiated subclasses 2016-01-17 10:41:41 +01:00
Claude
409ee5568a test with different ways of importing the typing module 2016-01-17 10:41:41 +01:00
Claude
3852431549 typing.Union and typing.Optional 2016-01-17 10:41:41 +01:00
Claude
7b97312509 tuples and mappings in typing 2016-01-17 10:41:40 +01:00
Claude
10f5e15325 I feel this is a nicer solution. Forward reference busting should be part of the annotation resolving. It does not have anything to do with the typing module (and should indeed also happen if someone writes his own types outside of the typing module) 2016-01-17 10:41:40 +01:00
Claude
5948c63cf9 Make the class descriptions look better --- not sure whether this is a good idea 2016-01-17 10:41:40 +01:00
Claude
67cbc5ebd1 made code slightly more Python 2 friendly 2016-01-17 10:41:40 +01:00
Claude
90c4ca8c04 should obviously keep typing.py parsable in python 2 2016-01-17 10:41:40 +01:00
Claude
e688a498ab Add sets and iterable/iterator 2016-01-17 10:41:40 +01:00
Claude
85023a22aa Not implemented classes should not default to everything 2016-01-17 10:41:40 +01:00
Claude
cc6bd7d161 rework so that it also works without pep0484 type hints in jedi_typing.py 2016-01-17 10:41:40 +01:00
Claude
52cc721f45 made typing classes inheritable; added MutableSequence and List 2016-01-17 10:41:40 +01:00
Claude
f5a31ad78e first try at the typing library 2016-01-17 10:41:40 +01:00
Dave Halter
beeffd2dcd Some pgen2 tests were always skipped. 2016-01-07 18:55:10 +01:00
Dave Halter
379eb440cd Fix: the parent setting of deep_ast_copy worked the wrong way. 2016-01-07 18:52:06 +01:00
Dave Halter
06cb82830a builtin_methods calculation of iterable works now with the mro. 2016-01-07 15:41:20 +01:00
Dave Halter
8216ff3b11 Merge branch 'linter' of github.com:davidhalter/jedi into linter 2016-01-07 12:30:45 +01:00
Dave Halter
438ba3e14a Ellipsis is still valid in 2.6/2.7 (for now). 2016-01-07 11:01:00 +01:00
Dave Halter
cb7ee00c75 Forgot to include precedence tests. 2016-01-07 10:43:31 +01:00
Dave Halter
43ad4cfeb8 Ellipsis comparisons are working now. Ellipsis was previously ignored. 2016-01-07 10:41:34 +01:00
Dave Halter
5cc27f632d Improve dict comprehension support. 2016-01-02 21:46:14 +01:00
Dave Halter
0acc5256ea Implement imitate_items(), which helps if you use {}.items(). 2016-01-01 12:43:07 +01:00
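The user-facing case this presumably improves (an illustrative input, not Jedi code):
```
for key, value in {'a': 1}.items():
    # With items() imitated on dict literals, a completion engine can
    # infer key as str and value as int here.
    key.upper()
    value.bit_length()
```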
Dave Halter
e193017163 Merge pull request #669 from reingart/master
Update API doc for completions
2015-12-30 12:34:44 +01:00
Mariano Reingart
2ec196fa2e Update API doc for completions
At least in jedi version 0.9.0 the API seems to have changed to:
 * the complete method is now completions
 * the words attribute is now name

Example:
```
(venv)reingart@S55t-B:~$ python
Python 2.7.6 (default, Jun 22 2015, 17:58:13) [GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import jedi
>>> jedi.__version__
'0.9.0'
>>> source = '''import json; json.l'''
>>> script = jedi.Script(source, 1, 19, '')
>>> script
<Script: ''>
>>> completions = script.completions()
>>> completions
[<Completion: load>, <Completion: loads>]
>>> completions[1].complete
'oads'
>>> completions[1].name
u'loads'
>>> 
```
2015-12-30 06:03:51 -03:00
Dave Halter
4e93fb344b Dict.values is working now on dict literals. 2015-12-27 23:53:56 +01:00
Dave Halter
48f41c5231 Create a way to register builtin methods in the iterable module.
With this it's possible to e.g. register a function Array.dict_values as 'dict.values' with all the proper name resolution stuff.
2015-12-27 23:02:37 +01:00
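A hypothetical sketch of such a registration mechanism; the decorator name, registry, and handler below are illustrative assumptions, not Jedi's actual API:
```
# Hypothetical registry mapping names like 'dict.values' to handlers.
_builtin_method_registry = {}

def register_builtin_method(name):
    def decorator(func):
        _builtin_method_registry[name] = func
        return func
    return decorator

@register_builtin_method('dict.values')
def dict_values(array):
    # Illustrative handler: return the value types of a dict-like array.
    return array.values()
```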
Dave Halter
c0f7e9f820 Fix an issue with predefined_if_name_dict. 2015-12-27 17:30:40 +01:00
Dave Halter
03eaf8455f Dict comprehensions are working partially. 2015-12-27 17:20:49 +01:00
Dave Halter
b3f7d0c29a Get Set comprehensions working. 2015-12-27 15:37:27 +01:00
Dave Halter
b479e157fc Fix an issue in YieldExpr. 2015-12-26 11:39:37 +01:00
Dave Halter
ef3a83a74e Add a link in the finder docstring on how to understand name resolution. 2015-12-26 10:41:26 +01:00
Dave Halter
e34c0b336c Add an acknowledgement section in the README. Thank @tkf, @dbrgn and @gvanrossum for their contributions. 2015-12-26 03:32:50 +01:00
Dave Halter
cd5701cd41 Clean up licensing a bit. 2015-12-26 03:24:01 +01:00
Dave Halter
2a691eefff Including pgen2 tests from the cpython repo. 2015-12-26 03:15:09 +01:00
Dave Halter
507ddfa4b0 Add the Python 3.5 syntax file. 2015-12-26 03:10:59 +01:00
Dave Halter
ab5d0ed72b Starting with Python 3.4, from is not a token that always starts a "new" statement. 2015-12-26 02:47:22 +01:00
Dave Halter
eb2e41f771 Grammar versioning has now a smoother interface. 2015-12-25 19:30:25 +01:00
Dave Halter
a373e34229 The parser without error recovery now raises an error if it's not able to parse something. 2015-12-25 18:53:05 +01:00
Dave Halter
6bad5a924b Making it possible for static analysis tests to be skipped if the python version doesn't match. 2015-12-22 17:37:28 +01:00
Dave Halter
515d096d33 The alternative test runner script shouldn't run on skipped tests. 2015-12-22 11:45:24 +01:00
Dave Halter
936cef97e9 Fix param position lookups. Also forward annotations have the correct resolution path now (starting at the end of the file). 2015-12-22 11:25:32 +01:00
Dave Halter
ac294244cf Remove legacy code from FunctionExecution. 2015-12-22 07:37:09 +01:00
Dave Halter
8201fdc5af Merge branch 'pep484' into linter 2015-12-20 23:19:10 +01:00
Dave Halter
c15551ccc1 Errortokens should also make the parser fail in the normal parser. 2015-12-20 23:11:52 +01:00
Dave Halter
5791860861 Actual forward reference annotations are working pretty smooth now. 2015-12-20 22:57:41 +01:00
Dave Halter
c4906e0e3f Rework the parser so we can use arbitrary start nodes of the syntax.
This also includes a rework for error recovery in the parser. This is now just possible for file_input parsing, which means for full files.
Also includes a refactoring of the tokenizer. We no longer have to add an additional newline, because it now works correctly (removes certain confusion).
2015-12-20 22:25:41 +01:00
Dave Halter
9a93d599da Fix: __module__ doesn't need to be properly defined. 2015-12-20 02:35:23 +01:00
Dave Halter
b2a691a69a PEP 484 support also means that we should evaluate comments in the future. 2015-12-19 11:10:05 +01:00
Dave Halter
a2905ae078 Implement get_parent_until for Comprehension. 2015-12-18 23:18:21 +01:00
Dave Halter
e73b1a683a Tests for python2 print statement. 2015-12-18 17:57:25 +01:00
Dave Halter
23f40d8998 Merge branch 'linter' of https://github.com/reinhrst/jedi into pep484
Conflicts:
	AUTHORS.txt
2015-12-17 23:46:20 +01:00
Claude
160b6fca51 show off some power :) 2015-12-17 15:29:49 +01:00
Claude
6bee214948 catch error in certain non-pep0484 annotations 2015-12-17 15:23:40 +01:00
Claude
8bf2fe77e2 add some more non-pep0484-junk to the test 2015-12-17 15:06:20 +01:00
Dave Halter
cc3c538d9d Merge branch 'buildout-unicode-decode-error' of https://github.com/mfussenegger/jedi into linter 2015-12-17 12:50:26 +01:00
Dave Halter
54b1b2be74 Fix: flow analysis crashed when using in combination with different modules. 2015-12-17 12:37:26 +01:00
Dave Halter
3d79d0994e Fix: is_class() on Instance was not implemented. 2015-12-15 16:44:28 +01:00
Dave Halter
ab91cfa3b5 Fix: print_stmt was not actually handled in Python 2.7, #662. 2015-12-15 13:08:37 +01:00
Dave Halter
7141158484 Merge master into linter. 2015-12-15 12:28:38 +01:00
Claude
1e6397b163 check 'assigned types' support (works out of the box with jedi), and add tests for that 2015-12-15 11:56:54 +01:00
Claude
35fda3823e test dynamic annotation and dynamic forward reference 2015-12-15 11:53:48 +01:00
Claude
1258875300 add test that jedi doesn't break in case of non-pep-0484 comments 2015-12-15 00:37:23 +01:00
Claude
3cef8b6d55 string-annotations should only be interpreted by the pep-0484 code, not the parser 2015-12-15 00:31:47 +01:00
Claude
626fa60d03 Revert "clean out the last_* fields of sys before importing it."
This reverts commit be399c81c3.
Will break python 2.6 (possibly 2.7) tests; this is expected behaviour.
See https://github.com/davidhalter/jedi/pull/661#discussion_r47543815
2015-12-14 22:37:20 +01:00
Claude
0f6fb23d91 override annotation() in Lambda, instead of checking in Function on type 2015-12-14 22:02:11 +01:00
Claude
6ce076f413 more elaborate tests 2015-12-14 12:10:48 +01:00
Claude
576fdf8106 better separation of the pep0484 code and the py__annotation__() function 2015-12-14 12:10:00 +01:00
Dave Halter
c85426ebac More detailed testing for value-error-too-few-values. 2015-12-14 06:54:02 +01:00
Claude
be399c81c3 clean out the last_* fields of sys before importing it.
The system gets confused if there were uncaught errors in previous
tests without this. Particularly, it crashes (at least 2.6) if any tests during
test_integrations were skipped.
2015-12-14 00:52:36 +01:00
Claude
0f08dc6ac6 Adding myself to AUTHORS 2015-12-14 00:03:07 +01:00
Claude
7f8b878c8c if both docstring and annotations are present, use both for function parameters 2015-12-13 23:55:07 +01:00
Claude
f8debace0d forward reference pep-0484 2015-12-13 23:47:45 +01:00
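A forward reference in the PEP 484 sense is a string annotation naming a type that is not yet defined at that point, e.g.:
```
class Node:
    def next(self) -> 'Node':
        # 'Node' is a forward reference: the class name is not bound
        # yet while its own body is being executed.
        return self
```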
Claude
c61f39cb2b add test for annotations to test_parser_tree 2015-12-13 23:45:37 +01:00
Claude
7e8112d607 pep0484 return type support 2015-12-13 23:07:13 +01:00
Mathias Fussenegger
e0947a04eb don't fail on UnicodeDecodeError in buildout script detection
This fixes #650
2015-12-13 21:50:09 +01:00
Claude
68cbabe819 pep0484 tests only on python >= 3.2 2015-12-13 21:43:34 +01:00
Claude
c02668a443 Build in version-dependency in integration tests
If a line is encountered with the comment  or , then the tests are skipped if the current Python version is less than the requested one. All tests until the end of the file, or until a new comment specifying a compatible Python version, are skipped
2015-12-13 21:42:45 +01:00
Claude
5a8c46d509 separate parser and testing code 2015-12-13 21:13:20 +01:00
ColinDuquesnoy
6e3b5dfb23 Add memoize_function to cache and use it in fake.get_faked
The previously added test should now pass.

Fix #591
2015-12-13 20:10:09 +01:00
ColinDuquesnoy
9a25d55953 Add regression test for issue #591
See #649
2015-12-13 20:08:48 +01:00
Claude
fadf4f4419 initial poc pep-0484 type hints 2015-12-13 18:05:57 +01:00
Dave Halter
7b8d4e86ac The evaluator recursion limitations are now reset in static analysis for each node, otherwise it's incredibly imprecise. 2015-12-13 17:18:19 +01:00
Dave Halter
106f6f7f5a Too-many-values and too-few-values errors implemented for list comprehension tuple unpacking. 2015-12-12 14:09:57 +01:00
Dave Halter
28585dcdba Better testing of classes. 2015-12-12 02:48:37 +01:00
Dave Halter
75ac2b9686 Enable better ways for analysis to analyze loop variables. 2015-12-11 20:25:49 +01:00
Dave Halter
8d3be10270 Fix issues in Python 2.7. 2015-12-10 17:20:21 +01:00
Dave Halter
a1410de9e8 Better description of the py__XXX__ attributes. 2015-12-10 16:53:08 +01:00
Dave Halter
1189868593 Use CheckAttribute descriptor more in CompiledObject to avoid duplicate code. 2015-12-10 16:43:42 +01:00
Dave Halter
5087584fdc evaluator is now used only as an attribute in CompiledObject. 2015-12-10 16:40:56 +01:00
Dave Halter
9e8da17688 Remove py__class__ evaluator param from representation objects. 2015-12-10 16:39:27 +01:00
Dave Halter
afb1d6c3b8 Remove evaluator param from py__call__. 2015-12-10 16:20:46 +01:00
Dave Halter
506d5a4f31 Remove evaluator param from py__bases__. 2015-12-10 16:16:30 +01:00
Dave Halter
98b1845784 Remove evaluator param from py__mro__. 2015-12-10 16:12:43 +01:00
Dave Halter
b16fd84628 Remove py__getattribute__. 2015-12-10 16:07:15 +01:00
Dave Halter
9bac88100a Get rid of get_exact_index_types. 2015-12-10 15:58:34 +01:00
Dave Halter
b10a048167 Get rid of Array.values() and Array.__iter__(). 2015-12-10 15:56:45 +01:00
Dave Halter
3a975db0d7 Get completely rid of get_index_types. 2015-12-10 04:41:21 +01:00
Dave Halter
058779dd42 Get completely rid of iter_content. 2015-12-10 04:38:59 +01:00
Dave Halter
9bd6e6c340 Fix: iterators are working smoothly now. Finally tests are passing again. 2015-12-10 04:37:23 +01:00
Dave Halter
e23f453a11 Fix all remaining issues from the compiled refactoring except static analysis. 2015-12-10 01:48:08 +01:00
Dave Halter
86037222b4 Fix: stdlib issues with the latest CompiledObject changes. 2015-12-10 00:02:06 +01:00
Dave Halter
c9a5caa96e Fix: dicts lookups were not working in all cases. 2015-12-08 22:37:30 +01:00
Dave Halter
bef5fca516 Refactor compiled.CompiledObject so it always owns an evaluator instance. 2015-12-08 02:19:33 +01:00
Dave Halter
18a10c436f Simplify names_dict lookups for Arrays. 2015-12-06 03:16:21 +01:00
Dave Halter
1b634d77af Add ranged test execution for alternate test runner. 2015-12-06 03:03:11 +01:00
Dave Halter
ffeedb32de Fix remaining issues with FakeDict. 2015-12-05 22:33:41 +01:00
Dave Halter
2008775370 Fix an issue with dict lookups. 2015-12-05 20:40:41 +01:00
Dave Halter
3910d97b7e Fix: __getitem__ sometimes didn't evaluate all the types. 2015-12-05 12:36:05 +01:00
Dave Halter
d65684a40b Fix py__getitem__ on Array. 2015-12-05 02:48:20 +01:00
Dave Halter
db060c70c9 Start creating py__getitem__. 2015-12-04 12:08:29 +01:00
Dave Halter
76345c0b58 Final fixes for pure usage of py__iter__. 2015-12-04 00:15:48 +01:00
Dave Halter
5f36019752 Added isinstance tests in static analysis. 2015-12-03 16:21:00 +01:00
Dave Halter
21faf2431a Added isinstance type checks in the linter. 2015-12-03 16:14:26 +01:00
Dave Halter
8daa0b8784 Introduce an additional node parameter for py__iter__ which helps static analysis. 2015-12-03 11:52:54 +01:00
Dave Halter
f66b8138b7 Remove ordered_elements_of_iterable and get_iterator_types, because they are not used anymore. 2015-12-03 09:25:11 +01:00
Dave Halter
76bbc91ff9 Remove some stdlib stuff that only complicated things. 2015-12-02 13:46:13 +01:00
Dave Halter
d835ffc5a3 Get rid of ordered_elements_of_iterable and use py__iter__ instead. 2015-12-02 13:39:22 +01:00
Dave Halter
9a2256f557 Fix issues with py__iter__types. 2015-12-02 07:11:36 +01:00
Dave Halter
41537a78e1 Fix: Array additions (append, insert) should not cause an additional py__iter__ entry if there's none. 2015-12-01 22:57:54 +01:00
Dave Halter
53dbdf22a2 Fix: In the py__iter__ version, we didn't respect __next__ being an option. 2015-12-01 19:55:13 +01:00
Dave Halter
37c21726e7 Fix: py__iter__ in dynamic list/set usages with empty params. 2015-12-01 18:35:12 +01:00
Dave Halter
0a10947ff0 py__iter__ for MergedArray. 2015-11-28 20:14:14 +01:00
Dave Halter
777ec7588c py__iter__ for ArrayInstance. 2015-11-28 19:35:14 +01:00
Dave Halter
55615fb3c1 unite now returns a set; this simplifies all the set(unite(...)) calls. 2015-11-28 17:52:39 +01:00
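A plausible sketch of what unite does after this change (illustrative only; the real function lives in jedi.common):
```
def unite(iterable_of_sets):
    # Flatten an iterable of sets into a single set, so callers no
    # longer have to wrap the result in set(...) themselves.
    return set(item for subset in iterable_of_sets for item in subset)
```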
Dave Halter
9259a432b7 Dicts should be iterated by its keys (__iter__). 2015-11-28 16:08:38 +01:00
Dave Halter
09f7930104 Start implementing py__iter__ for all classes. 2015-11-27 13:07:54 +01:00
Dave Halter
6f4ac70140 Issues with isinstance checks. 2015-11-27 12:22:02 +01:00
Dave Halter
bc41ba7ca9 get_code now has a normalized variable. 2015-11-26 07:11:56 +01:00
Dave Halter
a99368c421 Fix: elifs were not considered for isinstance type inference. 2015-11-25 22:14:23 +01:00
Dave Halter
9dbfb90c20 Fix: Nested flows user scope detection was wrong. 2015-11-25 21:36:17 +01:00
Dave Halter
17ab7bbc3d prepare_goto -> type_inference. 2015-11-25 07:11:48 +01:00
Dave Halter
59e4f567a2 Create a failing test for an issue probably with the parser. 2015-11-25 06:58:34 +01:00
Dave Halter
8dee92bcc5 Fix: Tuple unpacking to x[0] would trigger bugs. 2015-11-24 01:27:23 +01:00
Dave Halter
cf4c2cb198 Fix: Dicts shouldn't be accessible in tuple assignments for now. 2015-11-24 01:11:41 +01:00
Dave Halter
724f7111a8 Now expr_stmt tuple unpacking automatically works with static analysis. 2015-11-24 01:07:32 +01:00
Dave Halter
8ee42e24a8 Added a test that shouldn't throw an error when using it. However, because we omitted statements that use the actual variables, the bug is never seen. 2015-11-23 05:48:57 +01:00
Dave Halter
8d65129a19 Power operation was not implemented before. 2015-11-20 18:26:39 +01:00
Dave Halter
030131d705 Forgot to make the set comprehension result a set(). 2015-11-20 15:05:15 +01:00
Dave Halter
ffaf81bf1b Fix: Set/Dict Comprehensions don't raise an error. They are just ignored for now. 2015-11-20 14:51:52 +01:00
Dave Halter
7cc54e08c7 Forgot to include static analysis comprehension tests a while ago. 2015-11-18 18:00:50 +01:00
Dave Halter
8174b312b5 Fix: CompFor.nodes_to_execute didn't include the right nodes. Sometimes too many, sometimes too few. 2015-11-18 18:00:15 +01:00
Dave Halter
595b803f1f Fix an issue with strings that can be chained in the parser. 2015-11-17 11:38:51 +01:00
Dave Halter
03efbca586 Tried to fix the recursion issues with if stmts. 2015-11-16 11:44:25 +01:00
Dave Halter
4361ce0778 test/run.py should be runnable from everywhere. 2015-11-14 23:17:26 +01:00
Dave Halter
dc2e52fd7d Create Comprehension.py__iter__. 2015-11-14 20:34:33 +01:00
Dave Halter
239f0d7213 Small generator correction that leads to more stability in its result. 2015-11-11 11:34:18 +01:00
Dave Halter
f1c827821b Comprehension lookups are now more precise. 2015-11-10 22:31:50 +01:00
Dave Halter
306d274a3d Merge dev into linter. 2015-11-10 21:52:18 +01:00
Dave Halter
292366d3a6 Fix an issue in the API that was created by creating set types. 2015-11-10 21:30:08 +01:00
Dave Halter
eececf0f74 It seems like join completion was wrong before when used within the interpreter. 2015-11-10 21:25:40 +01:00
Dave Halter
7c94cd674a Fix an issue with the default type of memoize_default not being a set. 2015-11-10 20:53:42 +01:00
Dave Halter
498e24df94 Fix an issue with combined reversed and yield without for loops. 2015-11-10 09:37:07 +01:00
Dave Halter
9f82cce3bb Implement py__iter__ for Generators, which means that yield expressions are now orderable, if they are not too complicated. 2015-11-09 15:15:03 +01:00
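Presumably this covers cases like the following, where the order of yields matters for unpacking (an illustrative input, not Jedi code):
```
def gen():
    yield 'text'
    yield 42

first, second = gen()
# With an ordered py__iter__, first can be inferred as str and second
# as int, rather than both being the union of all yielded types.
```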
Dave Halter
4549157d39 parser.Tree.ForStmt got more utility functions. 2015-11-08 22:29:49 +01:00
Dave Halter
99739aa640 per_index_values is now a method that all the iterable objects should support. However, its name is confusing and it should soon be refactored. 2015-11-03 17:35:45 +01:00
Dave Halter
84c43bf2dc Correct issues with slices and some more subtle bugs. 2015-11-01 21:30:41 +01:00
Dave Halter
dd6ade194a += assignments bug fix. 2015-11-01 13:21:41 +01:00
Dave Halter
ee51b0a62f More issues from the list of types to set of types conversion. 2015-10-30 10:32:17 +01:00
Dave Halter
05798734bf Fix an issue with the new set of types instead of lists.
This commit also includes some comments and improvements for debugging.
2015-10-29 20:53:14 +01:00
Dave Halter
c50fc7a044 Merge pull request #636 from immerrr/add-sys-path-customization
Improve virtualenv support & egg-link resolution
2015-10-26 22:07:18 +01:00
immerrr
45642cc16c .coveragerc: exclude imported site.py 2015-10-26 14:23:24 +03:00
immerrr
f634db7a20 jedi.api.Script: document sys_path parameter and VIRTUAL_ENV variable 2015-10-26 13:37:18 +03:00
immerrr
cc139e8f70 evaluate.site: copy/adapt site-packages related functionality from stdlib 2015-10-26 13:03:42 +03:00
immerrr
fb592ad028 test_imports: add test to ensure caching works with sys_path 2015-10-26 13:03:42 +03:00
immerrr
90a08794ba test_imports: use sys_path 2015-10-26 13:03:42 +03:00
immerrr
da4dbe81a9 sys_path: order egg-link files for reproducible test results 2015-10-26 13:03:42 +03:00
immerrr
f500457100 sample_venvs: exclude venvs dir from py.test discovery 2015-10-26 13:03:42 +03:00
immerrr
4eb3cf7921 Improve virtualenv support & egg-link resolution
- add sys_path= kwarg to Script & Evaluator constructors

- store sys_path for each evaluator instance

- replace get_sys_path with get_venv_path

- get_venv_path: use addsitedir to load .pth extension files

- get_venv_path: look for egg-link files in all directories in path
2015-10-26 13:03:42 +03:00
Dave Halter
3eaa3b954a Merge pull request #641 from kelleyk/dev
Fix issues with the way lambdas are handled
2015-10-25 11:07:29 +01:00
Kevin Kelley
2fc962bc3a Add myself to AUTHORS.txt. 2015-10-24 23:34:46 +00:00
Kevin Kelley
e13224bf50 Fix issue with lambda parsing; new test cases now pass. 2015-10-24 23:34:10 +00:00
Kevin Kelley
9ff7f99bac Add test cases demonstrating the issues with parser.tree.Lambda. 2015-10-24 23:34:06 +00:00
Kevin Kelley
8d8dcc2b6e Fix bug in branch condition causing lambdas to be treated like scopes and not like functions. 2015-10-24 23:33:53 +00:00
Dave Halter
e0753da6f1 Merge pull request #637 from immerrr/set-sudo-false
Couple build system improvements
2015-10-22 11:21:15 +02:00
immerrr
a6512f7702 Move clean_jedi_cache fixture to top-level conftest.py
Otherwise doctest module running in jedi/ subdirectory will not find it.
2015-10-21 18:04:32 +03:00
immerrr
c88f251206 travis.yml: run on new infrastructure 2015-10-21 18:04:32 +03:00
Dave Halter
70160d97e7 Debugging with more colors, yay. 2015-10-18 14:19:03 +02:00
Dave Halter
255c8f013d Set the debug_indent default right. 2015-10-16 02:41:46 +02:00
Dave Halter
e947124d83 Small improvements to the += / for logic. 2015-10-15 03:24:21 +02:00
Dave Halter
4b85d342ea Trying to fix issues with for loops and += operators. 2015-10-15 03:00:50 +02:00
Dave Halter
62468fb402 reversed and for loops now produce strings in the correct order. 2015-10-14 16:50:26 +02:00
Dave Halter
b27be47811 Make exact lookups possible in docstrings. 2015-10-14 12:10:48 +02:00
Dave Halter
844a011193 Replacing the types list with a types set. Some tests are failing, though. 2015-10-13 18:03:36 +02:00
Dave Halter
bf3fa11f6f Name lookups shouldn't return duplicates. 2015-10-10 20:01:03 +02:00
Dave Halter
f77712ddf1 Test to assure that imports are not loaded twice. 2015-10-10 19:49:40 +02:00
Dave Halter
317ef333fe Capital letters in SourceLair changed. 2015-09-30 15:01:49 +02:00
Dave Halter
879bede753 Merge branch 'master' of github.com:davidhalter/jedi 2015-09-30 14:59:43 +02:00
Dave Halter
cae88b0f94 Add sourcelair to Jedi usages. 2015-09-30 14:59:21 +02:00
Dave Halter
99e819f91a Fix test failures. 2015-09-22 19:33:12 +02:00
Dave Halter
22da402a7a Replace the get_iterator_types function with a different interface, which enables Jedi to detect invalid for loop inputs that are not iterable. 2015-09-22 19:18:36 +02:00
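The class of mistake this lets the static analysis report (example input, not Jedi code; this snippet is intentionally broken and raises TypeError at runtime):
```
for x in 42:   # int is not iterable; a linter built on the new
    print(x)   # interface can flag the loop input as invalid
```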
Dave Halter
786217acad Prepare replacing get_iterator_types. 2015-09-22 17:34:46 +02:00
Dave Halter
88bcb8e476 Rename params -> arguments. 2015-09-22 12:57:04 +02:00
Dave Halter
3a306a4f25 Fix comprehensions type issues. 2015-09-22 02:13:20 +02:00
Dave Halter
4ffc24a919 Fix for loops. 2015-09-21 15:33:59 +02:00
Dave Halter
1eded84a64 Fixes for the fast parser in nodes_to_execute. 2015-09-21 15:30:43 +02:00
Dave Halter
059fc91577 Note for static analysis documentation in the parser tree. 2015-09-21 14:52:51 +02:00
Dave Halter
6477944934 Finally able to remove the get_executable_nodes function. 2015-09-21 14:50:51 +02:00
Dave Halter
4180b6590d Move static analysis documentation of the names for nodes_to_execute. 2015-09-21 14:49:36 +02:00
Dave Halter
f455605399 Remove final bugs from the execute_nodes implementation. 2015-09-21 14:43:51 +02:00
Dave Halter
1e8dba9253 Fix classes in static analysis. 2015-09-21 14:35:41 +02:00
Dave Halter
19a5643a2e Few fixes to the nodes_to_execute logic. 2015-09-21 14:29:15 +02:00
Dave Halter
fa82b9a9db Starting to replace the old API code for static analysis. 2015-09-21 14:21:29 +02:00
Dave Halter
e47ca7b734 Removed an important line by accident earlier. 2015-09-21 14:02:42 +02:00
Dave Halter
e09b0a2aab Finish the work on nodes_to_execute. 2015-09-18 01:45:44 +02:00
Dave Halter
de836a6575 Start implementing nodes_to_execute in the parser. 2015-09-15 15:15:09 +02:00
Dave Halter
fdcf19f8b1 Document parser nodes that require execution for good static analysis. 2015-09-15 14:09:12 +02:00
Dave Halter
e7528198d3 Fix an issue with raise statements in the linter. 2015-09-13 23:45:53 +02:00
Dave Halter
eecae7dd38 Automate KeywordStatement.type generation. 2015-09-13 23:22:47 +02:00
Dave Halter
ef7ef08d50 restructure __main__.py 2015-09-13 11:51:18 +02:00
Dave Halter
745184c63b Merge pull request #621 from ronjouch/readme
Fix #603 - Fix broken 'Tips for how to use jedi efficiently' link in README.md
2015-08-27 10:46:45 +02:00
Ronan Jouchet
c2f8e1846e Fix #603 - Fix broken 'Tips for how to use jedi efficiently' link in README.md 2015-08-26 21:58:25 -04:00
Dave Halter
3c1cfff1e4 Merge pull request #618 from jonashaag/patch-1
Fix docstring
2015-08-18 15:36:15 +02:00
Jonas Haag
b9902b22d6 Fix docstring 2015-08-18 11:44:01 +02:00
Dave Halter
995a653122 Merge pull request #612 from tjwei/master
fix a set comprehension issue
2015-08-09 12:18:10 +02:00
Tzerjen Wei
666cbbf123 update the test of set comprehension literal 2015-08-09 17:58:38 +08:00
Tzerjen Wei
fdcf718317 move set comprehension tests to completion/comprehensions.py 2015-08-09 14:11:42 +08:00
Tzerjen Wei
4cc6cb3ac4 fix a set comprehension issue 2015-07-29 00:08:21 +08:00
Dave Halter
f2cc320a61 Tests and implementation for type(some class). (Which returns type) 2015-07-01 14:58:37 +02:00
Dave Halter
3ac8f02841 Type tests and implementation for functions. 2015-07-01 14:54:23 +02:00
Dave Halter
d1218c97bf Tests for finding if branches that are correct. 2015-07-01 14:44:31 +02:00
Dave Halter
e355ab201e Tests and implementation for type on lambda and function objects. 2015-07-01 14:40:57 +02:00
Dave Halter
e222a30227 Implement the type builtin better and with a lot more tests. 2015-07-01 14:27:49 +02:00
Dave Halter
80492265cf Remove print statements and reenable the if statement scanning that allows names to get resolved in 'if 0:' clauses. Makes all tests work again. 2015-07-01 14:00:50 +02:00
Dave Halter
d739828a4b call_of_name was modified. Fixed an issue that was created earlier. 2015-07-01 13:58:05 +02:00
Dave Halter
d694ab83a3 Memoizing statements doesn't work anymore.
Statements now need to be evaluated at least if predefined_names_dict is set.
2015-06-29 13:32:39 +02:00
Dave Halter
6e44f334d8 Correct positioning for if statements. 2015-06-29 12:10:05 +02:00
Dave Halter
db1ed70318 Recursions on if statements when using the advanced flow evaluation shouldn't be possible anymore. 2015-06-29 12:03:31 +02:00
Dave Halter
a014d4fd38 Fix a bug in call_of_name.
It was possible to get a NAME(x) result when calling call_of_name on x, which shouldn't happen. It should just return x.
2015-06-29 11:27:10 +02:00
Dave Halter
5d9fff50af Static analysis tests for type errors with variables. 2015-06-23 18:04:36 +02:00
Dave Halter
7b50bb00b9 Merge pull request #602 from sadovnychyi/patch-1
Add link to Atom in README
2015-06-23 12:00:18 +02:00
Dmitry Sadovnychyi
81252f0b7d Add link to Atom in docs 2015-06-23 13:49:04 +08:00
Dave Halter
64fcbbba79 First implementation of doing precise if statement filtering. 2015-06-22 22:16:38 +02:00
Dave Halter
6da4f1fffb Static analysis test for complex flow tests (filtering). 2015-06-22 22:15:02 +02:00
Dmitry Sadovnychyi
91fb4fd9e1 Add link to Atom in README 2015-06-22 18:09:04 +08:00
Dave Halter
a67408ad03 Move the unite function to common. 2015-06-18 14:16:16 +02:00
Dave Halter
7c4ef73669 Merge pull request #599 from bzz/patch-1
[Doc] add emacs company-mode to README.rst
2015-06-14 19:33:52 +02:00
Alexander
a5fd0b6c4f Update README - add emacs company-mode 2015-06-14 19:46:53 +09:00
Dave Halter
7651157487 Merge pull request #586 from squidarth/580_semantic_keyword_completion
Added support for correct continue & break autocompletion.
2015-05-18 11:27:05 +02:00
Sid Shanker
de7273e04b Fixed utf-8 decoding error in build. 2015-05-17 23:11:23 -07:00
Sid Shanker
258d5aee4a Added support for correct continue & break autocompletion. 2015-05-16 14:55:04 -07:00
Dave Halter
9b69f3a20f Merge pull request #584 from blueyed/find_module_py33-valueerror-as-string
AttributeError with ValueError in _compatibility.py:find_module_py33
2015-05-10 03:25:54 +02:00
Daniel Hahler
4469e654ae find_module_py33: use str(e) with ValueError
ValueError has no message attribute.

Fixes https://github.com/davidhalter/jedi/issues/584
2015-05-09 23:05:30 +02:00
Dave Halter
0543586abd __getattr__ comment. 2015-05-06 19:56:00 +02:00
Dave Halter
fd2b087424 Merge pull request #581 from asmeurer/whitespace-repr
Use repr() for the Whitespace repr
2015-05-06 11:25:30 +02:00
Aaron Meurer
2d75efff2a Use repr() for the Whitespace repr
This makes whitespace appear as <Whitespace: '\n'> instead of <Whitespace:
>.
2015-05-05 19:00:25 -05:00
Dave Halter
93500c3f72 Merge pull request #578 from squidarth/554_support_enumerate
Added in support for autocompleting `enumerate`.
2015-05-04 18:46:47 +02:00
Sid Shanker
6237214bff Added @squidarth to AUTHORS.txt. 2015-05-04 09:06:14 -07:00
Sid Shanker
20061fcf2e Added in support for autocompleting enumerate.
Resolves #554
2015-05-04 00:19:14 -07:00
Dave Halter
2221f12de9 Make refactoring clause clearer. 2015-05-01 10:49:26 +02:00
Dave Halter
df8a0d89ce Forgot to mention the import logic changes in the Changelog for 0.9.0. 2015-04-28 19:05:36 +02:00
Dave Halter
66557903ae \\\r\n is just as possible as \\\n. 2015-04-28 18:53:14 +02:00
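That is, a backslash line continuation may be followed by a Windows line ending as well as a Unix one; both strings below hold the same logical statement:
```
source_unix = 'x = 1 + \\\n2'
source_windows = 'x = 1 + \\\r\n2'
# Both should tokenize to a single statement assigning 3 to x.
```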
Dave Halter
712ae01ac0 Classes should always evaluate to true when asked for py__bool__() 2015-04-28 18:32:19 +02:00
Dave Halter
607f43290f The backwards tokenizer sometimes parsed not only string literals but also normal names. 2015-04-28 18:10:08 +02:00
Dave Halter
c2a287c25a Usages on syntax should not return anything. 2015-04-28 17:35:26 +02:00
Dave Halter
126f490f1e Modules now have the name __main__ if they contain dots. 2015-04-28 17:29:42 +02:00
Dave Halter
bb02f99de3 Dynamically created trailers need a parent, otherwise it can lead to crashes. 2015-04-28 16:40:58 +02:00
Dave Halter
b59fc04432 Remove crate.io badges. They are not working anymore, see crateio/crate.io#18 2015-04-28 12:38:53 +02:00
Dave Halter
cbd3a8a59a Restructured loading of compiled __init__ files. 2015-04-28 02:30:32 +02:00
Dave Halter
836fcd6ea0 Small api.Script.goto cleanup. 2015-04-28 02:07:53 +02:00
Dave Halter
b6f635b88b Python 2.7 io.StringIO always needs unicode input. 2015-04-28 02:05:38 +02:00
Dave Halter
657920baf5 Finally able to ditch the old namespace_packages implementation. 2015-04-28 02:03:17 +02:00
Dave Halter
0d406d27fd Different __init__ file searching. 2015-04-28 01:58:49 +02:00
Dave Halter
b8bb258677 Get rid of get_importer and clean up imports in general. 2015-04-28 01:41:01 +02:00
Dave Halter
ef4b424cda Replace pr with tree, #566. 2015-04-28 01:34:31 +02:00
Dave Halter
71547641ae The recursion detector doesn't need to separate params and normal statements anymore, because now they are two completely different things. 2015-04-28 01:26:48 +02:00
Dave Halter
265e6b2c35 Change parser and api to use tree instead of pr. 2015-04-27 23:38:48 +02:00
Dave Halter
b6ebb2f8bf Fixed issues with last positions in the tokenizer, which was messed up a little bit a few commits ago. 2015-04-27 21:42:40 +02:00
Dave Halter
0a96083fde Fix ur'' literals. 2015-04-27 19:21:41 +02:00
Dave Halter
902482568e The tokenize endmarker should really be the maximum position possible. Caused matplotlib to fail. Fixes davidhalter/jedi-vim#377. 2015-04-27 19:01:45 +02:00
Dave Halter
47d468a9bc forgot to include test_evaluate/not_in_sys_path files. 2015-04-27 17:16:43 +02:00
Dave Halter
84b774d9e1 Small refactorings. 2015-04-27 17:07:38 +02:00
Dave Halter
d7417391a7 Skip star import cache tests. 2015-04-27 14:15:39 +02:00
Dave Halter
0203461980 Disable the star import cache. 2015-04-26 00:02:47 +02:00
Dave Halter
06d134a7c1 Finished changing the import logic. The sys.path calculations within Jedi are clearer now. 2015-04-25 22:45:08 +02:00
Dave Halter
d038fba9df er.wrap -> Evaluator.wrap 2015-04-23 13:51:42 +02:00
Dave Halter
ed74dde45c forgot to check in invisible_pkg 2015-04-23 13:40:05 +02:00
Dave Halter
d16da33b9b Small test fix. 2015-04-23 04:11:28 +02:00
Dave Halter
fbb960423e Remove legacy importer code. 2015-04-23 03:42:29 +02:00
Dave Halter
a7c4b5800b Namespace packages work again. This time the same way as Python does it. 2015-04-23 03:36:46 +02:00
Dave Halter
039579b391 Improved static analysis for imports. 2015-04-23 02:43:49 +02:00
Dave Halter
f4f30841ec change the return of _Importer.follow_file_system 2015-04-23 02:39:44 +02:00
Dave Halter
d04241b482 Goto should not include imports that cannot be followed. 2015-04-23 02:37:22 +02:00
Dave Halter
691e5a8969 Fix flask tests. 2015-04-22 03:58:44 +02:00
Dave Halter
29bd59a355 Following os.path should be possible again. 2015-04-22 03:35:18 +02:00
Dave Halter
dd3edd15f9 Remove legacy code from imports. 2015-04-22 03:22:54 +02:00
Dave Halter
7af5c23874 Cache bug fixes. 2015-04-22 03:01:32 +02:00
Dave Halter
05554a1c89 Fix some issues with import path errors. 2015-04-21 18:45:12 +02:00
Dave Halter
13267adfc2 Move the level calculation into the Importer. 2015-04-21 17:57:06 +02:00
Dave Halter
9b9049e574 Some small import changes that fix a few of the broken test cases. 2015-04-21 17:31:43 +02:00
Dave Halter
18c4b5f7dc Add py__package__ to the ModuleWrapper, which makes relative imports easy to implement and fixed a lot of other things. 2015-04-21 16:12:24 +02:00
Dave Halter
5c65e9cdaa py__name__ now returns the value found in the modules cache. 2015-04-20 16:40:10 +02:00
Dave Halter
77a37be83a Add a py__path__ method to the ModuleWrapper, that behaves very similar to a package's __path__ attribute. 2015-04-20 16:21:00 +02:00
Dave Halter
df9452f210 Trying to change the import logic completely. We now have a sys.modules like cache. 2015-04-20 14:47:33 +02:00
Dave Halter
8fca3f78a1 Add a py__name__ call to modules. This makes listing the qualified names of modules possible (in combination with the module_name_cache). Fixes #519. 2015-04-14 17:36:20 +02:00
Dave Halter
2f64a83e3c Rename test_api_classes -> test_classes. 2015-04-13 15:17:44 +02:00
Dave Halter
fbe26ab64a Importlib might raise a ValueError. Fix #491. 2015-04-13 15:12:46 +02:00
Dave Halter
bc765979ca Import priorities are wrong (__dict__ > files). Test for #536. 2015-04-13 15:04:49 +02:00
Dave Halter
e2455eb670 Call signatures should work better for builtin classes (ducktyping). Fixes #515. 2015-04-10 13:45:23 +02:00
Dave Halter
74779f1a5d Test and preparations for better call signatures with builtins, see #515. 2015-04-10 03:05:38 +02:00
Dave Halter
1e623509cd Fix README glitches. 2015-04-10 02:40:16 +02:00
Dave Halter
47bf1c5daf Issue with numbers after names in call signatures. It would cause Jedi to stop analysing call signatures. Fixes #510 2015-04-10 02:17:12 +02:00
Dave Halter
7a22d374ca Merge branch 'dev' of github.com:davidhalter/jedi into dev 2015-04-09 16:17:30 +02:00
Dave Halter
a9d3df9b5e Replace the threading.Thread tests in docstrings with random.Random tests, because that might work more smoothly in the travis tests. Don't know why it broke there. 2015-04-09 16:17:16 +02:00
Dave Halter
fab6567485 Merge pull request #567 from mfussenegger/buildout
run buildout detection only once per buildout script
2015-04-09 13:07:58 +02:00
Mathias Fussenegger
67d9fbca81 run buildout detection only once per buildout script
in order to avoid running into the max recursion limit.
2015-04-09 08:51:25 +02:00
Dave Halter
1195ed64ea Fix a small issue in the import logic that caused tests to fail. 2015-04-09 01:43:50 +02:00
Dave Halter
79caa2186e list(open().read()) should work now, fixes #412. 2015-04-09 00:46:31 +02:00
Dave Halter
408d182c41 Changelog for 0.9.0. 2015-04-08 13:20:15 +02:00
Dave Halter
f122c9b5b3 Document the new features better in the next release. 2015-04-08 13:15:21 +02:00
Dave Halter
b106dc25bd Update the README to tell more about Python features. 2015-04-08 12:16:13 +02:00
Dave Halter
98cf9f0c1a Jedi description update. 2015-04-08 11:47:58 +02:00
Dave Halter
7773859305 Write the tests for init extension modules (#472). 2015-04-08 02:54:35 +02:00
Dave Halter
474d390220 Use imp.get_suffixes to deal with __init__ files that are not .py files but .so etc. fixes #472 2015-04-08 02:41:59 +02:00
Dave Halter
9149c5adc2 Python 3.2 tests didn't work because a u string literal was used. 2015-03-31 14:42:26 +02:00
Dave Halter
ef855a5316 Param descriptions should not end with a comma. 2015-03-31 14:38:03 +02:00
Dave Halter
72fd190149 unicode strings should not raise an error if used in repr.
Python 2 doesn't allow unicode objects in __repr__ methods. Therefore we need to encode them as utf-8 bytes.
2015-03-25 23:42:52 +01:00
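A minimal sketch of the described workaround (assuming a Python 2/3 compatible class; the class and attribute names are illustrative):
```
# -*- coding: utf-8 -*-
import sys

class Name(object):
    def __init__(self, value):
        self.value = value  # may be a unicode object on Python 2

    def __repr__(self):
        string = u'<Name: %s>' % self.value
        if sys.version_info[0] == 2:
            # Python 2 requires __repr__ to return str (bytes).
            return string.encode('utf-8')
        return string
```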
Dave Halter
4bb41b6096 A property can raise an Exception, therefore the interpreter completion should check for those exceptions, fixes #538. 2015-03-24 15:26:00 +01:00
Dave Halter
54d8cd0a9b Small bug in parameter creation. 2015-03-24 15:06:11 +01:00
Dave Halter
0de5a0f412 Python 2 allows tuple unpacking in parameter definitions. Jedi just ignores such constructs, since they are really rare and not the future. 2015-03-24 15:02:07 +01:00
Dave Halter
61683cb83e Remove some unnecessary comment parts in the Python 2.7 grammar. 2015-03-08 22:40:22 +01:00
Dave Halter
e296b00201 Change the tests of @hamatov a small bit. They are now working with the new parser. 2015-03-06 13:10:59 +01:00
Dave Halter
2cddfd656b Merge branch 'unicode_tokenize_fix2' of https://github.com/hatamov/jedi into dev 2015-03-06 11:44:03 +01:00
Dave Halter
8b1c033fc4 Remove old commented code. 2015-03-06 11:22:38 +01:00
Dave Halter
eb146adcc1 Modules that are not importable shouldn't cause Jedi to stop working (just issue a warning). Fixes #468, #71. 2015-03-06 11:13:04 +01:00
farhad
32081bd156 Merge branch 'dev' into unicode_tokenize_fix2
Conflicts:
	AUTHORS.txt
2015-03-06 12:14:38 +04:00
farhad
f9c104348e added myself to AUTHORS.txt 2015-03-06 11:55:16 +04:00
farhad
80719fc821 added test for quoted strings parsing 2015-03-06 11:54:01 +04:00
farhad
3747b009bf fix tokenization of code containing unicode strings 2015-03-06 09:11:35 +04:00
Dave Halter
910f2e6486 Use textwrap.dedent for better readability of the testing code. 2015-03-06 01:49:57 +01:00
Dave Halter
fd1be02f1e Test for unicode tokens in Python 2.7. 2015-03-06 01:47:37 +01:00
Dave Halter
a6c5d9f0a6 Merge branch 'add-egg-links-to-syspath-on-parser' of https://github.com/blueyed/jedi into dev 2015-03-06 01:06:17 +01:00
Dave Halter
0b531d2b17 print in Python 2 shouldn't be a function, it's a keyword (without the future import). 2015-03-06 01:01:20 +01:00
Dave Halter
b036c88b73 True in Python 2 is still not a keyword, but a name. 2015-03-06 00:42:57 +01:00
Dave Halter
a0f8b58e71 Fix a Python 2.7 compatibility issue. 2015-03-06 00:37:41 +01:00
Dave Halter
468ff59c1c Remove hasattr/next from _compatibility (not used anymore), thanks @dongweiming for noticing. 2015-03-06 00:25:42 +01:00
Dave Halter
10df0f933f Remove the strange check in the parser to always create expr_stmt and file_input. 2015-03-05 15:30:07 +01:00
Dave Halter
8f58258f4d Writing a different Name.get_definition() implementation, returns the node, if there's no expr_stmt parent. 2015-03-05 15:17:08 +01:00
Dave Halter
0ceadf69a3 Fake objects don't need an ExprStmt for the docstring anymore. 2015-03-05 14:24:19 +01:00
Dave Halter
76588aa040 Static analysis issues resolved (that were cause by the removal of using ExprStmt for every node). 2015-03-05 14:18:10 +01:00
Dave Halter
e698e6aeeb Rework some of the analysis statement gathering. 2015-03-05 13:36:41 +01:00
Dave Halter
b489019f5b Most integration tests (except 2) pass if we don't always make use of an ExprStmt. 2015-03-05 01:55:25 +01:00
Dave Halter
5d54922c4b Docstring change, to make non ExprStmt statements possible. 2015-03-05 01:37:47 +01:00
Dave Halter
ec7a609e44 Remove some unnecessary code in dynamic.py 2015-03-05 01:13:43 +01:00
Dave Halter
f273e314b6 Preparing for an eventual replacement of using expr_stmt for all nodes. 2015-03-05 00:07:50 +01:00
Dave Halter
aea38ca9aa Remove the classify function in the parser. This could make Jedi a tiny bit faster. 2015-03-04 17:15:33 +01:00
Dave Halter
9c2e73d460 Add syntax errors to the parser. 2015-03-04 17:12:51 +01:00
Dave Halter
a3c2108ecf Fix and test CallSignature.bracket_start. 2015-03-04 12:15:43 +01:00
Dave Halter
1ce96f2581 More fixes for ExprStmt docstrings. 2015-03-03 18:08:24 +01:00
Dave Halter
40e61fc96d Fix ExprStmt docstring bugs. 2015-03-03 17:42:49 +01:00
Dave Halter
ff0c7e27d3 Comment for two commits earlier. 2015-03-03 13:00:32 +01:00
Dave Halter
5cc5505185 Moved comprehension tests out of basic tests into its own file. 2015-03-03 12:58:52 +01:00
Dave Halter
96add84459 Fix a very complicated issue with comprehensions. 2015-03-03 12:56:48 +01:00
Dave Halter
1520ebf557 Fixed an issue with ArrayInstances that were using name lookups, which they don't have. 2015-03-03 02:39:02 +01:00
Dave Halter
5322c4a965 decorator dotted_names goto lookups. 2015-03-02 14:31:12 +01:00
Dave Halter
5a845e4dea Fix a decorator goto issue. 2015-03-02 13:23:26 +01:00
Dave Halter
6d3bb5c4b1 Fix generator comprehensions issue when used as an argument. 2015-03-02 03:06:58 +01:00
Dave Halter
2b1ddb19c9 Need py__bool__ on generators as well as any other object. 2015-02-27 12:36:03 +01:00
Dave Halter
23fe08363d Simplify cache_call_signatures. 2015-02-27 12:20:55 +01:00
Dave Halter
ea8209d45e Call signatures should not fail when used on if(. 2015-02-27 12:17:44 +01:00
Dave Halter
53490991d7 Goto_definitions bug fix -> imports stuff. 2015-02-27 11:56:36 +01:00
Dave Halter
1bc9ac1c00 Goto bug fix. 2015-02-27 11:37:49 +01:00
Dave Halter
610068dde4 Fix merged array values. 2015-02-27 11:23:53 +01:00
Dave Halter
a5728f8767 list comprehensions should be completeable. 2015-02-27 11:14:08 +01:00
Dave Halter
f5dad437dd Get rid of the None default for memoize_default. It shouldn't have a default if not given. This also uncovered a bug in for/else loops that wasn't tested before. 2015-02-27 01:42:14 +01:00
Dave Halter
a998c36fa3 Fix an attribute error in static analysis code. 2015-02-26 14:40:33 +01:00
Dave Halter
9b4385fb24 Merge branch 'dev' of github.com:davidhalter/jedi into dev 2015-02-26 13:59:30 +01:00
Dave Halter
b8a8c4d402 Fix an array lookup issue. list.pop calls work now pretty well and return the right type. 2015-02-26 13:57:54 +01:00
Dave Halter
d318d3c855 Fix a potential issue in sys path searching. However not tested. This is something that raised an error with sith that was not reproducible. 2015-02-26 13:56:28 +01:00
Dave Halter
d7b69ab92c Fix a small bug in the logic of finding self variables. 2015-02-25 13:54:13 +01:00
Dave Halter
30efdc5e4e Because we replaced and simplified strings in the last commits (including string ERRORTOKENs), we are now able to remove an error recovery in the backwards tokenizer. 2015-02-25 13:34:12 +01:00
Dave Halter
8c08a4e574 Call signatures again: function definitions and other things that cannot be a part of call signatures stop the process of scanning for them. Also strings get replaced and simplified. 2015-02-25 13:33:09 +01:00
Dave Halter
48392a7dac Fix some issues in call signatures. 2015-02-24 16:55:33 +01:00
Dave Halter
b8386d29d5 Whitespace before brackets should still show call signatures. 2015-02-24 01:48:25 +01:00
Dave Halter
0ae74a7666 Replace a __bases__ call with an __mro__ call, because the latter is closer to how Python actually works. __bases__ is never used. 2015-02-23 19:07:23 +01:00
Dave Halter
4f2d4992da Fix an mro resolution issue. 2015-02-23 19:04:35 +01:00
Dave Halter
a91e240c8b ALWAYS_BREAK_TOKEN -> ALWAYS_BREAK_TOKENS 2015-02-23 14:10:29 +01:00
Dave Halter
aebeafccc4 Rewrite last newlines so that get_code output is correct even with the fast parser. 2015-02-23 13:36:43 +01:00
Dave Halter
489ea8fc83 Replace set_parser with direct ParserNode instance calls. 2015-02-23 13:10:40 +01:00
Dave Halter
2fcb1b9b65 Fast parser fix. 2015-02-23 01:00:17 +01:00
Dave Halter
69412224eb Merge pull request #550 from IanLee1521/issue-525
Point docs at readthedocs.org rather than jedidjah.ch
2015-02-23 00:17:46 +01:00
Ian Lee
49150d760e Fixed #525 - Point to readthedocs.org rather than jedidjah.ch 2015-02-22 14:29:31 -08:00
Dave Halter
3a5b2d396e Failed statements should not lead to parser fails. 2015-02-22 20:29:22 +01:00
Dave Halter
3ec96b25cc Issue with backslashes again in the fast parser. 2015-02-21 18:07:21 +01:00
Dave Halter
3347718808 Merge pull request #549 from IanLee1521/readme-update
Readme update
2015-02-21 10:37:26 +01:00
Ian Lee
5625e1cb62 Add self to authors list 2015-02-20 17:17:34 -08:00
Ian Lee
2b193cb1f0 Update list of supported cPython versions in readme 2015-02-20 17:14:00 -08:00
Dave Halter
0b5a509e83 Small correction: mixed up a re.match and re.search. 2015-02-20 00:48:05 +01:00
Dave Halter
ce96af5e04 Fix an issue with open parentheses and function definitions right after. The fast parser should behave like the normal one and just ignore the open brackets. 2015-02-19 11:02:11 +01:00
Dave Halter
9d048623dd Delete the old and unused MultiLevelStopIteration exception. 2015-02-19 01:43:43 +01:00
Dave Halter
0e73bf7d80 Account for code parts that were not parsed in the fast parser. 2015-02-19 01:42:13 +01:00
Dave Halter
39bf9f426b Handle backslash escaping. 2015-02-18 17:32:34 +01:00
Dave Halter
595da50ab8 The fast parser splitting now also checks for parentheses levels, because without that, sometimes we split in very strange positions, while ignoring others. 2015-02-18 13:49:03 +01:00
Dave Halter
38e26892f2 The fast parser doesn't work with open parentheses properly, document that. 2015-02-18 12:50:26 +01:00
Dave Halter
cefd76e5d1 Testing open parentheses in the fast parser. 2015-02-17 17:26:00 +01:00
Dave Halter
506d602795 Fix multi line param issues in the fast parser. 2015-02-17 15:24:49 +01:00
Dave Halter
7663703989 Fix issues with multi line for loops in the fast parser. 2015-02-17 14:57:00 +01:00
Dave Halter
4d9608ea6f Check more precisely for flow keywords. 2015-02-16 16:04:48 +01:00
Dave Halter
e1c28d2c3f variables starting with 'class' and 'def' should not slow down the parser, changed the check to 'class ' and 'def '. 2015-02-16 10:08:22 +01:00
Dave Halter
3680784234 Add another for in one line test for the fast parser. 2015-02-15 20:28:59 +01:00
Dave Halter
db31e0e37d The fast parser now works faster in case of for flows with a simple_stmt after. 2015-02-14 18:57:04 +01:00
Dave Halter
a3b32729a7 Test for an issue with for loops and a statement on the same line. (fast parser) 2015-02-14 16:27:04 +01:00
Dave Halter
4613a810a5 Some small refactorings to the names_dict/deep_ast_copy logic. 2015-02-12 13:24:08 +01:00
Dave Halter
774b3d5ce8 Python 2 compatibility. 2015-02-12 11:36:36 +01:00
Dave Halter
a8d3a9ab42 Remove old deep_ast_copy code. 2015-02-12 11:24:17 +01:00
Dave Halter
bcf6be0636 Radically rewrote deep_ast_copy. 2015-02-12 02:25:54 +01:00
Dave Halter
a12f259a0f Actually remove check_first from deep_ast_copy. 2015-02-11 14:47:28 +01:00
Dave Halter
315c687048 Remove the need for the check_first param in deep_ast_copy. 2015-02-11 14:46:51 +01:00
Dave Halter
bc722a70f2 Simplify deep_ast_copy. 2015-02-11 02:16:57 +01:00
Dave Halter
6e5ba3de87 Fix remaining issue with the Param refactoring. 2015-02-11 01:40:18 +01:00
Dave Halter
cdbe26786a Trying to get rid of the weird param generation in the parser tree. 2015-02-10 15:49:26 +01:00
Dave Halter
8775d90173 Merge the master branch into the dev branch. 2015-02-09 14:41:41 +01:00
Dave Halter
07156b427c Fix some compatibility issues in the test suite for Python 2.7. 2015-02-09 14:15:25 +01:00
Dave Halter
28d3ba6c04 Fix a test about regex goto, don't know how that one even worked in the first place. 2015-02-09 12:35:20 +01:00
Dave Halter
a095f8d9e0 Replace some isinstance checks in the parser tree with .type checks. 2015-02-09 12:27:29 +01:00
Dave Halter
a9a3387cb0 Refactor user scope search. 2015-02-05 21:52:57 +01:00
Dave Halter
8125d5f562 Remove asserts and calculate them dynamically. 2015-02-05 20:16:55 +01:00
Dave Halter
0a3797cf6e Small refactorings. 2015-02-05 19:47:26 +01:00
Dave Halter
2dd08594fc Simplify the indent calculation in the fast parser. 2015-02-05 14:37:24 +01:00
Dave Halter
abe6c8934c Update the parser pickling protocol version. 2015-02-05 14:19:22 +01:00
Dave Halter
d0f1fd5267 Rename Simple -> BaseNode. 2015-02-05 14:18:30 +01:00
Dave Halter
0c1bbf78e2 Rename SubModule to Module, because that's a more fitting description. There were reasons for the name before the new fast parser, but those don't exist anymore. 2015-02-05 14:16:43 +01:00
Dave Halter
c689573b0b Removed the line_offset from tokenize; we have better ways to modify positions now. 2015-02-05 14:00:58 +01:00
Dave Halter
59cf1bce5d Delete legacy code from the fast parser. 2015-02-05 13:47:35 +01:00
Dave Halter
4ace58e29e Make get_statement_for_position faster. 2015-02-05 13:35:43 +01:00
Dave Halter
a77ecdbed6 Remove param from get_statement_for_position. 2015-02-05 12:28:55 +01:00
Dave Halter
2d9c644ab6 Fixed some minor mocking differences in Python 2 and 3. 2015-02-05 01:25:53 +01:00
Dave Halter
109fdc53e0 Fix the remaining fast parser issues. 2015-02-05 01:13:00 +01:00
Dave Halter
b57ee880af Remove assertEqual from tokenize tests, we can do it with just assert, py.test converts all of that automatically. 2015-02-05 00:48:40 +01:00
Dave Halter
fdfe17ada5 Import the token IDs directly, this way we minimize lookups. 2015-02-05 00:44:01 +01:00
Dave Halter
c6b818c504 Changed a tokenize test to match the current intended behavior of the tokenizer. 2015-02-05 00:43:25 +01:00
Dave Halter
3a4235eb33 The interpreter is not using the fast parser anymore. 2015-02-05 00:28:54 +01:00
Dave Halter
dce952aec6 Fix an issue with omitted dedents in the parser. 2015-02-05 00:11:12 +01:00
Dave Halter
e1c623d3f3 Python 2 compatibility. 2015-02-04 17:09:18 +01:00
Dave Halter
e23e354fe8 Simplified the line splitting and with that a few other things in the fast parser. 2015-02-03 22:22:57 +01:00
Dave Halter
66dfa59286 Fix some endmarker prefix issues in the fast parser. 2015-02-03 22:09:55 +01:00
Dave Halter
6cdfecb541 Fix a number of issues in the fast parser around functions with only one statement (no suite) and wrong indentations. 2015-02-02 15:03:57 +01:00
Dave Halter
f9fe6b47eb Fix error statement stacks positions. 2015-02-02 10:43:47 +01:00
Dave Halter
a4bd412801 Fix an issue with the positions of InstanceNames that used the original position_modifier. 2015-02-02 02:29:39 +01:00
Dave Halter
c58cdbbf9b Fix an issue that comes from a combination of property/__slots__/pickle 2015-02-02 00:45:17 +01:00
Dave Halter
e913872192 Merged the tokenize is_identifier changes. 2015-02-01 20:32:01 +01:00
Dave Halter
9a0f1363e3 Start removing the print statements that were used for debugging. 2015-02-01 02:32:52 +01:00
Dave Halter
bc118e8047 Simplify the fast parser tokenizer more. Now it is more readable and less buggy (+bugfixes). 2015-01-31 20:09:44 +01:00
Dave Halter
1826f432c8 Fix an issue in the fast parser splitting. 2015-01-30 15:17:38 +01:00
Dave Halter
413da3b790 Remove the line_offset calculation. We can now also remove it from tokenize. With the position_modifier we have enough tools to change a position; we don't need to do that in tokenize.py. 2015-01-29 17:57:01 +01:00
Dave Halter
a3cdec819e Fix the prefix in tokenize, which was the wrong way around. 2015-01-29 17:10:00 +01:00
Dave Halter
cf1b2ff54b Function tests now pass with the fast parser. 2015-01-29 15:47:38 +01:00
Dave Halter
a221eee02c Fix more issues in the fast parser. 2015-01-29 15:38:38 +01:00
Dave Halter
0a537c05c4 Fix an issue with Function/Flow combination in the fast parser. 2015-01-29 02:24:11 +01:00
Dave Halter
dde0e9c7c6 Fix for loop issues in the fast parser. 2015-01-29 01:36:16 +01:00
Dave Halter
e412694fa2 Fix issues with flows in the fast parser. 2015-01-28 17:06:18 +01:00
Dave Halter
b8c63f366c FastModule seems to be compatible now with the normal Module, because it inherits from it and makes some minor modifications in some cases. 2015-01-28 15:11:53 +01:00
Dave Halter
c7563470b1 We don't need set_global_names, just set the attribute directly. 2015-01-28 15:00:17 +01:00
Dave Halter
d0589430bb FastModule should inherit from SubModule, because it has almost all the same properties. 2015-01-28 14:59:00 +01:00
Dave Halter
6ec89e6785 Fix issues with flows. 2015-01-28 13:03:57 +01:00
Dave Halter
5e8f8f7a8d Fix issues with error correction / newline correction. 2015-01-27 12:24:54 +01:00
Dave Halter
62e45aa42b Fix issues with the new newline end_pos positions. 2015-01-27 02:21:05 +01:00
Dave Halter
4a07f97f10 Reenable a few get_code tests. 2015-01-27 01:19:09 +01:00
Dave Halter
88a3e25814 Fix newline stuff for empty parsers. 2015-01-27 01:15:39 +01:00
Dave Halter
39e869d146 Test added newline module end_pos as well. 2015-01-26 22:02:11 +01:00
Dave Halter
cdae250b36 code -> source and also care for added newlines in the fast parser. 2015-01-26 22:01:39 +01:00
Dave Halter
07c60d7ff6 Fix DEDENT issues in _remove_newline. 2015-01-26 21:17:50 +01:00
Dave Halter
61e2bba380 Tests and implementation to remove the last newline again in the parser tree, to be able to exactly reproduce the parser input. 2015-01-26 21:07:14 +01:00
Dave Halter
e5d265e845 Add a method Leaf.get_previous, to get previous leafs. 2015-01-26 21:02:56 +01:00
Daniel Hahler
8621aae73c Add any .egg-link paths from VIRTUAL_ENV to sys.path
Adding test_get_sys_path required factoring out
`_get_venv_sitepackages`, because `sys.version_info` cannot be mocked
apparently.
2015-01-25 21:35:09 +01:00
Dave Halter
a8943b8a80 Get the position modifiers right. 2015-01-24 20:42:28 +01:00
Dave Halter
446f5b9018 Fix issues with the right count of parsers used. 2015-01-24 20:19:03 +01:00
Dave Halter
4d6afd3c99 Fix fast parser tests. 2015-01-24 00:06:16 +01:00
Dave Halter
8569651bf4 Fast parser simplifications and bug fixes. 2015-01-21 18:34:22 +01:00
Dave Halter
91ab1d0ecd Fix an issue in the fast parser that caused stuff to be parsed always. 2015-01-21 02:03:06 +01:00
Dave Halter
7188105dc7 The fast parser is now in a more readable shape. 2015-01-19 16:21:25 +01:00
Dave Halter
ce793b1066 Trying to restructure the fast parser. 2015-01-19 14:49:44 +01:00
Dave Halter
d6b3b76d26 First fast parser version that actually let a test pass. 2015-01-19 00:39:51 +01:00
Dave Halter
add0cafbf1 Merge pull request #530 from felipeacsi/arch-installation
Updated Arch Linux installation
2015-01-17 13:50:49 +01:00
felipeacsi
f348aaeab6 Updated Arch Linux installation 2015-01-16 14:38:39 -03:00
Dave Halter
01c209dc00 MergedNamesDicts for the parser. 2015-01-16 15:25:58 +01:00
Dave Halter
e477fab856 Playing with the fast parser implementation. 2015-01-16 15:23:49 +01:00
Dave Halter
86391268a7 Merge pull request #528 from KenetJervet/parser
Fixed issue #526.
2015-01-16 13:03:56 +01:00
Savor d'Isavano
c3c07c4ec2 Fixed issue #526. 2015-01-16 18:45:34 +08:00
Dave Halter
cc7483498c Start using the position modifier. 2015-01-15 14:18:22 +01:00
Dave Halter
cf223a71f5 Add a position modifier for the fast parser. Not yet in use though. 2015-01-15 13:57:56 +01:00
Dave Halter
c963706418 Delete legacy logic. 2015-01-15 02:19:48 +01:00
Dave Halter
e82d51e161 Correct a path in memory check. 2015-01-15 02:03:26 +01:00
Dave Halter
95b518e9fc Use the Python 3.4 parser for docstring types.
We had to switch, because Ellipsis was otherwise not parseable.
2015-01-13 13:17:21 +01:00
Dave Halter
e6b9111749 Python 2.7 compatibility. 2015-01-13 02:12:49 +01:00
Dave Halter
cc64265187 Grammar modifications so that the Python2.7 grammar looks more like the Python 3.4 grammar. 2015-01-13 01:05:13 +01:00
Dave Halter
09da6ec0d3 Function annotations don't need to be tested in Python 2.7. 2015-01-13 01:00:08 +01:00
Dave Halter
f59e05f8e7 Switch grammars depending on Python version. 2015-01-12 13:33:44 +01:00
Dave Halter
582b9b01af Get invalid INDENTs working.
The following DEDENTs are removed.
2015-01-12 12:22:57 +01:00
Dave Halter
ef72f4fb6c Test the new error correction feature. 2015-01-12 01:27:25 +01:00
Dave Halter
5c98f6cf04 Suites don't have to contain statements anymore, this makes autocompletion better in certain cases. 2015-01-12 01:11:46 +01:00
Dave Halter
f8570b1f03 Test for error recovery with try statements. 2015-01-09 18:02:15 +01:00
Dave Halter
5334f8dbad Implemented the in operator in a very simple fashion: It returns nothing. 2015-01-09 16:05:09 +01:00
Dave Halter
53b456dff2 Cleaning up. 2015-01-09 01:55:23 +01:00
Dave Halter
e8ef3b8ad4 Remove legacy code. 2015-01-09 01:45:09 +01:00
Dave Halter
d78a89df51 Move filter_after_position. 2015-01-09 01:37:42 +01:00
Dave Halter
26ecb16e5f CompiledObject.type resembles now the Node.type values. 2015-01-09 01:33:59 +01:00
Dave Halter
b75ba1e16c interpreter documentation. 2015-01-08 18:34:55 +01:00
Dave Halter
81c4792349 Simplify the interpreter completion. 2015-01-08 18:30:49 +01:00
Dave Halter
ed7500bfaa Delete deprecations from 0.6.0 and 0.5.0. 2015-01-08 18:22:38 +01:00
Dave Halter
7c6a6006fd Delete commented code. 2015-01-08 18:19:54 +01:00
Dave Halter
301b4ca649 Deprecate NotFoundError, because it wasn't used anymore. 2015-01-08 18:17:37 +01:00
Dave Halter
8ec8a74a3f Removed base in completions 2015-01-08 18:02:55 +01:00
Dave Halter
108cab21f4 Added a closure test that would have failed before the names_dict refactoring. 2015-01-08 17:58:24 +01:00
Dave Halter
144c20579b Get rid of get_defined_names in compiled modules. 2015-01-08 17:53:20 +01:00
Dave Halter
bd304d33c7 Get rid of Function's get_magic_function_X, they are not used anymore. 2015-01-08 14:17:33 +01:00
Dave Halter
47fc3cbdfe Functions are not exceptions anymore in the name finder. 2015-01-08 14:14:01 +01:00
Dave Halter
0dc61292b9 Remove get_defined_names methods from evaluate representation objects. 2015-01-08 13:42:52 +01:00
Dave Halter
6d58fed0e8 Remove get_defined_names in favor of names_dict in the parser tree. 2015-01-08 13:38:03 +01:00
Dave Halter
a20fd12de9 Remove all scope_names_generator usages. 2015-01-08 13:24:01 +01:00
Dave Halter
af20eff943 Get completely rid of get_names_of_scope. 2015-01-08 13:19:42 +01:00
Dave Halter
705b569e32 Get rid of all get_names_of_scope calls. 2015-01-08 12:48:57 +01:00
Dave Halter
05a9f19429 Delete more legacy code. 2015-01-08 02:43:13 +01:00
Dave Halter
7891cdfd48 Start deleting legacy code. 2015-01-08 02:33:35 +01:00
Dave Halter
82d8e45a1c Fix descriptors. 2015-01-08 02:29:33 +01:00
Dave Halter
83a94c12c9 Correct global name issues. 2015-01-08 01:20:53 +01:00
Dave Halter
f5e687bc22 Use names_dicts now for all completions. 2015-01-07 23:49:13 +01:00
Dave Halter
dd40991669 filtering private variables is now also possible for CompiledObject (important for fake/builtins.pym). 2015-01-07 15:09:03 +01:00
Dave Halter
c451c0b29e Private variable filtering improved. 2015-01-07 14:44:19 +01:00
Dave Halter
987121ae5c Filter names in a separate function so that it can be used for both completion and name lookups. 2015-01-07 13:56:35 +01:00
Dave Halter
ec76d57679 Start using names_dicts for completion as well. 2015-01-07 01:49:38 +01:00
Dave Halter
494a3e3307 Fix usages. 2015-01-06 16:54:01 +01:00
Dave Halter
9178d314b0 Add search_global to names_dicts calls. 2015-01-06 15:30:59 +01:00
Dave Halter
b982b746e7 Fix problems with += stmts. 2015-01-06 11:24:13 +01:00
Dave Halter
8bad12522a Fix issues with module attributes 2015-01-06 01:12:55 +01:00
Dave Halter
7abdbb563c Fix list comprehensions 2015-01-06 00:24:11 +01:00
Dave Halter
54fcf7af9d Fix goto. 2015-01-05 23:55:38 +01:00
Dave Halter
65b33013e5 Few small issues. 2015-01-05 23:31:32 +01:00
Dave Halter
9cd8fabf2c Fix issues with generators. 2015-01-05 19:11:09 +01:00
Dave Halter
91710e0310 Versions should be PEP440 compatible, fixes #521. 2015-01-05 13:15:34 +01:00
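A quick sketch, assuming the third-party 'packaging' library (not part of the commit), of the canonical PEP 440 form this fix moves towards: the pre-release tag used below is normalized.

    from packaging.version import Version

    print(Version('0.9.0-alpha0'))  # -> 0.9.0a0, the normalized PEP 440 form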
Dave Halter
1d2704fb68 Descriptors work with names_dicts now. 2015-01-03 11:07:38 +01:00
Dave Halter
177dcf0c0d Merge pull request #522 from msabramo/add_python_3.4_classifier
setup.py: Add python3.4 classifier
2015-01-02 22:38:33 +01:00
Marc Abramowitz
672982a2f5 setup.py: Add python3.4 classifier 2015-01-02 11:07:01 -08:00
Dave Halter
36819b3241 Filtering private variables seems to be working now at least in the evaluation engine. 2015-01-02 01:50:14 +01:00
Dave Halter
8157dd2da8 Fix most instance related issues. 2015-01-02 01:12:14 +01:00
Dave Halter
0478ff907f names_dicts for instances. 2015-01-02 00:19:07 +01:00
Dave Halter
9de4a5479c Start using names_dicts instead of scope_names_generator. 2015-01-01 23:27:03 +01:00
Dave Halter
ed3cf5577e Compiled objects should also have a names_dict. 2014-12-26 12:49:40 +01:00
Dave Halter
bfaef9815c Remove _gen_param_name_copy, because it's not useful anymore. 2014-12-19 12:51:56 +01:00
Dave Halter
e22aed9ef4 Restructure ExecutedParam so that it works better with generated instances. 2014-12-19 12:42:09 +01:00
Dave Halter
4a08335fd8 Simplify ExecutedParam. 2014-12-19 01:21:00 +01:00
Dave Halter
b802e97c18 Delete legacy code from params. 2014-12-19 01:11:14 +01:00
Dave Halter
da582117ac Array.type docstring. 2014-12-19 01:07:51 +01:00
Dave Halter
47615ae786 Remove pr.Array.type identifiers. 2014-12-19 01:05:52 +01:00
Dave Halter
98eb4a71a1 Clean up the parser tree. 2014-12-18 03:38:24 +01:00
Dave Halter
ab9571bccd Remove FakeStatement 2014-12-18 03:24:12 +01:00
Dave Halter
64ebfb0644 Usages/imports cleanup. 2014-12-18 03:22:46 +01:00
Dave Halter
1fb13837c4 Fix import completion issues. 2014-12-18 02:55:03 +01:00
Dave Halter
f8cd3c661a Fix slots in the parser tree. 2014-12-17 20:36:17 +01:00
Dave Halter
b2e54ca1eb The tokenizer now includes all newlines and comments in its prefix. 2014-12-17 20:11:42 +01:00
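To illustrate the prefix idea (a conceptual example, not from the commit):

    source = "x = 1\n# a comment\ny = 2\n"
    # The tokenizer now attaches '# a comment\n' (the comment plus its
    # newline) as the prefix of the following 'y' token instead of
    # discarding it, so the exact source can be rebuilt from the tokens.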
Dave Halter
9cdf6de206 More positioning for backwards tokenizer. 2014-12-17 17:51:12 +01:00
Dave Halter
d918f8be73 Give the backwards tokenizer a better structure and comments. 2014-12-17 17:30:00 +01:00
Dave Halter
f164dd8892 Fix some newline issues in the backwards tokenizer. 2014-12-17 14:56:46 +01:00
Dave Halter
6eb2af301d Simplifying reversed line generation of user_context. 2014-12-17 14:04:54 +01:00
Dave Halter
62609cb6f1 Resolve tox issues. 2014-12-17 14:01:00 +01:00
Dave Halter
c6315e0b45 todo updates. 2014-12-17 01:48:32 +01:00
Dave Halter
0147a7f68d usages cleanup 2014-12-17 01:44:06 +01:00
Dave Halter
4897791901 Remove old precedence stuff. 2014-12-16 18:28:45 +01:00
Dave Halter
7f95a9806a api cleanup. 2014-12-16 18:13:49 +01:00
Dave Halter
5730e5add0 parser tree docstring updates. 2014-12-16 18:10:28 +01:00
Dave Halter
f702a91813 iterable cleanup. 2014-12-16 17:45:01 +01:00
Dave Halter
580dcb06ff Clean up the dynamic module. 2014-12-16 17:39:50 +01:00
Dave Halter
576a1182af Remove legacy code from param. However, there's still work needed on params. 2014-12-16 17:37:20 +01:00
Dave Halter
3d080afd71 Cleanup finder. 2014-12-16 17:31:28 +01:00
Dave Halter
869b0b4189 Cleaning up api classes. 2014-12-16 17:23:59 +01:00
Dave Halter
237f0e526c Cleaning up evaluate.helpers. 2014-12-16 17:19:14 +01:00
Dave Halter
6821ccba91 Add the pgen2 packages and grammar files to be able to deploy Jedi. 2014-12-16 15:23:49 +01:00
Dave Halter
e53e211325 Python 2 compatibility in fake module. 2014-12-16 02:07:20 +01:00
Dave Halter
d5e3a09c44 Python 2 compatibility with the new tokens. 2014-12-16 02:03:05 +01:00
Dave Halter
fd1cb86765 Now able to remove both tokenize and token from pgen2. 2014-12-16 02:00:33 +01:00
Dave Halter
d9d3740c92 Trying to replace the old pgen2 token module with a token module more tightly coupled to the standard library. 2014-12-16 01:52:15 +01:00
Dave Halter
eaace104dd Replace the tokenizer's output with a tuple (switching back from a Token class). 2014-12-16 00:10:07 +01:00
Dave Halter
680fdd574b Remove some old unused tokenize stuff. 2014-12-15 17:44:40 +01:00
Dave Halter
955f125c0d Trying to remove token from pgen2. 2014-12-15 17:36:15 +01:00
Dave Halter
491b4ad76d Pgen2 license amendments. 2014-12-15 17:29:32 +01:00
Dave Halter
b911a39fb4 The driver file is now empty. 2014-12-15 17:27:27 +01:00
Dave Halter
55a6dbc8a2 Remove the old driver code of pgen2. 2014-12-15 17:18:01 +01:00
Dave Halter
4e0172a915 Partial parser.__init__' cleanup. 2014-12-15 16:21:35 +01:00
Dave Halter
af303e10c8 Statement -> ExprStmt. 2014-12-15 16:18:09 +01:00
Dave Halter
9431d89797 Imports cleanup. 2014-12-15 16:07:43 +01:00
Dave Halter
4af51a9516 Removing legacy code from evaluate/representation. 2014-12-15 16:02:19 +01:00
Dave Halter
b03330c5d7 Updating the docs of evaluate/__init__. 2014-12-15 16:00:16 +01:00
Dave Halter
5f892d62a6 Delete legacy code from evaluate. 2014-12-15 15:34:15 +01:00
Dave Halter
f2d35c3ff1 Reenable star import caching. 2014-12-15 15:19:22 +01:00
Dave Halter
24cfa62c8a documentation 2014-12-15 15:10:44 +01:00
Dave Halter
f0c6e5709c Some temporary args/kwargs related changes to static analysis. 2014-12-15 14:58:16 +01:00
Dave Halter
4a8bbd9583 Restructure dynamic param search, so that it can be cached better. 2014-12-15 13:39:53 +01:00
Dave Halter
70e80a5d1c star argument bug fixes. 2014-12-13 08:37:20 +01:00
Dave Halter
7d9f85c762 invalid star star arguments. 2014-12-13 08:34:03 +01:00
Dave Halter
ddd4d675f6 star args improvements 2014-12-13 08:17:38 +01:00
Dave Halter
1b48f6fbce Fix static analysis' argument tests. 2014-12-13 07:33:03 +01:00
Dave Halter
a4c454c103 Fix for unwanted NameError exception in static analysis with named params. 2014-12-12 14:52:34 +01:00
Dave Halter
a762e0bcec Fix a potential issue with star args. 2014-12-12 14:30:42 +01:00
Dave Halter
e8cc8f0a83 Get hasattr checks completely working 2014-12-12 02:34:25 +01:00
Dave Halter
8eaa008b5f Fix try/except checks in static analysis. 2014-12-12 02:26:16 +01:00
Dave Halter
c3106c10ef Fix flow's AttributeError detection. 2014-12-11 19:26:49 +01:00
Dave Halter
d11ea73ef4 Re-enable AttributeError/NameError detection for more complicated occurrences than just statements. 2014-12-11 19:18:00 +01:00
Dave Halter
77fdbac234 static analysis: Import tests working again. 2014-12-11 16:25:18 +01:00
Dave Halter
6818d3affa Implement Import.is_nested method. 2014-12-11 16:17:07 +01:00
Dave Halter
6406bfb3c2 First static analysis test working. 2014-12-11 15:42:16 +01:00
Dave Halter
6afc5ccca5 Few docstring fixes. 2014-12-11 15:32:45 +01:00
Dave Halter
003d1249c5 empty import statement completion. 2014-12-11 15:24:19 +01:00
Dave Halter
bf8645d615 namedtuple fix 2014-12-11 13:08:09 +01:00
Dave Halter
d6b2a64343 Some small import completion fixes. 2014-12-11 13:00:57 +01:00
Dave Halter
c4c3ef5a21 goto_definition on a name definition (statement) should land on the statement. 2014-12-11 12:48:23 +01:00
Dave Halter
d8067a7286 Small test corrections. 2014-12-11 04:44:27 +01:00
Dave Halter
2dd8ed2270 Fix interpreter stuff, fix slicing with CompiledObject and a few other things. 2014-12-11 04:24:50 +01:00
Dave Halter
4aac363413 Some changes to the interpreter completions. 2014-12-11 03:49:05 +01:00
Dave Halter
220610bbf4 Importer now handles follow rest as well. 2014-12-11 02:28:55 +01:00
Dave Halter
48d2e99e55 os.path handling. 2014-12-11 01:49:59 +01:00
Dave Halter
ef0958a43c Temporarily change the behavior of a defined names test. Hard to say how we really want it to behave. 2014-12-11 00:44:31 +01:00
Dave Halter
d0ade9b2e9 Set a new version number: 0.9.0-alpha0. 2014-12-11 00:42:34 +01:00
Dave Halter
bb7bbf51ec Deprecate jedi.defined_names in favor of jedi.names. 2014-12-11 00:41:36 +01:00
Dave Halter
243fb8ef34 Small import fix. 2014-12-11 00:14:03 +01:00
Dave Halter
23417f0288 Fix docstrings in fake/skeleton objects. 2014-12-11 00:05:49 +01:00
Dave Halter
95620accdb Fix tests for namespace packages. 2014-12-10 19:19:13 +01:00
Dave Halter
897c4cded6 Fix issues with sys.path modifications that directly assign the sys.path or use the slicing notation. 2014-12-10 19:18:53 +01:00
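The patterns named in the commit, as source code Jedi must now understand statically (illustration only; the paths are placeholders):

    import sys

    sys.path = ['/srv/lib'] + sys.path  # direct assignment
    sys.path[0:0] = ['/srv/plugins']    # slice assignment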
Dave Halter
5af665abd8 Dynamic array checking in combination with FakeSequences might have caused an exception. 2014-12-10 11:42:02 +01:00
Dave Halter
4bef8895a0 Fix dynamic arrays: They work in instances, now. 2014-12-10 11:34:11 +01:00
Dave Halter
d4dfcfe321 NameFinder refactoring to make it possible to cache names_to_types. 2014-12-10 11:23:12 +01:00
Dave Halter
2536dede28 Check for recursions in dynamic arrays. 2014-12-10 02:02:55 +01:00
Dave Halter
e429144979 Fix some list.append stuff combined with function executions. 2014-12-10 01:58:04 +01:00
Dave Halter
5ed914ea21 dynamic array improvements. 2014-12-08 20:18:33 +01:00
Dave Halter
1c44336d60 First array addition working. 2014-12-08 18:25:38 +01:00
Dave Halter
4c3584ed3c Removed the dynamic_arrays_for_instances setting, because it's a subset of dynamic_array_additions, which is more concise. 2014-12-08 16:36:37 +01:00
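A minimal sketch of what dynamic array additions cover (illustration only):

    lst = []
    lst.append(42)
    lst[0]  # with the setting enabled, inferred as int despite the
            # empty literal above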
Dave Halter
936a3c9dfe Small cleanup: Removed a few print statements. 2014-12-08 16:03:23 +01:00
Dave Halter
51d309b0a8 Moved keyword completion around to get it working in all cases. 2014-12-08 15:52:05 +01:00
Dave Halter
94ea2c1096 Issues with argument clinic parser. 2014-12-08 15:45:40 +01:00
Dave Halter
01b9361b33 Reenable keyword completion. 2014-12-08 15:14:27 +01:00
Dave Halter
5cc9dd57a6 Finally fixed the last on_import issue, which was that goto was not working on incomplete import statements. Still a bit messy, though. 2014-12-08 15:02:35 +01:00
Dave Halter
034d782e65 Last few on_import fixes. 2014-12-08 14:15:21 +01:00
Dave Halter
6cc4d71822 Import completion improvements. 2014-12-08 13:47:23 +01:00
Dave Halter
7cc2a07cd3 Small full_name improvements. 2014-12-08 12:38:59 +01:00
Dave Halter
8868b87d42 Make imports stuff in API classes work. Now goto on imports follows even aliases. 2014-12-08 12:04:09 +01:00
Dave Halter
0ad6aeba6b Fix some API classes issues. Among them call signature generation and Definition.parent() issues. 2014-12-08 02:32:43 +01:00
Dave Halter
0f01242954 named param goto. 2014-12-08 01:52:32 +01:00
Dave Halter
6ad9c5ee76 Small fixes to the parser tests. 2014-12-08 00:58:28 +01:00
Dave Halter
dffce937f2 With the old parser we did more complicated checking for invalid statements, now the new parser does it by itself. Therefore we can stop doing crazy regex stuff in the API. 2014-12-08 00:52:40 +01:00
Dave Halter
0c77e9960a NotFoundError doesn't really exist anymore. We're deprecating it, so change the corresponding tests. 2014-12-08 00:48:06 +01:00
Dave Halter
d6595ad020 Fixed more parser tests. 2014-12-08 00:36:09 +01:00
Dave Halter
fe8a99dfd5 Fix the parser tests. 2014-12-08 00:32:38 +01:00
Dave Halter
34c9422749 The parser tests should also give the parser a grammar. 2014-12-08 00:22:33 +01:00
Dave Halter
f0c430e20c On import problem with name completion of modules. 2014-12-08 00:16:01 +01:00
Dave Halter
b24bf29fc2 Fixed named argument call signature stuff and issues with classes and call signature params. 2014-12-07 23:55:44 +01:00
Dave Halter
bb747a83e8 Small fix with big impact for the previously done simple_stmt error recovery. Now it actually works. 2014-12-07 19:45:19 +01:00
Dave Halter
2b7434342e Fix absolute imports. 2014-12-07 18:51:14 +01:00
Dave Halter
eead122636 Use grammar in test scripts. 2014-12-07 18:22:11 +01:00
Dave Halter
6058855dd3 test_helpers doesn't make sense anymore, because the only test it consisted of was a test with StatementElement, which does not exist anymore in the new parser. 2014-12-07 18:15:18 +01:00
Dave Halter
e3ab56504e Fixed and simplified flask imports. 2014-12-07 18:11:05 +01:00
Dave Halter
db636c35ae Error recovery should not delete parts of simple_stmt. 2014-12-07 18:04:55 +01:00
Dave Halter
33b39c2b5d Don't use the old setup_function/teardown_function pytest stuff. It's very implicit and hard to understand. 2014-12-07 17:21:52 +01:00
Dave Halter
49b34b4d01 Stuff mostly related to namespace packages. 2014-12-07 16:51:54 +01:00
Dave Halter
528b325c39 Remove precedence tests. They are not needed anymore, since precedence is now handled by the parser itself. 2014-12-07 14:41:57 +01:00
Dave Halter
b94a09f360 Fix end_pos of Literals and Whitespace leafs. 2014-12-07 14:28:40 +01:00
Dave Halter
fe1d7b7030 Replace the old tokenizer tests with the refactored attributes. 2014-12-07 14:19:21 +01:00
Dave Halter
ea4f7053d6 Fix completion/definition.py tests. 2014-12-07 14:13:59 +01:00
Dave Halter
e1e5c3a6c7 Progress with call signatures. 2014-12-07 13:56:40 +01:00
Dave Halter
24903739f2 A first implementation of call signatures. 2014-12-05 16:05:54 +01:00
Dave Halter
ab254bbcba Call signature search progress. 2014-12-05 00:23:59 +01:00
Dave Halter
24c7142810 Fix issues with scope ordering in classes/functions. 2014-12-04 18:49:09 +01:00
Dave Halter
774ade955d Fixing for loop additions. 2014-12-04 17:58:01 +01:00
Dave Halter
a96d1b8d0f fix something with not/- prefixes. 2014-12-04 17:51:14 +01:00
Dave Halter
478acf8ccf partial is now working partially with the new parser, because invalid statements (**kwargs given twice) are not possible anymore. 2014-12-04 14:29:37 +01:00
Dave Halter
8f1002218d Very temporary solution for doing deep_ast_copy. 2014-12-04 11:19:33 +01:00
Dave Halter
aa9057be38 Small fix for builtins. 2014-12-04 02:01:30 +01:00
Dave Halter
1725abb1fd Fix issues with docstrings. 2014-12-03 20:30:03 +01:00
Dave Halter
f1431cef40 Decorator fixes. 2014-12-03 17:09:30 +01:00
Dave Halter
09ad3411da Goto fixes. 2014-12-03 17:01:29 +01:00
Dave Halter
6314b80abd Some goto refactorings. 2014-12-03 16:52:05 +01:00
Dave Halter
b2267d3878 Fix usages. 2014-12-03 16:34:31 +01:00
Dave Halter
6bf154de5e Better goto for imports, which helps usages. 2014-12-03 16:15:31 +01:00
Dave Halter
536c188192 Change get_self_vars. Now using py__mro__ to avoid recursions. 2014-12-03 13:04:53 +01:00
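For intuition (plain Python, not Jedi internals): an MRO is a finite, linearized list, so walking it cannot loop, unlike naive recursion over base classes in a diamond hierarchy.

    class A: pass
    class B(A): pass
    class C(A): pass
    class D(B, C): pass

    print([cls.__name__ for cls in D.__mro__])
    # ['D', 'B', 'C', 'A', 'object'], so A is visited exactly once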
Dave Halter
b9e7a2eb95 Fix assert issues in combination with comprehensions. 2014-12-02 17:55:42 +01:00
Dave Halter
5f89ceb385 Add the type attribute to all classes in the tree, because nodes have it as well. 2014-12-02 17:50:55 +01:00
Dave Halter
425741e285 Fix assertion/isinstance type information. 2014-12-02 17:45:19 +01:00
Dave Halter
cf0407e164 Add 'if isinstance' type information. 2014-12-02 17:34:36 +01:00
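A sketch of the narrowing this adds (illustration only, not part of the commit):

    def describe(obj):
        if isinstance(obj, str):
            return obj.upper()  # inside the branch, obj is known to be a
                                # str, so completions can offer str methods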
Dave Halter
99febfe6c2 Fixed a very nasty bug in deep_ast_copy. 2014-12-02 04:19:22 +01:00
Dave Halter
235672efc1 Fix an issue for stdlib regex completion. deep_ast_copy had a bug and also changed the way how decorators work. 2014-12-01 18:09:21 +01:00
Dave Halter
2515d283be __getitem__ in instances. 2014-12-01 15:41:13 +01:00
Dave Halter
0ab9d331f8 Issues with dictionary/list/tuple literal methods. 2014-12-01 15:36:36 +01:00
Dave Halter
e51a393e4c Fix reversed. 2014-12-01 12:41:47 +01:00
Dave Halter
3cc4da28ed Fix lambda_nocond. 2014-12-01 11:56:28 +01:00
Dave Halter
bcd998ae02 Lambdas are own namespaces and deserve their own used_names dictionary in the parser. 2014-12-01 11:49:52 +01:00
Dave Halter
7c9de1fbeb Some class lambda tests to assure that they work well with instances. 2014-12-01 11:36:19 +01:00
Dave Halter
50752df6dd Fix an issue with combinations of InstanceElement and Lambdas. 2014-12-01 11:26:35 +01:00
Dave Halter
88853c78f4 Get lambdas mostly working. 2014-12-01 02:47:48 +01:00
Dave Halter
4ee5ad4ce3 iterating list comprehensions should be possible. 2014-12-01 01:08:50 +01:00
Dave Halter
ed1915eea0 Fixes for goto on list comprehensions. 2014-12-01 01:02:41 +01:00
Dave Halter
68bd9160e2 Fixed list comprehension name lookups. 2014-12-01 00:08:27 +01:00
Dave Halter
3928f466cf Fix positioning of the user statements. 2014-11-29 16:20:12 +01:00
Dave Halter
cd7044cae3 Don't use NotFoundError anymore, since it's very ambiguous what that would imply. 2014-11-29 15:57:18 +01:00
Dave Halter
0184e80120 dynamic_params correction. 2014-11-29 13:49:50 +01:00
Dave Halter
417db4e83f suites without indent can also be deleted. 2014-11-29 13:30:21 +01:00
Dave Halter
a7560069b0 Fixes for issues with empty compound_stmt. We always remove a whole stmt, not just a funcdef, as an error correction. 2014-11-29 13:25:31 +01:00
Dave Halter
3fb1934462 Fix invalid test issues. 2014-11-29 01:35:26 +01:00
Dave Halter
2b912cb75a The func/class dictionaries must be changed if some scopes are removed by the parser's error recovery. 2014-11-29 01:29:21 +01:00
Dave Halter
43c01afcfc invalid.py test changes. Error recovery will be different from the old one. 2014-11-28 21:58:44 +01:00
Dave Halter
2c684906e3 Working with dedents in error recovery. 2014-11-28 21:33:40 +01:00
Dave Halter
31600b9552 classes and functions are new statements and should never get removed by the error recovery. 2014-11-28 02:44:34 +01:00
Dave Halter
128dbd34b6 Check parentheses level in tokenizer. 2014-11-28 02:14:38 +01:00
Dave Halter
e1d6511f2f Trying to move the indent/dedent logic back into the tokenizer. 2014-11-28 02:04:04 +01:00
Dave Halter
97516eb26b The new tokenizer is more or less working now. Indents are calculated as they should be. 2014-11-27 16:03:58 +01:00
Dave Halter
c0df7003a5 Allow both the old tokenizer and the new one (able to toggle). 2014-11-27 01:12:49 +01:00
Dave Halter
c7862925f5 Small tokenizer changes & tokens now have a prefix attribute instead of preceeding_whitespace. 2014-11-27 01:10:45 +01:00
Dave Halter
02cb1fef95 Rename test_tokenizer to test_tokenize. 2014-11-26 16:16:58 +01:00
Dave Halter
cc1098b93c Fix a few tokenize tests and merge them back together. 2014-11-26 16:09:28 +01:00
Dave Halter
f43c371467 Merge @joel-wright's whitespace tokenizer branch. Thanks! 2014-11-26 15:56:11 +01:00
Dave Halter
427056a22d Change the pgen2 parser and its driver so that it can be accessed easily from the outside. This is a minor change and will allow Jedis tokenizer to work with pgen2. 2014-11-26 15:38:53 +01:00
Dave Halter
cd1e07a532 The now passing on_import tests should not worsen the performance of the other tests. 2014-11-26 03:11:22 +01:00
Dave Halter
f24a3bf997 Fix on_import tests. 2014-11-26 03:07:41 +01:00
Dave Halter
1326a2137d Change the backwards tokenizer so that keywords always stop. 2014-11-26 02:32:13 +01:00
Dave Halter
a940c31a86 Improvements to on import completion. 2014-11-26 02:13:24 +01:00
Dave Halter
149b4d8ad5 Import completion on syntactically correct imports. 2014-11-26 01:15:40 +01:00
Dave Halter
499c62df43 Fixes for os.path import 2014-11-25 19:39:14 +01:00
Dave Halter
5d82b11f59 First implementation to be ready to complete corrupt imports. Working ok. 2014-11-25 19:35:27 +01:00
Dave Halter
e72eaf7a59 on import completion preparations. 2014-11-25 15:10:36 +01:00
Dave Halter
52d4aaebbe small fix for docstring parsing. 2014-11-25 15:10:19 +01:00
Dave Halter
5de84afff4 Fix __getitem__ 2014-11-24 02:10:02 +01:00
Dave Halter
fae0a7b0c4 Small fixes for past mistakes. 2014-11-24 01:56:54 +01:00
Dave Halter
db76bbccc5 Trying to change the symbols in node. They are now strings.
With this change we are finally able to get rid of parser/pytree.py
2014-11-24 01:52:41 +01:00
Dave Halter
9f45f18ad1 Added a grammar param to the parser. 2014-11-24 01:10:39 +01:00
Dave Halter
c152a1c58b Actually replace tree with representation (in all the imports). 2014-11-23 19:46:52 +01:00
Dave Halter
1fbc4c9196 Change parser.representation to parser.tree. It's shorter and says more. 2014-11-23 19:35:46 +01:00
Dave Halter
ac41c31015 Removed more of the old parser representation code. 2014-11-23 19:33:18 +01:00
Dave Halter
9b54541cae Remove quite a bit of the old parser representation logic. 2014-11-23 19:26:30 +01:00
Dave Halter
0f21d38e2c Remove the old parser. 2014-11-23 19:17:50 +01:00
Dave Halter
267016f533 Function for evaluating functions with already executed arguments. 2014-11-23 19:12:25 +01:00
Dave Halter
8adfc47297 Fix some issues with params. 2014-11-23 12:22:03 +01:00
Dave Halter
c10ec4f876 star arg iteration improved. 2014-11-23 12:05:19 +01:00
Dave Halter
f1cbd45575 Usages are pretty solid now except for parser issues. 2014-11-22 15:43:23 +01:00
Dave Halter
b82e1e28e5 Get at least some usages stuff right. 2014-11-22 02:05:36 +01:00
Dave Halter
22fbcf6c77 More goto improvements. 2014-11-21 15:45:17 +01:00
Dave Halter
eb0bfb4381 get_code in Definition.description should not return first prefix. 2014-11-21 15:33:38 +01:00
Dave Halter
f604066288 First small implementation of goto. 2014-11-21 14:21:00 +01:00
Dave Halter
fd16dfe2c7 Fix the first part of sys path checks. 2014-11-21 01:46:18 +01:00
Dave Halter
11fa71bac8 Fixes for nested star imports. 2014-11-20 14:56:47 +01:00
Dave Halter
3b7454e294 copy fixes. 2014-11-20 14:51:01 +01:00
Dave Halter
83b09f6c1e Small issue with is_definition and params. Found by looking at stdlib/random.choice tests. 2014-11-20 14:48:34 +01:00
Dave Halter
cc465364d3 Fixes towards better MergedArray and partial functions. 2014-11-20 13:33:05 +01:00
Dave Halter
a6e1348757 type implementation. 2014-11-20 12:31:11 +01:00
Dave Halter
f2e3a3d090 Just rebuilt reversed. 2014-11-20 11:56:58 +01:00
Dave Halter
53c2a1679c Fix types tests. 2014-11-20 11:16:52 +01:00
Dave Halter
2c3a7b6d6c Imports are not Statements. 2014-11-20 02:29:24 +01:00
Dave Halter
164518b993 Get docstrings working. 2014-11-20 02:19:01 +01:00
Dave Halter
ce5d428d22 At least functions generate docstrings again. 2014-11-20 01:37:18 +01:00
Dave Halter
22b288fc73 Change docstring test function names, so that dynamic param completion doesn't interfere (especially because some of those files still generate parser errors). 2014-11-20 00:35:48 +01:00
Dave Halter
b5418d9d73 Small issue with dynamic params. 2014-11-20 00:25:03 +01:00
Dave Halter
e6364fdd8b Fix os.path issues. 2014-11-19 18:40:28 +01:00
Dave Halter
ba0e61d99f Star imports are now part of the ModuleWrapper. 2014-11-19 18:09:49 +01:00
Dave Halter
08bdcfb8ca Small issue with relative imports that don't contain a path after from. 2014-11-19 15:22:18 +01:00
Dave Halter
aeaf073ca2 Move some tests that targeted completion on import statements into a separate file. 2014-11-19 14:14:27 +01:00
Dave Halter
bb9d6b4832 Temporarily disable on import completion. Not sure if we're going to do it with the normal parser. 2014-11-19 13:24:45 +01:00
Dave Halter
c71646a9a0 Fixed relative imports. 2014-11-19 13:13:06 +01:00
Dave Halter
6c5f3419ff 'as' test and implementation for ImportName. Working pretty well. 2014-11-19 13:07:08 +01:00
Dave Halter
e630eeb397 Care for import aliases better. 2014-11-19 12:45:39 +01:00
Dave Halter
1c240e75d3 Replace get_all_import_names with a leaf search method in Simple. 2014-11-19 01:31:08 +01:00
Dave Halter
bab6788b42 split and improved Import*._paths 2014-11-19 00:45:20 +01:00
Dave Halter
3c6d5dafb1 Split Import; now there are ImportFrom and ImportName, as they exist in the Python grammar. 2014-11-19 00:40:16 +01:00
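The grammar distinction being mirrored (excerpt from CPython's Grammar file):

    import_stmt: import_name | import_from
    import_name: 'import' dotted_as_names

e.g. 'import os.path' parses as an import_name, while 'from os import path as p' parses as an import_from.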
Dave Halter
535a69e499 Small improvements to from imports 2014-11-18 18:43:16 +01:00
Dave Halter
9d5f3162d7 More import stuff. Fake imports work a little bit better. 2014-11-18 18:22:26 +01:00
Dave Halter
a4a767f8bb Import improvements. 2014-11-18 17:19:15 +01:00
Dave Halter
b0109343e4 Jedi didn't care for decorator 'dotted_name' nodes and therefore descriptor tests failed. 2014-11-18 15:44:40 +01:00
Dave Halter
90ce1ac47f Simplify decorators, make them easier to read in debugging mode. 2014-11-18 15:04:20 +01:00
Dave Halter
6d866eb915 Small cleanup: Dynamic params comparison works now with the evaluator functions. 2014-11-18 13:28:24 +01:00
Dave Halter
78b7b8ffaf add a test that wasn't working with the old dynamic param lookup. 2014-11-18 13:08:27 +01:00
Dave Halter
93ffc799f5 Because of the change in dynamic params, we can now remove the decorator hack in the name finder. 2014-11-18 13:06:20 +01:00
Dave Halter
f9276a8bd2 Fix issues with decorators and dynamic params combined. 2014-11-18 13:04:05 +01:00
Dave Halter
4fa78e3482 Fix the last remaining issue with decorators. 2014-11-17 23:59:38 +01:00
Dave Halter
fd8752f285 Now most decorator tests pass. 2014-11-17 23:46:05 +01:00
Dave Halter
f62f181066 First decorator implementations. 2014-11-17 22:24:54 +01:00
Dave Halter
df5df1ccf5 FakeArray recursion issues. 2014-11-17 20:41:32 +01:00
Dave Halter
d49a8fc073 The test runner should only start tests if the name starts with the same letters. 2014-11-17 17:56:43 +01:00
Dave Halter
9ac66261c9 Fix dynamic param stuff combined with list comprehensions. 2014-11-17 17:54:12 +01:00
Dave Halter
ae3ff35674 Fix issues with buggy trailers in dynamic params. 2014-11-17 17:49:34 +01:00
Dave Halter
b57e4c4e7c dynamic test descriptions. 2014-11-17 17:18:03 +01:00
Dave Halter
9054a3f674 Split dynamic tests into dynamic_params and dynamic_arrays. 2014-11-17 17:11:58 +01:00
Dave Halter
da5273ce20 Fix issues with dynamic class parameter completion. 2014-11-17 17:07:32 +01:00
Dave Halter
259aa6bd5f First dynamic params working. 2014-11-17 16:23:18 +01:00
Dave Halter
22f20ec715 Treat executed params different from normal ones. 2014-11-17 13:35:16 +01:00
Dave Halter
2dfbc2a0fd Start to rework dynamic params. However goto is now needed first. 2014-11-17 12:34:32 +01:00
Dave Halter
0567a886c4 Fixed an issue with set literals. (The Array type was wrong before.) 2014-11-14 16:54:54 +01:00
Dave Halter
fce715b867 By disabling dynamic arrays completely, arrays are now almost fully passing. 2014-11-14 16:43:53 +01:00
Dave Halter
7049ad58db Small fix for dict lookups. 2014-11-14 16:03:09 +01:00
Dave Halter
01178d30ea Disable dynamic array instances and fix an issue with missing the arguments class. 2014-11-14 15:55:02 +01:00
Dave Halter
e64c78503e Fix some issues with array arguments. 2014-11-14 15:19:47 +01:00
Dave Halter
278bc9d705 Fix generators. 2014-11-14 02:05:25 +01:00
Dave Halter
13a128b160 global working. 2014-11-13 18:13:56 +01:00
Dave Halter
f3c2b4fc33 Get with statements working. 2014-11-13 12:51:49 +01:00
Dave Halter
541b8872d0 Changed is_node so it can actually deal with InstanceElements. 2014-11-13 01:15:44 +01:00
Dave Halter
1ab67ebbba Fix except statements names. 2014-11-13 01:02:56 +01:00
Dave Halter
2fc67b97e5 Disable analysis for now. 2014-11-13 00:28:42 +01:00
Dave Halter
f0a3c37fa0 __file__ should be listed as a module attribute. 2014-11-13 00:24:40 +01:00
Dave Halter
408eee50dd Start fixing issues with for loops and += operations. 2014-11-12 14:54:52 +01:00
Dave Halter
13c2279fea Make Generator tuple assignments work. 2014-11-12 13:48:08 +01:00
Dave Halter
65c18f143c Separate some Generator stuff. 2014-11-12 13:42:24 +01:00
Dave Halter
65f182ff0d Separate class for Generator and List Comprehensions. 2014-11-12 13:28:31 +01:00
Dave Halter
f760a7755d Nested list comprehensions seem to be working pretty well. 2014-11-12 12:30:59 +01:00
Dave Halter
c326562c27 Implemented the 'x if foo else y' case. 2014-11-12 11:49:27 +01:00
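The conditional-expression case being handled (illustration only):

    flag = input() == 'y'
    value = 'text' if flag else 42  # inference must merge both branches:
                                    # str and int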
Dave Halter
54c5591ccb Progress with list comprehensions. There is now a separate class. 2014-11-12 11:42:31 +01:00
Dave Halter
cc661473bc Trying to change used_names, so that they don't contain name definitions from CompFor. 2014-11-11 17:13:27 +01:00
Dave Halter
460d959988 Implicit tuples as a separate class. 2014-11-06 18:24:55 +01:00
Dave Halter
8200b68549 Fix for loops and for loops tuple assignments. 2014-11-06 17:21:00 +01:00
Dave Halter
3a9e9e29e1 Small pytree changes. 2014-11-06 14:17:34 +01:00
Dave Halter
00454daf57 change the new parser tests 2014-11-06 14:16:17 +01:00
Dave Halter
aa0c73c9ab Fixed a few small things in the parser. Flow analysis is working again. Completely. 2014-11-06 04:41:23 +01:00
Dave Halter
56102e408e Small glitch with the change of return statements to loops in precedences. 2014-11-06 04:06:40 +01:00
Dave Halter
d58046f38f Changed normal precedence calculation. 2014-11-06 03:47:42 +01:00
Dave Halter
fae798adfe Simplify precedence.factor_calculate. 2014-11-05 23:21:32 +01:00
Dave Halter
ca70d32f23 isinstance fixes 2014-11-05 22:14:38 +01:00
Dave Halter
fa0f4b1e00 Further small flow_analysis corrections. Keywords are only equal to other keywords if they are the same. Not in case of the same value anymore. 2014-11-05 21:29:54 +01:00
Dave Halter
186ce2b70a Improve flow analysis a bit. 2014-11-05 19:18:45 +01:00
Dave Halter
9549c2b389 Last few small changes to the parser. Now beginning to work on the tests again. 2014-11-05 17:39:56 +01:00
Dave Halter
70bc6642d8 symbols on the stack are now defined with a smaller tuple. 2014-11-05 10:27:24 +01:00
Dave Halter
da5dd0efa1 simplify the parser stack. 2014-11-05 00:35:58 +01:00
Dave Halter
61df804e6e context is not needed for nodes. 2014-11-04 19:49:39 +01:00
Dave Halter
3b4a8dcd7e convert: run away from tuples. 2014-11-04 19:26:33 +01:00
Dave Halter
73bd576bb2 Merged some convert stuff. 2014-11-04 19:12:02 +01:00
Dave Halter
3518123afa Moved the convert function away from pytree 2014-11-04 19:09:58 +01:00
Dave Halter
f3e4bf9ed1 Splitting up the convert function into leaves and nodes. 2014-11-04 18:54:25 +01:00
Dave Halter
d483d50284 Small optimizations to make parsing faster. 2014-11-04 17:23:16 +01:00
Dave Halter
c6c2768dda Parser error recovery simplified. Just fall back to scopes if something's wrong. 2014-11-04 09:40:32 +01:00
Dave Halter
8c775e0a18 Resolved if/else issues in instances with get_defined_names.
This also means that class tests are now passing, except for private variables, which are not that important.
2014-11-04 00:24:01 +01:00
Dave Halter
1d2980cd2d Remove a flow information thing for now. 2014-11-03 18:27:31 +01:00
Dave Halter
55db65434c Add the parsers flow information as classes to parser.representation. 2014-11-03 18:25:10 +01:00
Dave Halter
e25684d470 Start to check for name positions with names_dict name finder. 2014-11-03 15:58:56 +01:00
Dave Halter
4676998fb5 Playing with params/names_dict 2014-11-03 13:38:57 +01:00
Dave Halter
f4d7020ebf Improve Name.is_definition 2014-11-02 17:53:04 +01:00
Dave Halter
81174741a4 After error recovery, names need to be removed from the used_names list, because they are not actually defined in the current AST. 2014-11-02 17:40:45 +01:00
Dave Halter
1ff4713848 Move error recovery function. 2014-11-02 17:24:06 +01:00
Dave Halter
0c3cba166e Make names_dict available in modules. 2014-11-02 14:22:00 +01:00
Dave Halter
afca0ef047 The user context parser should ignore keywords if they are not standing alone. 2014-10-30 01:56:08 +01:00
Dave Halter
1bb0eccc86 Simple error recovery in the parser: For now just discard everything that is not a suite or file_input, if we detect an error. 2014-10-30 01:40:22 +01:00
Dave Halter
f09ff04fcc Yield expressions are now separate from ReturnStmt. 2014-10-29 18:54:05 +01:00
Dave Halter
71c3d34965 Increase ParserPickling.version. 2014-10-29 01:46:54 +01:00
Dave Halter
1c09a90ac1 Disable usage of filter_private_variable for now. 2014-10-28 17:00:12 +01:00
Dave Halter
4f2223ae7b Super is now working. Still has the same flaws as the old implementation, but the tests are passing. 2014-10-28 16:22:59 +01:00
Dave Halter
500ac9b384 A first super() implementation in stdlib. 2014-10-28 15:50:05 +01:00
Dave Halter
d27df89681 A temporary argument clinic implementation for the stdlib. 2014-10-28 14:56:22 +01:00
Dave Halter
f8bb369467 __call__ issues 2014-10-28 13:36:13 +01:00
Dave Halter
1cc1d4480b Fix issues with copying children in combination with InstanceElement. 2014-10-28 13:33:40 +01:00
Dave Halter
b550f67bce Small instance refactoring, now adding is_generated as a param. 2014-10-28 11:33:28 +01:00
Dave Halter
19e083cbfb Make Leaf public 2014-10-28 11:13:33 +01:00
Dave Halter
d667f19c57 Remove the old is_generated. 2014-10-28 02:11:50 +01:00
Dave Halter
b3d87302f9 Small changes to adapt to the new param structure. 2014-10-28 02:11:13 +01:00
Dave Halter
fe7c750c2c Param is now the parent of its names and not just a helper class. 2014-10-28 02:05:44 +01:00
Dave Halter
68378a1468 Merge pull request #502 from abingham/patch-1
Added link to emacs-ycmd.
2014-10-27 16:17:43 +01:00
Dave Halter
1a6ff3e8e6 Small parser fix. 2014-10-27 16:07:24 +01:00
Dave Halter
8911ecb6a5 A last change for defaults. Params are pretty usable and work smoothly now. 2014-10-27 15:36:29 +01:00
abingham
df4845790e Added link to emacs-ycmd.
This is another option for using jedi in emacs via ycmd.
2014-10-27 16:12:10 +02:00
Dave Halter
14ec210891 param default values. 2014-10-27 12:03:09 +01:00
Dave Halter
db2d380441 Issues with errors in *args resolution. 2014-10-27 01:29:49 +01:00
Dave Halter
c0768924f6 Managed to get dict inputs working into kwargs. This was wrong in the old version of the parser. 2014-10-27 01:07:15 +01:00
Dave Halter
8df8749f22 Dict keys that are not in the dict should return all value types. 2014-10-27 00:19:31 +01:00
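A sketch of the inference rule ('unknown_key' is a hypothetical placeholder for any key Jedi cannot resolve statically):

    d = {'a': 1, 'b': 2.0}
    d[unknown_key]  # unknown_key is hypothetical; the lookup is inferred
                    # as the union of all value types: int and float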
Dave Halter
e4124fcf9a More dynamic *args 2014-10-25 15:58:09 +02:00
Dave Halter
2315d51e68 direct param evaluation 2014-10-25 14:37:01 +02:00
Dave Halter
afbdf1a7ea Fix for default arguments in combination with named arguments. 2014-10-25 13:14:01 +02:00
Dave Halter
7532f52cdd Understanding implicit tuple returns (testlist) 2014-10-25 12:50:51 +02:00
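An implicit tuple return, i.e. a bare 'testlist' in the grammar (illustration only):

    def pair():
        return 1, 'one'  # no parentheses, but still a tuple

    a, b = pair()        # a -> int, b -> str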
Dave Halter
97a102bd24 Get param parsing right. 2014-10-25 12:02:54 +02:00
Dave Halter
22cb3ca5f0 subscriptable errors. 2014-10-25 11:34:16 +02:00
Dave Halter
995f0700c9 Fix params, so that quite a few functions can now pass. 2014-10-25 02:35:04 +02:00
Dave Halter
4384e938e9 Get a few more things right with params. 2014-10-25 02:25:09 +02:00
Dave Halter
9f1336095b unpacking arguments. 2014-10-24 21:46:48 +02:00
Dave Halter
c58975807c Small function/param corrections. 2014-10-24 01:58:56 +02:00
Dave Halter
93c97a78a3 Fix an issue with classes and decorators combined. 2014-10-24 00:41:26 +02:00
Dave Halter
3bdd32ad87 Always create a module. 2014-10-23 16:27:16 +02:00
Dave Halter
de4db11d25 Reading dicts works now. 2014-10-23 16:21:23 +02:00
Dave Halter
51ffc54471 Temporary solution for class params. 2014-10-23 14:41:01 +02:00
Dave Halter
387fc3b038 Adding prev_sibling, getting self attributes. 2014-10-23 14:03:52 +02:00
Dave Halter
88dcbe1f48 Name.is_definition implementation. 2014-10-23 13:37:35 +02:00
Dave Halter
971f1db823 Create a next_sibling method on _Leaf, which is then used to check for self attributes. 2014-10-23 01:36:24 +02:00
Dave Halter
abb8d0e26c get_names_dict removed and use instead the names_dict attribute. 2014-10-23 01:06:50 +02:00
Dave Halter
3bbce49fd3 Scope.names_dict implementation. 2014-10-23 00:51:02 +02:00
Dave Halter
4f4aef7ac8 Param helper class in the tree. 2014-10-22 20:07:42 +02:00
Dave Halter
e9f4c60e49 Use used_names not in pgen2, but only in our parser. 2014-10-22 15:50:02 +02:00
Dave Halter
e2a07752fd '.NAME' lookups. 2014-10-22 02:33:35 +02:00
Dave Halter
34f3ea6973 More and probably the last tuple assignment stuff. 2014-10-22 02:29:47 +02:00
Dave Halter
297bcf6e19 Parentheses without commas are not tuples. 2014-10-22 02:10:48 +02:00
Dave Halter
6a8b840b29 Be able to differentiate tuple/list/dict. 2014-10-22 01:42:21 +02:00
Dave Halter
14113a1bff Fix a lot more of the tuple assignments. 2014-10-22 01:27:12 +02:00
Dave Halter
5b29e2c54d Add a method 'Name.assignment_indexes', to process tuple assignments. 2014-10-21 15:45:29 +02:00
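The tuple-assignment problem this addresses, sketched in plain source (illustration only):

    a, (b, c) = 1, (2, 3)
    # To resolve 'c', its position in the target (index 1, then index 1
    # again) is matched against the right-hand side, yielding 3.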
Dave Halter
c1807e5f33 Reworked ExprStmt.get_definition 2014-10-21 14:21:59 +02:00
Dave Halter
1c27759c4f Few fixes. 2014-10-21 13:54:03 +02:00
Dave Halter
d119902496 Slices 2014-10-21 13:36:56 +02:00
Dave Halter
ab53942e55 Start working with arithmetics. 2014-10-21 12:18:03 +02:00
Dave Halter
2eed6b7b5f Inaccessible array indexes should still produce results. 2014-10-21 12:03:01 +02:00
Dave Halter
8f3b7f9d44 A first array test passing. 2014-10-21 11:58:53 +02:00
Dave Halter
fb2ef5a7a0 Start using arrays. 2014-10-21 11:05:12 +02:00
Dave Halter
718f43431c Introduce error recovery for the parser: At the moment just recover from broken statements. 2014-10-21 09:57:22 +02:00
Dave Halter
c821b30017 Fix a first test: complex.py 2014-10-20 17:06:18 +02:00
Dave Halter
43e3452474 Fix more argument related stuff. 2014-10-20 16:34:17 +02:00
Dave Halter
1a639bd118 Arguments move to params. 2014-10-20 15:43:56 +02:00
Dave Halter
b2c95cb02f Generating return statements. 2014-10-18 12:40:36 +02:00
Dave Halter
74d4fcf4e7 globals are more or less ready. 2014-10-17 18:48:07 +02:00
Dave Halter
f08811fba7 Start implementing GlobalStmt. 2014-10-17 15:13:45 +02:00
Dave Halter
6abafc40fa ExprStmt now doesn't contain imports anymore. 2014-10-17 14:56:12 +02:00
Dave Halter
d7face17f6 Using expr_stmt now instead of simple_stmt as ExprStmt, because that resembles the official grammar better. 2014-10-17 12:03:41 +02:00
Dave Halter
19acdd32b7 Fixed issues with the Python3.4 grammar file.
The order of symbols matters. 'file_input' needs to be the first symbol.
2014-10-17 01:34:47 +02:00
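For context, the start rule from CPython's Grammar file; pgen treats the first rule as the grammar's start symbol, hence the ordering requirement:

    file_input: (NEWLINE | stmt)* ENDMARKER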
Dave Halter
ae8969a0d1 New kind of KeywordStatement 2014-10-17 01:26:34 +02:00
Dave Halter
aefc5ec15f add the 3.4 grammar. 2014-10-16 14:47:14 +02:00
Dave Halter
01ce93cb5c Preparation for a parser refactoring. 2014-10-16 11:45:25 +02:00
Dave Halter
887949e23f Start making executions work. 2014-10-16 10:58:27 +02:00
Dave Halter
7b91050c85 introduce something that resembles argument clinic in stdlib. 2014-10-16 10:58:11 +02:00
Dave Halter
631aa0ea61 Processing atom and power nodes. 2014-10-15 13:40:56 +02:00
Dave Halter
485b8ae3da Move Node away from pytree into the parser representation. 2014-10-13 13:47:32 +02:00
Dave Halter
6458047bac Name in statement definition implementation. 2014-10-13 13:33:49 +02:00
Dave Halter
0def3afaaa A move function for Nodes. 2014-10-12 23:37:46 +02:00
Dave Halter
660124aca1 Get an example running: 'import json; json.dump'. 2014-10-12 22:37:23 +02:00
Dave Halter
e2b7e74aef Import improvements. 2014-10-10 17:52:36 +02:00
Dave Halter
a192f4f347 Add keywords, improve import support. 2014-10-10 17:48:40 +02:00
Dave Halter
54c91b1509 Get a first local test passing. 2014-10-10 12:07:08 +02:00
Dave Halter
3bf1fec568 Start implementing an algorithm for actually evaluating the parser tree. 2014-10-10 11:29:22 +02:00
Dave Halter
66840a742c Implement the new parser in jedi.parser.Parser 2014-10-10 00:06:28 +02:00
Dave Halter
05fd7f992e Don't use the fast parser at the moment. It's more important for now to even get a parser working. 2014-10-09 14:36:52 +02:00
Dave Halter
aa75140f96 Remove old base class for Nodes. 2014-10-09 13:33:42 +02:00
Dave Halter
92ee2a912e Actually get the first few written tests passing. 2014-10-09 13:21:30 +02:00
Dave Halter
68d23840bb Start actual testing of the new parser. 2014-10-09 13:16:28 +02:00
Dave Halter
daee273a08 Remove all the old get_code references. 2014-10-09 11:38:57 +02:00
Dave Halter
843efb43e9 statement refactoring. 2014-10-09 11:36:27 +02:00
Dave Halter
a0092c2653 remove old statement parser code. 2014-10-09 11:20:49 +02:00
Dave Halter
140320143a Remove the old Leave class. 2014-10-09 11:02:03 +02:00
Dave Halter
c7c222daab Implement WhiteSpace as well and merge with pytree. 2014-10-09 10:55:03 +02:00
Dave Halter
8236ce18a2 Refactor Literal. 2014-10-09 03:23:52 +02:00
Dave Halter
eb384fc2d1 Name refactoring. 2014-10-09 03:21:18 +02:00
Dave Halter
432ec8f186 Operator refactoring. 2014-10-09 03:15:50 +02:00
Dave Halter
6bb88ddd85 Function is starting to work. 2014-10-08 17:58:02 +02:00
Dave Halter
585e92ac9f Temporarily disable evaluate/compiled stuff, because it interferes with the current changes of the parser. This will be undone. 2014-10-08 17:45:49 +02:00
Dave Halter
308c971ad7 Change pr.Simple. 2014-10-08 17:02:37 +02:00
Dave Halter
d9aa84f971 simplify the use of 'context' 2014-10-08 16:33:35 +02:00
Dave Halter
e54dac3777 Start setting up the parser.representation merge 2014-10-08 16:15:27 +02:00
Dave Halter
220ddc8a74 Probably the last big removals, there's not a lot left. 2014-10-08 15:35:45 +02:00
Dave Halter
a23ad3df10 Remove more and more from pytree. 2014-10-08 15:30:15 +02:00
Dave Halter
2781a4ac98 Remove comparison methods. 2014-10-08 15:22:44 +02:00
Dave Halter
1fb6e15750 Fix prefixes of Leafs. 2014-10-08 15:19:36 +02:00
Dave Halter
36368cd606 More removals and cleanups. 2014-10-08 15:13:43 +02:00
Dave Halter
07d9111c77 Remove some more pytree methods. 2014-10-08 15:05:29 +02:00
Dave Halter
a7fff54d7b Remove some fixer related stuff (lib2to3 stuff). 2014-10-08 15:04:01 +02:00
Dave Halter
834172a3e9 Add a new parser; check if pgen2 would work (already modified outside this repository). 2014-10-08 14:26:52 +02:00
Dave Halter
09a7317bc9 Merge branch 'dev' of github.com:davidhalter/jedi into dev 2014-10-06 17:55:46 +02:00
Dave Halter
e9a3a44780 Remove some other _star_import_cache stuff, and with this, #489 should be fixed. 2014-10-06 17:55:28 +02:00
Dave Halter
3638d5149d Change time_cache, to also host the star_import_cache. 2014-10-06 17:37:34 +02:00
Dave Halter
bbdb4703ec change cache_call_signatures, so that it has a well defined input. 2014-10-06 16:07:33 +02:00
Dave Halter
87574e9d2e star_import_cache refactorings: Make it more readable. 2014-10-04 12:43:08 +02:00
Dave Halter
a1b55a9df7 clear_caches -> clear_time_caches 2014-10-03 14:23:46 +02:00
Dave Halter
116e9e72fc is_definition/Import issue 2014-10-02 11:27:01 +02:00
Dave Halter
8ca48f03db Tests for imports and is_definition. 2014-10-02 11:14:03 +02:00
Dave Halter
90d159eadd Merge pull request #483 from ColinDuquesnoy/fix_runtime_error
Fix RuntimeError: the PyQt5.QtCore and PyQt4.QtCore modules both wrap the QObject class
2014-09-29 16:07:01 +04:30
ColinDuquesnoy
d7836c1034 Add a comment and link to issue #483 2014-09-29 11:57:38 +02:00
ColinDuquesnoy
42596dba15 Merge remote-tracking branch 'upstream/dev' into fix_runtime_error
Conflicts:
	jedi/evaluate/imports.py
2014-09-29 11:53:35 +02:00
ColinDuquesnoy
d1ae447362 Simplify code 2014-09-29 09:50:49 +02:00
Dave Halter
27444ed64d Remove Import.alias_name_part, it was simply an alias for another lookup. 2014-09-26 16:32:36 +02:00
Dave Halter
03e01631cc Remove NamePart from existance and rename it to Name. 2014-09-26 16:29:53 +02:00
Dave Halter
522c9eda90 Remove pr.Name completely. 2014-09-26 16:18:10 +02:00
Dave Halter
4d7db35340 Fix a few last tests, now Jedi's working again, tests are passing. 2014-09-26 16:02:03 +02:00
Dave Halter
6f29e802c2 Fix an issue with as_names. 2014-09-26 15:48:49 +02:00
Dave Halter
7fea6437d9 Fix issues with Definition.full_name 2014-09-26 13:07:21 +02:00
Dave Halter
4f4ac505a3 Fix issues with interpreter completions. 2014-09-26 13:07:08 +02:00
Dave Halter
3add6e4289 Fix various bugs. 2014-09-26 12:22:56 +02:00
Dave Halter
ce3ec6b534 Finally remove ArrayMethod and use an InstanceElement instead (which it basically is). 2014-09-26 12:08:04 +02:00
Dave Halter
90842ce62d Fixed global variables. 2014-09-26 11:58:11 +02:00
Dave Halter
4eaee09d6e Fix named param issues. 2014-09-26 11:52:26 +02:00
Dave Halter
47c4369d28 Fix remaining issues with sys path checks. 2014-09-25 18:28:03 +02:00
Dave Halter
f4c99259b5 Fix an issue with sys.path. Also moved the names closure for isinstance checks away (used for sys.path stuff) and use a get_code check instead, which is more flexible. 2014-09-25 12:35:53 +02:00
Dave Halter
c2d645b7c1 Fix one of the really hard issues: deep_ast_copy didn't copy the newly created _names_dict. 2014-09-25 12:15:15 +02:00
Dave Halter
16f244a1b2 Fix isinstance issues. 2014-09-25 00:36:53 +02:00
Dave Halter
59225ceaa3 usages issues. 2014-09-25 00:14:43 +02:00
Dave Halter
9ecf3774a0 Import issues again. 2014-09-24 21:59:08 +02:00
Dave Halter
c43afae24a Issues with imports. 2014-09-24 21:12:38 +02:00
Dave Halter
ff61c1d81c Fix an extremely annoying bug that made pickling impossible. 2014-09-24 20:58:29 +02:00
Dave Halter
56243e10c6 The get_code generation of imports was buggy. 2014-09-24 17:48:11 +02:00
Dave Halter
db31536d78 Fix issue with descriptors. 2014-09-24 16:52:44 +02:00
Dave Halter
19b32a3657 And by changing small things about NamePart/InstanceElement usage, we're finally able to pass the class tests again. 2014-09-24 15:42:44 +02:00
Dave Halter
f300e63dae A couple of changes:
- parser.representation now uses ArrayStmt and ExprStatement to be able to differentiate easily with isinstance.
- NameParts are temporarily allowed again as InstanceElements.
2014-09-24 15:21:56 +02:00
Dave Halter
0a65eea2cf Start to change the logic for self.foo variables. 2014-09-24 14:33:15 +02:00
Dave Halter
12e391c97a Add a LazyDict to be able to use it within FunctionExecution for get_names_dict 2014-09-24 12:57:26 +02:00
Dave Halter
d5fbc006e2 Add a names_dict to scopes. This is good for the future parser and now useful to process self.foo and other stuff. 2014-09-24 12:44:24 +02:00
Dave Halter
c61f79314b Introduce get_names_dict to statements to actually fetch the calls out of a statement.
This is going to be the new default method to do dynamic stuff as well as self.foo resolutions for instances.
2014-09-24 12:06:41 +02:00
Dave Halter
5efa467449 Few import issues. 2014-09-22 23:45:48 +02:00
Dave Halter
1d71b25109 Previously forgot to add the NameParts to used_names. (which had worked before that) 2014-09-22 23:24:29 +02:00
Dave Halter
6819deb404 Resolve some **kwargs issues. 2014-09-22 23:06:22 +02:00
Dave Halter
dae1a48d70 Remove a lot of the old Name.names usages in favor of a direct NamePart usage. 2014-09-22 22:34:33 +02:00
Dave Halter
04cf742973 Temporary parser implementation. Now we're pretty much done with pr.Name. 2014-09-22 17:05:23 +02:00
Dave Halter
6bd7ef56f1 Now most tests pass and we're able to continue getting rid of parsing.representation.Name. 2014-09-22 15:41:27 +02:00
Dave Halter
8f3301f281 Passing Function tests now. 2014-09-22 14:06:38 +02:00
Dave Halter
c4e45916c6 Modules also use a NamePart as a name, now. 2014-09-22 12:52:48 +02:00
Dave Halter
779618c08b First changes to eventually replace Name by NamePart. 2014-09-22 11:57:19 +02:00
Dave Halter
b26f51ded2 Fix obvious UnboundLocalError. 2014-09-19 18:08:30 +02:00
Dave Halter
78bd775889 Only real modules should be added in get_modules_containing_name. 2014-09-19 18:05:57 +02:00
Dave Halter
e0f84ccb86 Tests for issues with default args in dynamic param contexts. 2014-09-19 16:56:26 +02:00
Dave Halter
d4503c77a5 get_parent_until should always have the same signature.
Fix it for ArrayMethod.get_parent_until.
2014-09-19 16:17:05 +02:00
Dave Halter
99d35e57b6 Fix alias usages in goto_assignments. 2014-09-19 13:42:47 +02:00
Dave Halter
ed56f73836 Care for nested imports in goto_assignments. 2014-09-19 12:14:29 +02:00
Dave Halter
fc5f73861c Fix issues with the os module.
Using a try/finally assures that the recursion checkers work the right way.
2014-09-19 10:59:24 +02:00
Dave Halter
b2342c76be Refactoring: Make Import.get_all_import_names return NameParts. 2014-09-19 01:40:09 +02:00
Dave Halter
83d2af5138 First imports are working with goto. 2014-09-19 01:21:17 +02:00
Dave Halter
610b2fc832 tests for goto on imports. 2014-09-19 00:49:22 +02:00
Dave Halter
7b0bb83d16 Change the behavior of eval_statement_element and follow_call_path. Arrays should only be looked at in the latter. 2014-09-18 23:44:11 +02:00
Dave Halter
69e6139527 Goto on named params in class calls is now working. 2014-09-18 20:11:58 +02:00
Dave Halter
ba80e35204 Test for an issue with named params in class calls (instead of functions). 2014-09-18 13:30:52 +02:00
Dave Halter
9fa6a86a19 Tests for Definition.is_definition(). 2014-09-17 18:17:22 +02:00
ColinDuquesnoy
fb86388890 Fix RuntimeError: the PyQt5.QtCore and PyQt4.QtCore modules both wrap the QObject class 2014-09-13 12:18:34 +02:00
Dave Halter
9983898162 Temporarily disable a test for goto on nested imports. The positions are currently wrong. But this is a known issue. 2014-09-11 02:27:53 +02:00
Dave Halter
085c8034b3 Apply evaluate.representation wrappers already before they go out into the goto world. 2014-09-11 02:20:54 +02:00
Dave Halter
1624fa0872 Replace BaseDefinition._name.get_definition() calls with BaseDefinition._definition. 2014-09-11 01:36:21 +02:00
Dave Halter
71efb51f2a Remove BaseDefinition._start_pos. 2014-09-11 01:21:08 +02:00
Dave Halter
283afa78f1 Remove code that is not needed anymore, because the Definition/Completion import is now standardized (to NamePart). 2014-09-11 01:15:00 +02:00
Dave Halter
9f16555f47 Big refactoring: BaseDefinition._definition changes to BaseDefinition._name, because it's a NamePart now.
This also includes changes to tests and some simplifications like deleting the old name logic of Definition.
2014-09-11 01:03:30 +02:00
Dave Halter
58526e2302 Completion now also uses only NameParts as its _definition attribute. 2014-09-10 20:12:19 +02:00
Dave Halter
8f892e3922 Use FakeName instead of a custom KeywordName. 2014-09-10 20:07:13 +02:00
Dave Halter
1fb9b4bc6b Completion now always takes a NamePart as input. 2014-09-10 18:59:08 +02:00
Dave Halter
0eea30f227 NamePart migration of Definition is complete. Now Completion. 2014-09-10 18:29:10 +02:00
Dave Halter
2aa538999e Removed an old test from the days where it was allowed to add Keywords to Definitions. 2014-09-10 18:05:04 +02:00
Dave Halter
46b49af5d9 Even params should be NameParts as a Definition input. 2014-09-10 17:41:06 +02:00
Dave Halter
5e28d69437 Fix remaining usage issues. 2014-09-10 17:15:58 +02:00
Dave Halter
4060c4dc55 Fix some goto issues. 2014-09-10 16:39:09 +02:00
Dave Halter
a93a389d5c Fix all the normal issues with the NameFinder change. Now goto... 2014-09-10 16:30:22 +02:00
Dave Halter
657a2c7d4f Trying to get the NameFinder to use only NameParts. 2014-09-10 16:20:30 +02:00
Dave Halter
43cf1d451f Python 2/3 compatibility issues that were not resolved in the latest commit. 2014-09-09 17:51:39 +02:00
Dave Halter
fdc637c5c4 Add a forgotten test module, test_sys_path.py and fix Python2/3 compatibility issues. 2014-09-09 17:08:22 +02:00
Dave Halter
38f7296f39 Typo. 2014-09-09 16:51:46 +02:00
Dave Halter
87aa76678a Goto should work on named params, too. 2014-09-09 16:48:53 +02:00
Dave Halter
110f130741 Make it possible to get previous statements of Calls. 2014-09-09 16:44:56 +02:00
Dave Halter
b68a59daef Fix the last remaining issues of the first part of the NamePart switch. 2014-09-09 15:58:20 +02:00
Dave Halter
45e033c50e Quite a few fixes to be eventually able to use NameParts as Definition inputs. 2014-09-09 15:21:27 +02:00
Dave Halter
1199defabb Start to use NameParts only in Definition contexts. 2014-09-09 14:13:10 +02:00
Dave Halter
ff7680c15f Generate the expression_list of a statement in any case.
This is a consequence of being able to have Calls as parents of Names. Otherwise we would have changing Name.parent values.
2014-09-09 12:54:46 +02:00
Dave Halter
740fd0657f Add a goto_assignments test for named params 2014-09-09 00:06:24 +02:00
Dave Halter
0dcb91d236 Add a Definition.is_definition function to be able to check if a name is a definition or not. 2014-09-08 23:44:35 +02:00
Dave Halter
851717a968 Publicize jedi.names and add a first test. 2014-09-08 22:39:47 +02:00
Dave Halter
be85391321 Create a 'jedi.names' function with the proper docstring.
Modelled according to the discussion in #477.
2014-09-08 21:43:16 +02:00
Dave Halter
ca536baf9b Last fixes, because of the Name.get_definition change. The recent parser.representation changes are now fully working and we're ready to improve Evaluator.goto again. 2014-09-06 13:23:00 +02:00
Dave Halter
ece9fdf4ae Fixing most of the issues that existed, because of the recent Name.get_definition/Call.name.parent change. 2014-09-06 13:02:52 +02:00
Dave Halter
2e7e2f0a29 Name parents are now Calls (once their statements have generated the Calls).
This makes the goto function more powerful. Also fixes an issue with the deep_ast_copy, that I tried to fix previously, but failed, because I hadn't tested it.
2014-09-06 12:19:07 +02:00
Dave Halter
5a3ee02399 Use ExprStmt pretty much everywhere where it should be used.
ExprStmt is now really a normal statement. All the other statements are from now on considered legacy code. As a side effect this increases the parser pickling version.
2014-09-06 11:13:58 +02:00
Dave Halter
cb84bc0829 Start using ExprStmt. 2014-09-06 10:46:59 +02:00
Dave Halter
f57d9ef675 Rename Name.get_parent_stmt to Name.get_definition, because it's not always a statement. Also start using it in the NameFinder. 2014-09-06 10:43:26 +02:00
Dave Halter
99116cdcb7 Add a Name.get_parent_stmt() function. 2014-09-05 22:26:55 +02:00
Dave Halter
6c07c7acfe Create an ExprStatement class to replace the Statement class in the future and separate array parts of actual statements 2014-09-05 22:21:26 +02:00
Dave Halter
12154fdecf Refactor a few name finder things. 2014-09-05 12:12:29 +02:00
Dave Halter
ba805879b4 Updated helpers.deep_ast_copy. Now the function copies statements in a better way.
Previously, statement attributes were scanned like every other object. Now two attributes get priority: _assignment_details and _expression_list. This is necessary if we want to switch towards proper name parents (a Call would be the parent of a Name, not a Statement).
2014-09-05 11:49:45 +02:00
Dave Halter
42d6b57599 precedence._is_string -> precedence.is_string 2014-09-04 14:13:26 +02:00
Dave Halter
7b2e11d71b Rewrote sys_path._paths_from_assignment. 2014-09-04 14:12:10 +02:00
Dave Halter
4180005893 Forgot to add the params in the case of a class in the previous commit. 2014-09-04 12:53:42 +02:00
Dave Halter
06699993f1 Class inheritance definitions shouldn't be params. They're just statements. 2014-09-04 12:28:50 +02:00
Dave Halter
1df025c39d Definitions should not be followed in Evaluator.goto. 2014-09-04 11:55:42 +02:00
Dave Halter
e872d9e073 Script.goto_assignments now always needs a call_path. Otherwise it raises a NotFoundError.
This change makes Jedi's behavior more consistent.
2014-09-04 00:56:58 +02:00
Dave Halter
fb10199f37 Remove search_name and search_name_part from goto returns.
The search_name can be retrieved by checking definitions for it. Definitions should always be names or even better name_parts in case of goto. Therefore we can just get it there.
2014-09-03 23:28:19 +02:00
Dave Halter
f7a1c110ba Fix remaining issues with CompiledName.name change. 2014-09-03 19:47:37 +02:00
Dave Halter
bb5ffe9343 CompiledObject.name returns a Name now, not a string. This is more consistent with the Jedi design and doesn't lead to bugs while ducktyping. 2014-09-03 19:43:21 +02:00
Dave Halter
18204c4c19 By trying to get rid of search_name in usages, we had to fix an issue with imports:
If used like 'follow(is_goto)', it could return a ModuleWrapper instead of a Name, which is what we actually want.
2014-09-03 19:30:00 +02:00
Dave Halter
59578966cf Remove code that is not used anymore from Script._goto 2014-09-03 18:02:20 +02:00
Dave Halter
e2ca11435c Script._goto improvements and documentation. 2014-09-03 17:27:26 +02:00
Dave Halter
95852f5e7f Clarify inner workings of Evaluator.goto 2014-09-03 14:58:24 +02:00
Dave Halter
bcc84820fe Fix issues with unreachable flows.
This benefits static analysis as well as autocompletion: Unreachable code (things like code within 'if 0:') should still be resolveable.
2014-09-03 00:05:37 +02:00
Dave Halter
ea5b98905e Make statement_elements_in_statement work with ListComprehensions, Lambdas and 'except foo as' expressions 2014-09-02 14:52:04 +02:00
Dave Halter
38c71fce3f Added tests for statement_elements_in_statement 2014-09-02 12:10:16 +02:00
Dave Halter
f785aa26dd Additional helper methods, to find all the statement elements that are needed. 2014-09-02 03:26:17 +02:00
Dave Halter
be9e77d7d3 Add a temporary api._names, to make it possible to annotate a full script with types. 2014-09-01 18:10:40 +02:00
Dave Halter
bbf1070ad9 Add a helper function to list all name parts of a given module. 2014-09-01 13:20:01 +02:00
Dave Halter
6b88da4d2d Merge pull request #467 from phillipberndt/master
Alter jedi.Interpreter() namespace argument default from [] to [{}]
2014-08-23 16:05:12 +04:30
Dave Halter
76d91ba72a Rename fast_parent_copy to deep_ast_copy. 2014-08-22 00:59:46 +02:00
Dave Halter
ed3b507ab7 cleanup 2014-08-22 00:47:08 +02:00
Dave Halter
6ba0b7b81e Less than/greater than operators evaluate to a boolean now. This is not 100% correct, because it doesn't evaluate __gt__, etc. But that's OK for now. 2014-08-22 00:26:55 +02:00
Dave Halter
039a5ecaf9 Fix issues caused by KeywordStatement, which needs to be copied as well. 2014-08-21 16:51:00 +02:00
Dave Halter
0ef030848d Refactor fast_parent_copy; use new_elements_default to hand in a dictionary that contains all the generated duplicates. 2014-08-21 13:17:33 +02:00
Dave Halter
3cf8bfa8e1 Fix a few tests by either fixing the test cases or adding py__bool__ functions to objects that should have such a method. 2014-08-20 17:28:54 +02:00
Dave Halter
f911050300 Rewrote the isinstance implementation, so that it works properly with tuples as well as normal classes. 2014-08-20 16:58:19 +02:00
Dave Halter
2a964d4e48 Also implement the or operator. 2014-08-20 16:28:04 +02:00
Dave Halter
148d17b3be Implementation of the and operation. 2014-08-20 16:21:33 +02:00
Dave Halter
d6dd7cd55e Move process_precedence_element from the Evaluator to the precedence module. 2014-08-20 15:59:37 +02:00
Dave Halter
9abc8a19e7 By adding a py__class__ method to CompiledObject and Class, Jedi is able to understand isinstance checks now.
This also includes a CheckAttribute class in evaluate.compiled, because it's way easier to generalize the AttributeErrors there.
2014-08-20 14:46:18 +02:00
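
A rough sketch of the duck-typed idea, with simplified stand-in classes rather than Jedi's actual internals::

    class Class(object):
        def __init__(self, name, bases=()):
            self.name = name
            self.bases = bases

        def py__class__(self):
            # The class of a class is `type` in Python's object model.
            return TYPE_CLASS

    class Instance(object):
        def __init__(self, cls):
            self.cls = cls

        def py__class__(self):
            return self.cls

    TYPE_CLASS = Class('type')

    def infer_isinstance(obj, cls):
        # Walk the (simplified, single-inheritance) base chain of the
        # object's class to answer isinstance() statically.
        current = obj.py__class__()
        while current is not None:
            if current is cls:
                return True
            current = current.bases[0] if current.bases else None
        return False
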
Dave Halter
0e66aef511 Use IterableWrapper in the iterable module to be able to add methods like is_class quickly. 2014-08-20 14:01:41 +02:00
Dave Halter
442a1a1d08 wrap some more values with er.wrap 2014-08-20 13:52:49 +02:00
Dave Halter
c9542cbc04 Start implementing an is_class function that will determine if an object is a class or not in the future. 2014-08-20 11:43:25 +02:00
Dave Halter
7f874620db Start documenting all the py__foo__ methods 2014-08-20 11:31:23 +02:00
Dave Halter
2e949b43bb Ignore FunctionExecutions in old style isinstance checks for now, because it collides with new style isinstance checks. 2014-08-20 11:31:11 +02:00
Dave Halter
09ca47fa93 Introduce a dedicated isinstance function implementation. 2014-08-19 23:57:59 +02:00
Phillip Berndt
3189ba7662 Add type check to jedi.Interpreter() namespace argument and remove default value 2014-08-19 09:41:17 +02:00
Dave Halter
49163e135c flow_analysis test for isinstance as well as and/or operations. 2014-08-19 01:03:14 +02:00
Dave Halter
77673ba986 Add an optional param 'parent' to parser.representation.Simple, which simplifies some calls to that superclass. 2014-08-19 00:30:17 +02:00
Dave Halter
8bde89cc58 Fix the remaining issues with the StatementElement.next refactoring. 2014-08-19 00:12:14 +02:00
Dave Halter
8006d6f190 Change implementation of StatementElement.
Instead of having both next and execution as attributes, we now only have next, because it's an execution if there's an array.
2014-08-18 22:25:55 +02:00
Dave Halter
7619bf27d1 Simplify goto_definition in case it is done on a function. 2014-08-18 15:00:14 +02:00
Dave Halter
00d15da143 refactor search_call_signatures. Now we don't need to set Call.next.parent in a strange way anymore and the whole thing seems to be more logical. 2014-08-18 14:51:38 +02:00
Dave Halter
542648f5a0 first step in refactoring call_signature_array_for_pos, use original_call as a param. 2014-08-18 13:39:01 +02:00
Dave Halter
9f38f10366 Fix tests. Operators should not be equal to other operators with a different position or even parent. 2014-08-18 13:13:07 +02:00
Dave Halter
1d812c2414 Use the "wrong" parents again for next/execution in StatementElement. This is important for call_signature lookups. We might still be able to change this at some point. 2014-08-18 11:22:38 +02:00
Dave Halter
fd90dfc4f5 Use a LazyName for module attributes, they should only be generated if needed. 2014-08-15 15:20:40 +02:00
Dave Halter
868dab4f51 small debug change 2014-08-15 02:26:13 +02:00
Dave Halter
89ab0ba137 Fix fast_parent_copy. The caching is now more solid than before (and doesn't produce weird side effects). This also solves an issue with Lambdas. However, by fixing all of this we have broken some other things. 2014-08-15 01:55:43 +02:00
Dave Halter
1965469050 fast_parent_copy should also change the parent of NameParts. 2014-08-14 23:48:27 +02:00
Dave Halter
1f9e7ddff8 Remove code in the parser that didn't make sense. 2014-08-14 13:24:26 +02:00
Dave Halter
425290aa8f Fix an issue with partial keyword inputs. 2014-08-14 12:25:00 +02:00
Dave Halter
1540ac89f8 Implement the descriptor protocol properly for instances. 2014-08-14 12:15:48 +02:00
Dave Halter
f743619fb8 Tests for conditions in descriptors. 2014-08-13 14:49:42 +02:00
Dave Halter
ec7b3bf433 refactor py_base to py__bases__, because that's the general naming schema 2014-08-13 14:34:37 +02:00
Dave Halter
cd433adf84 Speedup object lookup even further in classes. 2014-08-13 14:17:57 +02:00
Dave Halter
9702c4cdc6 Restructure the way we get self arguments (probably reduces executions of object). 2014-08-13 14:07:09 +02:00
Dave Halter
cf32e15f65 Remove the old 'is not' logic that detected non-instances and used them for branch prediction. This is not necessary anymore, since we now support that in a more general way (flow_analysis). 2014-08-12 18:14:03 +02:00
Dave Halter
eeac77d360 Also support the not operator. 2014-08-12 18:09:59 +02:00
Dave Halter
8ed89e8245 Implement the !=, ==, is and is not operators to work in if statements (they also work in non-if contexts and return a bool value); includes tests. 2014-08-12 17:59:19 +02:00
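
A sketch of the underlying idea: comparisons fold to concrete booleans when both sides are known constants, which lets flow analysis prune branches (illustrative only, not the actual implementation)::

    import operator

    COMPARISON_OPERATORS = {
        '==': operator.eq,
        '!=': operator.ne,
        'is': lambda a, b: a is b,
        'is not': lambda a, b: a is not b,
    }

    def evaluate_comparison(op, left, right):
        # With two known constants this yields a real bool, which lets an
        # `if` branch be marked reachable or unreachable.
        return COMPARISON_OPERATORS[op](left, right)

    assert evaluate_comparison('!=', 1, 2) is True
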
Dave Halter
6f018e4884 introduce maybe_docstr in parse_statement, which should have been introduced way earlier. 2014-08-12 17:13:14 +02:00
Dave Halter
fb1dba269a re-enable the interpretation of the None keyword 2014-08-12 14:38:56 +02:00
Dave Halter
469988be9c actually add tests for the flow analysis of variables. 2014-08-12 09:57:00 +02:00
Dave Halter
968bc45314 even tests should not suffer too much from side effects. 2014-08-12 01:46:07 +02:00
Dave Halter
6b7ce590fa Simplify get_parent_scope 2014-08-12 01:37:58 +02:00
Dave Halter
33e5a3280a Remove IsScope in favor of an is_scope function.
This function was partially implemented anyway. Now we've also added a function called 'get_parent_scope', to make it easy to get the scope of a Call, Statement, whatever.
2014-08-12 01:19:19 +02:00
Dave Halter
1865284fa9 fix the interpreter (previously broken by flow analysis) 2014-08-12 00:19:20 +02:00
Dave Halter
242072976a use py__mro__ in a classes scope_names_generator 2014-08-11 23:53:45 +02:00
Dave Halter
526af7ccbe settings should not be affected by exceptions. 2014-08-11 17:27:40 +02:00
Dave Halter
f1711f8f9c possible direction of branch checks for name resolution. 2014-08-10 13:17:37 +02:00
Dave Halter
483f5c14ee Listeners should be removed even in exception cases. Do a 'finally' cleanup. 2014-08-07 16:27:57 +02:00
Dave Halter
01bdd1e4fa Test fixes; for loops need to be handled as well in flow_analysis. 2014-08-07 15:51:41 +02:00
Dave Halter
0ae9e520c1 flow analysis working for elif statements (even in combination with else) 2014-08-07 15:18:30 +02:00
Dave Halter
743d064e6d exception while using else as a scope 2014-08-07 12:10:31 +02:00
Dave Halter
ee65764c3a more complicated logic working with else 2014-08-07 12:02:08 +02:00
Dave Halter
d94a70b524 fix a logic issue in the flow_analysis.Status.__and__ 2014-08-07 03:02:40 +02:00
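
The tri-state idea behind Status looks roughly like this (a sketch reconstructed from the description, not a verbatim copy of flow_analysis.py)::

    class Status(object):
        def __init__(self, value, name):
            self._value = value  # True, False or None (unknown)
            self._name = name

        def __and__(self, other):
            # If either side is unsure, the combination stays unsure;
            # otherwise both sides must be reachable.
            if UNSURE in (self, other):
                return UNSURE
            return REACHABLE if self._value and other._value else UNREACHABLE

        def __repr__(self):
            return '<Status: %s>' % self._name

    REACHABLE = Status(True, 'reachable')
    UNREACHABLE = Status(False, 'unreachable')
    UNSURE = Status(None, 'unsure')
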
Dave Halter
b7151c1ef9 add a first flow analysis test 2014-08-06 23:35:30 +02:00
Dave Halter
138fa1b4de deletion of returns from SCOPE_CONTENTS was wrong. 2014-08-06 22:42:38 +02:00
Dave Halter
e7e7bd29e8 Fix generator tests (multiple yields must be called with an if random). 2014-08-06 12:45:38 +02:00
Dave Halter
23c39eff9a fix lambda issues 2014-08-06 12:40:08 +02:00
Dave Halter
e3bb0ccc2e fix a keyword statement issue 2014-08-06 12:10:36 +02:00
Dave Halter
15ec0a77fe a first very simple implementation of reachable/unreachable return statements. 2014-08-05 17:02:16 +02:00
Dave Halter
f5e49e3218 flow analysis preparation 2014-08-05 12:06:58 +02:00
Dave Halter
c44168f7ad add a Flow.previous attribute to be able to access the if flow from an else clause. 2014-08-05 11:17:18 +02:00
Dave Halter
54dce0e3b2 Fix strange issues of Python's std lib tokenizer, which might be in there as well (not sure, because I modified a lot). Fixes #449 2014-08-04 16:47:36 +02:00
Dave Halter
b2b4827ce3 moved test_token to test_tokenize 2014-08-04 16:25:33 +02:00
Dave Halter
cba100a801 test for #414 which doesn't seem to be failing anymore. 2014-08-04 16:08:47 +02:00
Dave Halter
625e88e851 isinstance checks now also give you type hints in class contexts, fixes #241. 2014-08-04 02:11:30 +02:00
Dave Halter
0a0673e87c refactoring in dynamic param searching 2014-08-04 01:39:05 +02:00
Dave Halter
7bba12e8c5 comments 2014-08-03 23:00:32 +02:00
Dave Halter
6e5d80a6b2 builtins shouldn't be unique if called by compiled.create 2014-08-01 15:51:59 +02:00
Dave Halter
68cecad996 tests for py__mro__ 2014-08-01 15:50:18 +02:00
Dave Halter
2c0a46fafe Fix an issue with CallSignatures:
If used in a longer statement, it could happen that parts of the statement were still evaluated, but the call signature is only valid at the cursor.
2014-07-31 17:47:56 +02:00
Dave Halter
7b4a188948 fix a few small issues that remained in the tests 2014-07-31 17:34:35 +02:00
Dave Halter
59b8c6b015 CompiledObjects should execute everything when reading the return information from a docstring (because it's always types, not values) 2014-07-31 17:16:24 +02:00
Dave Halter
332a16a27e elements in tuples/lists in docstrings were not executed. 2014-07-31 17:13:56 +02:00
Dave Halter
d09279e0ad change tests that provided wrong instance information 2014-07-31 15:16:24 +02:00
Dave Halter
50fa3a732d Actually start checking if the integration tests are instances on both sides of the comparison. This wasn't necessary for just autocompletion, but it's way more important now. 2014-07-31 14:58:32 +02:00
Dave Halter
d899f69686 Simplify a lot of the current InstanceElement behavior, because we now know that there's either a Statement or a Function inside (or maybe some other parser objects like an Array). 2014-07-31 13:41:10 +02:00
Dave Halter
0fbd5efefd wrap instance element creation so that it only contains functions and statements, not CompiledObject or Instance. 2014-07-31 13:16:11 +02:00
Dave Halter
870abe73d4 Calling an InstanceElement of an Instance of CompiledObject doesn't raise an error anymore. Yes, it's really that complicated. 2014-07-30 19:49:41 +02:00
Dave Halter
0851e7667e A module shouldn't be callable. 2014-07-30 17:07:57 +02:00
Dave Halter
723d1e4631 Nicer usage of py_call within InstanceElement and Python 2.7 compatibility 2014-07-30 17:00:16 +02:00
Dave Halter
7cc35fe0b8 remove a very old function call in FunctionExecution that had no effect 2014-07-30 16:41:02 +02:00
Dave Halter
cf63d20988 get rid of the evaluate_generator param 2014-07-30 16:36:27 +02:00
Dave Halter
565cfce2fe Executable -> Executed 2014-07-30 16:00:38 +02:00
Dave Halter
7bd76022bf get rid of the whole is_callable stuff, because now we can just check for hasattr(obj, 'py__call__') 2014-07-30 15:50:47 +02:00
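
In sketch form (the arguments parameter and the return convention below are illustrative)::

    def execute(obj, arguments):
        # Duck typing replaces the old is_callable checks: anything that
        # provides py__call__ is executable.
        try:
            call_method = obj.py__call__
        except AttributeError:
            return set()  # Not callable; nothing to infer.
        return call_method(arguments)
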
Dave Halter
e58dc0a3d9 simplify evaluator.execute, because now everything is using py__call__ 2014-07-30 15:40:10 +02:00
Dave Halter
373ff2c45a fix most issues related to the py__call__ stuff and generalize it. 2014-07-30 15:23:41 +02:00
Dave Halter
1e6a950aec further progress in changing to py__call__ 2014-07-30 14:40:56 +02:00
Dave Halter
ccd304bcb7 Start switching to a more Python-like naming approach; start by naming execution stuff py__call__ 2014-07-30 14:06:32 +02:00
Joel Wright
07d0a43f7e Add preceding whitespace collection to tokenizer 2014-07-30 11:59:20 +01:00
Dave Halter
196afaacbf Always operate on the class in super and not on an instance; that's the proper way. 2014-07-30 11:34:27 +02:00
Dave Halter
e81749bbe1 Merge branch 'dev' of github.com:davidhalter/jedi into dev 2014-07-30 11:27:46 +02:00
Dave Halter
3c92d175da using super() in actual executed classes wasn't possible. fixes #421 2014-07-30 11:27:27 +02:00
Dave Halter
53671bca84 replace get_super_classes with py_bases 2014-07-30 10:54:39 +02:00
Dave Halter
a6855029d2 added a few EuroPython sprint guys to AUTHORS.txt 2014-07-30 09:15:17 +02:00
Dave Halter
ddd4e92e84 Temporary SuperInstance class to eventually handle super. But we need something like mro() resolution first. 2014-07-29 23:57:29 +02:00
Dave Halter
cfe54e83ff incomplete functions shouldn't cause any trouble. fixes #429. 2014-07-28 17:42:20 +02:00
Dave Halter
a86cfa2dd7 Merge pull request #453 from alga/dev
EuroPython 2014: davidhalter/jedi#361 attempt
2014-07-27 20:58:03 +04:30
Albertas Agejevas
ecb2085174 Add flask.ext to the test fixture so tests pass even without flask installed. 2014-07-27 17:59:09 +02:00
Albertas Agejevas
25978cf591 Mentioned framework support in the docs. 2014-07-27 17:35:50 +02:00
Albertas Agejevas
ab486ba84f List old-style flask extensions, too. 2014-07-27 17:00:17 +02:00
Albertas Agejevas
733eee94b6 Fix breaking tests. 2014-07-27 16:18:24 +02:00
Albertas Agejevas
7f45bfe689 More on #361: enumerate new-style flask extensions. 2014-07-27 16:08:26 +02:00
Danilo Bargen
605b0c5881 Added note about numpydoc package
Refs #450.
2014-07-27 16:01:11 +02:00
Dave Halter
440b9b072e Merge pull request #441 from davidhalter/dynamic_inheritance
Dynamic superclasses
2014-07-27 18:26:48 +04:30
Dave Halter
4e04770a75 Merge pull request #451 from davidhalter/issue436
Issue 436: Operator statement wrapper was missing
2014-07-27 18:22:43 +04:30
Albertas Agejevas
5edd2274b2 Fix an exception in the flask ext code. 2014-07-27 15:04:55 +02:00
Albertas Agejevas
a18f8a7cbb Make tests terser. pytest rules! 2014-07-27 15:04:46 +02:00
Albertas Agejevas
13c1f79d5c A stab at davidhalter/jedi#361 (Flask extension imports)
Both new-style and old-style extensions work, but only when imported
with a 'from'.  There are two skipped tests of the full dotted name
imports.

Also, our fixture has a normal flaskext package, whereas in practice
the flaskext module is injected from a pth file and does not have
__init__.py; we need to figure out how to handle that.
2014-07-27 15:04:31 +02:00
Danilo Bargen
e8f479172a Implemented dynamic superclasses 2014-07-27 14:11:48 +02:00
Danilo Bargen
73637d7e3f Small doc fix 2014-07-27 13:12:26 +02:00
Danilo Bargen
176da139d8 Added docs for numpydoc docstrings (fixes #450) 2014-07-27 13:10:14 +02:00
Danilo Bargen
c97e1732ee Operator statement wrapper was missing (fixes #436) 2014-07-27 12:53:18 +02:00
Dave Halter
6d99e639cd Merge branch 'add-numpydoc-support' of git://github.com/immerrr/jedi into dev 2014-07-27 11:23:39 +02:00
immerrr
194d87bbad Add basic numpydoc support 2014-07-27 12:51:31 +04:00
Dave Halter
e2cdbf61de Merge remote-tracking branch 'origin/travis_py34' into dev 2014-07-27 09:52:54 +02:00
Dave Halter
9028641ca7 Merge remote-tracking branch 'origin/namedtuple' into dev 2014-07-27 09:51:50 +02:00
Dave Halter
97a204a985 Merge branch 'dev' of github.com:davidhalter/jedi into dev 2014-07-27 09:44:25 +02:00
Dave Halter
606b6851ff Remove the scope_names_generator stuff again. We should enable it at some point, but for now it just breaks tests. 2014-07-27 09:43:22 +02:00
Dave Halter
cd648e933b Merge pull request #448 from ppalucki/master
Sphinx oneline param type declaration feature
2014-07-27 12:05:32 +04:30
Pawel Palucki
d359f5d043 Sphinx oneline param type declaration feature
allows for type definition in ":param keyword"
2014-07-26 22:15:56 +02:00
Danilo Bargen
d3620fd84f Implemented support for namedtuples (fixes #107)
Note that namedtuples are only supported for Python >2.6.
2014-07-26 17:51:38 +02:00
Danilo Bargen
81e066097d Added pytest-cache to tox.ini
This allows you to only run the last failed tests using `py.test --lf`
or `tox -- --lf`.
2014-07-26 17:48:46 +02:00
Danilo Bargen
49089c06ff Travis now provides Python 3.4 2014-07-26 17:41:32 +02:00
Danilo Bargen
efebb2d6d0 Added tests for random.choice 2014-07-26 17:40:08 +02:00
Danilo Bargen
2a1c108bbf Fixed whitespace problems in completion tests 2014-07-26 17:39:05 +02:00
Dave Halter
c85bdb8ff1 Little edge case of modules that don't have a scope_names_generator, which is unfortunately missing, but not really used in Jedi.
At EuroPython's hackathon we played with it and @scoder added a small script to cython/Cython/Compiler/JediTyper.py, which makes it possible to add Cython types to a Python script.
2014-07-26 13:18:04 +02:00
Dave Halter
4f1d39d3df Set the changelog date for v0.8.1 correctly 2014-07-25 00:45:39 +02:00
Dave Halter
293fa5a14f Merge pull request #437 from blueyed/doc-changelog-minor
Minor fixes to the changelog
2014-07-25 03:14:06 +04:30
Daniel Hahler
fd2f56f3b6 minor: changelog: formatting 2014-07-25 00:31:17 +02:00
Daniel Hahler
93f6d45e11 minor: changelog: fix typo 2014-07-25 00:30:53 +02:00
242 changed files with 15855 additions and 11157 deletions


@@ -1,6 +1,7 @@
[run]
omit =
jedi/_compatibility.py
jedi/evaluate/site.py
[report]
# Regexes for lines to exclude from consideration

.gitignore vendored

@@ -1,6 +1,5 @@
*~
*.swp
*.swo
*.sw?
*.pyc
.ropeproject
.tox
@@ -11,3 +10,4 @@
/dist/
jedi.egg-info/
record.json
/.cache/


@@ -1,30 +1,32 @@
language: python
env:
- TOXENV=py26
- TOXENV=py27
- TOXENV=py32
- TOXENV=py33
- TOXENV=py34
- TOXENV=pypy
- TOXENV=cov
- TOXENV=sith
sudo: false
python:
- 2.6
- 2.7
- 3.3
- 3.4
- 3.5
- 3.6
- pypy
- "3.7-dev"
matrix:
allow_failures:
- env: TOXENV=cov
- env: TOXENV=sith
- env: TOXENV=pypy
allow_failures:
- python: pypy
- env: TOXENV=cov
- env: TOXENV=sith
- python: 3.7-dev
include:
- python: 3.5
env: TOXENV=cov
- python: 3.5
env: TOXENV=sith
install:
- pip install --quiet --use-mirrors tox
# install python 3.4 from PPA since Travis does not have python 3.4 yet
- if [ "$TOXENV" = "py34" ]; then
sudo apt-add-repository -y ppa:fkrull/deadsnakes;
sudo apt-get update;
sudo apt-get install python3.4;
fi
- pip install --quiet tox-travis
script:
- tox
after_script:
- if [ $TOXENV == "cov" ]; then
pip install --quiet --use-mirrors coveralls;
pip install --quiet coveralls;
coveralls;
fi


@@ -26,5 +26,25 @@ Jorgen Schaefer (@jorgenschaefer) <contact@jorgenschaefer.de>
Fredrik Bergroth (@fbergroth)
Mathias Fußenegger (@mfussenegger)
Syohei Yoshida (@syohex) <syohex@gmail.com>
ppalucky (@ppalucky)
immerrr (@immerrr) immerrr@gmail.com
Albertas Agejevas (@alga)
Savor d'Isavano (@KenetJervet) <newelevenken@163.com>
Phillip Berndt (@phillipberndt) <phillip.berndt@gmail.com>
Ian Lee (@IanLee1521) <IanLee1521@gmail.com>
Farkhad Khatamov (@hatamov) <comsgn@gmail.com>
Kevin Kelley (@kelleyk) <kelleyk@kelleyk.net>
Sid Shanker (@squidarth) <sid.p.shanker@gmail.com>
Reinoud Elhorst (@reinhrst)
Guido van Rossum (@gvanrossum) <guido@python.org>
Dmytro Sadovnychyi (@sadovnychyi) <jedi@dmit.ro>
Cristi Burcă (@scribu)
bstaint (@bstaint)
Mathias Rav (@Mortal) <rav@cs.au.dk>
Daniel Fiterman (@dfit99) <fitermandaniel2@gmail.com>
Simon Ruggier (@sruggier)
Élie Gouzien (@ElieGouzien)
Robin Roth (@robinro)
Malte Plath (@langsamer)
Note: (@user) means a github user name.


@@ -3,11 +3,56 @@
Changelog
---------
0.8.1 (2014-07-15)
0.11.0 (2017-09-20)
+++++++++++++++++++
* Bugfix release, the last release forgot to include files that improve
autocompletion for builtin libraries. Fixed.
- Split Jedi's parser into a separate project called ``parso``.
- Avoiding side effects in REPL completion.
- Numpy docstring support should be much better.
- Moved the `settings.*recursion*` away, they are no longer usable.
0.10.2 (2017-04-05)
+++++++++++++++++++
- Python Packaging sucks. Some files were not included in 0.10.1.
0.10.1 (2017-04-05)
+++++++++++++++++++
- Fixed a few very annoying bugs.
- Prepared the parser to be factored out of Jedi.
0.10.0 (2017-02-03)
+++++++++++++++++++
- Actual semantic completions for the complete Python syntax.
- Basic type inference for ``yield from`` PEP 380.
- PEP 484 support (most of the important features of it). Thanks Claude! (@reinhrst)
- Added ``get_line_code`` to ``Definition`` and ``Completion`` objects.
- Completely rewritten the type inference engine.
- A new and better parser for (fast) parsing diffs of Python code.
0.9.0 (2015-04-10)
++++++++++++++++++
- The import logic has been rewritten to look more like Python's. There is now
an ``Evaluator.modules`` import cache, which resembles ``sys.modules``.
- Integrated the parser of 2to3. This will make refactoring possible. It will
also be possible to check for error messages (like compiling an AST would give)
in the future.
- With the new parser, the evaluation also completely changed. It's now simpler
and more readable.
- Completely rewritten REPL completion.
- Added ``jedi.names``, a command to do static analysis. Thanks to that
sourcegraph guys for sponsoring this!
- Alpha version of the linter.
0.8.1 (2014-07-23)
+++++++++++++++++++
- Bugfix release, the last release forgot to include files that improve
autocompletion for builtin libraries. Fixed.
0.8.0 (2014-05-05)
+++++++++++++++++++
@@ -16,26 +61,27 @@ Changelog
drastically. Loading times are down as well (it takes basically as long as an
import).
- REPL completion is starting to become usable.
- Various small API changes. Generally this released focuses on stability and
- Various small API changes. Generally this release focuses on stability and
refactoring of internal APIs.
- Introducing operator precedence, which makes calculating correct Array indices
and ``__getattr__`` strings possible.
- Introducing operator precedence, which makes calculating correct Array
indices and ``__getattr__`` strings possible.
0.7.0 (2013-08-09)
++++++++++++++++++
- Switched from LGPL to MIT license
- Added an Interpreter class to the API to make autocompletion in REPL possible.
- Added autocompletion support for namespace packages
- Add sith.py, a new random testing method
- Switched from LGPL to MIT license.
- Added an Interpreter class to the API to make autocompletion in REPL
possible.
- Added autocompletion support for namespace packages.
- Add sith.py, a new random testing method.
0.6.0 (2013-05-14)
++++++++++++++++++
- Much faster parser with builtin part caching
- A test suite, thanks @tkf
- Much faster parser with builtin part caching.
- A test suite, thanks @tkf.
0.5 versions (2012)
+++++++++++++++++++
- Initial development
- Initial development.


@@ -1,28 +1,8 @@
Pull Requests are great (on the **dev** branch)! Readme/Documentation changes
are ok in the master branch.
Pull Requests are great.
1. Fork the Repo on github.
2. If you are adding functionality or fixing a bug, please add a test!
3. Add your name to AUTHORS.txt
4. Push to your fork and submit a **pull request to the dev branch**.
My **master** branch is a 100% stable (should be). I only push to it after I am
certain that things are working out. Many people are using Jedi directly from
the github master branch.
4. Push to your fork and submit a pull request.
**Try to use the PEP8 style guide.**
Changing Issues to Pull Requests (Github)
-----------------------------------------
If you have have previously filed a GitHub issue and want to contribute code
that addresses that issue, we prefer it if you use
[hub](https://github.com/github/hub) to convert your existing issue to a pull
request. To do that, first push the changes to a separate branch in your fork
and then issue the following command:
hub pull-request -b davidhalter:dev -i <issue-number> -h <your-github-username>:<your-branch-name>
It's no strict requirement though, if you don't have hub installed or prefer to
use the web interface, then feel free to post a traditional pull request.


@@ -1,3 +1,6 @@
All contributions towards Jedi are MIT licensed.
-------------------------------------------------------------------------------
The MIT License (MIT)
Copyright (c) <2013> <David Halter and others, see AUTHORS.txt>


@@ -7,7 +7,9 @@ include sith.py
include conftest.py
include pytest.ini
include tox.ini
include requirements.txt
include jedi/evaluate/compiled/fake/*.pym
include jedi/parser/python/grammar*.txt
recursive-include test *
recursive-include docs *
recursive-exclude * *.pyc


@@ -1,6 +1,6 @@
###################################################
Jedi - an awesome autocompletion library for Python
###################################################
###################################################################
Jedi - an awesome autocompletion/static analysis library for Python
###################################################################
.. image:: https://secure.travis-ci.org/davidhalter/jedi.png?branch=master
:target: http://travis-ci.org/davidhalter/jedi
@@ -10,43 +10,47 @@ Jedi - an awesome autocompletion library for Python
:target: https://coveralls.io/r/davidhalter/jedi
:alt: Coverage Status
.. image:: https://pypip.in/d/jedi/badge.png
:target: https://crate.io/packages/jedi/
:alt: Number of PyPI downloads
.. image:: https://pypip.in/v/jedi/badge.png
:target: https://crate.io/packages/jedi/
:alt: Latest PyPI version
*If you have specific questions, please add an issue or ask on* `stackoverflow
<https://stackoverflow.com/questions/tagged/python-jedi>`_ *with the label* ``python-jedi``.
Jedi is an autocompletion tool for Python that can be used in IDEs/editors.
Jedi works. Jedi is fast. It understands all of the basic Python syntax
elements including many builtin functions.
Additionaly, Jedi suports two different goto functions and has support for
renaming as well as Pydoc support and some other IDE features.
Jedi is a static analysis tool for Python that can be used in IDEs/editors. Its
historic focus is autocompletion, but does static analysis for now as well.
Jedi is fast and is very well tested. It understands Python on a deeper level
than all other static analysis frameworks for Python.
Jedi has support for two different goto functions. It's possible to search for
related names and to list all names in a Python file and infer them. Jedi
understands docstrings and you can use Jedi autocompletion in your REPL as
well.
Jedi uses a very simple API to connect with IDE's. There's a reference
implementation as a `VIM-Plugin <https://github.com/davidhalter/jedi-vim>`_,
which uses Jedi's autocompletion. I encourage you to use Jedi in your IDEs.
It's really easy. If there are any problems (also with licensing), just contact
me.
which uses Jedi's autocompletion. We encourage you to use Jedi in your IDEs.
It's really easy.
Jedi can be used with the following editors:
Jedi can currently be used with the following editors/projects:
- Vim (jedi-vim_, YouCompleteMe_)
- Emacs (Jedi.el_, elpy_, anaconda-mode_)
- Vim (jedi-vim_, YouCompleteMe_, deoplete-jedi_, completor.vim_)
- Emacs (Jedi.el_, company-mode_, elpy_, anaconda-mode_, ycmd_)
- Sublime Text (SublimeJEDI_ [ST2 + ST3], anaconda_ [only ST3])
- SynWrite_
- TextMate_ (Not sure if it's actually working)
- Kate_ version 4.13+ supports it natively, you have to enable it, though. [`proof
<https://projects.kde.org/projects/kde/applications/kate/repository/show?rev=KDE%2F4.13>`_]
And it powers the following projects:
- Atom_ (autocomplete-python-jedi_)
- SourceLair_
- `GNOME Builder`_ (with support for GObject Introspection)
- `Visual Studio Code`_ (via `Python Extension <https://marketplace.visualstudio.com/items?itemName=donjayamanne.python>`_)
- Gedit (gedi_)
- wdb_ - Web Debugger
- `Eric IDE`_ (Available as a plugin)
- `Ipython 6.0.0+ <http://ipython.readthedocs.io/en/stable/whatsnew/version6.html>`_
and many more!
Here are some pictures:
Here are some pictures taken from jedi-vim_:
.. image:: https://github.com/davidhalter/jedi/raw/master/docs/_screenshots/screenshot_complete.png
@@ -58,15 +62,15 @@ Display of function/class bodies, docstrings.
.. image:: https://github.com/davidhalter/jedi/raw/master/docs/_screenshots/screenshot_pydoc.png
Pydoc support (with highlighting, Shift+k).
Pydoc support (Shift+k).
There is also support for goto and renaming.
Get the latest version from `github <https://github.com/davidhalter/jedi>`_
(master branch should always be kind of stable/working).
Docs are available at `https://jedi.jedidjah.ch/
<https://jedi.jedidjah.ch/>`_. Pull requests with documentation
Docs are available at `https://jedi.readthedocs.org/en/latest/
<https://jedi.readthedocs.org/en/latest/>`_. Pull requests with documentation
enhancements and/or fixes are awesome and most welcome. Jedi uses `semantic
versioning <http://semver.org/>`_.
@@ -81,40 +85,77 @@ information about how to make it work with your editor, refer to the
corresponding documentation.
You don't want to use ``pip``? Please refer to the `manual
<https://jedi.jedidjah.ch/en/latest/docs/installation.html>`_.
<https://jedi.readthedocs.org/en/latest/docs/installation.html>`_.
Feature Support and Caveats
===========================
Jedi really understands your Python code. For a comprehensive list what Jedi
can do, see: `Features
<https://jedi.jedidjah.ch/en/latest/docs/features.html>`_. A list of
understands, see: `Features
<https://jedi.readthedocs.org/en/latest/docs/features.html>`_. A list of
caveats can be found on the same page.
You can run Jedi on cPython 2.6, 2.7, 3.2 or 3.3, but it should also
You can run Jedi on cPython 2.6, 2.7, 3.3, 3.4 or 3.5 but it should also
understand/parse code older than those versions.
Tips on how to use Jedi efficiently can be found `here
<https://jedi.jedidjah.ch/en/latest/docs/recipes.html>`_.
<https://jedi.readthedocs.org/en/latest/docs/features.html#recipes>`_.
API
---
You can find the documentation for the `API here <https://jedi.readthedocs.org/en/latest/docs/plugin-api.html>`_.
API for IDEs
============
Autocompletion / Goto / Pydoc
-----------------------------
It's very easy to create an editor plugin that uses Jedi. See `Plugin API
<https://jedi.jedidjah.ch/en/latest/docs/plugin-api.html>`_ for more
information.
Please check the API for a good explanation. There are the following commands:
If you have specific questions, please add an issue or ask on `stackoverflow
<https://stackoverflow.com>`_ with the label ``python-jedi``.
- ``jedi.Script.goto_assignments``
- ``jedi.Script.completions``
- ``jedi.Script.usages``
The returned objects are very powerful and really all you might need.
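
For illustration, a minimal sketch of driving these commands (the toy source and file name are illustrative, assuming the 0.11 ``Script(source, line, column, path)`` signature)::

    import jedi

    source = "import json\njson.lo"
    # Cursor on line 2, right after "json.lo".
    script = jedi.Script(source, 2, len("json.lo"), "example.py")

    for completion in script.completions():
        print(completion.name)  # e.g. load, loads

    # goto_assignments() and usages() return Definition objects with
    # attributes such as module_path, line and column.
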
Autocompletion in your REPL (IPython, etc.)
-------------------------------------------
Starting with Ipython `6.0.0` Jedi is a dependency of IPython. Autocompletion
in IPython is therefore possible without additional configuration.
It's possible to have Jedi autocompletion in REPL modes - `example video <https://vimeo.com/122332037>`_.
This means that in Python you can enable tab completion in a `REPL
<https://jedi.readthedocs.org/en/latest/docs/usage.html#tab-completion-in-the-python-shell>`_.
Static Analysis / Linter
------------------------
To do all forms of static analysis, please try to use ``jedi.names``. It will
return a list of names that you can use to infer types and so on.
Linting is another thing that is going to be part of Jedi. For now you can try
an alpha version ``python -m jedi linter``. The API might change though and
it's still buggy. It's Jedi's goal to be smarter than classic linter and
understand ``AttributeError`` and other code issues.
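
A minimal sketch of such a ``jedi.names`` call (the sample source is illustrative)::

    import jedi

    source = '''
    import json

    def load_config(path):
        with open(path) as f:
            return json.load(f)
    '''

    # Each result is a Definition object; all_scopes=True also includes
    # names nested inside functions and classes.
    for definition in jedi.names(source, all_scopes=True):
        print(definition.name, definition.type, definition.line)
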
Refactoring
-----------
Jedi's parser would support refactoring, but there's no API to use it right
now. If you're interested in helping out here, let me know. With the latest
parser changes, it should be very easy to actually make it work.
Development
===========
There's a pretty good and extensive `development documentation
<https://jedi.jedidjah.ch/en/latest/docs/development.html>`_.
<https://jedi.readthedocs.org/en/latest/docs/development.html>`_.
Testing
@@ -137,17 +178,38 @@ Tests are also run automatically on `Travis CI
<https://travis-ci.org/davidhalter/jedi/>`_.
For more detailed information visit the `testing documentation
<https://jedi.jedidjah.ch/en/latest/docs/testing.html>`_
<https://jedi.readthedocs.org/en/latest/docs/testing.html>`_
Acknowledgements
================
- Takafumi Arakaki (@tkf) for creating a solid test environment and a lot of
other things.
- Danilo Bargen (@dbrgn) for general housekeeping and being a good friend :).
- Guido van Rossum (@gvanrossum) for creating the parser generator pgen2
(originally used in lib2to3).
.. _jedi-vim: https://github.com/davidhalter/jedi-vim
.. _youcompleteme: http://valloric.github.io/YouCompleteMe/
.. _deoplete-jedi: https://github.com/zchee/deoplete-jedi
.. _completor.vim: https://github.com/maralla/completor.vim
.. _Jedi.el: https://github.com/tkf/emacs-jedi
.. _company-mode: https://github.com/syohex/emacs-company-jedi
.. _elpy: https://github.com/jorgenschaefer/elpy
.. _anaconda-mode: https://github.com/proofit404/anaconda-mode
.. _ycmd: https://github.com/abingham/emacs-ycmd
.. _sublimejedi: https://github.com/srusskih/SublimeJEDI
.. _anaconda: https://github.com/DamnWidget/anaconda
.. _SynWrite: http://uvviewsoft.com/synjedi/
.. _wdb: https://github.com/Kozea/wdb
.. _TextMate: https://github.com/lawrenceakka/python-jedi.tmbundle
.. _Kate: http://kate-editor.org
.. _Atom: https://atom.io/
.. _autocomplete-python-jedi: https://atom.io/packages/autocomplete-python-jedi
.. _SourceLair: https://www.sourcelair.com
.. _GNOME Builder: https://wiki.gnome.org/Apps/Builder
.. _Visual Studio Code: https://code.visualstudio.com/
.. _gedi: https://github.com/isamert/gedi
.. _Eric IDE: http://eric-ide.python-projects.org


@@ -1,8 +1,9 @@
import tempfile
import shutil
import jedi
import pytest
import jedi
collect_ignore = ["setup.py"]
@@ -47,3 +48,25 @@ def pytest_unconfigure(config):
global jedi_cache_directory_orig, jedi_cache_directory_temp
jedi.settings.cache_directory = jedi_cache_directory_orig
shutil.rmtree(jedi_cache_directory_temp)
@pytest.fixture(scope='session')
def clean_jedi_cache(request):
"""
Set `jedi.settings.cache_directory` to a temporary directory during test.
Note that you can't use built-in `tmpdir` and `monkeypatch`
fixture here because their scope is 'function', which is not used
in 'session' scope fixture.
This fixture is activated in ../pytest.ini.
"""
from jedi import settings
old = settings.cache_directory
tmp = tempfile.mkdtemp(prefix='jedi-test-')
settings.cache_directory = tmp
@request.addfinalizer
def restore():
settings.cache_directory = old
shutil.rmtree(tmp)

deploy-master.sh Executable file

@@ -0,0 +1,52 @@
#!/usr/bin/env bash
# The script creates a separate folder in build/ and creates tags there, pushes
# them and then uploads the package to PyPI.
set -eu -o pipefail
BASE_DIR=$(dirname $(readlink -f "$0"))
cd $BASE_DIR
git fetch --tags
PROJECT_NAME=jedi
BRANCH=master
BUILD_FOLDER=build
[ -d $BUILD_FOLDER ] || mkdir $BUILD_FOLDER
# Remove the previous deployment first.
# Checkout the right branch
cd $BUILD_FOLDER
rm -rf $PROJECT_NAME
git clone .. $PROJECT_NAME
cd $PROJECT_NAME
git checkout $BRANCH
# Test first.
tox
# Create tag
tag=v$(python -c "import $PROJECT_NAME; print($PROJECT_NAME.__version__)")
master_ref=$(git show-ref -s heads/$BRANCH)
tag_ref=$(git show-ref -s $tag || true)
if [[ $tag_ref ]]; then
if [[ $tag_ref != $master_ref ]]; then
echo 'Cannot tag something that has already been tagged with another commit.'
exit 1
fi
else
git tag $tag
git push --tags
fi
# Package and upload to PyPI
#rm -rf dist/ - Not needed anymore, because the folder is never reused.
echo `pwd`
python setup.py sdist bdist_wheel
# Maybe do a pip install twine before.
twine upload dist/*
cd $BASE_DIR
# The tags have been pushed to this repo. Push the tags to github, now.
git push --tags

docs/README.md Normal file

@@ -0,0 +1,14 @@
Installation
------------
Install the graphviz library::
sudo apt-get install graphviz
Install sphinx::
sudo pip install sphinx
You might also need to install the Python graphviz interface::
sudo pip install graphviz


@@ -14,7 +14,7 @@ Introduction
------------
This page tries to address the fundamental demand for documentation of the
|jedi| interals. Understanding a dynamic language is a complex task. Especially
|jedi| internals. Understanding a dynamic language is a complex task. Especially
because type inference in Python can be a very recursive task. Therefore |jedi|
couldn't get rid of complexity. I know that **simple is better than complex**,
but unfortunately it sometimes requires complex solutions to understand complex
@@ -59,27 +59,25 @@ Parser (parser/__init__.py)
.. automodule:: jedi.parser
Parser Representation (parser/representation.py)
Parser Tree (parser/tree.py)
++++++++++++++++++++++++++++++++++++++++++++++++
.. automodule:: jedi.parser.representation
.. automodule:: jedi.parser.tree
Class inheritance diagram:
.. inheritance-diagram::
SubModule
Module
Class
Function
Lambda
Flow
ForFlow
ForStmt
Import
Statement
ExprStmt
Param
Call
Array
Name
ListComprehension
CompFor
:parts: 1
.. _evaluate:
@@ -95,12 +93,10 @@ Evaluation Representation (evaluate/representation.py)
.. automodule:: jedi.evaluate.representation
.. inheritance-diagram::
Executable
Instance
InstanceElement
Class
Function
FunctionExecution
jedi.evaluate.instance.TreeInstance
jedi.evaluate.representation.ClassContext
jedi.evaluate.representation.FunctionContext
jedi.evaluate.representation.FunctionExecutionContext
:parts: 1
@@ -133,7 +129,7 @@ Core Extensions is a summary of the following topics:
- :ref:`Iterables & Dynamic Arrays <iterables>`
- :ref:`Dynamic Parameters <dynamic>`
- :ref:`Fast Parser <fast_parser>`
- :ref:`Diff Parser <diff-parser>`
- :ref:`Docstrings <docstrings>`
- :ref:`Refactoring <refactoring>`
@@ -147,7 +143,7 @@ Iterables & Dynamic Arrays (evaluate/iterable.py)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To understand Python on a deeper level, |jedi| needs to understand some of the
dynamic features of Python, however this probably the most complicated part:
dynamic features of Python like lists that are filled after creation:
.. automodule:: jedi.evaluate.iterable
@@ -160,12 +156,12 @@ Parameter completion (evaluate/dynamic.py)
.. automodule:: jedi.evaluate.dynamic
.. _fast_parser:
.. _diff-parser:
Fast Parser (parser/fast.py)
Diff Parser (parser/diff.py)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: jedi.parser.fast
.. automodule:: jedi.parser.python.diff
.. _docstrings:


@@ -3,26 +3,39 @@
Features and Caveats
====================
|jedi| supports many of the widely used Python features:
Jedi obviously supports autocompletion. It's also possible to get it working in
(:ref:`your REPL (IPython, etc.) <repl-completion>`).
Static analysis is also possible by using the command ``jedi.names``.
The Jedi Linter is currently in an alpha version and can be tested by calling
``python -m jedi linter``.
Jedi would in theory support refactoring, but we have never publicized it,
because it's not production ready. If you're interested in helping out here,
let me know. With the latest parser changes, it should be very easy to actually
make it work.
General Features
----------------
- python 2.6+ and 3.2+ support
- python 2.6+ and 3.3+ support
- ignores syntax errors and wrong indentation
- can deal with complex module / function / class structures
- virtualenv support
- can infer function arguments from sphinx and epydoc docstrings (:ref:`type
hinting <type-hinting>`)
- can infer function arguments from sphinx, epydoc and basic numpydoc docstrings,
and PEP0484-style type hints (:ref:`type hinting <type-hinting>`)
Supported Python Features
-------------------------
|jedi| supports many of the widely used Python features:
- builtins
- multiple returns or yields
- tuple assignments / array indexing / dictionary indexing
- returns, yields, yield from
- tuple assignments / array indexing / dictionary indexing / star unpacking
- with-statement / exception handling
- ``*args`` / ``**kwargs``
- decorators / lambdas / closures
@@ -41,15 +54,17 @@ Supported Python Features
- simple/usual ``sys.path`` modifications
- ``isinstance`` checks for if/while/assert
- namespace packages (includes ``pkgutil`` and ``pkg_resources`` namespaces)
- Django / Flask / Buildout support
Unsupported Features
--------------------
Not Supported
-------------
Not yet implemented:
- manipulations of instances outside the instance variables without using
methods
- implicit namespace packages (Python 3.3+, `PEP 420 <https://www.python.org/dev/peps/pep-0420/>`_)
Will probably never be implemented:
@@ -62,21 +77,6 @@ Will probably never be implemented:
Caveats
-------
**Malformed Syntax**
Syntax errors and other strange stuff may lead to undefined behaviour of the
completion. |jedi| is **NOT** a Python compiler, that tries to correct you. It
is a tool that wants to help you. But **YOU** have to know Python, not |jedi|.
**Legacy Python 2 Features**
This framework should work for both Python 2/3. However, some things were just
not as *pythonic* in Python 2 as things should be. To keep things simple, some
older Python 2 features have been left out:
- Classes: Always Python 3 like, therefore all classes inherit from ``object``.
- Generators: No ``next()`` method. The ``__next__()`` method is used instead.
**Slow Performance**
Importing ``numpy`` can be quite slow sometimes, as well as loading the
@@ -94,7 +94,7 @@ option than to execute those modules. However: Execute isn't that critical (as
e.g. in pythoncomplete, which used to execute *every* import!), because it
means one import and no more. So basically the only dangerous thing is using
the import itself. If your ``c_builtin`` uses some strange initializations, it
might be dangerous. But if it does you're screwed anyways, because eventualy
might be dangerous. But if it does you're screwed anyways, because eventually
you're going to execute your code, which executes the import.
@@ -111,7 +111,49 @@ Type Hinting
If |jedi| cannot detect the type of a function argument correctly (due to the
dynamic nature of Python), you can help it by hinting the type using
Sphinx-style info field lists or Epydoc docstrings.
one of the following docstring/annotation syntax styles:
**PEP-0484 style**
https://www.python.org/dev/peps/pep-0484/
function annotations (python 3 only; python 2 function annotations with
comments in planned but not yet implemented)
::
def myfunction(node: ProgramNode, foo: str) -> None:
"""Do something with a ``node``.
"""
node.| # complete here
assignment, for-loop and with-statement type hints (all python versions).
Note that the type hints must be on the same line as the statement
::
x = foo() # type: int
x, y = 2, 3 # type: typing.Optional[int], typing.Union[int, str] # typing module is mostly supported
for key, value in foo.items(): # type: str, Employee # note that Employee must be in scope
pass
with foo() as f: # type: int
print(f + 3)
Most of the features in PEP-0484 are supported including the typing module
(for python < 3.5 you have to do ``pip install typing`` to use these),
and forward references.
Things that are missing (and this is not an exhaustive list; some of these
are planned, others might be hard to implement and provide little worth):
- annotating functions with comments: https://www.python.org/dev/peps/pep-0484/#suggested-syntax-for-python-2-7-and-straddling-code
- understanding ``typing.cast()``
- stub files: https://www.python.org/dev/peps/pep-0484/#stub-files
- ``typing.Callable``
- ``typing.TypeVar``
- User defined generic types: https://www.python.org/dev/peps/pep-0484/#user-defined-generic-types
**Sphinx style**
@@ -119,10 +161,11 @@ http://sphinx-doc.org/domains.html#info-field-lists
::
def myfunction(node):
def myfunction(node, foo):
"""Do something with a ``node``.
:type node: ProgramNode
:param str foo: foo parameter description
"""
node.| # complete here
@@ -141,12 +184,45 @@ http://epydoc.sourceforge.net/manual-fields.html
"""
node.| # complete here
**Numpydoc**
https://github.com/numpy/numpy/blob/master/doc/HOWTO_DOCUMENT.rst.txt
In order to support the numpydoc format, you need to install the `numpydoc
<https://pypi.python.org/pypi/numpydoc>`__ package.
::
def foo(var1, var2, long_var_name='hi'):
r"""A one-line summary that does not use variable names or the
function name.
...
Parameters
----------
var1 : array_like
Array_like means all those objects -- lists, nested lists,
etc. -- that can be converted to an array. We can also
refer to variables like `var1`.
var2 : int
The type above can either refer to an actual Python type
(e.g. ``int``), or describe the type of the variable in more
detail, e.g. ``(N,) ndarray`` or ``array_like``.
long_variable_name : {'hi', 'ho'}, optional
Choices in brackets, default first when optional.
...
"""
var2.| # complete here
A little history
----------------
The Star Wars Jedi are awesome. My Jedi software tries to imitate a little bit
of the precognition the Jedi have. There's even an awesome `scene
<http://www.youtube.com/watch?v=5BDO3pyavOY>`_ of Monty Python Jedis :-).
<https://youtu.be/yHRJLIf7wMU>`_ of Monty Python Jedis :-).
But actually the name hasn't so much to do with Star Wars. It's part of my
second name.


@@ -30,18 +30,20 @@ System-wide installation via a package manager
Arch Linux
~~~~~~~~~~
You can install |jedi| directly from official AUR packages:
You can install |jedi| directly from official Arch Linux packages:
- `python-jedi <https://aur.archlinux.org/packages/python-jedi/>`__ (Python 3)
- `python2-jedi <https://aur.archlinux.org/packages/python2-jedi/>`__ (Python 2)
- `python-jedi <https://www.archlinux.org/packages/community/any/python-jedi/>`__
(Python 3)
- `python2-jedi <https://www.archlinux.org/packages/community/any/python2-jedi/>`__
(Python 2)
The specified Python version just refers to the *runtime environment* for
|jedi|. Use the Python 2 version if you're running vim (or whatever editor you
use) under Python 2. Otherwise, use the Python 3 version. But whatever version
you choose, both are able to complete both Python 2 and 3 *code*.
(There is also a packaged version of the vim plugin available: `vim-jedi at AUR
<https://aur.archlinux.org/packages/vim-jedi/>`__.)
(There is also a packaged version of the vim plugin available:
`vim-jedi at Arch Linux <https://www.archlinux.org/packages/community/any/vim-jedi/>`__.)
Debian
~~~~~~

docs/docs/parser.rst Normal file

@@ -0,0 +1,36 @@
.. _xxx:
Parser Tree
===========
Usage
-----
.. automodule:: jedi.parser.python
:members:
:undoc-members:
Parser Tree Base Class
----------------------
All nodes and leaves have these methods/properties:
.. autoclass:: jedi.parser.tree.NodeOrLeaf
:members:
:undoc-members:
Python Parser Tree
------------------
.. automodule:: jedi.parser.python.tree
:members:
:undoc-members:
:show-inheritance:
Utility
-------
.. autofunction:: jedi.parser.tree.search_ancestor


@@ -47,14 +47,14 @@ Completions:
>>> script = jedi.Script(source, 1, 19, '')
>>> script
<jedi.api.Script object at 0x2121b10>
>>> completions = script.complete()
>>> completions = script.completions()
>>> completions
[<Completion: load>, <Completion: loads>]
>>> completions[1]
<Completion: loads>
>>> completions[1].complete
'oads'
>>> completions[1].word
>>> completions[1].name
'loads'
Definitions / Goto:


@@ -0,0 +1,106 @@
This file is the start of the documentation of how static analysis works.
Below is a list of parser names that are used within nodes_to_execute.
------------ cared for:
global_stmt
exec_stmt # no priority
assert_stmt
if_stmt
while_stmt
for_stmt
try_stmt
(except_clause)
with_stmt
(with_item)
(with_var)
print_stmt
del_stmt
return_stmt
raise_stmt
yield_expr
file_input
funcdef
param
old_lambdef
lambdef
import_name
import_from
(import_as_name)
(dotted_as_name)
(import_as_names)
(dotted_as_names)
(dotted_name)
classdef
comp_for
(comp_if) ?
decorator
----------- add basic
test
or_test
and_test
not_test
expr
xor_expr
and_expr
shift_expr
arith_expr
term
factor
power
atom
comparison
expr_stmt
testlist
testlist1
testlist_safe
----------- special care:
# mostly depends on how we handle the other ones.
testlist_star_expr # should probably just work with expr_stmt
star_expr
exprlist # just ignore? then names are just resolved. Strange anyway, bc expr is not really allowed in the list, typically.
----------- ignore:
suite
subscriptlist
subscript
simple_stmt
?? sliceop # can probably just be added.
testlist_comp # prob ignore and care about it with atom.
dictorsetmaker
trailer
decorators
decorated
# always execute function arguments? -> no problem with stars.
# Also arglist and argument are different in different grammars.
arglist
argument
----------- remove:
tname # only exists in current Jedi parser. REMOVE!
tfpdef # python 2: tuple assignment; python 3: annotation
vfpdef # reduced in python 3 and therefore not existing.
tfplist # not in 3
vfplist # not in 3
--------- not existing with parser reductions.
small_stmt
import_stmt
flow_stmt
compound_stmt
stmt
pass_stmt
break_stmt
continue_stmt
comp_op
augassign
old_test
typedargslist # afaik becomes [param]
varargslist # dito
vname
comp_iter
test_nocond


@@ -21,6 +21,7 @@ Vim:
- jedi-vim_
- YouCompleteMe_
- deoplete-jedi_
Emacs:
@@ -47,20 +48,46 @@ Kate:
<https://projects.kde.org/projects/kde/applications/kate/repository/entry/addons/kate/pate/src/plugins/python_autocomplete_jedi.py?rev=KDE%2F4.13>`__,
you have to enable it, though.
Visual Studio Code:
.. _other-software:
- `Python Extension`_
Other Software Using Jedi
-------------------------
Atom:
- wdb_ - Web Debugger
- autocomplete-python-jedi_
SourceLair:
- SourceLair_
GNOME Builder:
- `GNOME Builder`_ `supports it natively
<https://git.gnome.org/browse/gnome-builder/tree/plugins/jedi>`__,
and is enabled by default.
Gedit:
- gedi_
Eric IDE:
- `Eric IDE`_ (Available as a plugin)
Web Debugger:
- wdb_
and many more!
.. _repl-completion:
Tab completion in the Python Shell
Tab Completion in the Python Shell
----------------------------------
Starting with Ipython `6.0.0` Jedi is a dependency of IPython. Autocompletion
in IPython is therefore possible without additional configuration.
There are two different options how you can use Jedi autocompletion in
your Python interpreter. One with your custom ``$HOME/.pythonrc.py`` file
and one that uses ``PYTHONSTARTUP``.
@@ -68,7 +95,7 @@ and one that uses ``PYTHONSTARTUP``.
Using ``PYTHONSTARTUP``
~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: jedi.replstartup
.. automodule:: jedi.api.replstartup
Using a custom ``$HOME/.pythonrc.py``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -77,6 +104,7 @@ Using a custom ``$HOME/.pythonrc.py``
.. _jedi-vim: https://github.com/davidhalter/jedi-vim
.. _youcompleteme: http://valloric.github.io/YouCompleteMe/
.. _deoplete-jedi: https://github.com/zchee/deoplete-jedi
.. _Jedi.el: https://github.com/tkf/emacs-jedi
.. _elpy: https://github.com/jorgenschaefer/elpy
.. _anaconda-mode: https://github.com/proofit404/anaconda-mode
@@ -86,3 +114,9 @@ Using a custom ``$HOME/.pythonrc.py``
.. _wdb: https://github.com/Kozea/wdb
.. _TextMate: https://github.com/lawrenceakka/python-jedi.tmbundle
.. _kate: http://kate-editor.org/
.. _autocomplete-python-jedi: https://atom.io/packages/autocomplete-python-jedi
.. _SourceLair: https://www.sourcelair.com
.. _GNOME Builder: https://wiki.gnome.org/Apps/Builder/
.. _gedi: https://github.com/isamert/gedi
.. _Eric IDE: http://eric-ide.python-projects.org
.. _Python Extension: https://marketplace.visualstudio.com/items?itemName=donjayamanne.python


@@ -1,7 +1,7 @@
.. include global.rst
Jedi - an awesome autocompletion library for Python
===================================================
Jedi - an awesome autocompletion/static analysis library for Python
===================================================================
Release v\ |release|. (:doc:`Installation <docs/installation>`)


@@ -1,16 +1,18 @@
"""
Jedi is an autocompletion tool for Python that can be used in IDEs/editors.
Jedi works. Jedi is fast. It understands all of the basic Python syntax
elements including many builtin functions.
Jedi is a static analysis tool for Python that can be used in IDEs/editors. Its
historic focus is autocompletion, but does static analysis for now as well.
Jedi is fast and is very well tested. It understands Python on a deeper level
than all other static analysis frameworks for Python.
Additionaly, Jedi suports two different goto functions and has support for
renaming as well as Pydoc support and some other IDE features.
Jedi has support for two different goto functions. It's possible to search for
related names and to list all names in a Python file and infer them. Jedi
understands docstrings and you can use Jedi autocompletion in your REPL as
well.
Jedi uses a very simple API to connect with IDE's. There's a reference
implementation as a `VIM-Plugin <http://github.com/davidhalter/jedi-vim>`_,
which uses Jedi's autocompletion. I encourage you to use Jedi in your IDEs.
It's really easy. If there are any problems (also with licensing), just contact
me.
implementation as a `VIM-Plugin <https://github.com/davidhalter/jedi-vim>`_,
which uses Jedi's autocompletion. We encourage you to use Jedi in your IDEs.
It's really easy.
To give you a simple example how you can use the Jedi library, here is an
example for the autocompletion feature:
@@ -34,8 +36,8 @@ As you see Jedi is pretty simple and allows you to concentrate on writing a
good text editor, while still having very good IDE features for Python.
"""
__version__ = '0.8.1-final0'
__version__ = '0.11.1'
from jedi.api import Script, Interpreter, NotFoundError, set_debug_function
from jedi.api import preload_module, defined_names
from jedi.api import Script, Interpreter, set_debug_function, \
preload_module, names
from jedi import settings


@@ -1,18 +1,13 @@
from sys import argv
import sys
from os.path import join, dirname, abspath, isdir
if len(argv) == 2 and argv[1] == 'repl':
# don't want to use __main__ only for repl yet, maybe we want to use it for
# something else. So just use the keyword ``repl`` for now.
print(join(dirname(abspath(__file__)), 'api', 'replstartup.py'))
elif len(argv) > 1 and argv[1] == 'linter':
def _start_linter():
"""
This is a pre-alpha API. You're not supposed to use it at all, except for
testing. It will very likely change.
"""
import jedi
import sys
if '--debug' in sys.argv:
jedi.set_debug_function()
@@ -37,7 +32,17 @@ elif len(argv) > 1 and argv[1] == 'linter':
print(error)
except Exception:
if '--pdb' in sys.argv:
import traceback
traceback.print_exc()
import pdb
pdb.post_mortem()
else:
raise
if len(sys.argv) == 2 and sys.argv[1] == 'repl':
# don't want to use __main__ only for repl yet, maybe we want to use it for
# something else. So just use the keyword ``repl`` for now.
print(join(dirname(abspath(__file__)), 'api', 'replstartup.py'))
elif len(sys.argv) > 1 and sys.argv[1] == 'linter':
_start_linter()


@@ -6,21 +6,72 @@ import sys
import imp
import os
import re
import pkgutil
import warnings
try:
import importlib
except ImportError:
pass
# Cannot use sys.version.major and minor names, because in Python 2.6 it's not
# a namedtuple.
is_py3 = sys.version_info[0] >= 3
is_py33 = is_py3 and sys.version_info.minor >= 3
is_py33 = is_py3 and sys.version_info[1] >= 3
is_py34 = is_py3 and sys.version_info[1] >= 4
is_py35 = is_py3 and sys.version_info[1] >= 5
is_py26 = not is_py3 and sys.version_info[1] < 7
py_version = int(str(sys.version_info[0]) + str(sys.version_info[1]))
def find_module_py33(string, path=None):
loader = importlib.machinery.PathFinder.find_module(string, path)
class DummyFile(object):
def __init__(self, loader, string):
self.loader = loader
self.string = string
def read(self):
return self.loader.get_source(self.string)
def close(self):
del self.loader
def find_module_py34(string, path=None, fullname=None):
implicit_namespace_pkg = False
spec = None
loader = None
spec = importlib.machinery.PathFinder.find_spec(string, path)
if hasattr(spec, 'origin'):
origin = spec.origin
implicit_namespace_pkg = origin == 'namespace'
# We try to disambiguate implicit namespace pkgs with non implicit namespace pkgs
if implicit_namespace_pkg:
fullname = string if not path else fullname
implicit_ns_info = ImplicitNSInfo(fullname, spec.submodule_search_locations._path)
return None, implicit_ns_info, False
# we have found the tail end of the dotted path
if hasattr(spec, 'loader'):
loader = spec.loader
return find_module_py33(string, path, loader)
def find_module_py33(string, path=None, loader=None, fullname=None):
loader = loader or importlib.machinery.PathFinder.find_module(string, path)
if loader is None and path is None: # Fallback to find builtins
loader = importlib.find_loader(string)
try:
with warnings.catch_warnings(record=True):
# Mute "DeprecationWarning: Use importlib.util.find_spec()
# instead." While we should replace that in the future, it's
# probably good to wait until we deprecate Python 3.3, since
# it was added in Python 3.4 and find_loader hasn't been
# removed in 3.6.
loader = importlib.find_loader(string)
except ValueError as e:
# See #491. Importlib might raise a ValueError; to work around this, we
# just raise an ImportError instead.
raise ImportError("Originally " + repr(e))
if loader is None:
raise ImportError("Couldn't find a loader for {0}".format(string))
@@ -28,33 +79,77 @@ def find_module_py33(string, path=None):
try:
is_package = loader.is_package(string)
if is_package:
module_path = os.path.dirname(loader.path)
module_file = None
if hasattr(loader, 'path'):
module_path = os.path.dirname(loader.path)
else:
# At least zipimporter does not have a path attribute
module_path = os.path.dirname(loader.get_filename(string))
if hasattr(loader, 'archive'):
module_file = DummyFile(loader, string)
else:
module_file = None
else:
module_path = loader.get_filename(string)
module_file = open(module_path, 'rb')
module_file = DummyFile(loader, string)
except AttributeError:
# ExtensionLoader has no attribute get_filename; instead it has a
# path attribute that we can use to retrieve the module path
try:
module_path = loader.path
module_file = open(loader.path, 'rb')
module_file = DummyFile(loader, string)
except AttributeError:
module_path = string
module_file = None
finally:
is_package = False
if hasattr(loader, 'archive'):
module_path = loader.archive
return module_file, module_path, is_package
def find_module_pre_py33(string, path=None):
module_file, module_path, description = imp.find_module(string, path)
module_type = description[2]
return module_file, module_path, module_type is imp.PKG_DIRECTORY
def find_module_pre_py33(string, path=None, fullname=None):
try:
module_file, module_path, description = imp.find_module(string, path)
module_type = description[2]
return module_file, module_path, module_type is imp.PKG_DIRECTORY
except ImportError:
pass
if path is None:
path = sys.path
for item in path:
loader = pkgutil.get_importer(item)
if loader:
try:
loader = loader.find_module(string)
if loader:
is_package = loader.is_package(string)
is_archive = hasattr(loader, 'archive')
try:
module_path = loader.get_filename(string)
except AttributeError:
# fallback for py26
try:
module_path = loader._get_filename(string)
except AttributeError:
continue
if is_package:
module_path = os.path.dirname(module_path)
if is_archive:
module_path = loader.archive
file = None
if not is_package or is_archive:
file = DummyFile(loader, string)
return (file, module_path, is_package)
except ImportError:
pass
raise ImportError("No module named {0}".format(string))
find_module = find_module_py33 if is_py33 else find_module_pre_py33
find_module = find_module_py34 if is_py34 else find_module
find_module.__doc__ = """
Provides information about a module.
@@ -65,23 +160,12 @@ or the name of the module if it is a builtin one and a boolean indicating
if the module is contained in a package.
"""
# next was defined in Python 2.6; in Python 3, obj.next() is no longer
# possible.
try:
next = next
except NameError:
_raiseStopIteration = object()
def next(iterator, default=_raiseStopIteration):
if not hasattr(iterator, 'next'):
raise TypeError("not an iterator")
try:
return iterator.next()
except StopIteration:
if default is _raiseStopIteration:
raise
else:
return default
class ImplicitNSInfo(object):
"""Stores information returned from an implicit namespace spec"""
def __init__(self, name, paths):
self.name = name
self.paths = paths
# unicode function
try:
@@ -89,22 +173,6 @@ try:
except NameError:
unicode = str
if is_py3:
u = lambda s: s
else:
u = lambda s: s.decode('utf-8')
u.__doc__ = """
Decode a raw string into a unicode object. Do nothing in Python 3.
"""
# exec function
if is_py3:
def exec_function(source, global_map):
exec(source, global_map)
else:
eval(compile("""def exec_function(source, global_map):
exec source in global_map """, 'blub', 'exec'))
# re-raise function
if is_py3:
@@ -125,18 +193,6 @@ Usage::
"""
# hasattr replacement, because Python 2's hasattr swallows every exception,
# not just AttributeError
if is_py3:
hasattr = hasattr
else:
def hasattr(obj, name):
try:
getattr(obj, name)
return True
except AttributeError:
return False
class Python3Method(object):
def __init__(self, func):
self.func = func
@@ -171,7 +227,8 @@ def u(string):
"""
if is_py3:
return str(string)
elif not isinstance(string, unicode):
if not isinstance(string, unicode):
return unicode(str(string), 'UTF-8')
return string
@@ -197,3 +254,38 @@ try:
from itertools import zip_longest
except ImportError:
from itertools import izip_longest as zip_longest # Python 2
try:
FileNotFoundError = FileNotFoundError
except NameError:
FileNotFoundError = IOError
def no_unicode_pprint(dct):
"""
Python 2/3 dict __repr__ may differ because of unicode differences
(with or without a `u` prefix). Normally in doctests we could use `pprint`
to sort dicts and check for equality, but here we have to write a separate
function to do that.
"""
import pprint
s = pprint.pformat(dct)
print(re.sub("u'", "'", s))
def utf8_repr(func):
"""
``__repr__`` methods in Python 2 don't allow unicode objects to be
returned. Therefore cast them to utf-8 bytes in this decorator.
"""
def wrapper(self):
result = func(self)
if isinstance(result, unicode):
return result.encode('utf-8')
else:
return result
if is_py3:
return func
else:
return wrapper
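A small sketch of applying the ``utf8_repr`` decorator above (the class is
made up):

class Snowman(object):
    @utf8_repr
    def __repr__(self):
        # UTF-8 bytes on Python 2, the unchanged str on Python 3
        return u'<Snowman>'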

View File

@@ -2,48 +2,44 @@
The API basically only provides one class. You can create a :class:`Script` and
use its methods.
Additionally you can add a debug function with :func:`set_debug_function` and
catch :exc:`NotFoundError` which is being raised if your completion is not
possible.
Additionally you can add a debug function with :func:`set_debug_function`.
Alternatively, if you don't need a custom function and are happy with printing
debug messages to stdout, simply call :func:`set_debug_function` without
arguments.
.. warning:: Please note that Jedi is **not thread safe**.
"""
import re
import os
import warnings
import sys
from itertools import chain
from jedi._compatibility import next, unicode, builtins
from jedi.parser import Parser
from jedi.parser.tokenize import source_tokens
from jedi.parser import representation as pr
from jedi.parser.user_context import UserContext, UserContextParser
import parso
from parso.python import tree
from parso import python_bytes_to_unicode, split_lines
from jedi.parser_utils import get_executable_nodes, get_statement_of_position
from jedi import debug
from jedi import settings
from jedi import common
from jedi import cache
from jedi.api import keywords
from jedi.api import classes
from jedi.api import interpreter
from jedi.api import usages
from jedi.api import helpers
from jedi.evaluate import Evaluator, filter_private_variable
from jedi.evaluate import representation as er
from jedi.evaluate import compiled
from jedi.api.completion import Completion
from jedi.evaluate import Evaluator
from jedi.evaluate import imports
from jedi.evaluate.helpers import FakeName
from jedi.evaluate.finder import get_names_of_scope
from jedi.evaluate.helpers import search_call_signatures
from jedi.evaluate import analysis
from jedi.evaluate import usages
from jedi.evaluate.project import Project
from jedi.evaluate.arguments import try_iter_content
from jedi.evaluate.helpers import get_module_names, evaluate_call_of_leaf
from jedi.evaluate.sys_path import dotted_path_in_sys_path
from jedi.evaluate.filters import TreeNameDefinition
from jedi.evaluate.syntax_tree import tree_name_to_contexts
from jedi.evaluate.context import ModuleContext
from jedi.evaluate.context.module import ModuleName
from jedi.evaluate.context.iterable import unpack_tuple_to_dict
# Jedi uses lots and lots of recursion. By setting this a little bit higher, we
# can remove some "maximum recursion depth" errors.
sys.setrecursionlimit(2000)
class NotFoundError(Exception):
"""A custom error to avoid catching the wrong exceptions."""
sys.setrecursionlimit(3000)
class Script(object):
@@ -54,12 +50,24 @@ class Script(object):
You can either use the ``source`` parameter or ``path`` to read a file.
Usually you're going to want to use both of them (in an editor).
The script might be analyzed in a different ``sys.path`` than |jedi|:
- if `sys_path` parameter is not ``None``, it will be used as ``sys.path``
for the script;
- if `sys_path` parameter is ``None`` and ``VIRTUAL_ENV`` environment
variable is defined, ``sys.path`` for the specified environment will be
guessed (see :func:`jedi.evaluate.sys_path.get_venv_path`) and used for
the script;
- otherwise ``sys.path`` will match that of |jedi|.
:param source: The source code of the current file, separated by newlines.
:type source: str
:param line: The line to perform actions on (starting with 1).
:type line: int
:param col: The column of the cursor (starting with 0).
:type col: int
:param column: The column of the cursor (starting with 0).
:type column: int
:param path: The path of the file in the file system, or ``''`` if
it hasn't been saved yet.
:type path: str or None
@@ -69,51 +77,67 @@ class Script(object):
:param source_encoding: The encoding of ``source``, if it is not a
``unicode`` object (default ``'utf-8'``).
:type encoding: str
:param sys_path: ``sys.path`` to use during analysis of the script
:type sys_path: list
"""
def __init__(self, source=None, line=None, column=None, path=None,
encoding='utf-8', source_path=None, source_encoding=None):
if source_path is not None:
warnings.warn("Use path instead of source_path.", DeprecationWarning)
path = source_path
if source_encoding is not None:
warnings.warn("Use encoding instead of source_encoding.", DeprecationWarning)
encoding = source_encoding
encoding='utf-8', sys_path=None):
self._orig_path = path
self.path = None if path is None else os.path.abspath(path)
# An empty path (also empty string) should always result in no path.
self.path = os.path.abspath(path) if path else None
if source is None:
with open(path) as f:
# TODO add a better warning than the traceback!
with open(path, 'rb') as f:
source = f.read()
self.source = common.source_to_unicode(source, encoding)
lines = common.splitlines(self.source)
line = max(len(lines), 1) if line is None else line
if not (0 < line <= len(lines)):
# TODO do we really want that?
self._source = python_bytes_to_unicode(source, encoding, errors='replace')
self._code_lines = split_lines(self._source)
line = max(len(self._code_lines), 1) if line is None else line
if not (0 < line <= len(self._code_lines)):
raise ValueError('`line` parameter is not in a valid range.')
line_len = len(lines[line - 1])
line_len = len(self._code_lines[line - 1])
column = line_len if column is None else column
if not (0 <= column <= line_len):
raise ValueError('`column` parameter is not in a valid range.')
self._pos = line, column
self._path = path
cache.clear_caches()
cache.clear_time_caches()
debug.reset_time()
self._user_context = UserContext(self.source, self._pos)
self._parser = UserContextParser(self.source, path, self._pos, self._user_context)
self._evaluator = Evaluator()
# Load the Python grammar of the current interpreter.
self._grammar = parso.load_grammar()
project = Project(sys_path=sys_path)
self._evaluator = Evaluator(self._grammar, project)
project.add_script_path(self.path)
debug.speed('init')
@property
def source_path(self):
"""
.. deprecated:: 0.7.0
Use :attr:`.path` instead.
.. todo:: Remove!
"""
warnings.warn("Use path instead of source_path.", DeprecationWarning)
return self.path
@cache.memoize_method
def _get_module_node(self):
return self._grammar.parse(
code=self._source,
path=self.path,
cache=False, # No disk cache, because the current script often changes.
diff_cache=True,
cache_path=settings.cache_directory
)
@cache.memoize_method
def _get_module(self):
module = ModuleContext(
self._evaluator,
self._get_module_node(),
self.path
)
if self.path is not None:
name = dotted_path_in_sys_path(self._evaluator.project.sys_path, self.path)
if name is not None:
imports.add_module(self._evaluator, name, module)
return module
def __repr__(self):
return '<%s: %s>' % (self.__class__.__name__, repr(self._orig_path))
@@ -126,231 +150,15 @@ class Script(object):
:return: Completion objects, sorted by name and __ comes last.
:rtype: list of :class:`classes.Completion`
"""
def get_completions(user_stmt, bs):
if isinstance(user_stmt, pr.Import):
context = self._user_context.get_context()
next(context) # skip the path
if next(context) == 'from':
# completion is just "import" if before stands from ..
return ((k, bs) for k in keywords.keyword_names('import'))
return self._simple_complete(path, like)
def completion_possible(path):
"""
The completion logic is kind of complicated, because we strip the
last word part. To ignore certain strange patterns with dots, just
use regex.
"""
if re.match('\d+\.\.$|\.{4}$', path):
return True # check Ellipsis and float literal `1.`
return not re.search(r'^\.|^\d\.$|\.\.$', path)
debug.speed('completions start')
path = self._user_context.get_path_until_cursor()
if not completion_possible(path):
return []
path, dot, like = helpers.completion_parts(path)
user_stmt = self._parser.user_stmt_with_whitespace()
b = compiled.builtin
completions = get_completions(user_stmt, b)
if not dot:
# add named params
for call_sig in self.call_signatures():
# allow protected access, because it's a public API.
module = call_sig._definition.get_parent_until()
# Compiled modules typically don't allow keyword arguments.
if not isinstance(module, compiled.CompiledObject):
for p in call_sig.params:
# Allow access on _definition here, because it's a
# public API and we don't want to make the internal
# Name object public.
if p._definition.stars == 0: # no *args/**kwargs
completions.append((p._definition.get_name(), p))
if not path and not isinstance(user_stmt, pr.Import):
# add keywords
completions += ((k, b) for k in keywords.keyword_names(all=True))
needs_dot = not dot and path
comps = []
comp_dct = {}
for c, s in set(completions):
n = str(c.names[-1])
if settings.case_insensitive_completion \
and n.lower().startswith(like.lower()) \
or n.startswith(like):
if not filter_private_variable(s, user_stmt or self._parser.user_scope(), n):
new = classes.Completion(self._evaluator, c, needs_dot, len(like), s)
k = (new.name, new.complete) # key
if k in comp_dct and settings.no_completion_duplicates:
comp_dct[k]._same_name_completions.append(new)
else:
comp_dct[k] = new
comps.append(new)
completion = Completion(
self._evaluator, self._get_module(), self._code_lines,
self._pos, self.call_signatures
)
completions = completion.completions()
debug.speed('completions end')
return sorted(comps, key=lambda x: (x.name.startswith('__'),
x.name.startswith('_'),
x.name.lower()))
def _simple_complete(self, path, like):
try:
scopes = list(self._prepare_goto(path, True))
except NotFoundError:
scopes = []
scope_names_generator = get_names_of_scope(self._evaluator,
self._parser.user_scope(),
self._pos)
completions = []
for scope, name_list in scope_names_generator:
for c in name_list:
completions.append((c, scope))
else:
completions = []
debug.dbg('possible completion scopes: %s', scopes)
for s in scopes:
if s.isinstance(er.Function):
names = s.get_magic_function_names()
elif isinstance(s, imports.ImportWrapper):
under = like + self._user_context.get_path_after_cursor()
if under == 'import':
current_line = self._user_context.get_position_line()
if not current_line.endswith('import import'):
continue
a = s.import_stmt.alias
if a and a.start_pos <= self._pos <= a.end_pos:
continue
names = s.get_defined_names(on_import_stmt=True)
else:
names = []
for _, new_names in s.scope_names_generator():
names += new_names
for c in names:
completions.append((c, s))
return completions
def _prepare_goto(self, goto_path, is_completion=False):
"""
Base for completions/goto. Basically it returns the resolved scopes
under cursor.
"""
debug.dbg('start: %s in %s', goto_path, self._parser.user_scope())
user_stmt = self._parser.user_stmt_with_whitespace()
if not user_stmt and len(goto_path.split('\n')) > 1:
# If the user_stmt is not defined and the goto_path is multi line,
# something's strange. Most probably the backwards tokenizer
# matched too much.
return []
if isinstance(user_stmt, pr.Import):
scopes = [helpers.get_on_import_stmt(self._evaluator, self._user_context,
user_stmt, is_completion)[0]]
else:
# just parse one statement, take it and evaluate it
eval_stmt = self._get_under_cursor_stmt(goto_path)
if not is_completion:
# goto_definition returns definitions of its statements if the
# cursor is on the assignee. By changing the start_pos of our
# "pseudo" statement, the Jedi evaluator can find the assignees.
if user_stmt is not None:
eval_stmt.start_pos = user_stmt.end_pos
scopes = self._evaluator.eval_statement(eval_stmt)
return scopes
def _get_under_cursor_stmt(self, cursor_txt):
tokenizer = source_tokens(cursor_txt, line_offset=self._pos[0] - 1)
r = Parser(cursor_txt, no_docstr=True, tokenizer=tokenizer)
try:
# Take the last statement available.
stmt = r.module.statements[-1]
except IndexError:
raise NotFoundError()
if isinstance(stmt, pr.KeywordStatement):
stmt = stmt.stmt
if not isinstance(stmt, pr.Statement):
raise NotFoundError()
user_stmt = self._parser.user_stmt()
if user_stmt is None:
# Set the start_pos to a pseudo position, that doesn't exist but works
# perfectly well (for both completions in docstrings and statements).
stmt.start_pos = self._pos
else:
stmt.start_pos = user_stmt.start_pos
stmt.parent = self._parser.user_scope()
return stmt
def complete(self):
"""
.. deprecated:: 0.6.0
Use :attr:`.completions` instead.
.. todo:: Remove!
"""
warnings.warn("Use completions instead.", DeprecationWarning)
return self.completions()
def goto(self):
"""
.. deprecated:: 0.6.0
Use :attr:`.goto_assignments` instead.
.. todo:: Remove!
"""
warnings.warn("Use goto_assignments instead.", DeprecationWarning)
return self.goto_assignments()
def definition(self):
"""
.. deprecated:: 0.6.0
Use :attr:`.goto_definitions` instead.
.. todo:: Remove!
"""
warnings.warn("Use goto_definitions instead.", DeprecationWarning)
return self.goto_definitions()
def get_definition(self):
"""
.. deprecated:: 0.5.0
Use :attr:`.goto_definitions` instead.
.. todo:: Remove!
"""
warnings.warn("Use goto_definitions instead.", DeprecationWarning)
return self.goto_definitions()
def related_names(self):
"""
.. deprecated:: 0.6.0
Use :attr:`.usages` instead.
.. todo:: Remove!
"""
warnings.warn("Use usages instead.", DeprecationWarning)
return self.usages()
def get_in_function_call(self):
"""
.. deprecated:: 0.6.0
Use :attr:`.call_signatures` instead.
.. todo:: Remove!
"""
return self.function_definition()
def function_definition(self):
"""
.. deprecated:: 0.6.0
Use :attr:`.call_signatures` instead.
.. todo:: Remove!
"""
warnings.warn("Use call_signatures instead.", DeprecationWarning)
sig = self.call_signatures()
return sig[0] if sig else None
def goto_definitions(self):
"""
Return the definitions of the path under the cursor (the goto function).
@@ -363,140 +171,59 @@ class Script(object):
:rtype: list of :class:`classes.Definition`
"""
def resolve_import_paths(scopes):
for s in scopes.copy():
if isinstance(s, imports.ImportWrapper):
scopes.remove(s)
scopes.update(resolve_import_paths(set(s.follow())))
return scopes
module_node = self._get_module_node()
leaf = module_node.get_name_of_position(self._pos)
if leaf is None:
leaf = module_node.get_leaf_for_position(self._pos)
if leaf is None:
return []
user_stmt = self._parser.user_stmt_with_whitespace()
goto_path = self._user_context.get_path_under_cursor()
context = self._user_context.get_context()
definitions = set()
if next(context) in ('class', 'def'):
definitions = set([self._parser.user_scope()])
else:
# Fetch definition of callee, if there's no path otherwise.
if not goto_path:
(call, _) = search_call_signatures(user_stmt, self._pos)
if call is not None:
while call.next is not None:
call = call.next
# reset cursor position:
(row, col) = call.name.end_pos
pos = (row, max(col - 1, 0))
self._user_context = UserContext(self.source, pos)
# then try to find the path again
goto_path = self._user_context.get_path_under_cursor()
context = self._evaluator.create_context(self._get_module(), leaf)
definitions = helpers.evaluate_goto_definition(self._evaluator, context, leaf)
if not definitions:
if goto_path:
definitions = set(self._prepare_goto(goto_path))
names = [s.name for s in definitions]
defs = [classes.Definition(self._evaluator, name) for name in names]
# The additional set here allows the definitions to become unique in an
# API sense. In the internals we want to separate more things than in
# the API.
return helpers.sorted_definitions(set(defs))
definitions = resolve_import_paths(definitions)
d = set([classes.Definition(self._evaluator, s) for s in definitions
if s is not imports.ImportWrapper.GlobalNamespace])
return helpers.sorted_definitions(d)
def goto_assignments(self):
def goto_assignments(self, follow_imports=False):
"""
Return the first definition found. Imports and statements aren't
followed. Multiple objects may be returned, because Python itself is a
Return the first definition found, while optionally following imports.
Multiple objects may be returned, because Python itself is a
dynamic language, which means depending on an option you can have two
different versions of a function.
:rtype: list of :class:`classes.Definition`
"""
results, _ = self._goto()
d = [classes.Definition(self._evaluator, d) for d in set(results)
if d is not imports.ImportWrapper.GlobalNamespace]
return helpers.sorted_definitions(d)
def _goto(self, add_import_name=False):
"""
Used for goto_assignments and usages.
:param add_import_name: Add the name (if it is an import) to the result.
"""
def follow_inexistent_imports(defs):
""" Imports can be generated, e.g. following
`multiprocessing.dummy` generates an import dummy in the
multiprocessing module. The Import doesn't exist -> follow.
"""
definitions = set(defs)
for d in defs:
if isinstance(d.parent, pr.Import) \
and d.start_pos == (0, 0):
i = imports.ImportWrapper(self._evaluator, d.parent).follow(is_goto=True)
definitions.remove(d)
definitions |= follow_inexistent_imports(i)
return definitions
goto_path = self._user_context.get_path_under_cursor()
context = self._user_context.get_context()
user_stmt = self._parser.user_stmt()
if next(context) in ('class', 'def'):
user_scope = self._parser.user_scope()
definitions = set([user_scope.name])
search_name = unicode(user_scope.name)
elif isinstance(user_stmt, pr.Import):
s, name_part = helpers.get_on_import_stmt(self._evaluator,
self._user_context, user_stmt)
try:
definitions = [s.follow(is_goto=True)[0]]
except IndexError:
definitions = []
search_name = unicode(name_part)
if add_import_name:
import_name = user_stmt.get_defined_names()
# imports have only one name
if not user_stmt.star \
and unicode(name_part) == unicode(import_name[0].names[-1]):
definitions.append(import_name[0])
else:
stmt = self._get_under_cursor_stmt(goto_path)
def test_lhs():
"""
Special rule for goto, left hand side of the statement returns
itself, if the name is ``foo``, but not ``foo.bar``.
"""
if isinstance(user_stmt, pr.Statement):
for name in user_stmt.get_defined_names():
if name.start_pos <= self._pos <= name.end_pos \
and len(name.names) == 1:
return name, unicode(name.names[-1])
return None, None
lhs, search_name = test_lhs()
if lhs is None:
expression_list = stmt.expression_list()
if len(expression_list) == 0:
return [], ''
# Only the first command is important; the rest should basically not
# happen except in broken code (e.g. docstrings that aren't code).
call = expression_list[0]
if isinstance(call, pr.Call):
call_path = list(call.generate_call_path())
def filter_follow_imports(names, check):
for name in names:
if check(name):
for result in filter_follow_imports(name.goto(), check):
yield result
else:
call_path = [call]
yield name
defs, search_name_part = self._evaluator.goto(stmt, call_path)
search_name = unicode(search_name_part)
definitions = follow_inexistent_imports(defs)
else:
definitions = [lhs]
if isinstance(user_stmt, pr.Statement):
c = user_stmt.expression_list()
if c and not isinstance(c[0], (str, unicode)) \
and c[0].start_pos > self._pos \
and not re.search(r'\.\w+$', goto_path):
# The cursor must be after the start, otherwise the
# statement is just an assignee.
definitions = [user_stmt]
return definitions, search_name
tree_name = self._get_module_node().get_name_of_position(self._pos)
if tree_name is None:
return []
context = self._evaluator.create_context(self._get_module(), tree_name)
names = list(self._evaluator.goto(context, tree_name))
if follow_imports:
def check(name):
if isinstance(name, ModuleName):
return False
return name.api_type == 'module'
else:
def check(name):
return isinstance(name, imports.SubModuleName)
names = filter_follow_imports(names, check)
defs = [classes.Definition(self._evaluator, d) for d in set(names)]
return helpers.sorted_definitions(defs)
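A sketch of the new ``follow_imports`` flag (the snippet and the expected
output are illustrative):

import jedi
script = jedi.Script('import os\nos')
# Without follow_imports, the import statement itself is returned; with it,
# the definition is followed into the imported module.
print(script.goto_assignments(follow_imports=True))  # e.g. [<Definition module os>]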
def usages(self, additional_module_paths=()):
"""
@@ -509,34 +236,15 @@ class Script(object):
:rtype: list of :class:`classes.Definition`
"""
temp, settings.dynamic_flow_information = \
settings.dynamic_flow_information, False
user_stmt = self._parser.user_stmt()
definitions, search_name = self._goto(add_import_name=True)
if isinstance(user_stmt, pr.Statement):
c = user_stmt.expression_list()[0]
if not isinstance(c, unicode) and self._pos < c.start_pos:
# the search_name might be before `=`
definitions = [v for v in user_stmt.get_defined_names()
if unicode(v.names[-1]) == search_name]
if not isinstance(user_stmt, pr.Import):
# import case is looked at with add_import_name option
definitions = usages.usages_add_import_modules(self._evaluator, definitions, search_name)
tree_name = self._get_module_node().get_name_of_position(self._pos)
if tree_name is None:
# Must be syntax
return []
module = set([d.get_parent_until() for d in definitions])
module.add(self._parser.module())
names = usages.usages(self._evaluator, definitions, search_name, module)
names = usages.usages(self._get_module(), tree_name)
for d in set(definitions):
try:
name_part = d.names[-1]
except AttributeError:
names.append(classes.Definition(self._evaluator, d))
else:
names.append(classes.Definition(self._evaluator, name_part))
settings.dynamic_flow_information = temp
return helpers.sorted_definitions(set(names))
definitions = [classes.Definition(self._evaluator, n) for n in names]
return helpers.sorted_definitions(definitions)
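A sketch of the simplified ``usages`` call (the snippet is made up):

import jedi
source = 'a = 1\nb = a + a'
# Definitions for every occurrence of ``a``: the assignment plus both reads.
print(jedi.Script(source, 1, 0).usages())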
def call_signatures(self):
"""
@@ -550,59 +258,67 @@ class Script(object):
abs()# <-- cursor is here
This would return ``None``.
This would return an empty list.
:rtype: list of :class:`classes.CallSignature`
"""
user_stmt = self._parser.user_stmt_with_whitespace()
call, index = search_call_signatures(user_stmt, self._pos)
if call is None:
call_signature_details = \
helpers.get_call_signature_details(self._get_module_node(), self._pos)
if call_signature_details is None:
return []
stmt_el = call
while isinstance(stmt_el.parent, pr.StatementElement):
# Go to parent literal/variable until not possible anymore. This
# makes it possible to return the whole expression.
stmt_el = stmt_el.parent
# We can reset the execution since it's a new object
# (fast_parent_copy).
execution_arr, call.execution = call.execution, None
with common.scale_speed_settings(settings.scale_call_signatures):
_callable = lambda: self._evaluator.eval_call(stmt_el)
origins = cache.cache_call_signatures(_callable, self.source,
self._pos, user_stmt)
context = self._evaluator.create_context(
self._get_module(),
call_signature_details.bracket_leaf
)
definitions = helpers.cache_call_signatures(
self._evaluator,
context,
call_signature_details.bracket_leaf,
self._code_lines,
self._pos
)
debug.speed('func_call followed')
key_name = None
try:
detail = execution_arr[index].assignment_details[0]
except IndexError:
pass
else:
try:
key_name = unicode(detail[0][0].name)
except (IndexError, AttributeError):
pass
return [classes.CallSignature(self._evaluator, o, call, index, key_name)
for o in origins if o.is_callable()]
return [classes.CallSignature(self._evaluator, d.name,
call_signature_details.bracket_leaf.start_pos,
call_signature_details.call_index,
call_signature_details.keyword_name_str)
for d in definitions if hasattr(d, 'py__call__')]
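A sketch of the rewritten ``call_signatures`` (snippet and output are
illustrative):

import jedi
sigs = jedi.Script('abs(', 1, 4).call_signatures()
print(sigs[0].name, sigs[0].index)  # e.g. abs 0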
def _analysis(self):
#statements = set(chain(*self._parser.module().used_names.values()))
stmts, imps = analysis.get_module_statements(self._parser.module())
# Sort the statements so that the results are reproducible.
for i in imps:
iw = imports.ImportWrapper(self._evaluator, i,
nested_resolve=True).follow()
if i.is_nested() and any(not isinstance(i, pr.Module) for i in iw):
analysis.add(self._evaluator, 'import-error', i.namespace.names[-1])
for stmt in sorted(stmts, key=lambda obj: obj.start_pos):
if not (isinstance(stmt.parent, pr.ForFlow)
and stmt.parent.set_stmt == stmt):
self._evaluator.eval_statement(stmt)
self._evaluator.is_analysis = True
module_node = self._get_module_node()
self._evaluator.analysis_modules = [module_node]
try:
for node in get_executable_nodes(module_node):
context = self._get_module().create_context(node)
if node.type in ('funcdef', 'classdef'):
# Resolve the decorators.
tree_name_to_contexts(self._evaluator, context, node.children[1])
elif isinstance(node, tree.Import):
import_names = set(node.get_defined_names())
if node.is_nested():
import_names |= set(path[-1] for path in node.get_paths())
for n in import_names:
imports.infer_import(context, n)
elif node.type == 'expr_stmt':
types = context.eval_node(node)
for testlist in node.children[:-1:2]:
# Iterate tuples.
unpack_tuple_to_dict(context, types, testlist)
else:
if node.type == 'name':
defs = self._evaluator.goto_definitions(context, node)
else:
defs = evaluate_call_of_leaf(context, node)
try_iter_content(defs)
self._evaluator.reset_recursion_limitations()
ana = [a for a in self._evaluator.analysis if self.path == a.path]
return sorted(set(ana), key=lambda x: x.line)
ana = [a for a in self._evaluator.analysis if self.path == a.path]
return sorted(set(ana), key=lambda x: x.line)
finally:
self._evaluator.is_analysis = False
class Interpreter(Script):
@@ -616,12 +332,12 @@ class Interpreter(Script):
>>> from os.path import join
>>> namespace = locals()
>>> script = Interpreter('join().up', [namespace])
>>> script = Interpreter('join("").up', [namespace])
>>> print(script.completions()[0].name)
upper
"""
def __init__(self, source, namespaces=[], **kwds):
def __init__(self, source, namespaces, **kwds):
"""
Parse `source` and mixin interpreted Python objects from `namespaces`.
@@ -635,69 +351,57 @@ class Interpreter(Script):
If `line` and `column` are None, they are assumed to be at the end of
`source`.
"""
try:
namespaces = [dict(n) for n in namespaces]
except Exception:
raise TypeError("namespaces must be a non-empty list of dicts.")
super(Interpreter, self).__init__(source, **kwds)
self.namespaces = namespaces
# Here we add the namespaces to the current parser.
interpreter.create(self._evaluator, namespaces[0], self._parser.module())
def _simple_complete(self, path, like):
user_stmt = self._parser.user_stmt_with_whitespace()
is_simple_path = not path or re.search('^[\w][\w\d.]*$', path)
if isinstance(user_stmt, pr.Import) or not is_simple_path:
return super(Interpreter, self)._simple_complete(path, like)
else:
class NamespaceModule(object):
def __getattr__(_, name):
for n in self.namespaces:
try:
return n[name]
except KeyError:
pass
raise AttributeError()
def __dir__(_):
gen = (n.keys() for n in self.namespaces)
return list(set(chain.from_iterable(gen)))
paths = path.split('.') if path else []
namespaces = (NamespaceModule(), builtins)
for p in paths:
old, namespaces = namespaces, []
for n in old:
try:
namespaces.append(getattr(n, p))
except AttributeError:
pass
completions = []
for namespace in namespaces:
for name in dir(namespace):
if name.lower().startswith(like.lower()):
scope = self._parser.module()
n = FakeName(name, scope)
completions.append((n, scope))
return completions
def _get_module(self):
parser_module = super(Interpreter, self)._get_module_node()
return interpreter.MixedModuleContext(
self._evaluator,
parser_module,
self.namespaces,
path=self.path
)
def defined_names(source, path=None, encoding='utf-8'):
def names(source=None, path=None, encoding='utf-8', all_scopes=False,
definitions=True, references=False):
"""
Get all definitions in `source` sorted by its position.
Returns a list of `Definition` objects, containing name parts.
This means you can call ``Definition.goto_assignments()`` and get the
reference of a name.
The parameters are the same as in :py:class:`Script`, except for the
following ones:
This function can be used to list the functions, classes and
data defined in a file. This can be useful if you want to list
them in a sidebar. Each element in the returned list also has a
`defined_names` method which can be used to get sub-definitions
(e.g., methods in a class).
:rtype: list of classes.Definition
:param all_scopes: If True lists the names of all scopes instead of only
the module namespace.
:param definitions: If True lists the names that have been defined by a
class, function or a statement (``a = b`` returns ``a``).
:param references: If True lists all the names that are not listed by
``definitions=True``. E.g. ``a = b`` returns ``b``.
"""
parser = Parser(
common.source_to_unicode(source, encoding),
module_path=path,
)
return classes.defined_names(Evaluator(), parser.module)
def def_ref_filter(_def):
is_def = _def._name.tree_name.is_definition()
return definitions and is_def or references and not is_def
# Set line/column to a random position, because they don't matter.
script = Script(source, line=1, column=0, path=path, encoding=encoding)
module_context = script._get_module()
defs = [
classes.Definition(
script._evaluator,
TreeNameDefinition(
module_context.create_context(name if name.parent.type == 'file_input' else name.parent),
name
)
) for name in get_module_names(script._get_module_node(), all_scopes)
]
return sorted(filter(def_ref_filter, defs), key=lambda x: (x.line, x.column))
def preload_module(*modules):
@@ -717,6 +421,8 @@ def set_debug_function(func_cb=debug.print_to_stdout, warnings=True,
"""
Define a callback debug function to get all the debug messages.
If you don't specify any arguments, debug messages will be printed to stdout.
:param func_cb: The callback function for debug messages, with n params.
"""
debug.debug_function = func_cb
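A sketch of enabling debugging as described above:

import jedi
jedi.set_debug_function()       # default: print debug messages to stdout
# jedi.set_debug_function(my_callback)  # or route them to a custom callback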

View File

@@ -3,41 +3,38 @@ The :mod:`jedi.api.classes` module contains the return classes of the API.
These classes are the much bigger part of the whole API, because they contain
the interesting information about completion and goto operations.
"""
import warnings
from itertools import chain
import re
from jedi._compatibility import next, unicode, use_metaclass
from parso.cache import parser_cache
from parso.python.tree import search_ancestor
from jedi._compatibility import u
from jedi import settings
from jedi import common
from jedi.parser import representation as pr
from jedi.cache import underscore_memoization
from jedi.evaluate.cache import memoize_default, CachedMetaClass
from jedi.evaluate import representation as er
from jedi.evaluate import iterable
from jedi.evaluate.utils import ignored, unite
from jedi.cache import memoize_method
from jedi.evaluate import imports
from jedi.evaluate import compiled
from jedi.api import keywords
from jedi.evaluate.finder import get_names_of_scope
from jedi.evaluate.filters import ParamName
from jedi.evaluate.imports import ImportName
from jedi.evaluate.context import instance
from jedi.evaluate.context import ClassContext, FunctionContext, FunctionExecutionContext
from jedi.api.keywords import KeywordName
def defined_names(evaluator, scope):
def _sort_names_by_start_pos(names):
return sorted(names, key=lambda s: s.start_pos or (0, 0))
def defined_names(evaluator, context):
"""
List sub-definitions (e.g., methods in class).
:type scope: Scope
:rtype: list of Definition
"""
# Calling get_names_of_scope doesn't always make sense. It might include
# star imports or inherited stuff. Wanted?
# TODO discuss!
if isinstance(scope, pr.Module):
pair = scope, scope.get_defined_names()
else:
pair = next(get_names_of_scope(evaluator, scope, star_search=False,
include_builtin=False), None)
names = pair[1] if pair else []
names = [n for n in names if isinstance(n, pr.Import) or (len(n) == 1)]
return [Definition(evaluator, d) for d in sorted(names, key=lambda s: s.start_pos)]
filter = next(context.get_filters(search_global=True))
names = [name for name in filter.values()]
return [Definition(evaluator, n) for n in _sort_names_by_start_pos(names)]
class BaseDefinition(object):
@@ -58,36 +55,34 @@ class BaseDefinition(object):
_tuple_mapping = dict((tuple(k.split('.')), v) for (k, v) in {
'argparse._ActionsContainer': 'argparse.ArgumentParser',
'_sre.SRE_Match': 're.MatchObject',
'_sre.SRE_Pattern': 're.RegexObject',
}.items())
def __init__(self, evaluator, definition, start_pos):
def __init__(self, evaluator, name):
self._evaluator = evaluator
self._start_pos = start_pos
self._definition = definition
self._name = name
"""
An instance of :class:`jedi.parsing_representation.Base` subclass.
An instance of a :class:`parso.representation.Name` subclass.
"""
self.is_keyword = isinstance(definition, keywords.Keyword)
self.is_keyword = isinstance(self._name, KeywordName)
# generate a path to the definition
self._module = definition.get_parent_until()
self._module = name.get_root_context()
if self.in_builtin_module():
self.module_path = None
else:
self.module_path = self._module.path
self.module_path = self._module.py__file__()
"""Shows the file path of a module. e.g. ``/usr/lib/python2.7/os.py``"""
@property
def start_pos(self):
def name(self):
"""
.. deprecated:: 0.7.0
Use :attr:`.line` and :attr:`.column` instead.
.. todo:: Remove!
Name of variable/function/class/module.
For example, for ``x = None`` it returns ``'x'``.
:rtype: str or None
"""
warnings.warn("Use line/column instead.", DeprecationWarning)
return self._start_pos
return self._name.string_name
@property
def type(self):
@@ -114,8 +109,10 @@ class BaseDefinition(object):
... def f():
... pass
...
... variable = keyword or f or C or x'''
>>> script = Script(source, len(source.splitlines()), 3, 'example.py')
... for variable in [keyword, f, C, x]:
... variable'''
>>> script = Script(source)
>>> defs = script.goto_definitions()
Before showing what is in ``defs``, let's sort it by :attr:`line`
@@ -124,7 +121,7 @@ class BaseDefinition(object):
>>> defs = sorted(defs, key=lambda d: d.line)
>>> defs # doctest: +NORMALIZE_WHITESPACE
[<Definition module keyword>, <Definition class C>,
<Definition class D>, <Definition def f>]
<Definition instance D>, <Definition def f>]
Finally, here is what you can get from :attr:`type`:
@@ -138,38 +135,59 @@ class BaseDefinition(object):
'function'
"""
# generate the type
stripped = self._definition
if isinstance(stripped, compiled.CompiledObject):
return stripped.type()
if isinstance(stripped, er.InstanceElement):
stripped = stripped.var
if isinstance(stripped, pr.NamePart):
stripped = stripped.parent
if isinstance(stripped, pr.Name):
stripped = stripped.parent
return type(stripped).__name__.lower().replace('wrapper', '')
tree_name = self._name.tree_name
resolve = False
if tree_name is not None:
# TODO move this to their respective names.
definition = tree_name.get_definition()
if definition is not None and definition.type == 'import_from' and \
tree_name.is_definition():
resolve = True
if isinstance(self._name, imports.SubModuleName) or resolve:
for context in self._name.infer():
return context.api_type
return self._name.api_type
def _path(self):
"""The module path."""
path = []
"""The path to a module/class/function definition."""
def to_reverse():
name = self._name
if name.api_type == 'module':
try:
name = list(name.infer())[0].name
except IndexError:
pass
def insert_nonnone(x):
if x:
path.insert(0, x)
if name.api_type == 'module':
module_contexts = name.infer()
if module_contexts:
module_context, = module_contexts
for n in reversed(module_context.py__name__().split('.')):
yield n
else:
# We don't really know anything about the path here. This
# module is just an import that would lead to an
# ImportError. So simply return the name.
yield name.string_name
return
else:
yield name.string_name
if not isinstance(self._definition, keywords.Keyword):
par = self._definition
while par is not None:
if isinstance(par, pr.Import):
insert_nonnone(par.namespace)
insert_nonnone(par.from_ns)
if par.relative_count == 0:
break
with common.ignored(AttributeError):
path.insert(0, par.name)
par = par.parent
return path
parent_context = name.parent_context
while parent_context is not None:
try:
method = parent_context.py__name__
except AttributeError:
try:
yield parent_context.name.string_name
except AttributeError:
pass
else:
for name in reversed(method().split('.')):
yield name
parent_context = parent_context.parent_context
return reversed(list(to_reverse()))
@property
def module_name(self):
@@ -183,37 +201,29 @@ class BaseDefinition(object):
>>> print(d.module_name) # doctest: +ELLIPSIS
json
"""
return str(self._module.name)
return self._module.name.string_name
def in_builtin_module(self):
"""Whether this is a builtin module."""
return isinstance(self._module, compiled.CompiledObject)
@property
def line_nr(self):
"""
.. deprecated:: 0.5.0
Use :attr:`.line` instead.
.. todo:: Remove!
"""
warnings.warn("Use line instead.", DeprecationWarning)
return self.line
@property
def line(self):
"""The line where the definition occurs (starting with 1)."""
if self.in_builtin_module():
start_pos = self._name.start_pos
if start_pos is None:
return None
return self._start_pos[0]
return start_pos[0]
@property
def column(self):
"""The column where the definition occurs (starting with 0)."""
if self.in_builtin_module():
start_pos = self._name.start_pos
if start_pos is None:
return None
return self._start_pos[1]
return start_pos[1]
def docstring(self, raw=False):
def docstring(self, raw=False, fast=True):
r"""
Return a document string for this completion object.
@@ -227,7 +237,7 @@ class BaseDefinition(object):
>>> script = Script(source, 1, len('def f'), 'example.py')
>>> doc = script.goto_definitions()[0].docstring()
>>> print(doc)
f(a, b = 1)
f(a, b=1)
<BLANKLINE>
Document for function f.
@@ -238,36 +248,18 @@ class BaseDefinition(object):
>>> print(script.goto_definitions()[0].docstring(raw=True))
Document for function f.
:param fast: Don't follow imports that are only one level deep like
``import foo``, but follow ``from foo import bar``. This makes
sense for speed reasons. Completing `import a` is slow if you use
``foo.docstring(fast=False)`` on every object, because it
parses all libraries starting with ``a``.
"""
if raw:
return _Help(self._definition).raw()
else:
return _Help(self._definition).full()
@property
def doc(self):
"""
.. deprecated:: 0.8.0
Use :meth:`.docstring` instead.
.. todo:: Remove!
"""
warnings.warn("Use docstring() instead.", DeprecationWarning)
return self.docstring()
@property
def raw_doc(self):
"""
.. deprecated:: 0.8.0
Use :meth:`.docstring` instead.
.. todo:: Remove!
"""
warnings.warn("Use docstring() instead.", DeprecationWarning)
return self.docstring(raw=True)
return _Help(self._name).docstring(fast=fast, raw=raw)
@property
def description(self):
"""A textual description of the object."""
return unicode(self._definition)
return u(self._name.string_name)
@property
def full_name(self):
@@ -288,16 +280,17 @@ class BaseDefinition(object):
>>> print(script.goto_definitions()[0].full_name)
os.path.join
Notice that it correctly returns ``'os.path.join'`` instead of
(for example) ``'posixpath.join'``.
Notice that it returns ``'os.path.join'`` instead of (for example)
``'posixpath.join'``. This is not strictly correct, since the module's
name would be ``<module 'posixpath' ...>``. However, most users find
``os.path.join`` more practical.
"""
path = [unicode(p) for p in self._path()]
path = list(self._path())
# TODO add further checks, the mapping should only occur on stdlib.
if not path:
return None # for keywords the path is empty
with common.ignored(KeyError):
with ignored(KeyError):
path[0] = self._mapping[path[0]]
for key, repl in self._tuple_mapping.items():
if tuple(path[:len(key)]) == key:
@@ -305,101 +298,123 @@ class BaseDefinition(object):
return '.'.join(path if path[0] else path[1:])
@memoize_default()
def _follow_statements_imports(self):
"""
Follow both statements and imports, as far as possible.
"""
stripped = self._definition
if isinstance(stripped, pr.Name):
stripped = stripped.parent
def goto_assignments(self):
if self._name.tree_name is None:
return self
# We should probably work in `Finder._names_to_types` here.
if isinstance(stripped, pr.Function):
stripped = er.Function(self._evaluator, stripped)
elif isinstance(stripped, pr.Class):
stripped = er.Class(self._evaluator, stripped)
names = self._evaluator.goto(self._name.parent_context, self._name.tree_name)
return [Definition(self._evaluator, n) for n in names]
if stripped.isinstance(pr.Statement):
return self._evaluator.eval_statement(stripped)
elif stripped.isinstance(pr.Import):
return imports.follow_imports(self._evaluator, [stripped])
else:
return [stripped]
def _goto_definitions(self):
# TODO make this function public.
return [Definition(self._evaluator, d.name) for d in self._name.infer()]
@property
@memoize_default()
@memoize_method
def params(self):
"""
Raises an ``AttributeError`` if the definition is not callable.
Otherwise returns a list of `Definition` that represents the params.
"""
followed = self._follow_statements_imports()
if not followed or not followed[0].is_callable():
raise AttributeError()
followed = followed[0] # only check the first one.
def get_param_names(context):
param_names = []
if context.api_type == 'function':
param_names = list(context.get_param_names())
if isinstance(context, instance.BoundMethod):
param_names = param_names[1:]
elif isinstance(context, (instance.AbstractInstanceContext, ClassContext)):
if isinstance(context, ClassContext):
search = '__init__'
else:
search = '__call__'
names = context.get_function_slot_names(search)
if not names:
return []
if followed.isinstance(er.Function):
if isinstance(followed, er.InstanceElement):
params = followed.params[1:]
else:
params = followed.params
elif followed.isinstance(er.compiled.CompiledObject):
params = followed.params
else:
try:
sub = followed.get_subscope_by_name('__init__')
params = sub.params[1:] # ignore self
except KeyError:
return []
return [_Param(self._evaluator, p) for p in params]
# Just take the first one here, not optimal, but currently
# there's no better solution.
inferred = names[0].infer()
param_names = get_param_names(next(iter(inferred)))
if isinstance(context, ClassContext):
param_names = param_names[1:]
return param_names
elif isinstance(context, compiled.CompiledObject):
return list(context.get_param_names())
return param_names
followed = list(self._name.infer())
if not followed or not hasattr(followed[0], 'py__call__'):
raise AttributeError()
context = followed[0] # only check the first one.
return [Definition(self._evaluator, n) for n in get_param_names(context)]
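A sketch of the ``params`` property (the source string is made up):

import jedi
d = jedi.Script('def f(a, b=1):\n    pass\nf').goto_definitions()[0]
print([p.name for p in d.params])  # e.g. ['a', 'b']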
def parent(self):
if isinstance(self._definition, compiled.CompiledObject):
non_flow = self._definition.parent
else:
scope = self._definition.get_parent_until(pr.IsScope, include_current=False)
non_flow = scope.get_parent_until(pr.Flow, reverse=True)
return Definition(self._evaluator, non_flow)
context = self._name.parent_context
if context is None:
return None
if isinstance(context, FunctionExecutionContext):
# TODO the function context should be a part of the function
# execution context.
context = FunctionContext(
self._evaluator, context.parent_context, context.tree_node)
return Definition(self._evaluator, context.name)
def __repr__(self):
return "<%s %s>" % (type(self).__name__, self.description)
def get_line_code(self, before=0, after=0):
"""
Returns the line of code where this object was defined.
:param before: Add n lines before the current line to the output.
:param after: Add n lines after the current line to the output.
:return str: Returns the line(s) of code or an empty string if it's a
builtin.
"""
if self.in_builtin_module():
return ''
path = self._name.get_root_context().py__file__()
lines = parser_cache[self._evaluator.grammar._hashed][path].lines
index = self._name.start_pos[0] - 1
start_index = max(index - before, 0)
return ''.join(lines[start_index:index + after + 1])
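A sketch of ``get_line_code``, assuming a saved file so the parser cache
holds its lines (the file name is hypothetical):

import jedi
d = jedi.Script(path='example.py').goto_assignments()[0]
print(d.get_line_code(before=1))  # the definition line plus one line of context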
class Completion(BaseDefinition):
"""
`Completion` objects are returned from :meth:`api.Script.completions`. They
provide additional information about a completion.
"""
def __init__(self, evaluator, name, needs_dot, like_name_length, base):
super(Completion, self).__init__(evaluator, name.parent, name.start_pos)
def __init__(self, evaluator, name, stack, like_name_length):
super(Completion, self).__init__(evaluator, name)
self._name = name
self._needs_dot = needs_dot
self._like_name_length = like_name_length
self._base = base
self._stack = stack
# Completion objects with the same Completion name (which means
# duplicate items in the completion)
self._same_name_completions = []
def _complete(self, like_name):
dot = '.' if self._needs_dot else ''
append = ''
if settings.add_bracket_after_function \
and self.type == 'Function':
append = '('
if settings.add_dot_after_module:
if isinstance(self._base, pr.Module):
append += '.'
if isinstance(self._base, pr.Param):
append += '='
if isinstance(self._name, ParamName) and self._stack is not None:
node_names = list(self._stack.get_node_names(self._evaluator.grammar._pgen_grammar))
if 'trailer' in node_names and 'argument' not in node_names:
append += '='
name = str(self._name.names[-1])
name = self._name.string_name
if like_name:
name = name[self._like_name_length:]
return dot + name + append
return name + append
@property
def complete(self):
@@ -410,117 +425,51 @@ class Completion(BaseDefinition):
would return the string 'ce'. It also adds additional stuff, depending
on your `settings.py`.
Assuming the following function definition::
def foo(param=0):
pass
completing ``foo(par`` would give a ``Completion`` which `complete`
would be `am=`
"""
return self._complete(True)
@property
def name(self):
"""
Similar to :attr:`complete`, but return the whole word, for
example::
isinstan
would return `isinstance`.
"""
return unicode(self._name.names[-1])
@property
def name_with_symbols(self):
"""
Similar to :attr:`name`, but like :attr:`name`
returns also the symbols, for example::
Similar to :attr:`name`, but like :attr:`name` returns also the
symbols, for example assuming the following function definition::
list()
def foo(param=0):
pass
completing ``foo(`` would give a ``Completion`` which
``name_with_symbols`` would be "param=".
would return ``.append`` and others (which means it adds a dot).
"""
return self._complete(False)
@property
def word(self):
"""
.. deprecated:: 0.6.0
Use :attr:`.name` instead.
.. todo:: Remove!
"""
warnings.warn("Use name instead.", DeprecationWarning)
return self.name
def docstring(self, raw=False, fast=True):
if self._like_name_length >= 3:
# In this case we can just resolve the like name, because we
# wouldn't load like > 100 Python modules anymore.
fast = False
return super(Completion, self).docstring(raw=raw, fast=fast)
@property
def description(self):
"""Provide a description of the completion object."""
parent = self._name.parent
if parent is None:
return ''
t = self.type
if t == 'statement' or t == 'import':
desc = self._definition.get_code(False)
else:
desc = '.'.join(unicode(p) for p in self._path())
line = '' if self.in_builtin_module else '@%s' % self.line
return '%s: %s%s' % (t, desc, line)
# TODO improve the class structure.
return Definition.description.__get__(self)
def __repr__(self):
return '<%s: %s>' % (type(self).__name__, self._name)
return '<%s: %s>' % (type(self).__name__, self._name.string_name)
def docstring(self, raw=False, fast=True):
"""
:param fast: Don't follow imports that are only one level deep like
``import foo``, but follow ``from foo import bar``. This makes
sense for speed reasons. Completing `import a` is slow if you use
``foo.docstring(fast=False)`` on every object, because it
parses all libraries starting with ``a``.
"""
definition = self._definition
if isinstance(self._definition, pr.Import):
i = imports.ImportWrapper(self._evaluator, self._definition)
if len(i.import_path) > 1 or not fast:
followed = self._follow_statements_imports()
if followed:
# TODO: Use all of the followed objects as input to Documentation.
definition = followed[0]
if raw:
return _Help(definition).raw()
else:
return _Help(definition).full()
@property
def type(self):
"""
The type of the completion objects. Follows imports. For a further
description, look at :attr:`jedi.api.classes.BaseDefinition.type`.
"""
if isinstance(self._definition, pr.Import):
i = imports.ImportWrapper(self._evaluator, self._definition)
if len(i.import_path) <= 1:
return 'module'
followed = self.follow_definition()
if followed:
# Caveat: Only follows the first one and ignores the others.
# This is ok, since people are almost never interested in
# variations.
return followed[0].type
return super(Completion, self).type
@memoize_default()
def _follow_statements_imports(self):
# imports completion is very complicated and needs to be treated
# separately in Completion.
if self._definition.isinstance(pr.Import) and self._definition.alias is None:
i = imports.ImportWrapper(self._evaluator, self._definition, True)
import_path = i.import_path + (unicode(self._name),)
try:
return imports.get_importer(self._evaluator, import_path,
i._importer.module).follow(self._evaluator)
except imports.ModuleNotFound:
pass
return super(Completion, self)._follow_statements_imports()
@memoize_default()
@memoize_method
def follow_definition(self):
"""
Return the original definitions. I strongly recommend not using it for
@@ -530,61 +479,17 @@ class Completion(BaseDefinition):
follows all results. This means with 1000 completions (e.g. numpy),
it's just PITA-slow.
"""
defs = self._follow_statements_imports()
return [Definition(self._evaluator, d) for d in defs]
defs = self._name.infer()
return [Definition(self._evaluator, d.name) for d in defs]
class Definition(use_metaclass(CachedMetaClass, BaseDefinition)):
class Definition(BaseDefinition):
"""
*Definition* objects are returned from :meth:`api.Script.goto_assignments`
or :meth:`api.Script.goto_definitions`.
"""
def __init__(self, evaluator, definition):
super(Definition, self).__init__(evaluator, definition, definition.start_pos)
@property
@underscore_memoization
def name(self):
"""
Name of variable/function/class/module.
For example, for ``x = None`` it returns ``'x'``.
:rtype: str or None
"""
d = self._definition
if isinstance(d, er.InstanceElement):
d = d.var
if isinstance(d, (compiled.CompiledObject, compiled.CompiledName)):
name = d.name
elif isinstance(d, pr.Name):
name = d.names[-1]
elif isinstance(d, iterable.Array):
name = d.type
elif isinstance(d, (pr.Class, er.Class, er.Instance,
er.Function, pr.Function)):
name = d.name
elif isinstance(d, pr.Module):
name = self.module_name
elif isinstance(d, pr.Import):
try:
name = d.get_defined_names()[0].names[-1]
except (AttributeError, IndexError):
return None
elif isinstance(d, pr.Param):
name = d.get_name()
elif isinstance(d, pr.Statement):
try:
expression_list = d.assignment_details[0][0]
name = expression_list[0].name.names[-1]
except IndexError:
return None
elif isinstance(d, iterable.Generator):
return None
elif isinstance(d, pr.NamePart):
name = d
return unicode(name)
super(Definition, self).__init__(evaluator, definition)
@property
def description(self):
@@ -602,7 +507,7 @@ class Definition(use_metaclass(CachedMetaClass, BaseDefinition)):
... class C:
... pass
...
... variable = f or C'''
... variable = f if random.choice([0,1]) else C'''
>>> script = Script(source, column=3) # line is maximum by default
>>> defs = script.goto_definitions()
>>> defs = sorted(defs, key=lambda d: d.line)
@@ -614,28 +519,30 @@ class Definition(use_metaclass(CachedMetaClass, BaseDefinition)):
'class C'
"""
d = self._definition
if isinstance(d, er.InstanceElement):
d = d.var
if isinstance(d, pr.Name):
d = d.parent
typ = self.type
tree_name = self._name.tree_name
if typ in ('function', 'class', 'module', 'instance') or tree_name is None:
if typ == 'function':
# For the description we want a short and a pythonic way.
typ = 'def'
return typ + ' ' + u(self._name.string_name)
elif typ == 'param':
code = search_ancestor(tree_name, 'param').get_code(
include_prefix=False,
include_comma=False
)
return typ + ' ' + code
if isinstance(d, compiled.CompiledObject):
d = d.type() + ' ' + d.name
elif isinstance(d, iterable.Array):
d = 'class ' + d.type
elif isinstance(d, (pr.Class, er.Class, er.Instance)):
d = 'class ' + unicode(d.name)
elif isinstance(d, (er.Function, pr.Function)):
d = 'def ' + unicode(d.name)
elif isinstance(d, pr.Module):
# only show module name
d = 'module %s' % self.module_name
elif self.is_keyword:
d = 'keyword %s' % d.name
else:
d = d.get_code().replace('\n', '').replace('\r', '')
return d
definition = tree_name.get_definition() or tree_name
# Remove the prefix, because that's not what we want for get_code
# here.
txt = definition.get_code(include_prefix=False)
# Delete comments:
txt = re.sub('#[^\n]+\n', ' ', txt)
# Delete multi spaces/newlines
txt = re.sub('\s+', ' ', txt).strip()
return txt
@property
def desc_with_module(self):
@@ -651,22 +558,31 @@ class Definition(use_metaclass(CachedMetaClass, BaseDefinition)):
position = '' if self.in_builtin_module else '@%s' % (self.line)
return "%s:%s%s" % (self.module_name, self.description, position)
@memoize_default()
@memoize_method
def defined_names(self):
"""
List sub-definitions (e.g., methods in class).
:rtype: list of Definition
"""
defs = self._follow_statements_imports()
# For now we don't want base classes or to evaluate decorators.
defs = [d.base if isinstance(d, (er.Class, er.Function)) else d for d in defs]
iterable = (defined_names(self._evaluator, d) for d in defs)
iterable = list(iterable)
return list(chain.from_iterable(iterable))
defs = self._name.infer()
return sorted(
unite(defined_names(self._evaluator, d) for d in defs),
key=lambda s: s._name.start_pos or (0, 0)
)
def is_definition(self):
"""
Returns True, if defined as a name in a statement, function or class.
Returns False, if it's a reference to such a definition.
"""
if self._name.tree_name is None:
return True
else:
return self._name.tree_name.is_definition()
def __eq__(self, other):
return self._start_pos == other._start_pos \
return self._name.start_pos == other._name.start_pos \
and self.module_path == other.module_path \
and self.name == other.name \
and self._evaluator == other._evaluator
@@ -675,7 +591,7 @@ class Definition(use_metaclass(CachedMetaClass, BaseDefinition)):
return not self.__eq__(other)
def __hash__(self):
return hash((self._start_pos, self.module_path, self.name, self._evaluator))
return hash((self._name.start_pos, self.module_path, self.name, self._evaluator))
class CallSignature(Definition):
@@ -684,33 +600,36 @@ class CallSignature(Definition):
It knows what functions you are currently in, e.g. `isinstance(` would
return the `isinstance` function. Without `(` it would return nothing.
"""
def __init__(self, evaluator, executable, call, index, key_name):
super(CallSignature, self).__init__(evaluator, executable)
def __init__(self, evaluator, executable_name, bracket_start_pos, index, key_name_str):
super(CallSignature, self).__init__(evaluator, executable_name)
self._index = index
self._key_name = key_name
self._call = call
self._key_name_str = key_name_str
self._bracket_start_pos = bracket_start_pos
@property
def index(self):
"""
The Param index of the current call.
Returns None if the index is not defined.
Returns None if the index cannot be found in the current call.
"""
if self._key_name is not None:
if self._key_name_str is not None:
for i, param in enumerate(self.params):
if self._key_name == param.name:
if self._key_name_str == param.name:
return i
if self.params and self.params[-1]._definition.stars == 2:
return i
else:
return None
if self.params:
param_name = self.params[-1]._name
if param_name.tree_name is not None:
if param_name.tree_name.get_definition().star_count == 2:
return i
return None
if self._index >= len(self.params):
for i, param in enumerate(self.params):
# *args case
if param._definition.stars == 1:
return i
tree_name = param._name.tree_name
if tree_name is not None:
# *args case
if tree_name.get_definition().star_count == 1:
return i
return None
return self._index
@@ -720,51 +639,11 @@ class CallSignature(Definition):
The indent of the bracket that is responsible for the last function
call.
"""
c = self._call
while c.next is not None:
c = c.next
return c.name.end_pos
@property
def call_name(self):
"""
.. deprecated:: 0.8.0
Use :attr:`.name` instead.
.. todo:: Remove!
The name (e.g. 'isinstance') as a string.
"""
warnings.warn("Use name instead.", DeprecationWarning)
return unicode(self._definition.name)
@property
def module(self):
"""
.. deprecated:: 0.8.0
Use :attr:`.module_name` for the module name.
.. todo:: Remove!
"""
return self._executable.get_parent_until()
return self._bracket_start_pos
def __repr__(self):
return '<%s: %s index %s>' % (type(self).__name__, self._definition,
self.index)
class _Param(Definition):
"""
Just here for backwards compatibility.
"""
def get_code(self):
"""
.. deprecated:: 0.8.0
Use :attr:`.description` and :attr:`.name` instead.
.. todo:: Remove!
A function to get the whole code of the param.
"""
warnings.warn("Use description instead.", DeprecationWarning)
return self.description
return '<%s: %s index %s>' % \
(type(self).__name__, self._name.string_name, self.index)
class _Help(object):
@@ -773,21 +652,27 @@ class _Help(object):
the future.
"""
def __init__(self, definition):
self._definition = definition
self._name = definition
def full(self):
try:
return self._definition.doc
except AttributeError:
return self.raw()
@memoize_method
def _get_contexts(self, fast):
if isinstance(self._name, ImportName) and fast:
return {}
def raw(self):
if self._name.api_type == 'statement':
return {}
return self._name.infer()
def docstring(self, fast=True, raw=True):
"""
The raw docstring ``__doc__`` for any object.
The docstring ``__doc__`` for any object.
See :attr:`doc` for example.
"""
try:
return self._definition.raw_doc
except AttributeError:
return ''
# TODO: Use all of the followed objects as output. Possibly dividing
# them by a few dashes.
for context in self._get_contexts(fast=fast):
return context.py__doc__(include_call_signature=not raw)
return ''

jedi/api/completion.py (new file)

@@ -0,0 +1,291 @@
from parso.python import token
from parso.python import tree
from parso.tree import search_ancestor, Leaf
from jedi import debug
from jedi import settings
from jedi.api import classes
from jedi.api import helpers
from jedi.evaluate import imports
from jedi.api import keywords
from jedi.evaluate.helpers import evaluate_call_of_leaf
from jedi.evaluate.filters import get_global_filters
from jedi.parser_utils import get_statement_of_position
def get_call_signature_param_names(call_signatures):
# add named params
for call_sig in call_signatures:
for p in call_sig.params:
# Allow protected access, because it's a public API.
tree_name = p._name.tree_name
# Compiled modules typically don't allow keyword arguments.
if tree_name is not None:
# Allow access on _definition here, because it's a
# public API and we don't want to make the internal
# Name object public.
tree_param = tree.search_ancestor(tree_name, 'param')
if tree_param.star_count == 0: # no *args/**kwargs
yield p._name
def filter_names(evaluator, completion_names, stack, like_name):
comp_dct = {}
for name in completion_names:
if settings.case_insensitive_completion \
and name.string_name.lower().startswith(like_name.lower()) \
or name.string_name.startswith(like_name):
new = classes.Completion(
evaluator,
name,
stack,
len(like_name)
)
k = (new.name, new.complete) # key
if k in comp_dct and settings.no_completion_duplicates:
comp_dct[k]._same_name_completions.append(new)
else:
comp_dct[k] = new
yield new
def get_user_scope(module_context, position):
"""
Returns the scope in which the user resides. This includes flows.
"""
user_stmt = get_statement_of_position(module_context.tree_node, position)
if user_stmt is None:
def scan(scope):
for s in scope.children:
if s.start_pos <= position <= s.end_pos:
if isinstance(s, (tree.Scope, tree.Flow)):
return scan(s) or s
elif s.type in ('suite', 'decorated'):
return scan(s)
return None
scanned_node = scan(module_context.tree_node)
if scanned_node:
return module_context.create_context(scanned_node, node_is_context=True)
return module_context
else:
return module_context.create_context(user_stmt)
def get_flow_scope_node(module_node, position):
node = module_node.get_leaf_for_position(position, include_prefixes=True)
while not isinstance(node, (tree.Scope, tree.Flow)):
node = node.parent
return node
class Completion:
def __init__(self, evaluator, module, code_lines, position, call_signatures_method):
self._evaluator = evaluator
self._module_context = module
self._module_node = module.tree_node
self._code_lines = code_lines
# The first step of completions is to get the name
self._like_name = helpers.get_on_completion_name(self._module_node, code_lines, position)
# The actual cursor position is not what we need to calculate
# everything. We want the start of the name we're on.
self._position = position[0], position[1] - len(self._like_name)
self._call_signatures_method = call_signatures_method
def completions(self):
completion_names = self._get_context_completions()
completions = filter_names(self._evaluator, completion_names,
self.stack, self._like_name)
return sorted(completions, key=lambda x: (x.name.startswith('__'),
x.name.startswith('_'),
x.name.lower()))
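The sort key above pushes dunder names to the end, single-underscore names just before them, and orders the rest case-insensitively; a standalone illustration with made-up names:

    names = ['__init__', 'value', '_cache', 'Value']
    key = lambda x: (x.startswith('__'), x.startswith('_'), x.lower())
    print(sorted(names, key=key))
    # -> ['value', 'Value', '_cache', '__init__']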
def _get_context_completions(self):
"""
Analyzes the context that a completion is made in and decides what to
return.
Technically this works by generating a parser stack and analysing the
current stack for possible grammar nodes.
Possible enhancements:
- global/nonlocal search global
- yield from / raise from <- could be only exceptions/generators
- In args: */**: no completion
- In params (also lambda): no completion before =
"""
grammar = self._evaluator.grammar
try:
self.stack = helpers.get_stack_at_position(
grammar, self._code_lines, self._module_node, self._position
)
except helpers.OnErrorLeaf as e:
self.stack = None
if e.error_leaf.value == '.':
# After ErrorLeaf's that are dots, we will not do any
# completions since this probably just confuses the user.
return []
# If we don't have a context, just use global completion.
return self._global_completions()
allowed_keywords, allowed_tokens = \
helpers.get_possible_completion_types(grammar._pgen_grammar, self.stack)
if 'if' in allowed_keywords:
leaf = self._module_node.get_leaf_for_position(self._position, include_prefixes=True)
previous_leaf = leaf.get_previous_leaf()
indent = self._position[1]
if not (leaf.start_pos <= self._position <= leaf.end_pos):
indent = leaf.start_pos[1]
if previous_leaf is not None:
stmt = previous_leaf
while True:
stmt = search_ancestor(
stmt, 'if_stmt', 'for_stmt', 'while_stmt', 'try_stmt',
'error_node',
)
if stmt is None:
break
type_ = stmt.type
if type_ == 'error_node':
first = stmt.children[0]
if isinstance(first, Leaf):
type_ = first.value + '_stmt'
# Compare indents
if stmt.start_pos[1] == indent:
if type_ == 'if_stmt':
allowed_keywords += ['elif', 'else']
elif type_ == 'try_stmt':
allowed_keywords += ['except', 'finally', 'else']
elif type_ == 'for_stmt':
allowed_keywords.append('else')
completion_names = list(self._get_keyword_completion_names(allowed_keywords))
if token.NAME in allowed_tokens or token.INDENT in allowed_tokens:
# This means that we actually have to do type inference.
symbol_names = list(self.stack.get_node_names(grammar._pgen_grammar))
nodes = list(self.stack.get_nodes())
if nodes and nodes[-1] in ('as', 'def', 'class'):
# No completions for ``with x as foo`` and ``import x as foo``.
# Also true for defining names as a class or function.
return list(self._get_class_context_completions(is_function=True))
elif "import_stmt" in symbol_names:
level, names = self._parse_dotted_names(nodes, "import_from" in symbol_names)
only_modules = not ("import_from" in symbol_names and 'import' in nodes)
completion_names += self._get_importer_names(
names,
level,
only_modules=only_modules,
)
elif symbol_names[-1] in ('trailer', 'dotted_name') and nodes[-1] == '.':
dot = self._module_node.get_leaf_for_position(self._position)
completion_names += self._trailer_completions(dot.get_previous_leaf())
else:
completion_names += self._global_completions()
completion_names += self._get_class_context_completions(is_function=False)
if 'trailer' in symbol_names:
call_signatures = self._call_signatures_method()
completion_names += get_call_signature_param_names(call_signatures)
return completion_names
def _get_keyword_completion_names(self, keywords_):
for k in keywords_:
yield keywords.keyword(self._evaluator, k).name
def _global_completions(self):
context = get_user_scope(self._module_context, self._position)
debug.dbg('global completion scope: %s', context)
flow_scope_node = get_flow_scope_node(self._module_node, self._position)
filters = get_global_filters(
self._evaluator,
context,
self._position,
origin_scope=flow_scope_node
)
completion_names = []
for filter in filters:
completion_names += filter.values()
return completion_names
def _trailer_completions(self, previous_leaf):
user_context = get_user_scope(self._module_context, self._position)
evaluation_context = self._evaluator.create_context(
self._module_context, previous_leaf
)
contexts = evaluate_call_of_leaf(evaluation_context, previous_leaf)
completion_names = []
debug.dbg('trailer completion contexts: %s', contexts)
for context in contexts:
for filter in context.get_filters(
search_global=False, origin_scope=user_context.tree_node):
completion_names += filter.values()
return completion_names
def _parse_dotted_names(self, nodes, is_import_from):
level = 0
names = []
for node in nodes[1:]:
if node in ('.', '...'):
if not names:
level += len(node.value)
elif node.type == 'dotted_name':
names += node.children[::2]
elif node.type == 'name':
names.append(node)
elif node == ',':
if not is_import_from:
names = []
else:
# If the keyword `import` comes along, stop checking
# for names.
break
return level, names
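The level counting in _parse_dotted_names turns the leading dots of a relative import into an integer. A deliberately simplified standalone rendering of the same loop (plain strings instead of parser nodes; the helper name is hypothetical):

    def count_import_level(tokens):
        level, names = 0, []
        for tok in tokens:
            if tok in ('.', '...'):
                if not names:  # only leading dots count towards the level
                    level += len(tok)
            else:
                names.append(tok)
        return level, names

    print(count_import_level(['...', '.', 'pkg', 'mod']))  # (4, ['pkg', 'mod'])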
def _get_importer_names(self, names, level=0, only_modules=True):
names = [n.value for n in names]
i = imports.Importer(self._evaluator, names, self._module_context, level)
return i.completion_names(self._evaluator, only_modules=only_modules)
def _get_class_context_completions(self, is_function=True):
"""
Autocomplete inherited methods when overriding in child class.
"""
leaf = self._module_node.get_leaf_for_position(self._position, include_prefixes=True)
cls = tree.search_ancestor(leaf, 'classdef')
if isinstance(cls, (tree.Class, tree.Function)):
# Complete the methods that are defined in the super classes.
random_context = self._module_context.create_context(
cls,
node_is_context=True
)
else:
return
if cls.start_pos[1] >= leaf.start_pos[1]:
return
filters = random_context.get_filters(search_global=False, is_instance=True)
# The first dict is the dictionary of the class itself.
next(filters)
for filter in filters:
for name in filter.values():
if (name.api_type == 'function') == is_function:
yield name


@@ -2,17 +2,20 @@
Helpers for the API
"""
import re
from collections import namedtuple
from textwrap import dedent
from jedi.evaluate import imports
from parso.python.parser import Parser
from parso.python import tree
from parso import split_lines
from jedi._compatibility import u
from jedi.evaluate.syntax_tree import eval_atom
from jedi.evaluate.helpers import evaluate_call_of_leaf
from jedi.cache import time_cache
def completion_parts(path_until_cursor):
"""
Returns the parts for the completion
:return: tuple - (path, dot, like)
"""
match = re.match(r'^(.*?)(\.|)(\w?[\w\d]*)$', path_until_cursor, flags=re.S)
return match.groups()
CompletionParts = namedtuple('CompletionParts', ['path', 'has_dot', 'name'])
def sorted_definitions(defs):
@@ -20,27 +23,294 @@ def sorted_definitions(defs):
return sorted(defs, key=lambda x: (x.module_path or '', x.line or 0, x.column or 0))
def get_on_import_stmt(evaluator, user_context, user_stmt, is_like_search=False):
"""
Resolve the user statement, if it is an import. Only resolve the
parts until the user position.
"""
import_names = user_stmt.get_all_import_names()
kill_count = -1
cur_name_part = None
for i in import_names:
if user_stmt.alias == i:
continue
for name_part in i.names:
if name_part.end_pos >= user_context.position:
if not cur_name_part:
cur_name_part = name_part
kill_count += 1
def get_on_completion_name(module_node, lines, position):
leaf = module_node.get_leaf_for_position(position)
if leaf is None or leaf.type in ('string', 'error_leaf'):
# Completions inside strings are a bit special; we need to parse the
# string. The same is true for comments and error_leafs.
line = lines[position[0] - 1]
# The first step of completions is to get the name
return re.search(r'(?!\d)\w+$|$', line[:position[1]]).group(0)
elif leaf.type not in ('name', 'keyword'):
return ''
context = user_context.get_context()
just_from = next(context) == 'from'
return leaf.value[:position[1] - leaf.start_pos[1]]
i = imports.ImportWrapper(evaluator, user_stmt, is_like_search,
kill_count=kill_count, nested_resolve=True,
is_just_from=just_from)
return i, cur_name_part
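The regular expression in get_on_completion_name grabs an identifier-like tail that does not start with a digit, or the empty string. A quick standalone check of that behavior:

    import re

    def name_tail(line):
        return re.search(r'(?!\d)\w+$|$', line).group(0)

    print(name_tail('foo.ba'))   # 'ba'
    print(name_tail('x + 12'))   # ''  (a numeric tail is not a name)
    print(name_tail('open('))    # ''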
def _get_code(code_lines, start_pos, end_pos):
# Get relevant lines.
lines = code_lines[start_pos[0] - 1:end_pos[0]]
# Remove the parts at the end of the line.
lines[-1] = lines[-1][:end_pos[1]]
# Remove first line indentation.
lines[0] = lines[0][start_pos[1]:]
return '\n'.join(lines)
class OnErrorLeaf(Exception):
@property
def error_leaf(self):
return self.args[0]
def _is_on_comment(leaf, position):
comment_lines = split_lines(leaf.prefix)
difference = leaf.start_pos[0] - position[0]
prefix_start_pos = leaf.get_start_pos_of_prefix()
if difference == 0:
indent = leaf.start_pos[1]
elif position[0] == prefix_start_pos[0]:
indent = prefix_start_pos[1]
else:
indent = 0
line = comment_lines[-difference - 1][:position[1] - indent]
return '#' in line
def _get_code_for_stack(code_lines, module_node, position):
leaf = module_node.get_leaf_for_position(position, include_prefixes=True)
# It might happen that we're on whitespace or on a comment. This means
# that we would not get the right leaf.
if leaf.start_pos >= position:
if _is_on_comment(leaf, position):
return u('')
# If we're not on a comment simply get the previous leaf and proceed.
leaf = leaf.get_previous_leaf()
if leaf is None:
return u('') # At the beginning of the file.
is_after_newline = leaf.type == 'newline'
while leaf.type == 'newline':
leaf = leaf.get_previous_leaf()
if leaf is None:
return u('')
if leaf.type == 'error_leaf' or leaf.type == 'string':
if leaf.start_pos[0] < position[0]:
# On a different line, we just begin anew.
return u('')
# Error leafs cannot be parsed, completion in strings is also
# impossible.
raise OnErrorLeaf(leaf)
else:
user_stmt = leaf
while True:
if user_stmt.parent.type in ('file_input', 'suite', 'simple_stmt'):
break
user_stmt = user_stmt.parent
if is_after_newline:
if user_stmt.start_pos[1] > position[1]:
# This means that it's actually a dedent and that means that we
# start without context (part of a suite).
return u('')
# This is basically getting the relevant lines.
return _get_code(code_lines, user_stmt.get_start_pos_of_prefix(), position)
def get_stack_at_position(grammar, code_lines, module_node, pos):
"""
Returns the possible node names (e.g. import_from, xor_test or yield_stmt).
"""
class EndMarkerReached(Exception):
pass
def tokenize_without_endmarker(code):
# TODO This is not an official parso API for now; it exists purely
# for Jedi.
tokens = grammar._tokenize(code)
for token_ in tokens:
if token_.string == safeword:
raise EndMarkerReached()
else:
yield token_
# The code might be indented; just remove the indentation.
code = dedent(_get_code_for_stack(code_lines, module_node, pos))
# We use a word to tell Jedi when we have reached the start of the
# completion.
# Use Z as a prefix because it's not part of a number suffix.
safeword = 'ZZZ_USER_WANTS_TO_COMPLETE_HERE_WITH_JEDI'
code = code + safeword
p = Parser(grammar._pgen_grammar, error_recovery=True)
try:
p.parse(tokens=tokenize_without_endmarker(code))
except EndMarkerReached:
return Stack(p.pgen_parser.stack)
raise SystemError("This really shouldn't happen. There's a bug in Jedi.")
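The safeword trick above can be shown in isolation with a toy whitespace tokenizer instead of parso (everything in this sketch is hypothetical): append a marker the tokenizer will report as a plain name, and abort the moment it appears, leaving the parser state exactly at the cursor.

    SAFEWORD = 'ZZZ_USER_WANTS_TO_COMPLETE_HERE_WITH_JEDI'

    class EndMarkerReached(Exception):
        pass

    def tokens_until_safeword(code):
        # mimic tokenize_without_endmarker: stop feeding tokens the
        # moment the appended marker shows up in the stream
        for tok in (code + ' ' + SAFEWORD).split():
            if tok == SAFEWORD:
                raise EndMarkerReached()
            yield tok

    collected = []
    try:
        for tok in tokens_until_safeword('if x ='):
            collected.append(tok)
    except EndMarkerReached:
        pass
    print(collected)  # ['if', 'x', '='] -- parsing stops right at the cursor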
class Stack(list):
def get_node_names(self, grammar):
for dfa, state, (node_number, nodes) in self:
yield grammar.number2symbol[node_number]
def get_nodes(self):
for dfa, state, (node_number, nodes) in self:
for node in nodes:
yield node
def get_possible_completion_types(pgen_grammar, stack):
def add_results(label_index):
try:
grammar_labels.append(inversed_tokens[label_index])
except KeyError:
try:
keywords.append(inversed_keywords[label_index])
except KeyError:
t, v = pgen_grammar.labels[label_index]
assert t >= 256
# See if it's a symbol and if we're in its first set
itsdfa = pgen_grammar.dfas[t]
itsstates, itsfirst = itsdfa
for first_label_index in itsfirst.keys():
add_results(first_label_index)
inversed_keywords = dict((v, k) for k, v in pgen_grammar.keywords.items())
inversed_tokens = dict((v, k) for k, v in pgen_grammar.tokens.items())
keywords = []
grammar_labels = []
def scan_stack(index):
dfa, state, node = stack[index]
states, first = dfa
arcs = states[state]
for label_index, new_state in arcs:
if label_index == 0:
# An accepting state, check the stack below.
scan_stack(index - 1)
else:
add_results(label_index)
scan_stack(-1)
return keywords, grammar_labels
def evaluate_goto_definition(evaluator, context, leaf):
if leaf.type == 'name':
# In case of a name we can just use goto_definition which does all the
# magic itself.
return evaluator.goto_definitions(context, leaf)
parent = leaf.parent
if parent.type == 'atom':
return context.eval_node(leaf.parent)
elif parent.type == 'trailer':
return evaluate_call_of_leaf(context, leaf)
elif isinstance(leaf, tree.Literal):
return eval_atom(context, leaf)
return []
CallSignatureDetails = namedtuple(
'CallSignatureDetails',
['bracket_leaf', 'call_index', 'keyword_name_str']
)
def _get_index_and_key(nodes, position):
"""
Returns the amount of commas and the keyword argument string.
"""
nodes_before = [c for c in nodes if c.start_pos < position]
if nodes_before[-1].type == 'arglist':
nodes_before = [c for c in nodes_before[-1].children if c.start_pos < position]
key_str = None
if nodes_before:
last = nodes_before[-1]
if last.type == 'argument' and last.children[1].end_pos <= position:
# The cursor is past the `=` of a keyword argument.
key_str = last.children[0].value
elif last == '=':
key_str = nodes_before[-2].value
return nodes_before.count(','), key_str
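_get_index_and_key boils down to counting commas before the cursor and spotting a trailing `name=`. A deliberately simplified, hypothetical rendering over plain strings (the real code walks parso nodes):

    def index_and_key(tokens_before_cursor):
        key = None
        if tokens_before_cursor and tokens_before_cursor[-1].endswith('='):
            # completing the value of a keyword argument
            key = tokens_before_cursor[-1][:-1]
        return tokens_before_cursor.count(','), key

    print(index_and_key(['a', ',', 'b', ',']))  # (2, None)
    print(index_and_key(['a', ',', 'key=']))    # (1, 'key')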
def _get_call_signature_details_from_error_node(node, position):
for index, element in reversed(list(enumerate(node.children))):
# `index > 0` means that it's a trailer and not an atom.
if element == '(' and element.end_pos <= position and index > 0:
# It's an error node; we don't want to match too much, just
# up to the parenthesis.
children = node.children[index:]
name = element.get_previous_leaf()
if name is None:
continue
if name.type == 'name' or name.parent.type in ('trailer', 'atom'):
return CallSignatureDetails(
element,
*_get_index_and_key(children, position)
)
def get_call_signature_details(module, position):
leaf = module.get_leaf_for_position(position, include_prefixes=True)
if leaf.start_pos >= position:
# Whitespace / comments after the leaf count towards the previous leaf.
leaf = leaf.get_previous_leaf()
if leaf is None:
return None
if leaf == ')':
if leaf.end_pos == position:
leaf = leaf.get_next_leaf()
# Now that we know where we are in the syntax tree, we start to look at
# parents for possible function definitions.
node = leaf.parent
while node is not None:
if node.type in ('funcdef', 'classdef'):
# Don't show call signatures if there's stuff before it that just
# makes it feel strange to have a call signature.
return None
for n in node.children[::-1]:
if n.start_pos < position and n.type == 'error_node':
result = _get_call_signature_details_from_error_node(n, position)
if result is not None:
return result
if node.type == 'trailer' and node.children[0] == '(':
leaf = node.get_previous_leaf()
if leaf is None:
return None
return CallSignatureDetails(
node.children[0], *_get_index_and_key(node.children, position))
node = node.parent
return None
@time_cache("call_signatures_validity")
def cache_call_signatures(evaluator, context, bracket_leaf, code_lines, user_pos):
"""This function calculates the cache key."""
index = user_pos[0] - 1
before_cursor = code_lines[index][:user_pos[1]]
other_lines = code_lines[bracket_leaf.start_pos[0]:index]
whole = '\n'.join(other_lines + [before_cursor])
before_bracket = re.match(r'.*\(', whole, re.DOTALL)
module_path = context.get_root_context().py__file__()
if module_path is None:
yield None # Don't cache!
else:
yield (module_path, before_bracket, bracket_leaf.start_pos)
yield evaluate_goto_definition(
evaluator,
context,
bracket_leaf.get_previous_leaf()
)


@@ -1,108 +1,47 @@
import inspect
import re
"""
TODO Some parts of this module are still not well documented.
"""
from jedi._compatibility import builtins
from jedi import debug
from jedi.common import source_to_unicode
from jedi.cache import underscore_memoization
from jedi.evaluate.context import ModuleContext
from jedi.evaluate import compiled
from jedi.evaluate.compiled.fake import get_module
from jedi.parser import representation as pr
from jedi.parser.fast import FastParser
from jedi.evaluate import helpers
from jedi.evaluate.compiled import mixed
from jedi.evaluate.base_context import Context
class InterpreterNamespace(pr.Module):
def __init__(self, evaluator, namespace, parser_module):
self.namespace = namespace
self.parser_module = parser_module
self._evaluator = evaluator
class NamespaceObject(object):
def __init__(self, dct):
self.__dict__ = dct
@underscore_memoization
def get_defined_names(self):
for name in self.parser_module.get_defined_names():
yield name
for key, value in self.namespace.items():
yield LazyName(self._evaluator, key, value)
def scope_names_generator(self, position=None):
yield self, list(self.get_defined_names())
class MixedModuleContext(Context):
resets_positions = True
type = 'mixed_module'
def __init__(self, evaluator, tree_module, namespaces, path):
self.evaluator = evaluator
self._namespaces = namespaces
self._namespace_objects = [NamespaceObject(n) for n in namespaces]
self._module_context = ModuleContext(evaluator, tree_module, path=path)
self.tree_node = tree_module
def get_node(self):
return self.tree_node
def get_filters(self, *args, **kwargs):
for filter in self._module_context.get_filters(*args, **kwargs):
yield filter
for namespace_obj in self._namespace_objects:
compiled_object = compiled.create(self.evaluator, namespace_obj)
mixed_object = mixed.MixedObject(
self.evaluator,
parent_context=self,
compiled_object=compiled_object,
tree_context=self._module_context
)
for filter in mixed_object.get_filters(*args, **kwargs):
yield filter
def __getattr__(self, name):
return getattr(self.parser_module, name)
class LazyName(helpers.FakeName):
def __init__(self, evaluator, name, value):
super(LazyName, self).__init__(name)
self._evaluator = evaluator
self._value = value
self._name = name
@property
@underscore_memoization
def parent(self):
obj = self._value
parser_path = []
if inspect.ismodule(obj):
module = obj
else:
class FakeParent(pr.Base):
parent = None # To avoid having no parent for NamePart.
path = None
names = []
try:
o = obj.__objclass__
names.append(obj.__name__)
obj = o
except AttributeError:
pass
try:
module_name = obj.__module__
names.insert(0, obj.__name__)
except AttributeError:
# Unfortunately in some cases like `int` there's no __module__
module = builtins
else:
module = __import__(module_name)
fake_name = helpers.FakeName(names, FakeParent())
parser_path = fake_name.names
raw_module = get_module(self._value)
try:
path = module.__file__
except AttributeError:
pass
else:
path = re.sub('c$', '', path)
if path.endswith('.py'):
# cut the `c` from `.pyc`
with open(path) as f:
source = source_to_unicode(f.read())
mod = FastParser(source, path[:-1]).module
if not parser_path:
return mod
found = self._evaluator.eval_call_path(iter(parser_path), mod, None)
if found:
return found[0]
debug.warning('Interpreter lookup for Python code failed %s',
mod)
module = compiled.CompiledObject(raw_module)
if raw_module == builtins:
# The builtins module is special and always cached.
module = compiled.builtin
return compiled.create(self._evaluator, self._value, module, module)
@parent.setter
def parent(self, value):
"""Needed because of the ``representation.Simple`` super class."""
def create(evaluator, namespace, parser_module):
ns = InterpreterNamespace(evaluator, namespace, parser_module)
for attr_name in pr.SCOPE_CONTENTS:
for something in getattr(parser_module, attr_name):
something.parent = ns
return getattr(self._module_context, name)


@@ -1,70 +1,111 @@
import pydoc
import keyword
from jedi._compatibility import is_py3
from jedi import common
from jedi.evaluate import compiled
from jedi._compatibility import is_py3, is_py35
from jedi.evaluate.utils import ignored
from jedi.evaluate.filters import AbstractNameDefinition
from parso.python.tree import Leaf
try:
from pydoc_data import topics as pydoc_topics
except ImportError:
# Python 2.6
import pydoc_topics
# Python 2
try:
import pydoc_topics
except ImportError:
# This is for the Python 3 embeddable version, which doesn't have
# the pydoc_data module in its python3x.zip file.
pydoc_topics = None
if is_py3:
keys = keyword.kwlist
if is_py35:
# In Python 3.5 async and await are not proper keywords, but for
# completion purposes they should act as though they are.
keys = keyword.kwlist + ["async", "await"]
else:
keys = keyword.kwlist
else:
keys = keyword.kwlist + ['None', 'False', 'True']
def keywords(string='', pos=(0, 0), all=False):
if all:
return set([Keyword(k, pos) for k in keys])
def has_inappropriate_leaf_keyword(pos, module):
relevant_errors = filter(
lambda error: error.first_pos[0] == pos[0],
module.error_statement_stacks)
for error in relevant_errors:
if error.next_token in keys:
return True
return False
def completion_names(evaluator, stmt, pos, module):
keyword_list = all_keywords(evaluator)
if not isinstance(stmt, Leaf) or has_inappropriate_leaf_keyword(pos, module):
keyword_list = filter(
lambda keyword: not keyword.only_valid_as_leaf,
keyword_list
)
return [keyword.name for keyword in keyword_list]
def all_keywords(evaluator, pos=(0, 0)):
return set([Keyword(evaluator, k, pos) for k in keys])
def keyword(evaluator, string, pos=(0, 0)):
if string in keys:
return set([Keyword(string, pos)])
return set()
return Keyword(evaluator, string, pos)
else:
return None
def keyword_names(*args, **kwargs):
kwds = []
for k in keywords(*args, **kwargs):
start = k.start_pos
kwds.append(KeywordName(k, k.name, start))
return kwds
def get_operator(evaluator, string, pos):
return Keyword(evaluator, string, pos)
def get_operator(string, pos):
return Keyword(string, pos)
keywords_only_valid_as_leaf = (
'continue',
'break',
)
class KeywordName(object):
def __init__(self, parent, name, start_pos):
self.parent = parent
self.names = [name]
self.start_pos = start_pos
class KeywordName(AbstractNameDefinition):
api_type = 'keyword'
@property
def end_pos(self):
return self.start_pos[0], self.start_pos[1] + len(self.name)
def __init__(self, evaluator, name):
self.evaluator = evaluator
self.string_name = name
self.parent_context = evaluator.BUILTINS
def eval(self):
return set()
def infer(self):
return [Keyword(self.evaluator, self.string_name, (0, 0))]
class Keyword(object):
def __init__(self, name, pos):
self.name = name
self.start_pos = pos
self.parent = compiled.builtin
api_type = 'keyword'
def get_parent_until(self):
return self.parent
def __init__(self, evaluator, name, pos):
self.name = KeywordName(evaluator, name)
self.start_pos = pos
self.parent = evaluator.BUILTINS
@property
def only_valid_as_leaf(self):
return self.name.value in keywords_only_valid_as_leaf
@property
def names(self):
""" For a `parsing.Name` like comparision """
return [self.name]
@property
def docstr(self):
return imitate_pydoc(self.name)
def py__doc__(self, include_call_signature=False):
return imitate_pydoc(self.name.string_name)
def __repr__(self):
return '<%s: %s>' % (type(self).__name__, self.name)
@@ -75,11 +116,14 @@ def imitate_pydoc(string):
It's not possible to get pydoc's help texts without starting the annoying
pager stuff.
"""
if pydoc_topics is None:
return ''
# str needed because of possible unicode stuff in py2k (pydoc doesn't work
# with unicode strings)
string = str(string)
h = pydoc.help
with common.ignored(KeyError):
with ignored(KeyError):
# try to access symbols
string = h.symbols[string]
string, _, related = string.partition(' ')
@@ -95,6 +139,6 @@ def imitate_pydoc(string):
return ''
try:
return pydoc_topics.topics[label] if pydoc_topics else ''
return pydoc_topics.topics[label].strip() if pydoc_topics else ''
except KeyError:
return ''
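For reference, the table read here is the static help-topic dump that ships with CPython; a standalone taste of it (assuming pydoc_data is importable, as on any normal CPython 2.7+/3.x install, and that 'assert' is among its labels):

    from pydoc_data import topics
    # 'assert' is one of the labels pydoc resolves keywords to
    print(topics.topics['assert'][:80])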


@@ -1,97 +0,0 @@
from jedi._compatibility import u, unicode
from jedi import common
from jedi.api import classes
from jedi.parser import representation as pr
from jedi.evaluate import imports
from jedi.evaluate import helpers
def usages(evaluator, definitions, search_name, mods):
def compare_array(definitions):
""" `definitions` are being compared by module/start_pos, because
sometimes the ids of the objects change (e.g. executions).
"""
result = []
for d in definitions:
module = d.get_parent_until()
result.append((module, d.start_pos))
return result
def check_call_for_usage(call):
stmt = call.parent
while not isinstance(stmt.parent, pr.IsScope):
stmt = stmt.parent
# New definition, call cannot be a part of stmt
if len(call.name) == 1 and call.execution is None \
and call.name in stmt.get_defined_names():
# Class params are not definitions (like function params). They
# are super classes, that need to be resolved.
if not (isinstance(stmt, pr.Param) and isinstance(stmt.parent, pr.Class)):
return
follow = [] # There might be multiple search_name's in one call_path
call_path = list(call.generate_call_path())
for i, name in enumerate(call_path):
# name is `pr.NamePart`.
if u(name) == search_name:
follow.append(call_path[:i + 1])
for call_path in follow:
follow_res, search = evaluator.goto(call.parent, call_path)
# names can change (getattr stuff), therefore filter names that
# don't match `search`.
# TODO add something like that in the future - for now usages are
# completely broken anyway.
#follow_res = [r for r in follow_res if str(r) == search]
#print search.start_pos,search_name.start_pos
#print follow_res, search, search_name, [(r, r.start_pos) for r in follow_res]
follow_res = usages_add_import_modules(evaluator, follow_res, search)
compare_follow_res = compare_array(follow_res)
# compare to see if they match
if any(r in compare_definitions for r in compare_follow_res):
yield classes.Definition(evaluator, search)
if not definitions:
return set()
compare_definitions = compare_array(definitions)
mods |= set([d.get_parent_until() for d in definitions])
names = []
for m in imports.get_modules_containing_name(mods, search_name):
try:
stmts = m.used_names[search_name]
except KeyError:
continue
for stmt in stmts:
if isinstance(stmt, pr.Import):
count = 0
imps = []
for i in stmt.get_all_import_names():
for name_part in i.names:
count += 1
if unicode(name_part) == search_name:
imps.append((count, name_part))
for used_count, name_part in imps:
i = imports.ImportWrapper(evaluator, stmt, kill_count=count - used_count,
nested_resolve=True)
f = i.follow(is_goto=True)
if set(f) & set(definitions):
names.append(classes.Definition(evaluator, name_part))
else:
for call in helpers.scan_statement_for_calls(stmt, search_name, assignment_details=True):
names += check_call_for_usage(call)
return names
def usages_add_import_modules(evaluator, definitions, search_name):
""" Adds the modules of the imports """
new = set()
for d in definitions:
if isinstance(d.parent, pr.Import):
s = imports.ImportWrapper(evaluator, d.parent, nested_resolve=True)
with common.ignored(IndexError):
new.add(s.follow(is_goto=True)[0])
return set(definitions) | new


@@ -3,8 +3,6 @@ This caching is very important for speed and memory optimizations. There's
nothing really spectacular, just some decorators. The following cache types are
available:
- module caching (`load_parser` and `save_parser`), which uses pickle and is
really important to assure low load times of modules like ``numpy``.
- ``time_cache`` can be used to cache something for just a limited time span,
which can be useful if there's user interaction and the user cannot react
faster than a certain time.
@@ -14,101 +12,11 @@ there are global variables, which are holding the cache information. Some of
these variables are being cleaned after every API usage.
"""
import time
import os
import sys
import json
import hashlib
import gc
import inspect
import shutil
import re
try:
import cPickle as pickle
except ImportError:
import pickle
from jedi import settings
from jedi import common
from jedi import debug
from parso.cache import parser_cache
_time_caches = []
_star_import_cache = {}
# for fast_parser, should not be deleted
parser_cache = {}
class ParserCacheItem(object):
def __init__(self, parser, change_time=None):
self.parser = parser
if change_time is None:
change_time = time.time()
self.change_time = change_time
def clear_caches(delete_all=False):
""" Jedi caches many things, that should be completed after each completion
finishes.
:param delete_all: Deletes also the cache that is normally not deleted,
like parser cache, which is important for faster parsing.
"""
global _time_caches
if delete_all:
_time_caches = []
_star_import_cache.clear()
parser_cache.clear()
else:
# normally just kill the expired entries, not all
for tc in _time_caches:
# check time_cache for expired entries
for key, (t, value) in list(tc.items()):
if t < time.time():
# delete expired entries
del tc[key]
def time_cache(time_add_setting):
""" This decorator works as follows: Call it with a setting and after that
use the function with a callable that returns the key.
But: This function is only called if the key is not available. After a
certain amount of time (`time_add_setting`) the cache is invalid.
"""
def _temp(key_func):
dct = {}
_time_caches.append(dct)
def wrapper(optional_callable, *args, **kwargs):
key = key_func(*args, **kwargs)
value = None
if key in dct:
expiry, value = dct[key]
if expiry > time.time():
return value
value = optional_callable()
time_add = getattr(settings, time_add_setting)
if key is not None:
dct[key] = time.time() + time_add, value
return value
return wrapper
return _temp
@time_cache("call_signatures_validity")
def cache_call_signatures(source, user_pos, stmt):
"""This function calculates the cache key."""
index = user_pos[0] - 1
lines = common.splitlines(source)
before_cursor = lines[index][:user_pos[1]]
other_lines = lines[stmt.start_pos[0]:index]
whole = '\n'.join(other_lines + [before_cursor])
before_bracket = re.match(r'.*\(', whole, re.DOTALL)
module_path = stmt.get_parent_until().path
return None if module_path is None else (module_path, before_bracket, stmt.start_pos)
_time_caches = {}
def underscore_memoization(func):
@@ -137,218 +45,77 @@ def underscore_memoization(func):
return getattr(self, name)
except AttributeError:
result = func(self)
if inspect.isgenerator(result):
result = list(result)
setattr(self, name, result)
return result
return wrapper
def memoize(func):
"""A normal memoize function."""
dct = {}
def clear_time_caches(delete_all=False):
""" Jedi caches many things, that should be completed after each completion
finishes.
def wrapper(*args, **kwargs):
:param delete_all: Deletes also the cache that is normally not deleted,
like parser cache, which is important for faster parsing.
"""
global _time_caches
if delete_all:
for cache in _time_caches.values():
cache.clear()
parser_cache.clear()
else:
# normally just kill the expired entries, not all
for tc in _time_caches.values():
# check time_cache for expired entries
for key, (t, value) in list(tc.items()):
if t < time.time():
# delete expired entries
del tc[key]
def time_cache(time_add_setting):
"""
This decorator works as follows: Call it with a setting and after that
use the function with a callable that returns the key.
But: This function is only called if the key is not available. After a
certain amount of time (`time_add_setting`) the cache is invalid.
If the given key is None, the function will not be cached.
"""
def _temp(key_func):
dct = {}
_time_caches[time_add_setting] = dct
def wrapper(*args, **kwargs):
generator = key_func(*args, **kwargs)
key = next(generator)
try:
expiry, value = dct[key]
if expiry > time.time():
return value
except KeyError:
pass
value = next(generator)
time_add = getattr(settings, time_add_setting)
if key is not None:
dct[key] = time.time() + time_add, value
return value
return wrapper
return _temp
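A usage sketch of the generator protocol the rewritten time_cache expects: the key function yields the cache key first and is asked for the value only on a miss (the decorated helper below is hypothetical; "call_signatures_validity" is an existing jedi setting):

    from jedi.cache import time_cache

    def expensive_analysis(source, position):
        return len(source) + position  # stand-in for a slow computation

    @time_cache("call_signatures_validity")
    def _cached_analysis(source, position):
        yield (source, position)  # the cache key; yielding None disables caching
        yield expensive_analysis(source, position)  # only evaluated on a miss

    _cached_analysis('x = 1', 0)  # computes and caches
    _cached_analysis('x = 1', 0)  # served from the cache until the timeout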
def memoize_method(method):
"""A normal memoize function."""
def wrapper(self, *args, **kwargs):
cache_dict = self.__dict__.setdefault('_memoize_method_dct', {})
dct = cache_dict.setdefault(method, {})
key = (args, frozenset(kwargs.items()))
try:
return dct[key]
except KeyError:
result = func(*args, **kwargs)
result = method(self, *args, **kwargs)
dct[key] = result
return result
return wrapper
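A quick sketch of memoize_method on a plain class (a hypothetical example):

    from jedi.cache import memoize_method

    class Squares(object):
        calls = 0

        @memoize_method
        def square(self, x):
            Squares.calls += 1
            return x * x

    s = Squares()
    s.square(3); s.square(3)
    print(Squares.calls)  # 1 -- the second call hit the per-instance cache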
def cache_star_import(func):
def wrapper(evaluator, scope, *args, **kwargs):
with common.ignored(KeyError):
mods = _star_import_cache[scope]
if mods[0] + settings.star_import_cache_validity > time.time():
return mods[1]
# cache is too old and therefore invalid or not available
_invalidate_star_import_cache_module(scope)
mods = func(evaluator, scope, *args, **kwargs)
_star_import_cache[scope] = time.time(), mods
return mods
return wrapper
def _invalidate_star_import_cache_module(module, only_main=False):
""" Important if some new modules are being reparsed """
with common.ignored(KeyError):
t, mods = _star_import_cache[module]
del _star_import_cache[module]
for m in mods:
_invalidate_star_import_cache_module(m, only_main=True)
if not only_main:
# We need a list here because otherwise the list is being changed
# during the iteration in py3k: iteritems -> items.
for key, (t, mods) in list(_star_import_cache.items()):
if module in mods:
_invalidate_star_import_cache_module(key)
def invalidate_star_import_cache(path):
"""On success returns True."""
try:
parser_cache_item = parser_cache[path]
except KeyError:
return False
else:
_invalidate_star_import_cache_module(parser_cache_item.parser.module)
return True
def load_parser(path, name):
"""
Returns the module or None, if it fails.
"""
if path is None and name is None:
return None
p_time = os.path.getmtime(path) if path else None
n = name if path is None else path
try:
parser_cache_item = parser_cache[n]
if not path or p_time <= parser_cache_item.change_time:
return parser_cache_item.parser
else:
# In case there is already a module cached and this module
# has to be reparsed, we also need to invalidate the import
# caches.
_invalidate_star_import_cache_module(parser_cache_item.parser.module)
except KeyError:
if settings.use_filesystem_cache:
return ParserPickling.load_parser(n, p_time)
def save_parser(path, name, parser, pickling=True):
try:
p_time = None if not path else os.path.getmtime(path)
except OSError:
p_time = None
pickling = False
n = name if path is None else path
item = ParserCacheItem(parser, p_time)
parser_cache[n] = item
if settings.use_filesystem_cache and pickling:
ParserPickling.save_parser(n, item)
class ParserPickling(object):
version = 13
"""
Version number (integer) for file system cache.
Increment this number when there are any incompatible changes in
parser representation classes. For example, the following changes
are regarded as incompatible.
- Class name is changed.
- Class is moved to another module.
- Defined slot of the class is changed.
"""
def __init__(self):
self.__index = None
self.py_tag = 'cpython-%s%s' % sys.version_info[:2]
"""
Short name for distinguishing Python implementations and versions.
It's like `sys.implementation.cache_tag` but for Python < 3.3
we generate something similar. See:
http://docs.python.org/3/library/sys.html#sys.implementation
.. todo:: Detect interpreter (e.g., PyPy).
"""
def load_parser(self, path, original_changed_time):
try:
pickle_changed_time = self._index[path]
except KeyError:
return None
if original_changed_time is not None \
and pickle_changed_time < original_changed_time:
# the pickle file is outdated
return None
with open(self._get_hashed_path(path), 'rb') as f:
try:
gc.disable()
parser_cache_item = pickle.load(f)
finally:
gc.enable()
debug.dbg('pickle loaded: %s', path)
parser_cache[path] = parser_cache_item
return parser_cache_item.parser
def save_parser(self, path, parser_cache_item):
self.__index = None
try:
files = self._index
except KeyError:
files = {}
self._index = files
with open(self._get_hashed_path(path), 'wb') as f:
pickle.dump(parser_cache_item, f, pickle.HIGHEST_PROTOCOL)
files[path] = parser_cache_item.change_time
self._flush_index()
@property
def _index(self):
if self.__index is None:
try:
with open(self._get_path('index.json')) as f:
data = json.load(f)
except (IOError, ValueError):
self.__index = {}
else:
# 0 means version is not defined (= always delete cache):
if data.get('version', 0) != self.version:
self.clear_cache()
self.__index = {}
else:
self.__index = data['index']
return self.__index
def _remove_old_modules(self):
# TODO use
change = False
if change:
self._flush_index(self)
self._index # reload index
def _flush_index(self):
data = {'version': self.version, 'index': self._index}
with open(self._get_path('index.json'), 'w') as f:
json.dump(data, f)
self.__index = None
def clear_cache(self):
shutil.rmtree(self._cache_directory())
def _get_hashed_path(self, path):
return self._get_path('%s.pkl' % hashlib.md5(path.encode("utf-8")).hexdigest())
def _get_path(self, file):
dir = self._cache_directory()
if not os.path.exists(dir):
os.makedirs(dir)
return os.path.join(dir, file)
def _cache_directory(self):
return os.path.join(settings.cache_directory, self.py_tag)
# is a singleton
ParserPickling = ParserPickling()

jedi/common/__init__.py (new file)

@@ -0,0 +1 @@
from jedi.common.context import BaseContextSet, BaseContext

jedi/common/context.py (new file)

@@ -0,0 +1,67 @@
class BaseContext(object):
def __init__(self, evaluator, parent_context=None):
self.evaluator = evaluator
self.parent_context = parent_context
def get_root_context(self):
context = self
while True:
if context.parent_context is None:
return context
context = context.parent_context
class BaseContextSet(object):
def __init__(self, *args):
self._set = set(args)
@classmethod
def from_iterable(cls, iterable):
return cls.from_set(set(iterable))
@classmethod
def from_set(cls, set_):
self = cls()
self._set = set_
return self
@classmethod
def from_sets(cls, sets):
"""
Used to work with an iterable of set.
"""
aggregated = set()
sets = list(sets)
for set_ in sets:
if isinstance(set_, BaseContextSet):
aggregated |= set_._set
else:
aggregated |= set_
return cls.from_set(aggregated)
def __or__(self, other):
return type(self).from_set(self._set | other._set)
def __iter__(self):
for element in self._set:
yield element
def __bool__(self):
return bool(self._set)
def __len__(self):
return len(self._set)
def __repr__(self):
return '%s(%s)' % (self.__class__.__name__, ', '.join(str(s) for s in self._set))
def filter(self, filter_func):
return type(self).from_iterable(filter(filter_func, self._set))
def __getattr__(self, name):
def mapper(*args, **kwargs):
return type(self).from_sets(
getattr(context, name)(*args, **kwargs)
for context in self._set
)
return mapper
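A standalone behavior check of the new set type, using only what is defined above:

    from jedi.common.context import BaseContextSet

    contexts = BaseContextSet(1, 2) | BaseContextSet.from_iterable([2, 3])
    print(len(contexts))            # 3
    print(bool(BaseContextSet()))   # False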


@@ -1,25 +1,53 @@
from jedi._compatibility import encoding, is_py3
import inspect
from jedi._compatibility import encoding, is_py3, u
import os
import time
def _lazy_colorama_init():
"""
Lazily init colorama if necessary, not to screw up stdout if debug is not
enabled.
This version of the function does nothing.
"""
pass
_inited = False
try:
if os.name == 'nt':
# does not work on Windows, as pyreadline and colorama interfere
# Does not work on Windows, as pyreadline and colorama interfere
raise ImportError
else:
# Use colorama for nicer console output.
from colorama import Fore, init
from colorama import initialise
# pytest resets the stream at the end - causes troubles. Since after
# every output the stream is reset automatically we don't need this.
initialise.atexit_done = True
init()
def _lazy_colorama_init():
"""
Lazily init colorama if necessary, not to screw up stdout if
debug is not enabled.
This version of the function does init colorama.
"""
global _inited
if not _inited:
# pytest resets the stream at the end - causes troubles. Since
# after every output the stream is reset automatically we don't
# need this.
initialise.atexit_done = True
try:
init()
except Exception:
# Colorama fails to initialize under vim and is buggy in
# version 0.3.6.
pass
_inited = True
except ImportError:
class Fore(object):
RED = ''
GREEN = ''
YELLOW = ''
MAGENTA = ''
RESET = ''
NOTICE = object()
@@ -32,15 +60,14 @@ enable_notice = False
# callback, interface: level, str
debug_function = None
ignored_modules = ['jedi.evaluate.builtin', 'jedi.parser']
_debug_indent = -1
_debug_indent = 0
_start_time = time.time()
def reset_time():
global _start_time, _debug_indent
_start_time = time.time()
_debug_indent = -1
_debug_indent = 0
def increase_indent(func):
@@ -48,43 +75,51 @@ def increase_indent(func):
def wrapper(*args, **kwargs):
global _debug_indent
_debug_indent += 1
result = func(*args, **kwargs)
_debug_indent -= 1
return result
try:
return func(*args, **kwargs)
finally:
_debug_indent -= 1
return wrapper
def dbg(message, *args):
def dbg(message, *args, **kwargs):
""" Looks at the stack, to see if a debug message should be printed. """
# Python 2 compatibility, because it doesn't understand default args
color = kwargs.pop('color', 'GREEN')
assert color
if debug_function and enable_notice:
frm = inspect.stack()[1]
mod = inspect.getmodule(frm[0])
if not (mod.__name__ in ignored_modules):
i = ' ' * _debug_indent
debug_function(NOTICE, i + 'dbg: ' + message % args)
i = ' ' * _debug_indent
_lazy_colorama_init()
debug_function(color, i + 'dbg: ' + message % tuple(u(repr(a)) for a in args))
def warning(message, *args):
def warning(message, *args, **kwargs):
format = kwargs.pop('format', True)
assert not kwargs
if debug_function and enable_warning:
i = ' ' * _debug_indent
debug_function(WARNING, i + 'warning: ' + message % args)
if format:
message = message % tuple(u(repr(a)) for a in args)
debug_function('RED', i + 'warning: ' + message)
def speed(name):
if debug_function and enable_speed:
now = time.time()
i = ' ' * _debug_indent
debug_function(SPEED, i + 'speed: ' + '%s %s' % (name, now - _start_time))
debug_function('YELLOW', i + 'speed: ' + '%s %s' % (name, now - _start_time))
def print_to_stdout(level, str_out):
""" The default debug function """
if level == NOTICE:
col = Fore.GREEN
elif level == WARNING:
col = Fore.RED
else:
col = Fore.YELLOW
def print_to_stdout(color, str_out):
"""
The default debug function that prints to standard out.
:param str color: A string that is an attribute of ``colorama.Fore``.
"""
col = getattr(Fore, color)
_lazy_colorama_init()
if not is_py3:
str_out = str_out.encode(encoding, 'replace')
print(col + str_out + Fore.RESET)
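Wiring the hooks up from user code now looks like this (a minimal sketch using only names defined in this module):

    from jedi import debug

    debug.debug_function = debug.print_to_stdout
    debug.enable_notice = True
    debug.dbg('completion scope: %s', 'module')  # printed green via colorama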


@@ -1,8 +1,9 @@
"""
Evaluation of Python code in |jedi| is based on three assumptions:
* Code is recursive (to weaken this assumption, the
:mod:`jedi.evaluate.dynamic` module exists).
* The code uses as few side effects as possible. Jedi understands certain
list/tuple/set modifications, but there's no guarantee that Jedi detects
everything (list.append in different modules for example).
* No magic is being used:
- metaclasses
@@ -11,372 +12,348 @@ Evaluation of Python code in |jedi| is based on three assumptions:
* The programmer is not a total dick, e.g. like `this
<https://github.com/davidhalter/jedi/issues/24>`_ :-)
That said, there's mainly one entry point in this script: ``eval_statement``.
This is where autocompletion starts. Everything you want to complete is either
a ``Statement`` or some special name like ``class``, which is easy to complete.
The actual algorithm is based on a principle called lazy evaluation. That
said, the typical entry point for static analysis is calling
``eval_expr_stmt``. There's separate logic for autocompletion in the API, the
evaluator is all about evaluating an expression.
Therefore you need to understand what follows after ``eval_statement``. Let's
TODO this paragraph is not what jedi does anymore.
Now you need to understand what follows after ``eval_expr_stmt``. Let's
make an example::
import datetime
datetime.date.toda# <-- cursor here
First of all, this module doesn't care about completion. It really just cares
about ``datetime.date``. At the end of the procedure ``eval_statement`` will
return the ``datetime`` class.
about ``datetime.date``. At the end of the procedure ``eval_expr_stmt`` will
return the ``date`` class.
To *visualize* this (simplified):
- ``eval_statement`` - ``<Statement: datetime.date>``
- ``Evaluator.eval_expr_stmt`` doesn't do much, because there's no assignment.
- ``Context.eval_node`` cares for resolving the dotted path
- ``Evaluator.find_types`` searches for global definitions of datetime, which
it finds in the definition of an import, by scanning the syntax tree.
- Using the import logic, the datetime module is found.
- Now ``find_types`` is called again by ``eval_node`` to find ``date``
inside the datetime module.
- Unpacking of the statement into ``[[<Call: datetime.date>]]``
- ``eval_expression_list``, calls ``eval_call`` with ``<Call: datetime.date>``
- ``eval_call`` - searches the ``datetime`` name within the module.
Now what would happen if we wanted ``datetime.date.foo.bar``? Two more
calls to ``find_types``. However the second call would be ignored, because the
first one would return nothing (there's no foo attribute in ``date``).
This is exactly where it starts to get complicated. Now recursions start to
kick in. The statement has not been resolved fully, but now we need to resolve
the datetime import. So it continues
- follow import, which happens in the :mod:`jedi.evaluate.imports` module.
- now the same ``eval_call`` as above calls ``follow_path`` to follow the
second part of the statement ``date``.
- After ``follow_path`` returns with the desired ``datetime.date`` class, the
result is being returned and the recursion finishes.
Now what would happen if we wanted ``datetime.date.foo.bar``? Just two more
calls to ``follow_path`` (which calls itself with a recursion). What if the
import would contain another Statement like this::
What if the import would contain another ``ExprStmt`` like this::
from foo import bar
Date = bar.baz
Well... You get it. Just another ``eval_statement`` recursion. It's really
easy. Just that Python is not that easy sometimes. To understand tuple
assignments and different class scopes, a lot more code had to be written. Yet
we're still not talking about Descriptors and Nested List Comprehensions, just
the simple stuff.
Well... You get it. Just another ``eval_expr_stmt`` recursion. It's really
easy. Python can obviously get way more complicated than this. To understand
tuple assignments, list comprehensions and everything else, a lot more code had
to be written.
So if you want to change something, write a test and then just change what you
want. This module has been tested by about 600 tests. Don't be afraid to break
something. The tests are good enough.
Jedi has been tested very well, so you can just start modifying code. It's best
to write your own test first for your "new" feature. Don't be scared of
breaking stuff. As long as the tests pass, you're most likely to be fine.
I need to mention now that this recursive approach is really good because it
I need to mention now that lazy evaluation is really good because it
only *evaluates* what needs to be *evaluated*. All the statements and modules
that are not used are just being ignored. It's a little bit similar to the
backtracking algorithm.
.. todo:: nonlocal statement, needed or can be ignored? (py3k)
that are not used are just being ignored.
"""
import itertools
from jedi._compatibility import next, hasattr, unicode
from jedi.parser import representation as pr
from jedi.parser.tokenize import Token
from jedi.parser import fast
import sys
from parso.python import tree
import parso
from jedi import debug
from jedi.evaluate import representation as er
from jedi import parser_utils
from jedi.evaluate.utils import unite
from jedi.evaluate import imports
from jedi.evaluate import recursion
from jedi.evaluate.cache import evaluator_function_cache
from jedi.evaluate import compiled
from jedi.evaluate import helpers
from jedi.evaluate.filters import TreeNameDefinition, ParamName
from jedi.evaluate.base_context import ContextualizedName, ContextualizedNode, \
ContextSet, NO_CONTEXTS, iterate_contexts
from jedi.evaluate.context import ClassContext, FunctionContext, \
AnonymousInstance, BoundMethod
from jedi.evaluate.context.iterable import CompForContext
from jedi.evaluate.syntax_tree import eval_trailer, eval_expr_stmt, \
eval_node, check_tuple_assignments
class Evaluator(object):
def __init__(self, grammar, project):
self.grammar = grammar
self.latest_grammar = parso.load_grammar(version='3.6')
self.memoize_cache = {} # for memoize decorators
# To memorize modules -> equals `sys.modules`.
self.modules = {} # like `sys.modules`.
self.compiled_cache = {} # see `evaluate.compiled.create()`
self.inferred_element_counts = {}
self.mixed_cache = {} # see `evaluate.compiled.mixed._create()`
self.analysis = []
self.dynamic_params_depth = 0
self.is_analysis = False
self.python_version = sys.version_info[:2]
self.project = project
project.add_evaluator(self)
self.reset_recursion_limitations()
# Constants
self.BUILTINS = compiled.get_special_object(self, 'BUILTINS')
def reset_recursion_limitations(self):
self.recursion_detector = recursion.RecursionDetector()
self.execution_recursion_detector = recursion.ExecutionRecursionDetector(self)
def eval_element(self, context, element):
if isinstance(context, CompForContext):
return eval_node(context, element)
if_stmt = element
while if_stmt is not None:
if_stmt = if_stmt.parent
if if_stmt.type in ('if_stmt', 'for_stmt'):
break
if parser_utils.is_scope(if_stmt):
if_stmt = None
break
predefined_if_name_dict = context.predefined_names.get(if_stmt)
if predefined_if_name_dict is None and if_stmt and if_stmt.type == 'if_stmt':
if_stmt_test = if_stmt.children[1]
name_dicts = [{}]
# If we already did a check, we don't want to do it again -> If
# context.predefined_names is filled, we stop.
# We don't want to check the if stmt itself, it's just about
# the content.
if element.start_pos > if_stmt_test.end_pos:
# Now we need to check if the names in the if_stmt match the
# names in the suite.
if_names = helpers.get_names_of_node(if_stmt_test)
element_names = helpers.get_names_of_node(element)
str_element_names = [e.value for e in element_names]
if any(i.value in str_element_names for i in if_names):
for if_name in if_names:
definitions = self.goto_definitions(context, if_name)
# Every name that has multiple different definitions
# causes the complexity to rise. The complexity should
# never fall below 1.
if len(definitions) > 1:
if len(name_dicts) * len(definitions) > 16:
debug.dbg('Too many options for if branch evaluation %s.', if_stmt)
# There's only a certain number of branches
# Jedi can evaluate, otherwise it would take
# too long.
name_dicts = [{}]
break
original_name_dicts = list(name_dicts)
name_dicts = []
for definition in definitions:
new_name_dicts = list(original_name_dicts)
for i, name_dict in enumerate(new_name_dicts):
new_name_dicts[i] = name_dict.copy()
new_name_dicts[i][if_name.value] = ContextSet(definition)
name_dicts += new_name_dicts
else:
for name_dict in name_dicts:
name_dict[if_name.value] = definitions
if len(name_dicts) > 1:
result = ContextSet()
for name_dict in name_dicts:
with helpers.predefine_names(context, if_stmt, name_dict):
result |= eval_node(context, element)
return result
else:
return self._eval_element_if_evaluated(context, element)
else:
if predefined_if_name_dict:
return eval_node(context, element)
else:
return self._eval_element_if_evaluated(context, element)
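# Illustration (hypothetical user code, not jedi internals): the name_dicts
# machinery above is what lets an `if` branch narrow a name to a subset of
# its definitions while evaluating:
#
#     x = 3 if condition else ''
#     if isinstance(x, str):
#         x.upper()  # here only the str definition should be predefined for x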
def _eval_element_if_evaluated(self, context, element):
"""
TODO This function is temporary: Merge with eval_element.
"""
parent = element
while parent is not None:
parent = parent.parent
predefined_if_name_dict = context.predefined_names.get(parent)
if predefined_if_name_dict is not None:
return eval_node(context, element)
return self._eval_element_cached(context, element)
@evaluator_function_cache(default=NO_CONTEXTS)
def _eval_element_cached(self, context, element):
return eval_node(context, element)
def goto_definitions(self, context, name):
def_ = name.get_definition(import_name_always=True)
if def_ is not None:
type_ = def_.type
if type_ == 'classdef':
return [ClassContext(self, context, name.parent)]
elif type_ == 'funcdef':
return [FunctionContext(self, context, name.parent)]
if type_ == 'expr_stmt':
is_simple_name = name.parent.type not in ('power', 'trailer')
if is_simple_name:
return eval_expr_stmt(context, def_, name)
if type_ == 'for_stmt':
container_types = context.eval_node(def_.children[3])
cn = ContextualizedNode(context, def_.children[3])
for_types = iterate_contexts(container_types, cn)
c_node = ContextualizedName(context, name)
return check_tuple_assignments(self, c_node, for_types)
if type_ in ('import_from', 'import_name'):
return imports.infer_import(context, name)
return helpers.evaluate_call_of_leaf(context, name)
def goto(self, context, name):
definition = name.get_definition(import_name_always=True)
if definition is not None:
type_ = definition.type
if type_ == 'expr_stmt':
# Only take the parent, because if it's more complicated than just
# a name it's something you can "goto" again.
is_simple_name = name.parent.type not in ('power', 'trailer')
if is_simple_name:
return [TreeNameDefinition(context, name)]
elif type_ == 'param':
return [ParamName(context, name)]
elif type_ in ('funcdef', 'classdef'):
return [TreeNameDefinition(context, name)]
elif type_ in ('import_from', 'import_name'):
module_names = imports.infer_import(context, name, is_goto=True)
return module_names
par = name.parent
node_type = par.type
if node_type == 'argument' and par.children[1] == '=' and par.children[0] == name:
# Named param goto.
trailer = par.parent
if trailer.type == 'arglist':
trailer = trailer.parent
if trailer.type != 'classdef':
if trailer.type == 'decorator':
context_set = context.eval_node(trailer.children[1])
else:
i = trailer.parent.children.index(trailer)
to_evaluate = trailer.parent.children[:i]
if to_evaluate[0] == 'await':
to_evaluate.pop(0)
context_set = context.eval_node(to_evaluate[0])
for trailer in to_evaluate[1:]:
context_set = eval_trailer(context, context_set, trailer)
param_names = []
for context in context_set:
try:
get_param_names = context.get_param_names
except AttributeError:
pass
else:
for param_name in get_param_names():
if param_name.string_name == name.value:
param_names.append(param_name)
return param_names
elif node_type == 'dotted_name': # Is a decorator.
index = par.children.index(name)
if index > 0:
new_dotted = helpers.deep_ast_copy(par)
new_dotted.children[index - 1:] = []
values = context.eval_node(new_dotted)
return unite(
value.py__getattribute__(name, name_context=context, is_goto=True)
for value in values
)
if node_type == 'trailer' and par.children[0] == '.':
values = helpers.evaluate_call_of_leaf(context, name, cut_own_trailer=True)
return unite(
value.py__getattribute__(name, name_context=context, is_goto=True)
for value in values
)
else:
stmt = tree.search_ancestor(
name, 'expr_stmt', 'lambdef'
) or name
if stmt.type == 'lambdef':
stmt = name
return context.py__getattribute__(
name,
position=stmt.start_pos,
search_global=True, is_goto=True
)
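# Sketch of the named-param branch above from the user's side (0.11-era public
# API, an assumption on my part): goto on `alpha` inside the call jumps to the
# parameter definition.
#
#     import jedi
#     src = "def f(alpha):\n    pass\nf(alpha=1)"
#     print(jedi.Script(src, 3, 4, None).goto_assignments())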
def create_context(self, base_context, node, node_is_context=False, node_is_object=False):
def parent_scope(node):
while True:
node = node.parent
if parser_utils.is_scope(node):
return node
elif node.type in ('argument', 'testlist_comp'):
if node.children[1].type == 'comp_for':
return node.children[1]
elif node.type == 'dictorsetmaker':
for n in node.children[1:4]:
# In dictionaries it can be pretty much anything.
if n.type == 'comp_for':
return n
def from_scope_node(scope_node, child_is_funcdef=None, is_nested=True, node_is_object=False):
if scope_node == base_node:
return base_context
is_funcdef = scope_node.type in ('funcdef', 'lambdef')
parent_scope = parser_utils.get_parent_scope(scope_node)
parent_context = from_scope_node(parent_scope, child_is_funcdef=is_funcdef)
if is_funcdef:
if isinstance(parent_context, AnonymousInstance):
func = BoundMethod(
self, parent_context, parent_context.class_context,
parent_context.parent_context, scope_node
)
else:
func = FunctionContext(
self,
parent_context,
scope_node
)
if is_nested and not node_is_object:
return func.get_function_execution()
return func
elif scope_node.type == 'classdef':
class_context = ClassContext(self, parent_context, scope_node)
if child_is_funcdef:
# anonymous instance
return AnonymousInstance(self, parent_context, class_context)
else:
return class_context
elif scope_node.type == 'comp_for':
if node.start_pos >= scope_node.children[-1].start_pos:
return parent_context
return CompForContext.from_comp_for(parent_context, scope_node)
raise Exception("There's a scope that was not managed.")
base_node = base_context.tree_node
if node_is_context and parser_utils.is_scope(node):
scope_node = node
else:
if node.parent.type in ('funcdef', 'classdef') and node.parent.name == node:
# When we're on class/function names/leafs that define the
# object itself and not its contents.
node = node.parent
scope_node = parent_scope(node)
return from_scope_node(scope_node, is_nested=True, node_is_object=node_is_object)

jedi/evaluate/analysis.py
@@ -1,9 +1,8 @@
"""
Module for static analysis.
"""
from jedi import debug
from parso.python import tree
from jedi.evaluate.compiled import CompiledObject
@@ -11,14 +10,18 @@ CODES = {
'attribute-error': (1, AttributeError, 'Potential AttributeError.'),
'name-error': (2, NameError, 'Potential NameError.'),
'import-error': (3, ImportError, 'Potential ImportError.'),
'type-error-too-many-arguments': (4, TypeError, None),
'type-error-too-few-arguments': (5, TypeError, None),
'type-error-keyword-argument': (6, TypeError, None),
'type-error-multiple-values': (7, TypeError, None),
'type-error-star-star': (8, TypeError, None),
'type-error-star': (9, TypeError, None),
'type-error-operation': (10, TypeError, None),
'type-error-not-iterable': (11, TypeError, None),
'type-error-isinstance': (12, TypeError, None),
'type-error-not-subscriptable': (13, TypeError, None),
'value-error-too-many-values': (14, ValueError, None),
'value-error-too-few-values': (15, ValueError, None),
}
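# Sketch of how an entry is consumed (see add() below): each value is a
# (numeric code, exception class, optional default message) tuple.
#
#     code, exception, message = CODES['attribute-error']
#     assert code == 1 and exception is AttributeError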
@@ -53,8 +56,8 @@ class Error(object):
return self.__unicode__()
def __eq__(self, other):
return (self.path == other.path and self.name == other.name and
self._start_pos == other._start_pos)
def __ne__(self, other):
return not self.__eq__(other)
@@ -63,63 +66,70 @@ class Error(object):
return hash((self.path, self._start_pos, self.name))
def __repr__(self):
return '<%s %s: %s@%s,%s>' % (self.__class__.__name__,
self.name, self.path,
self._start_pos[0], self._start_pos[1])
class Warning(Error):
pass
def add(node_context, error_name, node, message=None, typ=Error, payload=None):
exception = CODES[error_name][1]
if _check_for_exception_catch(node_context, node, exception, payload):
return
# TODO this path is probably not right
module_context = node_context.get_root_context()
module_path = module_context.py__file__()
instance = typ(error_name, module_path, node.start_pos, message)
debug.warning(str(instance), format=False)
node_context.evaluator.analysis.append(instance)
def _check_for_setattr(instance):
"""
Check if there's any setattr method inside an instance. If so, return True.
"""
from jedi.evaluate.context import ModuleContext
module = instance.get_root_context()
if not isinstance(module, ModuleContext):
return False
node = module.tree_node
try:
stmts = node.get_used_names()['setattr']
except KeyError:
return False
return any(node.start_pos < stmt.start_pos < node.end_pos
for stmt in stmts)
def add_attribute_error(name_context, lookup_context, name):
message = ('AttributeError: %s has no attribute %s.' % (lookup_context, name))
from jedi.evaluate.context.instance import AbstractInstanceContext, CompiledInstanceName
# Check for __getattr__/__getattribute__ existence and issue a warning
# instead of an error, if that happens.
typ = Error
if isinstance(lookup_context, AbstractInstanceContext):
slot_names = lookup_context.get_function_slot_names('__getattr__') + \
lookup_context.get_function_slot_names('__getattribute__')
for n in slot_names:
if isinstance(name, CompiledInstanceName) and \
n.parent_context.obj == object:
typ = Warning
break
if _check_for_setattr(lookup_context):
typ = Warning
payload = lookup_context, name
add(name_context, 'attribute-error', name, message, typ, payload)
def _check_for_exception_catch(node_context, jedi_name, exception, payload=None):
"""
Checks if a jedi object (e.g. `Statement`) sits inside a try/catch and
doesn't count as an error (if equal to `exception`).
@@ -127,109 +137,78 @@ def _check_for_exception_catch(evaluator, jedi_obj, exception, payload=None):
it.
Returns True if the exception was caught.
"""
def check_match(cls, exception):
try:
return isinstance(cls, CompiledObject) and issubclass(exception, cls.obj)
except TypeError:
return False
def check_try_for_except(obj, exception):
# Only nodes in try
iterator = iter(obj.children)
for branch_type in iterator:
colon = next(iterator)
suite = next(iterator)
if branch_type == 'try' \
and not (branch_type.start_pos < jedi_name.start_pos <= suite.end_pos):
return False
for node in obj.get_except_clause_tests():
if node is None:
return True # An exception block that catches everything.
else:
except_classes = node_context.eval_node(node)
for cls in except_classes:
from jedi.evaluate.context import iterable
if isinstance(cls, iterable.AbstractIterable) and \
cls.array_type == 'tuple':
# multiple exceptions
for lazy_context in cls.py__iter__():
for typ in lazy_context.infer():
if check_match(typ, exception):
return True
else:
if check_match(cls, exception):
return True
return False
def check_hasattr(node, suite):
try:
assert suite.start_pos <= jedi_name.start_pos < suite.end_pos
assert node.type in ('power', 'atom_expr')
base = node.children[0]
assert base.type == 'name' and base.value == 'hasattr'
trailer = node.children[1]
assert trailer.type == 'trailer'
arglist = trailer.children[1]
assert arglist.type == 'arglist'
from jedi.evaluate.arguments import TreeArguments
args = list(TreeArguments(node_context.evaluator, node_context, arglist).unpack())
# Arguments should be very simple
assert len(args) == 2
# Check name
key, lazy_context = args[1]
names = list(lazy_context.infer())
assert len(names) == 1 and isinstance(names[0], CompiledObject)
assert names[0].obj == payload[1].value
# Check objects
key, lazy_context = args[0]
objects = lazy_context.infer()
return payload[0] in objects
except AssertionError:
return False
obj = jedi_name
while obj is not None and not isinstance(obj, (tree.Function, tree.Class)):
if isinstance(obj, tree.Flow):
# try/except catch check
if obj.type == 'try_stmt' and check_try_for_except(obj, exception):
return True
# hasattr check
if exception == AttributeError and obj.type in ('if_stmt', 'while_stmt'):
if check_hasattr(obj.children[1], obj.children[3]):
return True
obj = obj.parent
return False
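# Illustration (user code, not jedi internals): the walk above suppresses an
# attribute-error report for `foo.bar` here, because a matching exception
# clause catches it:
#
#     try:
#         foo.bar
#     except AttributeError:
#         pass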

jedi/evaluate/arguments.py (new file, 245 lines)

@@ -0,0 +1,245 @@
from parso.python import tree
from jedi._compatibility import zip_longest
from jedi import debug
from jedi.evaluate import analysis
from jedi.evaluate.lazy_context import LazyKnownContext, LazyKnownContexts, \
LazyTreeContext, get_merged_lazy_context
from jedi.evaluate.filters import ParamName
from jedi.evaluate.base_context import NO_CONTEXTS
from jedi.evaluate.context import iterable
from jedi.evaluate.param import get_params, ExecutedParam
def try_iter_content(types, depth=0):
"""Helper method for static analysis."""
if depth > 10:
# It's possible that a loop has references on itself (especially with
# CompiledObject). Therefore don't loop infinitely.
return
for typ in types:
try:
f = typ.py__iter__
except AttributeError:
pass
else:
for lazy_context in f():
try_iter_content(lazy_context.infer(), depth + 1)
class AbstractArguments(object):
context = None
def eval_argument_clinic(self, parameters):
"""Uses a list with argument clinic information (see PEP 436)."""
iterator = self.unpack()
for i, (name, optional, allow_kwargs) in enumerate(parameters):
key, argument = next(iterator, (None, None))
if key is not None:
raise NotImplementedError
if argument is None and not optional:
debug.warning('TypeError: %s expected at least %s arguments, got %s',
name, len(parameters), i)
raise ValueError
values = NO_CONTEXTS if argument is None else argument.infer()
if not values and not optional:
# For the stdlib we always want values. If we don't get them,
# that's ok, maybe something is too hard to resolve, however,
# we will not proceed with the evaluation of that function.
debug.warning('argument_clinic "%s" not resolvable.', name)
raise ValueError
yield values
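# Illustration: a clinic-style spec is a list of (name, optional, allow_kwargs)
# tuples; for something like `isinstance(obj, class_or_tuple)` it would look
# roughly like
#
#     [('obj', False, False), ('class_or_tuple', False, False)]
#
# and each `yield values` above produces the inferred ContextSet for one of
# those parameters.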
def eval_all(self, funcdef=None):
"""
Evaluates all arguments as a support for static analysis
(normally Jedi).
"""
for key, lazy_context in self.unpack():
types = lazy_context.infer()
try_iter_content(types)
def get_calling_nodes(self):
raise NotImplementedError
def unpack(self, funcdef=None):
raise NotImplementedError
def get_params(self, execution_context):
return get_params(execution_context, self)
class AnonymousArguments(AbstractArguments):
def get_params(self, execution_context):
from jedi.evaluate.dynamic import search_params
return search_params(
execution_context.evaluator,
execution_context,
execution_context.tree_node
)
class TreeArguments(AbstractArguments):
def __init__(self, evaluator, context, argument_node, trailer=None):
"""
The argument_node is either a parser node or a list of evaluated
objects. Those evaluated objects may be lists of evaluated objects
themselves (one list for the first argument, one for the second, etc).
:param argument_node: May be an argument_node or a list of nodes.
"""
self.argument_node = argument_node
self.context = context
self._evaluator = evaluator
self.trailer = trailer # Can be None, e.g. in a class definition.
def _split(self):
if isinstance(self.argument_node, (tuple, list)):
for el in self.argument_node:
yield 0, el
else:
if not (self.argument_node.type == 'arglist' or (
# in python 3.5 **arg is an argument, not arglist
(self.argument_node.type == 'argument') and
self.argument_node.children[0] in ('*', '**'))):
yield 0, self.argument_node
return
iterator = iter(self.argument_node.children)
for child in iterator:
if child == ',':
continue
elif child in ('*', '**'):
yield len(child.value), next(iterator)
elif child.type == 'argument' and \
child.children[0] in ('*', '**'):
assert len(child.children) == 2
yield len(child.children[0].value), child.children[1]
else:
yield 0, child
def unpack(self, funcdef=None):
named_args = []
for star_count, el in self._split():
if star_count == 1:
arrays = self.context.eval_node(el)
iterators = [_iterate_star_args(self.context, a, el, funcdef)
for a in arrays]
iterators = list(iterators)
for values in list(zip_longest(*iterators)):
# TODO zip_longest yields None, that means this would raise
# an exception?
yield None, get_merged_lazy_context(
[v for v in values if v is not None]
)
elif star_count == 2:
arrays = self._evaluator.eval_element(self.context, el)
for dct in arrays:
for key, values in _star_star_dict(self.context, dct, el, funcdef):
yield key, values
else:
if el.type == 'argument':
c = el.children
if len(c) == 3: # Keyword argument.
named_args.append((c[0].value, LazyTreeContext(self.context, c[2]),))
else: # Generator comprehension.
# Include the brackets with the parent.
comp = iterable.GeneratorComprehension(
self._evaluator, self.context, self.argument_node.parent)
yield None, LazyKnownContext(comp)
else:
yield None, LazyTreeContext(self.context, el)
# Reordering var_args is necessary, because star args sometimes appear
# after named argument, but in the actual order it's prepended.
for named_arg in named_args:
yield named_arg
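# Illustration: for a call like `f(1, *args, key=2)`, _split() yields
# (0, <1>), (1, <args>) and (0, <key=2>); unpack() then defers the keyword
# argument so positional and star arguments are produced first.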
def as_tree_tuple_objects(self):
for star_count, argument in self._split():
if argument.type == 'argument':
argument, default = argument.children[::2]
else:
default = None
yield argument, default, star_count
def __repr__(self):
return '<%s: %s>' % (self.__class__.__name__, self.argument_node)
def get_calling_nodes(self):
from jedi.evaluate.dynamic import MergedExecutedParams
old_arguments_list = []
arguments = self
while arguments not in old_arguments_list:
if not isinstance(arguments, TreeArguments):
break
old_arguments_list.append(arguments)
for name, default, star_count in reversed(list(arguments.as_tree_tuple_objects())):
if not star_count or not isinstance(name, tree.Name):
continue
names = self._evaluator.goto(arguments.context, name)
if len(names) != 1:
break
if not isinstance(names[0], ParamName):
break
param = names[0].get_param()
if isinstance(param, MergedExecutedParams):
# For dynamic searches we don't even want to see errors.
return []
if not isinstance(param, ExecutedParam):
break
if param.var_args is None:
break
arguments = param.var_args
break
return [arguments.argument_node or arguments.trailer]
class ValuesArguments(AbstractArguments):
def __init__(self, values_list):
self._values_list = values_list
def unpack(self, funcdef=None):
for values in self._values_list:
yield None, LazyKnownContexts(values)
def get_calling_nodes(self):
return []
def __repr__(self):
return '<%s: %s>' % (self.__class__.__name__, self._values_list)
def _iterate_star_args(context, array, input_node, funcdef=None):
try:
iter_ = array.py__iter__
except AttributeError:
if funcdef is not None:
# TODO this funcdef should not be needed.
m = "TypeError: %s() argument after * must be a sequence, not %s" \
% (funcdef.name.value, array)
analysis.add(context, 'type-error-star', input_node, message=m)
else:
for lazy_context in iter_():
yield lazy_context
def _star_star_dict(context, array, input_node, funcdef):
from jedi.evaluate.context.instance import CompiledInstance
if isinstance(array, CompiledInstance) and array.name.string_name == 'dict':
# For now ignore this case. In the future add proper iterators and just
# make one call without crazy isinstance checks.
return {}
elif isinstance(array, iterable.AbstractIterable) and array.array_type == 'dict':
return array.exact_key_items()
else:
if funcdef is not None:
m = "TypeError: %s argument after ** must be a mapping, not %s" \
% (funcdef.name.value, array)
analysis.add(context, 'type-error-star-star', input_node, message=m)
return {}

jedi/evaluate/base_context.py (new file)

@@ -0,0 +1,260 @@
from parso.python.tree import ExprStmt, CompFor
from jedi import debug
from jedi._compatibility import Python3Method, zip_longest, unicode
from jedi.parser_utils import clean_scope_docstring, get_doc_with_call_signature
from jedi.common import BaseContextSet, BaseContext
class Context(BaseContext):
"""
Should be defined, otherwise the API returns empty types.
"""
predefined_names = {}
tree_node = None
"""
To be defined by subclasses.
"""
@property
def api_type(self):
# By default just lower name of the class. Can and should be
# overwritten.
return self.__class__.__name__.lower()
@debug.increase_indent
def execute(self, arguments):
"""
In contrast to py__call__ this function is always available.
`hasattr(x, py__call__)` can also be checked to see if a context is
executable.
"""
if self.evaluator.is_analysis:
arguments.eval_all()
debug.dbg('execute: %s %s', self, arguments)
from jedi.evaluate import stdlib
try:
# Some stdlib functions like super(), namedtuple(), etc. have been
# hard-coded in Jedi to support them.
return stdlib.execute(self.evaluator, self, arguments)
except stdlib.NotInStdLib:
pass
try:
func = self.py__call__
except AttributeError:
debug.warning("no execution possible %s", self)
return NO_CONTEXTS
else:
context_set = func(arguments)
debug.dbg('execute result: %s in %s', context_set, self)
return context_set
def execute_evaluated(self, *value_list):
"""
Execute a function with already executed arguments.
"""
from jedi.evaluate.arguments import ValuesArguments
arguments = ValuesArguments([ContextSet(value) for value in value_list])
return self.execute(arguments)
def iterate(self, contextualized_node=None):
debug.dbg('iterate')
try:
iter_method = self.py__iter__
except AttributeError:
if contextualized_node is not None:
from jedi.evaluate import analysis
analysis.add(
contextualized_node.context,
'type-error-not-iterable',
contextualized_node.node,
message="TypeError: '%s' object is not iterable" % self)
return iter([])
else:
return iter_method()
def get_item(self, index_contexts, contextualized_node):
from jedi.evaluate.compiled import CompiledObject
from jedi.evaluate.context.iterable import Slice, AbstractIterable
result = ContextSet()
for index in index_contexts:
if isinstance(index, (CompiledObject, Slice)):
index = index.obj
if type(index) not in (float, int, str, unicode, slice, type(Ellipsis)):
# If the index is not clearly defined, we have to get all the
# possibilities.
if isinstance(self, AbstractIterable) and self.array_type == 'dict':
result |= self.dict_values()
else:
result |= iterate_contexts(ContextSet(self))
continue
# The actual getitem call.
try:
getitem = self.py__getitem__
except AttributeError:
from jedi.evaluate import analysis
# TODO this context is probably not right.
analysis.add(
contextualized_node.context,
'type-error-not-subscriptable',
contextualized_node.node,
message="TypeError: '%s' object is not subscriptable" % self
)
else:
try:
result |= getitem(index)
except IndexError:
result |= iterate_contexts(ContextSet(self))
except KeyError:
# Must be a dict. Lists don't raise KeyErrors.
result |= self.dict_values()
return result
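# Illustration: for `lst[0]` the literal index goes straight to
# py__getitem__(0); for `lst[i]` with an unknown `i`, the fallback above
# unions the values of all iterated items instead.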
def eval_node(self, node):
return self.evaluator.eval_element(self, node)
@Python3Method
def py__getattribute__(self, name_or_str, name_context=None, position=None,
search_global=False, is_goto=False,
analysis_errors=True):
"""
:param position: Position of the last statement -> tuple of line, column
"""
if name_context is None:
name_context = self
from jedi.evaluate import finder
f = finder.NameFinder(self.evaluator, self, name_context, name_or_str,
position, analysis_errors=analysis_errors)
filters = f.get_filters(search_global)
if is_goto:
return f.filter_name(filters)
return f.find(filters, attribute_lookup=not search_global)
def create_context(self, node, node_is_context=False, node_is_object=False):
return self.evaluator.create_context(self, node, node_is_context, node_is_object)
def is_class(self):
return False
def py__bool__(self):
"""
Since Wrapper is a super class for classes, functions and modules,
the return value will always be true.
"""
return True
def py__doc__(self, include_call_signature=False):
try:
self.tree_node.get_doc_node
except AttributeError:
return ''
else:
if include_call_signature:
return get_doc_with_call_signature(self.tree_node)
else:
return clean_scope_docstring(self.tree_node)
return None
def iterate_contexts(contexts, contextualized_node=None):
"""
Calls `iterate`, on all contexts but ignores the ordering and just returns
all contexts that the iterate functions yield.
"""
return ContextSet.from_sets(
lazy_context.infer()
for lazy_context in contexts.iterate(contextualized_node)
)
class TreeContext(Context):
def __init__(self, evaluator, parent_context=None):
super(TreeContext, self).__init__(evaluator, parent_context)
self.predefined_names = {}
def __repr__(self):
return '<%s: %s>' % (self.__class__.__name__, self.tree_node)
class ContextualizedNode(object):
def __init__(self, context, node):
self.context = context
self.node = node
def get_root_context(self):
return self.context.get_root_context()
def infer(self):
return self.context.eval_node(self.node)
class ContextualizedName(ContextualizedNode):
# TODO merge with TreeNameDefinition?!
@property
def name(self):
return self.node
def assignment_indexes(self):
"""
Returns an array of tuple(int, node) of the indexes that are used in
tuple assignments.
For example if the name is ``y`` in the following code::
x, (y, z) = 2, ''
would result in ``[(1, xyz_node), (0, yz_node)]``.
"""
indexes = []
node = self.node.parent
compare = self.node
while node is not None:
if node.type in ('testlist', 'testlist_comp', 'testlist_star_expr', 'exprlist'):
for i, child in enumerate(node.children):
if child == compare:
indexes.insert(0, (int(i / 2), node))
break
else:
raise LookupError("Couldn't find the assignment.")
elif isinstance(node, (ExprStmt, CompFor)):
break
compare = node
node = node.parent
return indexes
class ContextSet(BaseContextSet):
def py__class__(self):
return ContextSet.from_iterable(c.py__class__() for c in self._set)
def iterate(self, contextualized_node=None):
from jedi.evaluate.lazy_context import get_merged_lazy_context
type_iters = [c.iterate(contextualized_node) for c in self._set]
for lazy_contexts in zip_longest(*type_iters):
yield get_merged_lazy_context(
[l for l in lazy_contexts if l is not None]
)
NO_CONTEXTS = ContextSet()
def iterator_to_context_set(func):
def wrapper(*args, **kwargs):
return ContextSet.from_iterable(func(*args, **kwargs))
return wrapper
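# Usage sketch (hypothetical): the decorator collects a generator of contexts
# into a ContextSet:
#
#     @iterator_to_context_set
#     def infer_something():
#         yield some_context  # collected into ContextSet(...)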

jedi/evaluate/cache.py

@@ -1,13 +1,13 @@
"""
- the popular ``_memoize_default`` works like a typical memoize and returns the
default otherwise.
- ``CachedMetaClass`` uses ``_memoize_default`` to do the same with classes.
"""
_NO_DEFAULT = object()
def _memoize_default(default=_NO_DEFAULT, evaluator_is_first_arg=False, second_arg_is_evaluator=False):
""" This is a typical memoization decorator, BUT there is one difference:
To prevent recursion it sets defaults.
@@ -17,12 +17,13 @@ def memoize_default(default=None, evaluator_is_first_arg=False, second_arg_is_ev
"""
def func(function):
def wrapper(obj, *args, **kwargs):
# TODO These checks are kind of ugly and slow.
if evaluator_is_first_arg:
cache = obj.memoize_cache
elif second_arg_is_evaluator:
cache = args[0].memoize_cache # needed for meta classes
else:
cache = obj.evaluator.memoize_cache
try:
memo = cache[function]
@@ -34,21 +35,43 @@ def memoize_default(default=None, evaluator_is_first_arg=False, second_arg_is_ev
if key in memo:
return memo[key]
else:
if default is not _NO_DEFAULT:
memo[key] = default
rv = function(obj, *args, **kwargs)
memo[key] = rv
return rv
return wrapper
return func
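# Usage sketch: passing a default means a re-entrant (recursive) call returns
# that default instead of recursing forever, e.g. with the helper defined
# below:
#
#     @evaluator_method_cache(default=[])
#     def infer(self):
#         ...  # may indirectly call itself; the inner call returns []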
def evaluator_function_cache(default=_NO_DEFAULT):
def decorator(func):
return _memoize_default(default=default, evaluator_is_first_arg=True)(func)
return decorator
def evaluator_method_cache(default=_NO_DEFAULT):
def decorator(func):
return _memoize_default(default=default)(func)
return decorator
def _memoize_meta_class():
def decorator(call):
return _memoize_default(second_arg_is_evaluator=True)(call)
return decorator
class CachedMetaClass(type):
"""
This is basically almost the same as the decorator above, it just caches
class initializations. Either you do it this way or with decorators, but
with decorators you lose class access (isinstance, etc).
"""
@_memoize_meta_class()
def __call__(self, *args, **kwargs):
return super(CachedMetaClass, self).__call__(*args, **kwargs)

jedi/evaluate/compiled/__init__.py

@@ -5,13 +5,17 @@ import inspect
import re
import sys
import os
import types
from functools import partial
from jedi._compatibility import builtins as _builtins, unicode, py_version
from jedi import debug
from jedi.cache import underscore_memoization, memoize_method
from jedi.evaluate.filters import AbstractFilter, AbstractNameDefinition, \
ContextNameMixin
from jedi.evaluate.base_context import Context, ContextSet
from jedi.evaluate.lazy_context import LazyKnownContext
from jedi.evaluate.compiled.getattr_static import getattr_static
from . import fake
@@ -21,169 +25,252 @@ if os.path.altsep is not None:
_path_re = re.compile('(?:\.[^{0}]+|[{0}]__init__\.py)$'.format(re.escape(_sep)))
del _sep
# Those types don't exist in typing.
MethodDescriptorType = type(str.replace)
WrapperDescriptorType = type(set.__iter__)
# `object.__subclasshook__` is an already executed descriptor.
object_class_dict = type.__dict__["__dict__"].__get__(object)
ClassMethodDescriptorType = type(object_class_dict['__subclasshook__'])
ALLOWED_DESCRIPTOR_ACCESS = (
types.FunctionType,
types.GetSetDescriptorType,
types.MemberDescriptorType,
MethodDescriptorType,
WrapperDescriptorType,
ClassMethodDescriptorType,
staticmethod,
classmethod,
)
class CheckAttribute(object):
"""Raises an AttributeError if the attribute X isn't available."""
def __init__(self, func):
self.func = func
# Remove the py in front of e.g. py__call__.
self.check_name = func.__name__[2:]
def __get__(self, instance, owner):
# This might raise an AttributeError. That's wanted.
if self.check_name == '__iter__':
# Python iterators are a bit strange, because there's no need for
# the __iter__ function as long as __getitem__ is defined (it will
# just start with __getitem__(0). This is especially true for
# Python 2 strings, where `str.__iter__` is not even defined.
try:
iter(instance.obj)
except TypeError:
raise AttributeError
else:
getattr(instance.obj, self.check_name)
return partial(self.func, instance)
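# Illustration: because CheckAttribute re-raises the underlying
# AttributeError, `hasattr(compiled_object, 'py__iter__')` is only True when
# the wrapped Python object is actually iterable.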
class CompiledObject(Context):
def __init__(self, evaluator, obj, parent_context=None, faked_class=None):
super(CompiledObject, self).__init__(evaluator, parent_context)
self.obj = obj
# This attribute will not be set for most classes, except for fakes.
self.tree_node = faked_class
def get_root_node(self):
# To make things a bit easier with filters we add this method here.
return self.get_root_context()
@CheckAttribute
def py__call__(self, params):
if inspect.isclass(self.obj):
from jedi.evaluate.context import CompiledInstance
return ContextSet(CompiledInstance(self.evaluator, self.parent_context, self, params))
else:
return ContextSet.from_iterable(self._execute_function(params))
@CheckAttribute
def py__class__(self):
return create(self.evaluator, self.obj.__class__)
@CheckAttribute
def py__mro__(self):
return (self,) + tuple(create(self.evaluator, cls) for cls in self.obj.__mro__[1:])
@CheckAttribute
def py__bases__(self):
return tuple(create(self.evaluator, cls) for cls in self.obj.__bases__)
def py__bool__(self):
return bool(self.obj)
def py__file__(self):
try:
return self.obj.__file__
except AttributeError:
return None
def is_class(self):
return inspect.isclass(self.obj)
def py__doc__(self, include_call_signature=False):
return inspect.getdoc(self.obj) or ''
def get_param_names(self):
obj = self.obj
try:
if py_version < 33:
raise ValueError("inspect.signature was introduced in 3.3")
if py_version == 34:
# In 3.4 inspect.signature is wrong for str and int. This has
# been fixed in 3.5. The signature of object is returned,
# because no signature was found for str. Here we imitate 3.5
# logic and just ignore the signature if the magic methods
# don't match object.
# 3.3 doesn't even have the logic and returns nothing for str
# and classes that inherit from object.
user_def = inspect._signature_get_user_defined_method
if (inspect.isclass(obj)
and not user_def(type(obj), '__init__')
and not user_def(type(obj), '__new__')
and (obj.__init__ != object.__init__
or obj.__new__ != object.__new__)):
raise ValueError
signature = inspect.signature(obj)
except ValueError: # Has no signature
params_str, ret = self._parse_function_doc()
tokens = params_str.split(',')
if inspect.ismethoddescriptor(obj):
tokens.insert(0, 'self')
for p in tokens:
parts = p.strip().split('=')
yield UnresolvableParamName(self, parts[0])
else:
for signature_param in signature.parameters.values():
yield SignatureParamName(self, signature_param)
def __repr__(self):
return '<%s: %s>' % (self.__class__.__name__, repr(self.obj))
@underscore_memoization
def _parse_function_doc(self):
doc = self.py__doc__()
if doc is None:
return '', ''
return _parse_function_doc(doc)
@property
def api_type(self):
obj = self.obj
if inspect.isclass(obj):
return 'class'
elif inspect.ismodule(obj):
return 'module'
elif inspect.isbuiltin(obj) or inspect.ismethod(obj) \
or inspect.ismethoddescriptor(obj) or inspect.isfunction(obj):
return 'function'
# Everything else...
return 'instance'
@property
def type(self):
"""Imitate the tree.Node.type values."""
cls = self._get_class()
if inspect.isclass(cls):
return 'classdef'
elif inspect.ismodule(cls):
return 'file_input'
elif inspect.isbuiltin(cls) or inspect.ismethod(cls) or \
inspect.ismethoddescriptor(cls):
return 'funcdef'
def _get_class(self):
if not fake.is_class_instance(self.obj) or \
inspect.ismethoddescriptor(self.obj): # slots
return self.obj
try:
return self.obj.__class__
except AttributeError:
# happens with numpy.core.umath._UFUNC_API (you get it
# automatically by doing `import numpy`).
return type
def get_filters(self, search_global=False, is_instance=False,
until_position=None, origin_scope=None):
yield self._ensure_one_filter(is_instance)
@memoize_method
def _ensure_one_filter(self, is_instance):
"""
search_global shouldn't change the fact that there's one dict, this way
there's only one `object`.
"""
return CompiledObjectFilter(self.evaluator, self, is_instance)
@CheckAttribute
def py__getitem__(self, index):
if type(self.obj) not in (str, list, tuple, unicode, bytes, bytearray, dict):
# Get rid of side effects, we won't call custom `__getitem__`s.
return ContextSet()
return ContextSet(create(self.evaluator, self.obj[index]))
@CheckAttribute
def py__iter__(self):
if type(self.obj) not in (str, list, tuple, unicode, bytes, bytearray, dict):
# Get rid of side effects, we won't call custom `__getitem__`s.
return
for i, part in enumerate(self.obj):
if i > 20:
# Should not go crazy with large iterators
break
yield LazyKnownContext(create(self.evaluator, part))
def py__name__(self):
try:
return self._get_class().__name__
except AttributeError:
return None
@property
def name(self):
try:
name = self._get_class().__name__
except AttributeError:
name = repr(self.obj)
return CompiledContextName(self, name)
def _execute_function(self, params):
from jedi.evaluate import docstrings
if self.type != 'funcdef':
return
for name in self._parse_function_doc()[1].split():
try:
bltn_obj = getattr(_builtins, name)
except AttributeError:
continue
else:
if bltn_obj is None:
# We want to evaluate everything except None.
# TODO do we?
continue
bltn_obj = create(self.evaluator, bltn_obj)
for result in bltn_obj.execute(params):
yield result
for type_ in docstrings.infer_return_types(self):
yield type_
@@ -191,44 +278,145 @@ class CompiledObject(Base):
def dict_values(self):
return ContextSet.from_iterable(
create(self.evaluator, v) for v in self.obj.values()
)
class CompiledName(FakeName):
def __init__(self, obj, name):
super(CompiledName, self).__init__(name)
self._obj = obj
self.name = name
self.start_pos = 0, 0 # an illegal start_pos, to make sorting easy.
class CompiledName(AbstractNameDefinition):
def __init__(self, evaluator, parent_context, name):
self._evaluator = evaluator
self.parent_context = parent_context
self.string_name = name
def __repr__(self):
try:
name = self._obj.name # __name__ is not defined all the time
name = self.parent_context.name # __name__ is not defined all the time
except AttributeError:
name = None
return '<%s: (%s).%s>' % (type(self).__name__, name, self.name)
return '<%s: (%s).%s>' % (self.__class__.__name__, name, self.string_name)
@property
def api_type(self):
return next(iter(self.infer())).api_type
@underscore_memoization
def parent(self):
module = self._obj.get_parent_until()
return _create_from_name(module, self._obj, self.name)
@parent.setter
def parent(self, value):
pass # Just ignore this, FakeName tries to overwrite the parent attribute.
def infer(self):
module = self.parent_context.get_root_context()
return ContextSet(_create_from_name(
self._evaluator, module, self.parent_context, self.string_name
))
def dotted_from_fs_path(fs_path, sys_path=None):
class SignatureParamName(AbstractNameDefinition):
api_type = 'param'
def __init__(self, compiled_obj, signature_param):
self.parent_context = compiled_obj.parent_context
self._signature_param = signature_param
@property
def string_name(self):
return self._signature_param.name
def infer(self):
p = self._signature_param
evaluator = self.parent_context.evaluator
contexts = ContextSet()
if p.default is not p.empty:
contexts = ContextSet(create(evaluator, p.default))
if p.annotation is not p.empty:
annotation = create(evaluator, p.annotation)
contexts |= annotation.execute_evaluated()
return contexts
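The `is not p.empty` checks follow the stdlib `inspect.Parameter` protocol, where `empty` is the sentinel for a missing default or annotation. A standalone illustration with plain inspect, no jedi objects involved:

import inspect

def f(a, b=3, *, c: int = 7):
    pass

for p in inspect.signature(f).parameters.values():
    # Prints: a False False / b True False / c True True
    print(p.name, p.default is not p.empty, p.annotation is not p.empty)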
class UnresolvableParamName(AbstractNameDefinition):
api_type = 'param'
def __init__(self, compiled_obj, name):
self.parent_context = compiled_obj.parent_context
self.string_name = name
def infer(self):
return ContextSet()
class CompiledContextName(ContextNameMixin, AbstractNameDefinition):
def __init__(self, context, name):
self.string_name = name
self._context = context
self.parent_context = context.parent_context
class EmptyCompiledName(AbstractNameDefinition):
"""
Accessing some names will raise an exception. To still be able to offer
completions in that case, Jedi gets the option to return this object. It
infers to nothing.
"""
def __init__(self, evaluator, name):
self.parent_context = evaluator.BUILTINS
self.string_name = name
def infer(self):
return ContextSet()
class CompiledObjectFilter(AbstractFilter):
name_class = CompiledName
def __init__(self, evaluator, compiled_object, is_instance=False):
self._evaluator = evaluator
self._compiled_object = compiled_object
self._is_instance = is_instance
@memoize_method
def get(self, name):
name = str(name)
obj = self._compiled_object.obj
try:
attr, is_get_descriptor = getattr_static(obj, name)
except AttributeError:
return []
else:
if is_get_descriptor \
and not type(attr) in ALLOWED_DESCRIPTOR_ACCESS:
# In case of descriptors that have get methods we cannot return
# its value, because that would mean code execution.
return [EmptyCompiledName(self._evaluator, name)]
if self._is_instance and name not in dir(obj):
return []
return [self._create_name(name)]
def values(self):
obj = self._compiled_object.obj
names = []
for name in dir(obj):
names += self.get(name)
is_instance = self._is_instance or fake.is_class_instance(obj)
# ``dir`` doesn't include the type names.
if not inspect.ismodule(obj) and (obj is not type) and not is_instance:
for filter in create(self._evaluator, type).get_filters():
names += filter.values()
return names
def _create_name(self, name):
return self.name_class(self._evaluator, self._compiled_object, name)
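The "``dir`` doesn't include the type names" comment in values() refers to the documented behavior that dir() omits metaclass attributes when its argument is a class, which is why the filter also consults the filters of `type`. For example:

class C(object):
    pass

# 'mro' lives on the metaclass (`type`), so dir() on a class omits it,
# even though attribute access still finds it.
print('mro' in dir(C))    # False
print(hasattr(C, 'mro'))  # True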
def dotted_from_fs_path(fs_path, sys_path):
"""
Changes `/usr/lib/python3.4/email/utils.py` to `email.utils`. I.e.
compares the path with sys.path and then returns the dotted_path. If the
path is not in sys.path, just returns None.
"""
if sys_path is None:
sys_path = get_sys_path()
if os.path.basename(fs_path).startswith('__init__.'):
# We are calculating the path. __init__ files are not interesting.
fs_path = os.path.dirname(fs_path)
# prefer
# - UNIX
@@ -244,53 +432,46 @@ def dotted_from_fs_path(fs_path, sys_path=None):
# C:\path\to\Lib
path = ''
for s in sys_path:
if (fs_path.startswith(s) and
len(path) < len(s)):
if (fs_path.startswith(s) and len(path) < len(s)):
path = s
return _path_re.sub('', fs_path[len(path):].lstrip(os.path.sep)).replace(os.path.sep, '.')
# - Windows
# X:\path\to\lib-dynload/datetime.pyd => datetime
module_path = fs_path[len(path):].lstrip(os.path.sep).lstrip('/')
# - Windows
# Replace like X:\path\to\something/foo/bar.py
return _path_re.sub('', module_path).replace(os.path.sep, '.').replace('/', '.')
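A hedged, self-contained sketch of the transformation this function performs; this toy version is not jedi's implementation and ignores the `_path_re` suffix stripping:

import os

def toy_dotted_path(fs_path, sys_path):
    # Pick the longest sys.path entry that prefixes fs_path, then turn
    # the remainder into a dotted module name.
    best = max((s for s in sys_path if fs_path.startswith(s)), key=len, default=None)
    if best is None:
        return None
    rest = fs_path[len(best):].lstrip(os.path.sep)
    return os.path.splitext(rest)[0].replace(os.path.sep, '.').replace('/', '.')

print(toy_dotted_path('/usr/lib/python3.4/email/utils.py', ['/usr/lib/python3.4']))
# -> 'email.utils'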
def load_module(path, name):
"""
if not name:
name = os.path.basename(path)
name = name.rpartition('.')[0] # cut file type (normally .so)
# sometimes there are endings like `_sqlite3.cpython-32mu`
name = re.sub(r'\..*', '', name)
dot_path = []
if path:
p = path
# if path is not in sys.path, we need to make a well defined import
# like `from numpy.core import umath.`
while p and p not in sys.path:
p, sep, mod = p.rpartition(os.path.sep)
dot_path.insert(0, mod.partition('.')[0])
if p:
name = ".".join(dot_path)
path = p
else:
path = os.path.dirname(path)
"""
def load_module(evaluator, path=None, name=None):
sys_path = list(evaluator.project.sys_path)
if path is not None:
dotted_path = dotted_from_fs_path(path)
dotted_path = dotted_from_fs_path(path, sys_path=sys_path)
else:
dotted_path = name
sys_path = get_sys_path()
if dotted_path is None:
p, _, dotted_path = path.partition(os.path.sep)
sys_path.insert(0, p)
temp, sys.path = sys.path, sys_path
__import__(dotted_path)
try:
__import__(dotted_path)
except RuntimeError:
if 'PySide' in dotted_path or 'PyQt' in dotted_path:
# RuntimeError: the PyQt4.QtCore and PyQt5.QtCore modules both wrap
# the QObject class.
# See https://github.com/davidhalter/jedi/pull/483
return None
raise
except ImportError:
# If a module is "corrupt" or not really a Python module or whatever.
debug.warning('Module %s not importable in path %s.', dotted_path, path)
return None
finally:
sys.path = temp
# Just access the cache after import, because of #59 as well as the very
# complicated import structure of Python.
module = sys.modules[dotted_path]
sys.path = temp
return CompiledObject(module)
return create(evaluator, module)
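The core trick in load_module is swapping sys.path around `__import__` and always restoring it. A minimal standalone version of that pattern (illustrative only, without the PyQt and ImportError special cases):

import sys

def import_with_sys_path(dotted_path, search_paths):
    # Temporarily replace sys.path so the import resolves against the
    # given paths, and restore it even if the import fails.
    original, sys.path = sys.path, list(search_paths)
    try:
        __import__(dotted_path)
    finally:
        sys.path = original
    # Read sys.modules afterwards: __import__('a.b') returns `a`, not `a.b`.
    return sys.modules[dotted_path]

json_module = import_with_sys_path('json', sys.path)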
docstr_defaults = {
@@ -362,11 +543,30 @@ def _parse_function_doc(doc):
return param_str, ret
class Builtin(CompiledObject, IsScope):
@memoize
def get_by_name(self, name):
item = [n for n in self.get_defined_names() if n.get_code() == name][0]
return item.parent
def _create_from_name(evaluator, module, compiled_object, name):
obj = compiled_object.obj
faked = None
try:
faked = fake.get_faked(evaluator, module, obj, parent_context=compiled_object, name=name)
if faked.type == 'funcdef':
from jedi.evaluate.context.function import FunctionContext
return FunctionContext(evaluator, compiled_object, faked)
except fake.FakeDoesNotExist:
pass
try:
obj = getattr(obj, name)
except AttributeError:
# Happens e.g. in properties of
# PyQt4.QtGui.QStyleOptionComboBox.currentText
# -> just set it to None
obj = None
return create(evaluator, obj, parent_context=compiled_object, faked=faked)
def builtin_from_name(evaluator, string):
bltn_obj = getattr(_builtins, string)
return create(evaluator, bltn_obj)
def _a_generator(foo):
@@ -375,55 +575,64 @@ def _a_generator(foo):
yield foo
def _create_from_name(module, parent, name):
faked = fake.get_faked(module.obj, parent.obj, name)
# only functions are necessary.
if faked is not None:
faked.parent = parent
return faked
try:
obj = getattr(parent.obj, name)
except AttributeError:
# happens e.g. in properties of
# PyQt4.QtGui.QStyleOptionComboBox.currentText
# -> just set it to None
obj = None
return CompiledObject(obj, parent)
_SPECIAL_OBJECTS = {
'FUNCTION_CLASS': type(load_module),
'METHOD_CLASS': type(CompiledObject.is_class),
'MODULE_CLASS': type(os),
'GENERATOR_OBJECT': _a_generator(1.0),
'BUILTINS': _builtins,
}
builtin = Builtin(_builtins)
magic_function_class = CompiledObject(type(load_module), parent=builtin)
generator_obj = CompiledObject(_a_generator(1.0))
type_names = [] # Need this, because it's return in get_defined_names.
type_names = builtin.get_by_name('type').get_defined_names()
def get_special_object(evaluator, identifier):
obj = _SPECIAL_OBJECTS[identifier]
return create(evaluator, obj, parent_context=create(evaluator, _builtins))
def compiled_objects_cache(func):
def wrapper(evaluator, obj, parent=builtin, module=None):
# Do a very cheap form of caching here.
key = id(obj), id(parent), id(module)
try:
return evaluator.compiled_cache[key][0]
except KeyError:
result = func(evaluator, obj, parent, module)
# Need to cache all of them, otherwise the id could be overwritten.
evaluator.compiled_cache[key] = result, obj, parent, module
return result
return wrapper
def compiled_objects_cache(attribute_name):
def decorator(func):
"""
This decorator caches just the ids, as opposed to caching the object itself.
Caching the id has the advantage that an object doesn't need to be
hashable.
"""
def wrapper(evaluator, obj, parent_context=None, module=None, faked=None):
cache = getattr(evaluator, attribute_name)
# Do a very cheap form of caching here.
key = id(obj), id(parent_context)
try:
return cache[key][0]
except KeyError:
# TODO this whole decorator is way too ugly
result = func(evaluator, obj, parent_context, module, faked)
# Need to cache all of them, otherwise the id could be overwritten.
cache[key] = result, obj, parent_context, module, faked
return result
return wrapper
return decorator
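Caching by id() rather than by the object itself is what makes this decorator work for unhashable compiled objects. A toy version of the same idea (names are illustrative):

def cache_by_id(func):
    cache = {}

    def wrapper(obj):
        key = id(obj)
        try:
            return cache[key][0]
        except KeyError:
            result = func(obj)
            # Keep a reference to `obj` as well; otherwise it could be
            # garbage collected and its id() reused by a new object.
            cache[key] = result, obj
            return result
    return wrapper

@cache_by_id
def describe(obj):
    return type(obj).__name__

assert describe({}) == 'dict'  # a dict can't be a dict key, but its id() can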
@compiled_objects_cache
def create(evaluator, obj, parent=builtin, module=None):
@compiled_objects_cache('compiled_cache')
def create(evaluator, obj, parent_context=None, module=None, faked=None):
"""
A very weird interface class to this module. The more options provided,
the more accurate the loading of compiled objects is.
"""
if inspect.ismodule(obj):
if parent_context is not None:
# Modules don't have parents, be careful with caching: recurse.
return create(evaluator, obj)
else:
if parent_context is None and obj is not _builtins:
return create(evaluator, obj, create(evaluator, _builtins))
if not inspect.ismodule(obj):
faked = fake.get_faked(module and module.obj, obj)
if faked is not None:
faked.parent = parent
return faked
try:
faked = fake.get_faked(evaluator, module, obj, parent_context=parent_context)
if faked.type == 'funcdef':
from jedi.evaluate.context.function import FunctionContext
return FunctionContext(evaluator, parent_context, faked)
except fake.FakeDoesNotExist:
pass
return CompiledObject(obj, parent)
return CompiledObject(evaluator, obj, parent_context, faked)


@@ -6,17 +6,48 @@ mixing in Python code, the autocompletion should work much better for builtins.
import os
import inspect
import types
from itertools import chain
from jedi._compatibility import is_py3, builtins, unicode
from jedi.parser import Parser
from jedi.parser import tokenize
from jedi.parser.representation import Class
from jedi.evaluate.helpers import FakeName
from parso.python import tree
from jedi._compatibility import is_py3, builtins, unicode, is_py34
modules = {}
def _load_faked_module(module):
MethodDescriptorType = type(str.replace)
# These are not considered classes and access is granted even though they have
# a __class__ attribute.
NOT_CLASS_TYPES = (
types.BuiltinFunctionType,
types.CodeType,
types.FrameType,
types.FunctionType,
types.GeneratorType,
types.GetSetDescriptorType,
types.LambdaType,
types.MemberDescriptorType,
types.MethodType,
types.ModuleType,
types.TracebackType,
MethodDescriptorType
)
if is_py3:
NOT_CLASS_TYPES += (
types.MappingProxyType,
types.SimpleNamespace
)
if is_py34:
NOT_CLASS_TYPES += (types.DynamicClassAttribute,)
class FakeDoesNotExist(Exception):
pass
def _load_faked_module(grammar, module):
module_name = module.__name__
if module_name == '__builtin__' and not is_py3:
module_name = 'builtins'
@@ -31,22 +62,21 @@ def _load_faked_module(module):
except IOError:
modules[module_name] = None
return
module = Parser(unicode(source), module_name).module
modules[module_name] = module
modules[module_name] = m = grammar.parse(unicode(source))
if module_name == 'builtins' and not is_py3:
# There are two implementations of `open` for either python 2/3.
# -> Rename the python2 version (`look at fake/builtins.pym`).
open_func = search_scope(module, 'open')
open_func.name = FakeName('open_python3')
open_func = search_scope(module, 'open_python2')
open_func.name = FakeName('open')
return module
open_func = _search_scope(m, 'open')
open_func.children[1].value = 'open_python3'
open_func = _search_scope(m, 'open_python2')
open_func.children[1].value = 'open'
return m
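The rename works because in parso the second child of a funcdef is its Name leaf, and `get_code()` serializes whatever values the leaves currently carry. A hedged sketch, assuming parso's tree API behaves as in the version used here:

import parso

module = parso.parse('def open_python2(): pass\n')
func = next(module.iter_funcdefs())
func.children[1].value = 'open'  # children[1] is the Name leaf
print(module.get_code())         # def open(): pass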
def search_scope(scope, obj_name):
for s in scope.subscopes:
if str(s.name) == obj_name:
def _search_scope(scope, obj_name):
for s in chain(scope.iter_classdefs(), scope.iter_funcdefs()):
if s.name.value == obj_name:
return s
@@ -64,53 +94,120 @@ def get_module(obj):
# Unfortunately in some cases like `int` there's no __module__
return builtins
else:
return __import__(imp_plz)
if imp_plz is None:
# Happens for example in `(_ for _ in []).send.__module__`.
return builtins
else:
try:
return __import__(imp_plz)
except ImportError:
# __module__ can be something arbitrary that doesn't exist.
return builtins
def _faked(module, obj, name):
def _faked(grammar, module, obj, name):
# Crazy underscore actions to try to escape all the internal madness.
if module is None:
module = get_module(obj)
faked_mod = _load_faked_module(module)
faked_mod = _load_faked_module(grammar, module)
if faked_mod is None:
return
return None, None
# Having the module as a `parser.representation.module`, we need to scan
# Having the module as a `parser.python.tree.Module`, we need to scan
# for methods.
if name is None:
if inspect.isbuiltin(obj):
return search_scope(faked_mod, obj.__name__)
if inspect.isbuiltin(obj) or inspect.isclass(obj):
return _search_scope(faked_mod, obj.__name__), faked_mod
elif not inspect.isclass(obj):
# object is a method or descriptor
cls = search_scope(faked_mod, obj.__objclass__.__name__)
if cls is None:
return
return search_scope(cls, obj.__name__)
try:
objclass = obj.__objclass__
except AttributeError:
return None, None
else:
cls = _search_scope(faked_mod, objclass.__name__)
if cls is None:
return None, None
return _search_scope(cls, obj.__name__), faked_mod
else:
if obj == module:
return search_scope(faked_mod, name)
if obj is module:
return _search_scope(faked_mod, name), faked_mod
else:
cls = search_scope(faked_mod, obj.__name__)
try:
cls_name = obj.__name__
except AttributeError:
return None, None
cls = _search_scope(faked_mod, cls_name)
if cls is None:
return
return search_scope(cls, name)
return None, None
return _search_scope(cls, name), faked_mod
return None, None
def get_faked(module, obj, name=None):
obj = obj.__class__ if is_class_instance(obj) else obj
result = _faked(module, obj, name)
if not isinstance(result, Class) and result is not None:
def memoize_faked(obj):
"""
A typical memoize function that ignores issues with non-hashable results.
"""
cache = obj.cache = {}
def memoizer(*args, **kwargs):
key = (obj, args, frozenset(kwargs.items()))
try:
result = cache[key]
except (TypeError, ValueError):
return obj(*args, **kwargs)
except KeyError:
result = obj(*args, **kwargs)
if result is not None:
cache[key] = obj(*args, **kwargs)
return result
else:
return result
return memoizer
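A few quirks of this memoizer are easy to miss: unhashable arguments silently bypass the cache (the TypeError/ValueError branch), `None` results are never cached, and the cached value comes from a second call to `obj(*args, **kwargs)`, so the function body runs twice when an entry is first stored. A quick illustration with a hypothetical function, assuming the memoize_faked above is in scope:

calls = []

@memoize_faked
def lookup(name):  # hypothetical, not jedi code
    calls.append(name)
    return name.upper() if name != 'missing' else None

lookup('open'); lookup('open')        # cached after the first (double) call
lookup('missing'); lookup('missing')  # None is recomputed every time
assert calls == ['open', 'open', 'missing', 'missing']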
@memoize_faked
def _get_faked(grammar, module, obj, name=None):
result, fake_module = _faked(grammar, module, obj, name)
if result is None:
# We're not interested in classes. What we want is functions.
raise FakeDoesNotExist
elif result.type == 'classdef':
return result, fake_module
else:
# Set the docstr which was previously not set (faked modules don't
# contain it).
doc = '''"""%s"""''' % obj.__doc__ # TODO need escapes.
result.add_docstr(tokenize.Token(tokenize.STRING, doc, (0, 0)))
return result
assert result.type == 'funcdef'
doc = '"""%s"""' % obj.__doc__ # TODO need escapes.
suite = result.children[-1]
string = tree.String(doc, (0, 0), '')
new_line = tree.Newline('\n', (0, 0))
docstr_node = tree.PythonNode('simple_stmt', [string, new_line])
suite.children.insert(1, docstr_node)
return result, fake_module
def get_faked(evaluator, module, obj, name=None, parent_context=None):
if parent_context and parent_context.tree_node is not None:
# Try to search in already clearly defined stuff.
found = _search_scope(parent_context.tree_node, name)
if found is not None:
return found
else:
raise FakeDoesNotExist
faked, fake_module = _get_faked(evaluator.latest_grammar, module and module.obj, obj, name)
if module is not None:
module.get_used_names = fake_module.get_used_names
return faked
def is_class_instance(obj):
"""Like inspect.* methods."""
return not (inspect.isclass(obj) or inspect.ismodule(obj)
or inspect.isbuiltin(obj) or inspect.ismethod(obj)
or inspect.ismethoddescriptor(obj) or inspect.iscode(obj)
or inspect.isgenerator(obj))
try:
cls = obj.__class__
except AttributeError:
return False
else:
return cls != type and not issubclass(cls, NOT_CLASS_TYPES)
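With the new check above, anything whose `__class__` is neither `type` nor one of the NOT_CLASS_TYPES counts as a class instance. Roughly (Python 3, assuming the definitions above):

import types

class Foo(object):
    pass

print(is_class_instance(Foo()))   # True
print(is_class_instance(Foo))     # False: its __class__ is type
print(is_class_instance(types))   # False: ModuleType is in NOT_CLASS_TYPES
print(is_class_instance(print))   # False: BuiltinFunctionType is excluded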


@@ -5,5 +5,5 @@ class partial():
self.__keywords = keywords
def __call__(self, *args, **kwargs):
# I know this doesn't work in Python, but in Jedi it does ;-)
return self.__func(*self.__args, *args, **self.keywords, **kwargs)
# TODO should be **dict(self.__keywords, **kwargs)
return self.__func(*(self.__args + args), **self.__keywords)


@@ -1,8 +1,9 @@
def proxy(object, callback=None):
return object
class weakref():
class ref():
def __init__(self, object, callback=None):
self.__object = object
def __call__(self):
return self.__object


@@ -7,11 +7,14 @@ possible for the auto completion.
def next(iterator, default=None):
if hasattr("next"):
return iterator.next()
if random.choice([0, 1]):
if hasattr("next"):
return iterator.next()
else:
return iterator.__next__()
else:
return iterator.__next__()
return default
if default is not None:
return default
def iter(collection, sentinel=None):
@@ -29,9 +32,16 @@ def range(start, stop=None, step=1):
class file():
def __iter__(self):
yield ''
def next(self):
return ''
def readlines(self):
return ['']
def __enter__(self):
return self
class xrange():
# Attention: this function doesn't exist in Py3k (there it is range).
@@ -121,7 +131,7 @@ class list():
return self.__iterable[y]
def pop(self):
return self.__iterable[-1]
return self.__iterable[int()]
class tuple():
@@ -153,7 +163,7 @@ class set():
yield i
def pop(self):
return self.__iterable.pop()
return list(self.__iterable)[-1]
def copy(self):
return self
@@ -199,11 +209,29 @@ class dict():
except KeyError:
return d
def values(self):
return self.__elements.values()
def setdefault(self, k, d):
# TODO maybe also return the content
return d
class enumerate():
def __init__(self, sequence, start=0):
self.__sequence = sequence
def __iter__(self):
for i in self.__sequence:
yield 1, i
def __next__(self):
return next(self.__iter__())
def next(self):
return next(self.__iter__())
class reversed():
def __init__(self, sequence):
self.__sequence = sequence
@@ -235,6 +263,11 @@ class str():
def __init__(self, obj):
pass
def strip(self):
return str()
def split(self):
return [str()]
class type():
def mro():


@@ -1,3 +1,12 @@
class TextIOWrapper():
def __next__(self):
return 'hacked io return'
return str()
def __iter__(self):
yield str()
def readlines(self):
return ['']
def __enter__(self):
return self


@@ -0,0 +1,33 @@
# Just copied this code from Python 3.6.
class itemgetter:
"""
Return a callable object that fetches the given item(s) from its operand.
After f = itemgetter(2), the call f(r) returns r[2].
After g = itemgetter(2, 5, 3), the call g(r) returns (r[2], r[5], r[3])
"""
__slots__ = ('_items', '_call')
def __init__(self, item, *items):
if not items:
self._items = (item,)
def func(obj):
return obj[item]
self._call = func
else:
self._items = items = (item,) + items
def func(obj):
return tuple(obj[i] for i in items)
self._call = func
def __call__(self, obj):
return self._call(obj)
def __repr__(self):
return '%s.%s(%s)' % (self.__class__.__module__,
self.__class__.__name__,
', '.join(map(repr, self._items)))
def __reduce__(self):
return self.__class__, self._items
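The docstring above already states the contract; for completeness, a short usage example (the behavior matches the stdlib operator.itemgetter this fake copies):

row = ('id-7', 'alice', 42)

f = itemgetter(1)
g = itemgetter(1, 2)
assert f(row) == 'alice'
assert g(row) == ('alice', 42)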


@@ -0,0 +1,175 @@
"""
A static version of getattr.
This is a backport of the Python 3 code with a little bit of additional
information returned to enable Jedi to make decisions.
"""
import types
from jedi._compatibility import py_version
_sentinel = object()
def _check_instance(obj, attr):
instance_dict = {}
try:
instance_dict = object.__getattribute__(obj, "__dict__")
except AttributeError:
pass
return dict.get(instance_dict, attr, _sentinel)
def _check_class(klass, attr):
for entry in _static_getmro(klass):
if _shadowed_dict(type(entry)) is _sentinel:
try:
return entry.__dict__[attr]
except KeyError:
pass
return _sentinel
def _is_type(obj):
try:
_static_getmro(obj)
except TypeError:
return False
return True
def _shadowed_dict_newstyle(klass):
dict_attr = type.__dict__["__dict__"]
for entry in _static_getmro(klass):
try:
class_dict = dict_attr.__get__(entry)["__dict__"]
except KeyError:
pass
else:
if not (type(class_dict) is types.GetSetDescriptorType and
class_dict.__name__ == "__dict__" and
class_dict.__objclass__ is entry):
return class_dict
return _sentinel
def _static_getmro_newstyle(klass):
return type.__dict__['__mro__'].__get__(klass)
if py_version >= 30:
_shadowed_dict = _shadowed_dict_newstyle
_get_type = type
_static_getmro = _static_getmro_newstyle
else:
def _shadowed_dict(klass):
"""
In Python 2 __dict__ is not overwritable:
class Foo(object): pass
setattr(Foo, '__dict__', 4)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: __dict__ must be a dictionary object
It applies to both newstyle and oldstyle classes:
class Foo(object): pass
setattr(Foo, '__dict__', 4)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: attribute '__dict__' of 'type' objects is not writable
It also applies to instances of those objects. However, to keep things
straightforward, newstyle classes always use the complicated way of
accessing it while oldstyle classes just use getattr.
"""
if type(klass) is _oldstyle_class_type:
return getattr(klass, '__dict__', _sentinel)
return _shadowed_dict_newstyle(klass)
class _OldStyleClass():
pass
_oldstyle_instance_type = type(_OldStyleClass())
_oldstyle_class_type = type(_OldStyleClass)
def _get_type(obj):
type_ = object.__getattribute__(obj, '__class__')
if type_ is _oldstyle_instance_type:
# Somehow for old style classes we need to access it directly.
return obj.__class__
return type_
def _static_getmro(klass):
if type(klass) is _oldstyle_class_type:
def oldstyle_mro(klass):
"""
Oldstyle mro is a really simplistic way of looking up the mro:
https://stackoverflow.com/questions/54867/what-is-the-difference-between-old-style-and-new-style-classes-in-python
"""
yield klass
for base in klass.__bases__:
for yield_from in oldstyle_mro(base):
yield yield_from
return oldstyle_mro(klass)
return _static_getmro_newstyle(klass)
def _safe_hasattr(obj, name):
return _check_class(_get_type(obj), name) is not _sentinel
def _safe_is_data_descriptor(obj):
return (_safe_hasattr(obj, '__set__') or _safe_hasattr(obj, '__delete__'))
def getattr_static(obj, attr, default=_sentinel):
"""Retrieve attributes without triggering dynamic lookup via the
descriptor protocol, __getattr__ or __getattribute__.
Note: this function may not be able to retrieve all attributes
that getattr can fetch (like dynamically created attributes)
and may find attributes that getattr can't (like descriptors
that raise AttributeError). It can also return descriptor objects
instead of instance members in some cases. See the
documentation for details.
Returns a tuple `(attr, is_get_descriptor)`. is_get_descriptor means that
the attribute is a descriptor that has a `__get__` attribute.
"""
instance_result = _sentinel
if not _is_type(obj):
klass = _get_type(obj)
dict_attr = _shadowed_dict(klass)
if (dict_attr is _sentinel or
type(dict_attr) is types.MemberDescriptorType):
instance_result = _check_instance(obj, attr)
else:
klass = obj
klass_result = _check_class(klass, attr)
if instance_result is not _sentinel and klass_result is not _sentinel:
if _safe_hasattr(klass_result, '__get__') \
and _safe_is_data_descriptor(klass_result):
# A get/set descriptor has priority over everything.
return klass_result, True
if instance_result is not _sentinel:
return instance_result, False
if klass_result is not _sentinel:
return klass_result, _safe_hasattr(klass_result, '__get__')
if obj is klass:
# for types we check the metaclass too
for entry in _static_getmro(type(klass)):
if _shadowed_dict(type(entry)) is _sentinel:
try:
return entry.__dict__[attr], False
except KeyError:
pass
if default is not _sentinel:
return default, False
raise AttributeError(attr)
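What this buys Jedi is attribute lookup without code execution. A short demonstration using the getattr_static defined above:

class Config(object):
    @property
    def token(self):
        raise RuntimeError('side effect!')  # would run under plain getattr()

attr, is_get_descriptor = getattr_static(Config(), 'token')
print(type(attr))         # <class 'property'>; nothing was executed
print(is_get_descriptor)  # True: it has __get__, so Jedi won't evaluate it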


@@ -0,0 +1,231 @@
"""
Used only for REPL Completion.
"""
import inspect
import os
from jedi import settings
from jedi.evaluate import compiled
from jedi.cache import underscore_memoization
from jedi.evaluate import imports
from jedi.evaluate.base_context import Context, ContextSet
from jedi.evaluate.context import ModuleContext
from jedi.evaluate.cache import evaluator_function_cache
from jedi.evaluate.compiled.getattr_static import getattr_static
class MixedObject(object):
"""
A ``MixedObject`` is used in two ways:
1. It uses the default logic of ``parser.python.tree`` objects,
2. except for getattr calls. The names dicts are generated in a fashion
like ``CompiledObject``.
This combined logic makes it possible to provide more powerful REPL
completion. It allows side effects that are not noticeable with the default
parser structure to still be completable.
The biggest difference between CompiledObject and MixedObject is that we
are generally dealing with Python code and not with C code. This generates
fewer special cases, because in Python you don't have the same freedom to
modify the runtime.
"""
def __init__(self, evaluator, parent_context, compiled_object, tree_context):
self.evaluator = evaluator
self.parent_context = parent_context
self.compiled_object = compiled_object
self._context = tree_context
self.obj = compiled_object.obj
# We have to overwrite everything that has to do with trailers, name
# lookups and filters to make it possible to route name lookups towards
# compiled objects and the rest towards tree node contexts.
def py__getattribute__(*args, **kwargs):
return Context.py__getattribute__(*args, **kwargs)
def get_filters(self, *args, **kwargs):
yield MixedObjectFilter(self.evaluator, self)
def __repr__(self):
return '<%s: %s>' % (type(self).__name__, repr(self.obj))
def __getattr__(self, name):
return getattr(self._context, name)
class MixedName(compiled.CompiledName):
"""
The ``CompiledName._compiled_object`` is our MixedObject.
"""
@property
def start_pos(self):
contexts = list(self.infer())
if not contexts:
# This means a start_pos that doesn't exist (compiled objects).
return (0, 0)
return contexts[0].name.start_pos
@start_pos.setter
def start_pos(self, value):
# Ignore the __init__'s start_pos setter call.
pass
@underscore_memoization
def infer(self):
obj = self.parent_context.obj
try:
# TODO use logic from compiled.CompiledObjectFilter
obj = getattr(obj, self.string_name)
except AttributeError:
# Happens e.g. in properties of
# PyQt4.QtGui.QStyleOptionComboBox.currentText
# -> just set it to None
obj = None
return ContextSet(
_create(self._evaluator, obj, parent_context=self.parent_context)
)
@property
def api_type(self):
return next(iter(self.infer())).api_type
class MixedObjectFilter(compiled.CompiledObjectFilter):
name_class = MixedName
def __init__(self, evaluator, mixed_object, is_instance=False):
super(MixedObjectFilter, self).__init__(
evaluator, mixed_object, is_instance)
self._mixed_object = mixed_object
#def _create(self, name):
#return MixedName(self._evaluator, self._compiled_object, name)
@evaluator_function_cache()
def _load_module(evaluator, path, python_object):
module = evaluator.grammar.parse(
path=path,
cache=True,
diff_cache=True,
cache_path=settings.cache_directory
).get_root_node()
python_module = inspect.getmodule(python_object)
evaluator.modules[python_module.__name__] = module
return module
def _get_object_to_check(python_object):
"""Check if inspect.getfile has a chance to find the source."""
if (inspect.ismodule(python_object) or
inspect.isclass(python_object) or
inspect.ismethod(python_object) or
inspect.isfunction(python_object) or
inspect.istraceback(python_object) or
inspect.isframe(python_object) or
inspect.iscode(python_object)):
return python_object
try:
return python_object.__class__
except AttributeError:
raise TypeError # Prevents computation of `repr` within inspect.
def find_syntax_node_name(evaluator, python_object):
try:
python_object = _get_object_to_check(python_object)
path = inspect.getsourcefile(python_object)
except TypeError:
# The type might not be known (e.g. class_with_dict.__weakref__)
return None, None
if path is None or not os.path.exists(path):
# The path might not exist or be e.g. <stdin>.
return None, None
module = _load_module(evaluator, path, python_object)
if inspect.ismodule(python_object):
# We don't need to check names for modules, because there's not really
# a way to write a module in a module in Python (and also __name__ can
# be something like ``email.utils``).
return module, path
try:
name_str = python_object.__name__
except AttributeError:
# Stuff like python_function.__code__.
return None, None
if name_str == '<lambda>':
return None, None # It's too hard to find lambdas.
# Doesn't always work (e.g. os.stat_result)
try:
names = module.get_used_names()[name_str]
except KeyError:
return None, None
names = [n for n in names if n.is_definition()]
try:
code = python_object.__code__
# By using the line number of a code object we make the lookup in a
# file pretty easy. There's still a possibility of people defining
# stuff like ``a = 3; foo(a); a = 4`` on the same line, but if people
# do so we just don't care.
line_nr = code.co_firstlineno
except AttributeError:
pass
else:
line_names = [name for name in names if name.start_pos[0] == line_nr]
# There's a chance that the object is not available anymore, because
# the code has changed in the background.
if line_names:
return line_names[-1].parent, path
# It's really hard to actually get the right definition; as a last
# resort we just return the last one. This choice might lead to odd
# completions at some points but will lead to mostly correct type
# inference, because people tend to define a public name in a module only
# once.
return names[-1].parent, path
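The `co_firstlineno` lookup is plain CPython introspection; this is all it relies on:

import inspect

def sample():
    return 1

code = sample.__code__
# The line of the `def` statement plus the source file is enough to pick
# the matching tree node among several definitions of the same name.
print(code.co_firstlineno, inspect.getsourcefile(sample))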
@compiled.compiled_objects_cache('mixed_cache')
def _create(evaluator, obj, parent_context=None, *args):
tree_node, path = find_syntax_node_name(evaluator, obj)
compiled_object = compiled.create(
evaluator, obj, parent_context=parent_context.compiled_object)
if tree_node is None:
return compiled_object
module_node = tree_node.get_root_node()
if parent_context.tree_node.get_root_node() == module_node:
module_context = parent_context.get_root_context()
else:
module_context = ModuleContext(evaluator, module_node, path=path)
# TODO this __name__ is probably wrong.
name = compiled_object.get_root_context().py__name__()
imports.add_module(evaluator, name, module_context)
tree_context = module_context.create_context(
tree_node,
node_is_context=True,
node_is_object=True
)
if tree_node.type == 'classdef':
if not inspect.isclass(obj):
# Is an instance, not a class.
tree_context, = tree_context.execute_evaluated()
return MixedObject(
evaluator,
parent_context,
compiled_object,
tree_context=tree_context
)


@@ -0,0 +1,5 @@
from jedi.evaluate.context.module import ModuleContext
from jedi.evaluate.context.klass import ClassContext
from jedi.evaluate.context.function import FunctionContext, FunctionExecutionContext
from jedi.evaluate.context.instance import AnonymousInstance, BoundMethod, \
CompiledInstance, AbstractInstanceContext, TreeInstance


@@ -0,0 +1,226 @@
from parso.python import tree
from jedi._compatibility import use_metaclass
from jedi import debug
from jedi.evaluate.cache import evaluator_method_cache, CachedMetaClass
from jedi.evaluate import compiled
from jedi.evaluate import recursion
from jedi.evaluate import docstrings
from jedi.evaluate import pep0484
from jedi.evaluate import flow_analysis
from jedi.evaluate import helpers
from jedi.evaluate.arguments import AnonymousArguments
from jedi.evaluate.filters import ParserTreeFilter, FunctionExecutionFilter, \
ContextName, AbstractNameDefinition, ParamName
from jedi.evaluate.base_context import ContextualizedNode, NO_CONTEXTS, \
ContextSet, TreeContext
from jedi.evaluate.lazy_context import LazyKnownContexts, LazyKnownContext, \
LazyTreeContext
from jedi.evaluate.context import iterable
from jedi import parser_utils
from jedi.evaluate.parser_cache import get_yield_exprs
class LambdaName(AbstractNameDefinition):
string_name = '<lambda>'
def __init__(self, lambda_context):
self._lambda_context = lambda_context
self.parent_context = lambda_context.parent_context
def start_pos(self):
return self._lambda_context.tree_node.start_pos
def infer(self):
return ContextSet(self._lambda_context)
class FunctionContext(use_metaclass(CachedMetaClass, TreeContext)):
"""
Needed because of decorators. Decorators are evaluated here.
"""
api_type = 'function'
def __init__(self, evaluator, parent_context, funcdef):
""" This should not be called directly """
super(FunctionContext, self).__init__(evaluator, parent_context)
self.tree_node = funcdef
def get_filters(self, search_global, until_position=None, origin_scope=None):
if search_global:
yield ParserTreeFilter(
self.evaluator,
context=self,
until_position=until_position,
origin_scope=origin_scope
)
else:
scope = self.py__class__()
for filter in scope.get_filters(search_global=False, origin_scope=origin_scope):
yield filter
def infer_function_execution(self, function_execution):
"""
Created to be used by inheritance.
"""
yield_exprs = get_yield_exprs(self.evaluator, self.tree_node)
if yield_exprs:
return ContextSet(iterable.Generator(self.evaluator, function_execution))
else:
return function_execution.get_return_values()
def get_function_execution(self, arguments=None):
if arguments is None:
arguments = AnonymousArguments()
return FunctionExecutionContext(self.evaluator, self.parent_context, self, arguments)
def py__call__(self, arguments):
function_execution = self.get_function_execution(arguments)
return self.infer_function_execution(function_execution)
def py__class__(self):
# This differentiation is only necessary for Python2. Python3 does not
# use a different method class.
if isinstance(parser_utils.get_parent_scope(self.tree_node), tree.Class):
name = 'METHOD_CLASS'
else:
name = 'FUNCTION_CLASS'
return compiled.get_special_object(self.evaluator, name)
@property
def name(self):
if self.tree_node.type == 'lambdef':
return LambdaName(self)
return ContextName(self, self.tree_node.name)
def get_param_names(self):
function_execution = self.get_function_execution()
return [ParamName(function_execution, param.name)
for param in self.tree_node.get_params()]
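The Python 2 / Python 3 split behind `py__class__` above is observable directly: Python 2 wraps functions reached through a class in an unbound-method type, while Python 3 keeps them plain functions. Under Python 3:

import types

class A(object):
    def m(self):
        pass

print(type(A.__dict__['m']) is types.FunctionType)  # True
print(type(A().m) is types.MethodType)              # True, only when bound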
class FunctionExecutionContext(TreeContext):
"""
This class is used to evaluate functions and their returns.
This is the most complicated class, because it contains the logic to
transfer parameters. It is even more complicated, because there may be
multiple calls to functions and recursion has to be avoided. But this is
responsibility of the decorators.
"""
function_execution_filter = FunctionExecutionFilter
def __init__(self, evaluator, parent_context, function_context, var_args):
super(FunctionExecutionContext, self).__init__(evaluator, parent_context)
self.function_context = function_context
self.tree_node = function_context.tree_node
self.var_args = var_args
@evaluator_method_cache(default=NO_CONTEXTS)
@recursion.execution_recursion_decorator()
def get_return_values(self, check_yields=False):
funcdef = self.tree_node
if funcdef.type == 'lambdef':
return self.evaluator.eval_element(self, funcdef.children[-1])
if check_yields:
context_set = NO_CONTEXTS
returns = get_yield_exprs(self.evaluator, funcdef)
else:
returns = funcdef.iter_return_stmts()
context_set = docstrings.infer_return_types(self.function_context)
context_set |= pep0484.infer_return_types(self.function_context)
for r in returns:
check = flow_analysis.reachability_check(self, funcdef, r)
if check is flow_analysis.UNREACHABLE:
debug.dbg('Return unreachable: %s', r)
else:
if check_yields:
context_set |= ContextSet.from_sets(
lazy_context.infer()
for lazy_context in self._eval_yield(r)
)
else:
try:
children = r.children
except AttributeError:
context_set |= ContextSet(compiled.create(self.evaluator, None))
else:
context_set |= self.eval_node(children[1])
if check is flow_analysis.REACHABLE:
debug.dbg('Return reachable: %s', r)
break
return context_set
def _eval_yield(self, yield_expr):
if yield_expr.type == 'keyword':
# `yield` just yields None.
yield LazyKnownContext(compiled.create(self.evaluator, None))
return
node = yield_expr.children[1]
if node.type == 'yield_arg': # It must be a yield from.
cn = ContextualizedNode(self, node.children[1])
for lazy_context in cn.infer().iterate(cn):
yield lazy_context
else:
yield LazyTreeContext(self, node)
@recursion.execution_recursion_decorator(default=iter([]))
def get_yield_values(self):
for_parents = [(y, tree.search_ancestor(y, 'for_stmt', 'funcdef',
'while_stmt', 'if_stmt'))
for y in get_yield_exprs(self.evaluator, self.tree_node)]
# Calculate if the yields are placed within the same for loop.
yields_order = []
last_for_stmt = None
for yield_, for_stmt in for_parents:
# For really simple for loops we can predict the order. Otherwise
# we just ignore it.
parent = for_stmt.parent
if parent.type == 'suite':
parent = parent.parent
if for_stmt.type == 'for_stmt' and parent == self.tree_node \
and parser_utils.for_stmt_defines_one_name(for_stmt): # Simplicity for now.
if for_stmt == last_for_stmt:
yields_order[-1][1].append(yield_)
else:
yields_order.append((for_stmt, [yield_]))
elif for_stmt == self.tree_node:
yields_order.append((None, [yield_]))
else:
types = self.get_return_values(check_yields=True)
if types:
yield LazyKnownContexts(types)
return
last_for_stmt = for_stmt
for for_stmt, yields in yields_order:
if for_stmt is None:
# No for_stmt, just normal yields.
for yield_ in yields:
for result in self._eval_yield(yield_):
yield result
else:
input_node = for_stmt.get_testlist()
cn = ContextualizedNode(self, input_node)
ordered = cn.infer().iterate(cn)
ordered = list(ordered)
for lazy_context in ordered:
dct = {str(for_stmt.children[1].value): lazy_context.infer()}
with helpers.predefine_names(self, for_stmt, dct):
for yield_in_same_for_stmt in yields:
for result in self._eval_yield(yield_in_same_for_stmt):
yield result
def get_filters(self, search_global, until_position=None, origin_scope=None):
yield self.function_execution_filter(self.evaluator, self,
until_position=until_position,
origin_scope=origin_scope)
@evaluator_method_cache()
def get_params(self):
return self.var_args.get_params(self)


@@ -0,0 +1,435 @@
from abc import abstractproperty
from jedi._compatibility import is_py3
from jedi import debug
from jedi.evaluate import compiled
from jedi.evaluate import filters
from jedi.evaluate.base_context import Context, NO_CONTEXTS, ContextSet, \
iterator_to_context_set
from jedi.evaluate.lazy_context import LazyKnownContext, LazyKnownContexts
from jedi.evaluate.cache import evaluator_method_cache
from jedi.evaluate.arguments import AbstractArguments, AnonymousArguments
from jedi.cache import memoize_method
from jedi.evaluate.context.function import FunctionExecutionContext, FunctionContext
from jedi.evaluate.context.klass import ClassContext, apply_py__get__
from jedi.evaluate.context import iterable
from jedi.parser_utils import get_parent_scope
class InstanceFunctionExecution(FunctionExecutionContext):
def __init__(self, instance, parent_context, function_context, var_args):
self.instance = instance
var_args = InstanceVarArgs(self, var_args)
super(InstanceFunctionExecution, self).__init__(
instance.evaluator, parent_context, function_context, var_args)
class AnonymousInstanceFunctionExecution(FunctionExecutionContext):
function_execution_filter = filters.AnonymousInstanceFunctionExecutionFilter
def __init__(self, instance, parent_context, function_context, var_args):
self.instance = instance
super(AnonymousInstanceFunctionExecution, self).__init__(
instance.evaluator, parent_context, function_context, var_args)
class AbstractInstanceContext(Context):
"""
This class is used to evaluate instances.
"""
api_type = 'instance'
function_execution_cls = InstanceFunctionExecution
def __init__(self, evaluator, parent_context, class_context, var_args):
super(AbstractInstanceContext, self).__init__(evaluator, parent_context)
# Generated instances are classes that are just generated by self
# (no var_args used).
self.class_context = class_context
self.var_args = var_args
def is_class(self):
return False
@property
def py__call__(self):
names = self.get_function_slot_names('__call__')
if not names:
# Means the Instance is not callable.
raise AttributeError
def execute(arguments):
return ContextSet.from_sets(name.execute(arguments) for name in names)
return execute
def py__class__(self):
return self.class_context
def py__bool__(self):
# Signal that we don't know about the bool type.
return None
def get_function_slot_names(self, name):
# Python classes don't look at the dictionary of the instance when
# looking up `__call__`. This is something that has to do with Python's
# internal slot system (note: not __slots__, but C slots).
for filter in self.get_filters(include_self_names=False):
names = filter.get(name)
if names:
return names
return []
def execute_function_slots(self, names, *evaluated_args):
return ContextSet.from_sets(
name.execute_evaluated(*evaluated_args)
for name in names
)
def py__get__(self, obj):
# Arguments in __get__ descriptors are obj, class.
# `method` is the new parent of the array, don't know if that's good.
names = self.get_function_slot_names('__get__')
if names:
if isinstance(obj, AbstractInstanceContext):
return self.execute_function_slots(names, obj, obj.class_context)
else:
none_obj = compiled.create(self.evaluator, None)
return self.execute_function_slots(names, none_obj, obj)
else:
return ContextSet(self)
def get_filters(self, search_global=None, until_position=None,
origin_scope=None, include_self_names=True):
if include_self_names:
for cls in self.class_context.py__mro__():
if isinstance(cls, compiled.CompiledObject):
if cls.tree_node is not None:
# In this case we're talking about a fake object, it
# doesn't make sense for normal compiled objects to
# search for self variables.
yield SelfNameFilter(self.evaluator, self, cls, origin_scope)
else:
yield SelfNameFilter(self.evaluator, self, cls, origin_scope)
for cls in self.class_context.py__mro__():
if isinstance(cls, compiled.CompiledObject):
yield CompiledInstanceClassFilter(self.evaluator, self, cls)
else:
yield InstanceClassFilter(self.evaluator, self, cls, origin_scope)
def py__getitem__(self, index):
try:
names = self.get_function_slot_names('__getitem__')
except KeyError:
debug.warning('No __getitem__, cannot access the array.')
return NO_CONTEXTS
else:
index_obj = compiled.create(self.evaluator, index)
return self.execute_function_slots(names, index_obj)
def py__iter__(self):
iter_slot_names = self.get_function_slot_names('__iter__')
if not iter_slot_names:
debug.warning('No __iter__ on %s.' % self)
return
for generator in self.execute_function_slots(iter_slot_names):
if isinstance(generator, AbstractInstanceContext):
# `__next__` logic.
name = '__next__' if is_py3 else 'next'
iter_slot_names = generator.get_function_slot_names(name)
if iter_slot_names:
yield LazyKnownContexts(
generator.execute_function_slots(iter_slot_names)
)
else:
debug.warning('Instance has no __next__ function in %s.', generator)
else:
for lazy_context in generator.py__iter__():
yield lazy_context
@abstractproperty
def name(self):
pass
def _create_init_execution(self, class_context, func_node):
bound_method = BoundMethod(
self.evaluator, self, class_context, self.parent_context, func_node
)
return self.function_execution_cls(
self,
class_context.parent_context,
bound_method,
self.var_args
)
def create_init_executions(self):
for name in self.get_function_slot_names('__init__'):
if isinstance(name, LazyInstanceName):
yield self._create_init_execution(name.class_context, name.tree_name.parent)
@evaluator_method_cache()
def create_instance_context(self, class_context, node):
if node.parent.type in ('funcdef', 'classdef'):
node = node.parent
scope = get_parent_scope(node)
if scope == class_context.tree_node:
return class_context
else:
parent_context = self.create_instance_context(class_context, scope)
if scope.type == 'funcdef':
if scope.name.value == '__init__' and parent_context == class_context:
return self._create_init_execution(class_context, scope)
else:
bound_method = BoundMethod(
self.evaluator, self, class_context,
parent_context, scope
)
return bound_method.get_function_execution()
elif scope.type == 'classdef':
class_context = ClassContext(self.evaluator, scope, parent_context)
return class_context
elif scope.type == 'comp_for':
# Comprehensions currently don't have a special scope in Jedi.
return self.create_instance_context(class_context, scope)
else:
raise NotImplementedError
return class_context
def __repr__(self):
return "<%s of %s(%s)>" % (self.__class__.__name__, self.class_context,
self.var_args)
class CompiledInstance(AbstractInstanceContext):
def __init__(self, *args, **kwargs):
super(CompiledInstance, self).__init__(*args, **kwargs)
# I don't think that dynamic append lookups should happen here. That
# sounds more like something that should go to py__iter__.
if self.class_context.name.string_name in ['list', 'set'] \
and self.parent_context.get_root_context() == self.evaluator.BUILTINS:
# compare the module path with the builtin name.
self.var_args = iterable.get_dynamic_array_instance(self)
@property
def name(self):
return compiled.CompiledContextName(self, self.class_context.name.string_name)
def create_instance_context(self, class_context, node):
if get_parent_scope(node).type == 'classdef':
return class_context
else:
return super(CompiledInstance, self).create_instance_context(class_context, node)
class TreeInstance(AbstractInstanceContext):
def __init__(self, evaluator, parent_context, class_context, var_args):
super(TreeInstance, self).__init__(evaluator, parent_context,
class_context, var_args)
self.tree_node = class_context.tree_node
@property
def name(self):
return filters.ContextName(self, self.class_context.name.tree_name)
class AnonymousInstance(TreeInstance):
function_execution_cls = AnonymousInstanceFunctionExecution
def __init__(self, evaluator, parent_context, class_context):
super(AnonymousInstance, self).__init__(
evaluator,
parent_context,
class_context,
var_args=AnonymousArguments(),
)
class CompiledInstanceName(compiled.CompiledName):
def __init__(self, evaluator, instance, parent_context, name):
super(CompiledInstanceName, self).__init__(evaluator, parent_context, name)
self._instance = instance
@iterator_to_context_set
def infer(self):
for result_context in super(CompiledInstanceName, self).infer():
if isinstance(result_context, FunctionContext):
parent_context = result_context.parent_context
while parent_context.is_class():
parent_context = parent_context.parent_context
yield BoundMethod(
result_context.evaluator, self._instance, self.parent_context,
parent_context, result_context.tree_node
)
else:
if result_context.api_type == 'function':
yield CompiledBoundMethod(result_context)
else:
yield result_context
class CompiledInstanceClassFilter(compiled.CompiledObjectFilter):
name_class = CompiledInstanceName
def __init__(self, evaluator, instance, compiled_object):
super(CompiledInstanceClassFilter, self).__init__(
evaluator,
compiled_object,
is_instance=True,
)
self._instance = instance
def _create_name(self, name):
return self.name_class(
self._evaluator, self._instance, self._compiled_object, name)
class BoundMethod(FunctionContext):
def __init__(self, evaluator, instance, class_context, *args, **kwargs):
super(BoundMethod, self).__init__(evaluator, *args, **kwargs)
self._instance = instance
self._class_context = class_context
def get_function_execution(self, arguments=None):
if arguments is None:
arguments = AnonymousArguments()
return AnonymousInstanceFunctionExecution(
self._instance, self.parent_context, self, arguments)
else:
return InstanceFunctionExecution(
self._instance, self.parent_context, self, arguments)
class CompiledBoundMethod(compiled.CompiledObject):
def __init__(self, func):
super(CompiledBoundMethod, self).__init__(
func.evaluator, func.obj, func.parent_context, func.tree_node)
def get_param_names(self):
return list(super(CompiledBoundMethod, self).get_param_names())[1:]
class InstanceNameDefinition(filters.TreeNameDefinition):
def infer(self):
return super(InstanceNameDefinition, self).infer()
class LazyInstanceName(filters.TreeNameDefinition):
"""
This name calculates the parent_context lazily.
"""
def __init__(self, instance, class_context, tree_name):
self._instance = instance
self.class_context = class_context
self.tree_name = tree_name
@property
def parent_context(self):
return self._instance.create_instance_context(self.class_context, self.tree_name)
class LazyInstanceClassName(LazyInstanceName):
@iterator_to_context_set
def infer(self):
for result_context in super(LazyInstanceClassName, self).infer():
if isinstance(result_context, FunctionContext):
# Classes are never used to resolve anything within the
# functions. Only other functions and modules will resolve
# those things.
parent_context = result_context.parent_context
while parent_context.is_class():
parent_context = parent_context.parent_context
yield BoundMethod(
result_context.evaluator, self._instance, self.class_context,
parent_context, result_context.tree_node
)
else:
for c in apply_py__get__(result_context, self._instance):
yield c
class InstanceClassFilter(filters.ParserTreeFilter):
name_class = LazyInstanceClassName
def __init__(self, evaluator, context, class_context, origin_scope):
super(InstanceClassFilter, self).__init__(
evaluator=evaluator,
context=context,
node_context=class_context,
origin_scope=origin_scope
)
self._class_context = class_context
def _equals_origin_scope(self):
node = self._origin_scope
while node is not None:
if node == self._parser_scope or node == self.context:
return True
node = get_parent_scope(node)
return False
def _access_possible(self, name):
return not name.value.startswith('__') or name.value.endswith('__') \
or self._equals_origin_scope()
def _filter(self, names):
names = super(InstanceClassFilter, self)._filter(names)
return [name for name in names if self._access_possible(name)]
def _convert_names(self, names):
return [self.name_class(self.context, self._class_context, name) for name in names]
class SelfNameFilter(InstanceClassFilter):
name_class = LazyInstanceName
def _filter(self, names):
names = self._filter_self_names(names)
if isinstance(self._parser_scope, compiled.CompiledObject) and False:
# This would be for builtin skeletons, which are not yet supported.
return list(names)
else:
start, end = self._parser_scope.start_pos, self._parser_scope.end_pos
return [n for n in names if start < n.start_pos < end]
def _filter_self_names(self, names):
for name in names:
trailer = name.parent
if trailer.type == 'trailer' \
and len(trailer.children) == 2 \
and trailer.children[0] == '.':
if name.is_definition() and self._access_possible(name):
yield name
def _check_flows(self, names):
return names
class InstanceVarArgs(AbstractArguments):
def __init__(self, execution_context, var_args):
self._execution_context = execution_context
self._var_args = var_args
@memoize_method
def _get_var_args(self):
return self._var_args
@property
def argument_node(self):
return self._var_args.argument_node
@property
def trailer(self):
return self._var_args.trailer
def unpack(self, func=None):
yield None, LazyKnownContext(self._execution_context.instance)
for values in self._get_var_args().unpack(func):
yield values
def get_calling_nodes(self):
return self._get_var_args().get_calling_nodes()
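`unpack` above prepends the instance as the first, unnamed argument, mirroring what Python itself does when binding a method. Conceptually:

class Greeter(object):
    def greet(self, name):
        return 'hi ' + name

g = Greeter()
# A bound-method call is sugar for passing the instance explicitly,
# which is exactly the leading value InstanceVarArgs injects.
assert g.greet('bob') == Greeter.greet(g, 'bob')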


@@ -0,0 +1,691 @@
"""
Contains all classes and functions to deal with lists, dicts, generators and
iterators in general.
Array modifications
*******************
If the content of an array (``set``/``list``) is requested somewhere, the
current module will be checked for appearances of ``arr.append``,
``arr.insert``, etc. If the ``arr`` name points to an actual array, the
content will be added.
This can be really CPU intensive, as you can imagine, because |jedi| has to
follow **every** ``append`` and check whether it's the right array. However,
this works pretty well, because in *slow* cases the recursion detector and
other settings will stop this process.
It is important to note that:
1. Array modifications work only in the current module.
2. Jedi only checks array additions; ``list.pop``, etc. are ignored.
"""
from jedi import debug
from jedi import settings
from jedi.evaluate import compiled
from jedi.evaluate import analysis
from jedi.evaluate import recursion
from jedi.evaluate.lazy_context import LazyKnownContext, LazyKnownContexts, \
LazyTreeContext
from jedi.evaluate.helpers import is_string, predefine_names, evaluate_call_of_leaf
from jedi.evaluate.utils import safe_property
from jedi.evaluate.utils import to_list
from jedi.evaluate.cache import evaluator_method_cache
from jedi.evaluate.filters import ParserTreeFilter, has_builtin_methods, \
register_builtin_method, SpecialMethodFilter
from jedi.evaluate.base_context import ContextSet, NO_CONTEXTS, Context, \
TreeContext, ContextualizedNode
from jedi.parser_utils import get_comp_fors
class AbstractIterable(Context):
builtin_methods = {}
api_type = 'instance'
def __init__(self, evaluator):
super(AbstractIterable, self).__init__(evaluator, evaluator.BUILTINS)
def get_filters(self, search_global, until_position=None, origin_scope=None):
raise NotImplementedError
@property
def name(self):
return compiled.CompiledContextName(self, self.array_type)
@has_builtin_methods
class GeneratorMixin(object):
array_type = None
@register_builtin_method('send')
@register_builtin_method('next', python_version_match=2)
@register_builtin_method('__next__', python_version_match=3)
def py__next__(self):
# TODO add TypeError if params are given.
return ContextSet.from_sets(lazy_context.infer() for lazy_context in self.py__iter__())
def get_filters(self, search_global, until_position=None, origin_scope=None):
gen_obj = compiled.get_special_object(self.evaluator, 'GENERATOR_OBJECT')
yield SpecialMethodFilter(self, self.builtin_methods, gen_obj)
for filter in gen_obj.get_filters(search_global):
yield filter
def py__bool__(self):
return True
def py__class__(self):
gen_obj = compiled.get_special_object(self.evaluator, 'GENERATOR_OBJECT')
return gen_obj.py__class__()
@property
def name(self):
return compiled.CompiledContextName(self, 'generator')
class Generator(GeneratorMixin, Context):
"""Handling of `yield` functions."""
def __init__(self, evaluator, func_execution_context):
super(Generator, self).__init__(evaluator, parent_context=evaluator.BUILTINS)
self._func_execution_context = func_execution_context
def py__iter__(self):
return self._func_execution_context.get_yield_values()
def __repr__(self):
return "<%s of %s>" % (type(self).__name__, self._func_execution_context)
class CompForContext(TreeContext):
@classmethod
def from_comp_for(cls, parent_context, comp_for):
return cls(parent_context.evaluator, parent_context, comp_for)
def __init__(self, evaluator, parent_context, comp_for):
super(CompForContext, self).__init__(evaluator, parent_context)
self.tree_node = comp_for
def get_node(self):
return self.tree_node
def get_filters(self, search_global, until_position=None, origin_scope=None):
yield ParserTreeFilter(self.evaluator, self)
class Comprehension(AbstractIterable):
@staticmethod
def from_atom(evaluator, context, atom):
bracket = atom.children[0]
if bracket == '{':
if atom.children[1].children[1] == ':':
cls = DictComprehension
else:
cls = SetComprehension
elif bracket == '(':
cls = GeneratorComprehension
elif bracket == '[':
cls = ListComprehension
return cls(evaluator, context, atom)
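    # Dispatch examples (editor's note):
    #
    #     [x for x in y]      -> ListComprehension
    #     {x for x in y}      -> SetComprehension
    #     {k: v for k in y}   -> DictComprehension
    #     (x for x in y)      -> GeneratorComprehension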
def __init__(self, evaluator, defining_context, atom):
super(Comprehension, self).__init__(evaluator)
self._defining_context = defining_context
self._atom = atom
def _get_comprehension(self):
# The atom contains a testlist_comp
return self._atom.children[1]
def _get_comp_for(self):
# The atom contains a testlist_comp
return self._get_comprehension().children[1]
def _eval_node(self, index=0):
"""
The first part `x + 1` of the list comprehension:
[x + 1 for x in foo]
"""
return self._get_comprehension().children[index]
@evaluator_method_cache()
def _get_comp_for_context(self, parent_context, comp_for):
# TODO shouldn't this be part of create_context?
return CompForContext.from_comp_for(parent_context, comp_for)
def _nested(self, comp_fors, parent_context=None):
comp_for = comp_fors[0]
input_node = comp_for.children[3]
parent_context = parent_context or self._defining_context
input_types = parent_context.eval_node(input_node)
cn = ContextualizedNode(parent_context, input_node)
iterated = input_types.iterate(cn)
exprlist = comp_for.children[1]
for i, lazy_context in enumerate(iterated):
types = lazy_context.infer()
dct = unpack_tuple_to_dict(parent_context, types, exprlist)
context_ = self._get_comp_for_context(
parent_context,
comp_for,
)
with predefine_names(context_, comp_for, dct):
try:
for result in self._nested(comp_fors[1:], context_):
yield result
except IndexError:
iterated = context_.eval_node(self._eval_node())
if self.array_type == 'dict':
yield iterated, context_.eval_node(self._eval_node(2))
else:
yield iterated
@evaluator_method_cache(default=[])
@to_list
def _iterate(self):
comp_fors = tuple(get_comp_fors(self._get_comp_for()))
for result in self._nested(comp_fors):
yield result
def py__iter__(self):
for set_ in self._iterate():
yield LazyKnownContexts(set_)
def __repr__(self):
return "<%s of %s>" % (type(self).__name__, self._atom)
class ArrayMixin(object):
def get_filters(self, search_global, until_position=None, origin_scope=None):
# `array.type` is a string with the type, e.g. 'list'.
compiled_obj = compiled.builtin_from_name(self.evaluator, self.array_type)
yield SpecialMethodFilter(self, self.builtin_methods, compiled_obj)
for typ in compiled_obj.execute_evaluated(self):
for filter in typ.get_filters():
yield filter
def py__bool__(self):
return None # We don't know the length, because of appends.
def py__class__(self):
return compiled.builtin_from_name(self.evaluator, self.array_type)
@safe_property
def parent(self):
return self.evaluator.BUILTINS
def dict_values(self):
return ContextSet.from_sets(
self._defining_context.eval_node(v)
for k, v in self._items()
)
class ListComprehension(ArrayMixin, Comprehension):
array_type = 'list'
def py__getitem__(self, index):
if isinstance(index, slice):
return ContextSet(self)
all_types = list(self.py__iter__())
return all_types[index].infer()
class SetComprehension(ArrayMixin, Comprehension):
array_type = 'set'
@has_builtin_methods
class DictComprehension(ArrayMixin, Comprehension):
array_type = 'dict'
def _get_comp_for(self):
return self._get_comprehension().children[3]
def py__iter__(self):
for keys, values in self._iterate():
yield LazyKnownContexts(keys)
def py__getitem__(self, index):
for keys, values in self._iterate():
for k in keys:
if isinstance(k, compiled.CompiledObject):
if k.obj == index:
return values
return self.dict_values()
def dict_values(self):
return ContextSet.from_sets(values for keys, values in self._iterate())
@register_builtin_method('values')
def _imitate_values(self):
lazy_context = LazyKnownContexts(self.dict_values())
return ContextSet(FakeSequence(self.evaluator, 'list', [lazy_context]))
    @register_builtin_method('items')
    def _imitate_items(self):
        # Imitate dict.items(): a fake list of fake (key, value) tuples.
        lazy_contexts = [
            LazyKnownContext(FakeSequence(
                self.evaluator, 'tuple',
                (LazyKnownContexts(keys), LazyKnownContexts(values))
            )) for keys, values in self._iterate()
        ]
        return ContextSet(FakeSequence(self.evaluator, 'list', lazy_contexts))
class GeneratorComprehension(GeneratorMixin, Comprehension):
pass
class SequenceLiteralContext(ArrayMixin, AbstractIterable):
mapping = {'(': 'tuple',
'[': 'list',
'{': 'set'}
def __init__(self, evaluator, defining_context, atom):
super(SequenceLiteralContext, self).__init__(evaluator)
self.atom = atom
self._defining_context = defining_context
if self.atom.type in ('testlist_star_expr', 'testlist'):
self.array_type = 'tuple'
else:
self.array_type = SequenceLiteralContext.mapping[atom.children[0]]
"""The builtin name of the array (list, set, tuple or dict)."""
def py__getitem__(self, index):
"""Here the index is an int/str. Raises IndexError/KeyError."""
if self.array_type == 'dict':
for key, value in self._items():
for k in self._defining_context.eval_node(key):
if isinstance(k, compiled.CompiledObject) \
and index == k.obj:
return self._defining_context.eval_node(value)
raise KeyError('No key found in dictionary %s.' % self)
# Can raise an IndexError
if isinstance(index, slice):
return ContextSet(self)
else:
return self._defining_context.eval_node(self._items()[index])
def py__iter__(self):
"""
        While ``_values`` returns the possible values for any array field,
        this function yields the values in order, one lazy context per item.
"""
if self.array_type == 'dict':
# Get keys.
types = ContextSet()
for k, _ in self._items():
types |= self._defining_context.eval_node(k)
# We don't know which dict index comes first, therefore always
# yield all the types.
for _ in types:
yield LazyKnownContexts(types)
else:
for node in self._items():
yield LazyTreeContext(self._defining_context, node)
for addition in check_array_additions(self._defining_context, self):
yield addition
def _values(self):
"""Returns a list of a list of node."""
if self.array_type == 'dict':
return ContextSet.from_sets(v for k, v in self._items())
else:
return self._items()
def _items(self):
c = self.atom.children
if self.atom.type in ('testlist_star_expr', 'testlist'):
return c[::2]
array_node = c[1]
if array_node in (']', '}', ')'):
return [] # Direct closing bracket, doesn't contain items.
if array_node.type == 'testlist_comp':
return array_node.children[::2]
elif array_node.type == 'dictorsetmaker':
kv = []
iterator = iter(array_node.children)
for key in iterator:
op = next(iterator, None)
if op is None or op == ',':
kv.append(key) # A set.
else:
assert op == ':' # A dict.
kv.append((key, next(iterator)))
next(iterator, None) # Possible comma.
return kv
else:
return [array_node]
def exact_key_items(self):
"""
Returns a generator of tuples like dict.items(), where the key is
resolved (as a string) and the values are still lazy contexts.
"""
for key_node, value in self._items():
for key in self._defining_context.eval_node(key_node):
if is_string(key):
yield key.obj, LazyTreeContext(self._defining_context, value)
def __repr__(self):
return "<%s of %s>" % (self.__class__.__name__, self.atom)
@has_builtin_methods
class DictLiteralContext(SequenceLiteralContext):
array_type = 'dict'
def __init__(self, evaluator, defining_context, atom):
super(SequenceLiteralContext, self).__init__(evaluator)
self._defining_context = defining_context
self.atom = atom
@register_builtin_method('values')
def _imitate_values(self):
lazy_context = LazyKnownContexts(self.dict_values())
return ContextSet(FakeSequence(self.evaluator, 'list', [lazy_context]))
@register_builtin_method('items')
def _imitate_items(self):
lazy_contexts = [
LazyKnownContext(FakeSequence(
self.evaluator, 'tuple',
(LazyTreeContext(self._defining_context, key_node),
LazyTreeContext(self._defining_context, value_node))
)) for key_node, value_node in self._items()
]
return ContextSet(FakeSequence(self.evaluator, 'list', lazy_contexts))
class _FakeArray(SequenceLiteralContext):
def __init__(self, evaluator, container, type):
super(SequenceLiteralContext, self).__init__(evaluator)
self.array_type = type
self.atom = container
# TODO is this class really needed?
class FakeSequence(_FakeArray):
def __init__(self, evaluator, array_type, lazy_context_list):
"""
type should be one of "tuple", "list"
"""
super(FakeSequence, self).__init__(evaluator, None, array_type)
self._lazy_context_list = lazy_context_list
def py__getitem__(self, index):
return self._lazy_context_list[index].infer()
def py__iter__(self):
return self._lazy_context_list
def py__bool__(self):
return bool(len(self._lazy_context_list))
def __repr__(self):
return "<%s of %s>" % (type(self).__name__, self._lazy_context_list)
class FakeDict(_FakeArray):
def __init__(self, evaluator, dct):
super(FakeDict, self).__init__(evaluator, dct, 'dict')
self._dct = dct
def py__iter__(self):
for key in self._dct:
yield LazyKnownContext(compiled.create(self.evaluator, key))
def py__getitem__(self, index):
return self._dct[index].infer()
def dict_values(self):
return ContextSet.from_sets(lazy_context.infer() for lazy_context in self._dct.values())
def exact_key_items(self):
return self._dct.items()
class MergedArray(_FakeArray):
def __init__(self, evaluator, arrays):
super(MergedArray, self).__init__(evaluator, arrays, arrays[-1].array_type)
self._arrays = arrays
def py__iter__(self):
for array in self._arrays:
for lazy_context in array.py__iter__():
yield lazy_context
def py__getitem__(self, index):
return ContextSet.from_sets(lazy_context.infer() for lazy_context in self.py__iter__())
def _items(self):
for array in self._arrays:
for a in array._items():
yield a
def __len__(self):
return sum(len(a) for a in self._arrays)
def unpack_tuple_to_dict(context, types, exprlist):
"""
Unpacking tuple assignments in for statements and expr_stmts.
"""
if exprlist.type == 'name':
return {exprlist.value: types}
elif exprlist.type == 'atom' and exprlist.children[0] in '([':
return unpack_tuple_to_dict(context, types, exprlist.children[1])
elif exprlist.type in ('testlist', 'testlist_comp', 'exprlist',
'testlist_star_expr'):
dct = {}
parts = iter(exprlist.children[::2])
n = 0
for lazy_context in types.iterate(exprlist):
n += 1
try:
part = next(parts)
except StopIteration:
# TODO this context is probably not right.
analysis.add(context, 'value-error-too-many-values', part,
message="ValueError: too many values to unpack (expected %s)" % n)
else:
dct.update(unpack_tuple_to_dict(context, lazy_context.infer(), part))
has_parts = next(parts, None)
if types and has_parts is not None:
# TODO this context is probably not right.
analysis.add(context, 'value-error-too-few-values', has_parts,
message="ValueError: need more than %s values to unpack" % n)
return dct
elif exprlist.type == 'power' or exprlist.type == 'atom_expr':
# Something like ``arr[x], var = ...``.
# This is something that is not yet supported, would also be difficult
# to write into a dict.
return {}
elif exprlist.type == 'star_expr': # `a, *b, c = x` type unpackings
# Currently we're not supporting them.
return {}
raise NotImplementedError
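# Illustration (editor's addition): for ``a, (b, c) = x`` the exprlist is a
# testlist; the atom ``(b, c)`` is unpacked recursively, and the result maps
# each plain name to the ContextSet inferred for its part of ``x``:
#
#     {'a': <types of x[0]>, 'b': <types of x[1][0]>, 'c': <types of x[1][1]>}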
def check_array_additions(context, sequence):
""" Just a mapper function for the internal _check_array_additions """
if sequence.array_type not in ('list', 'set'):
# TODO also check for dict updates
return NO_CONTEXTS
return _check_array_additions(context, sequence)
@evaluator_method_cache(default=NO_CONTEXTS)
@debug.increase_indent
def _check_array_additions(context, sequence):
"""
Checks if a `Array` has "add" (append, insert, extend) statements:
>>> a = [""]
>>> a.append(1)
"""
from jedi.evaluate import arguments
debug.dbg('Dynamic array search for %s' % sequence, color='MAGENTA')
module_context = context.get_root_context()
if not settings.dynamic_array_additions or isinstance(module_context, compiled.CompiledObject):
debug.dbg('Dynamic array search aborted.', color='MAGENTA')
return ContextSet()
def find_additions(context, arglist, add_name):
params = list(arguments.TreeArguments(context.evaluator, context, arglist).unpack())
result = set()
if add_name in ['insert']:
params = params[1:]
if add_name in ['append', 'add', 'insert']:
for key, whatever in params:
result.add(whatever)
elif add_name in ['extend', 'update']:
for key, lazy_context in params:
result |= set(lazy_context.infer().iterate())
return result
temp_param_add, settings.dynamic_params_for_other_modules = \
settings.dynamic_params_for_other_modules, False
is_list = sequence.name.string_name == 'list'
search_names = (['append', 'extend', 'insert'] if is_list else ['add', 'update'])
added_types = set()
for add_name in search_names:
try:
possible_names = module_context.tree_node.get_used_names()[add_name]
except KeyError:
continue
else:
for name in possible_names:
context_node = context.tree_node
if not (context_node.start_pos < name.start_pos < context_node.end_pos):
continue
trailer = name.parent
power = trailer.parent
trailer_pos = power.children.index(trailer)
try:
execution_trailer = power.children[trailer_pos + 1]
except IndexError:
continue
else:
if execution_trailer.type != 'trailer' \
or execution_trailer.children[0] != '(' \
or execution_trailer.children[1] == ')':
continue
random_context = context.create_context(name)
with recursion.execution_allowed(context.evaluator, power) as allowed:
if allowed:
found = evaluate_call_of_leaf(
random_context,
name,
cut_own_trailer=True
)
if sequence in found:
# The arrays match. Now add the results
added_types |= find_additions(
random_context,
execution_trailer.children[1],
add_name
)
# reset settings
settings.dynamic_params_for_other_modules = temp_param_add
debug.dbg('Dynamic array result %s' % added_types, color='MAGENTA')
return added_types
def get_dynamic_array_instance(instance):
"""Used for set() and list() instances."""
if not settings.dynamic_array_additions:
return instance.var_args
ai = _ArrayInstance(instance)
from jedi.evaluate import arguments
return arguments.ValuesArguments([ContextSet(ai)])
class _ArrayInstance(object):
"""
Used for the usage of set() and list().
This is definitely a hack, but a good one :-)
It makes it possible to use set/list conversions.
In contrast to Array, ListComprehension and all other iterable types, this
is something that is only used inside `evaluate/compiled/fake/builtins.py`
and therefore doesn't need filters, `py__bool__` and so on, because
we don't use these operations in `builtins.py`.
"""
def __init__(self, instance):
self.instance = instance
self.var_args = instance.var_args
def py__iter__(self):
var_args = self.var_args
try:
_, lazy_context = next(var_args.unpack())
except StopIteration:
pass
else:
for lazy in lazy_context.infer().iterate():
yield lazy
from jedi.evaluate import arguments
if isinstance(var_args, arguments.TreeArguments):
additions = _check_array_additions(var_args.context, self.instance)
for addition in additions:
yield addition
def iterate(self, contextualized_node=None):
return self.py__iter__()
class Slice(Context):
def __init__(self, context, start, stop, step):
super(Slice, self).__init__(
context.evaluator,
parent_context=context.evaluator.BUILTINS
)
self._context = context
        # Each of these is either a tree node or None.
self._start = start
self._stop = stop
self._step = step
@property
def obj(self):
"""
Imitate CompiledObject.obj behavior and return a ``builtin.slice()``
object.
"""
def get(element):
if element is None:
return None
result = self._context.eval_node(element)
if len(result) != 1:
                # For simplicity, we want slices to be clearly defined with
                # just one type. Otherwise we return an empty slice object.
raise IndexError
try:
return list(result)[0].obj
except AttributeError:
return None
try:
return slice(get(self._start), get(self._stop), get(self._step))
except IndexError:
return slice(None, None, None)

@@ -0,0 +1,197 @@
"""
Like described in the :mod:`parso.python.tree` module,
there's a need for an ast like module to represent the states of parsed
modules.
But now there are also structures in Python that need a little bit more than
that. An ``Instance`` for example is only a ``Class`` before it is
instantiated. This module represents these cases.
So, why is there also a ``Class`` class here? Well, there are decorators and
they change classes in Python 3.
Representation modules also define "magic methods". Those methods look like
``py__foo__`` and are typically mappable to the Python equivalents ``__call__``
and others. Here's a list:
====================================== ========================================
**Method** **Description**
-------------------------------------- ----------------------------------------
py__call__(params: Array) On callable objects, returns types.
py__bool__() Returns True/False/None; None means that
there's no certainty.
py__bases__() Returns a list of base classes.
py__mro__() Returns a list of classes (the mro).
py__iter__() Returns a generator of a set of types.
py__class__() Returns the class of an instance.
py__getitem__(index: int/str)          Returns a set of types of the index.
                                       Can raise an IndexError/KeyError.
py__file__()                           Only on modules. Returns None if it
                                       does not exist.
py__package__() Only on modules. For the import system.
py__path__() Only on modules. For the import system.
py__get__(call_object) Only on instances. Simulates
descriptors.
py__doc__(include_call_signature: Returns the docstring for a context.
bool)
====================================== ========================================
"""
from jedi._compatibility import use_metaclass
from jedi.evaluate.cache import evaluator_method_cache, CachedMetaClass
from jedi.evaluate import compiled
from jedi.evaluate.lazy_context import LazyKnownContext
from jedi.evaluate.filters import ParserTreeFilter, TreeNameDefinition, \
ContextName, AnonymousInstanceParamName
from jedi.evaluate.base_context import ContextSet, iterator_to_context_set, \
TreeContext
def apply_py__get__(context, base_context):
try:
method = context.py__get__
except AttributeError:
yield context
else:
for descriptor_context in method(base_context):
yield descriptor_context
class ClassName(TreeNameDefinition):
def __init__(self, parent_context, tree_name, name_context):
super(ClassName, self).__init__(parent_context, tree_name)
self._name_context = name_context
@iterator_to_context_set
def infer(self):
# TODO this _name_to_types might get refactored and be a part of the
# parent class. Once it is, we can probably just overwrite method to
# achieve this.
from jedi.evaluate.syntax_tree import tree_name_to_contexts
inferred = tree_name_to_contexts(
self.parent_context.evaluator, self._name_context, self.tree_name)
for result_context in inferred:
for c in apply_py__get__(result_context, self.parent_context):
yield c
class ClassFilter(ParserTreeFilter):
name_class = ClassName
def _convert_names(self, names):
return [self.name_class(self.context, name, self._node_context)
for name in names]
class ClassContext(use_metaclass(CachedMetaClass, TreeContext)):
"""
    This class is not only important to extend `tree.Class`, it is also
    important for descriptors (if the descriptor methods are evaluated or not).
"""
api_type = 'class'
def __init__(self, evaluator, parent_context, classdef):
super(ClassContext, self).__init__(evaluator, parent_context=parent_context)
self.tree_node = classdef
@evaluator_method_cache(default=())
def py__mro__(self):
def add(cls):
if cls not in mro:
mro.append(cls)
mro = [self]
# TODO Do a proper mro resolution. Currently we are just listing
# classes. However, it's a complicated algorithm.
for lazy_cls in self.py__bases__():
# TODO there's multiple different mro paths possible if this yields
# multiple possibilities. Could be changed to be more correct.
for cls in lazy_cls.infer():
# TODO detect for TypeError: duplicate base class str,
# e.g. `class X(str, str): pass`
try:
mro_method = cls.py__mro__
except AttributeError:
# TODO add a TypeError like:
"""
>>> class Y(lambda: test): pass
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: function() argument 1 must be code, not str
>>> class Y(1): pass
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: int() takes at most 2 arguments (3 given)
"""
pass
else:
add(cls)
for cls_new in mro_method():
add(cls_new)
return tuple(mro)
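    # Editor's note (illustrative): the simple listing above differs from
    # Python's C3 linearization for diamond hierarchies, e.g.
    #
    #     class A: pass
    #     class B(A): pass
    #     class C(A): pass
    #     class D(B, C): pass
    #
    # Python computes D, B, C, A; the listing here visits A (via B) before C.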
@evaluator_method_cache(default=())
def py__bases__(self):
arglist = self.tree_node.get_super_arglist()
if arglist:
from jedi.evaluate import arguments
args = arguments.TreeArguments(self.evaluator, self, arglist)
return [value for key, value in args.unpack() if key is None]
else:
return [LazyKnownContext(compiled.create(self.evaluator, object))]
def py__call__(self, params):
from jedi.evaluate.context import TreeInstance
return ContextSet(TreeInstance(self.evaluator, self.parent_context, self, params))
def py__class__(self):
return compiled.create(self.evaluator, type)
def get_params(self):
from jedi.evaluate.context import AnonymousInstance
anon = AnonymousInstance(self.evaluator, self.parent_context, self)
return [AnonymousInstanceParamName(anon, param.name) for param in self.funcdef.get_params()]
def get_filters(self, search_global, until_position=None, origin_scope=None, is_instance=False):
if search_global:
yield ParserTreeFilter(
self.evaluator,
context=self,
until_position=until_position,
origin_scope=origin_scope
)
else:
for cls in self.py__mro__():
if isinstance(cls, compiled.CompiledObject):
for filter in cls.get_filters(is_instance=is_instance):
yield filter
else:
yield ClassFilter(
self.evaluator, self, node_context=cls,
origin_scope=origin_scope)
def is_class(self):
return True
def get_function_slot_names(self, name):
for filter in self.get_filters(search_global=False):
names = filter.get(name)
if names:
return names
return []
def get_param_names(self):
for name in self.get_function_slot_names('__init__'):
for context_ in name.infer():
try:
method = context_.get_param_names
except AttributeError:
pass
else:
return list(method())[1:]
return []
@property
def name(self):
return ContextName(self, self.tree_node.name)

@@ -0,0 +1,213 @@
import pkgutil
import imp
import re
import os
from parso import python_bytes_to_unicode
from jedi._compatibility import use_metaclass
from jedi.evaluate.cache import CachedMetaClass, evaluator_method_cache
from jedi.evaluate.filters import GlobalNameFilter, ContextNameMixin, \
AbstractNameDefinition, ParserTreeFilter, DictFilter
from jedi.evaluate import compiled
from jedi.evaluate.base_context import TreeContext
from jedi.evaluate.imports import SubModuleName, infer_import
class _ModuleAttributeName(AbstractNameDefinition):
"""
For module attributes like __file__, __str__ and so on.
"""
api_type = 'instance'
def __init__(self, parent_module, string_name):
self.parent_context = parent_module
self.string_name = string_name
def infer(self):
return compiled.create(self.parent_context.evaluator, str).execute_evaluated()
class ModuleName(ContextNameMixin, AbstractNameDefinition):
start_pos = 1, 0
def __init__(self, context, name):
self._context = context
self._name = name
@property
def string_name(self):
return self._name
class ModuleContext(use_metaclass(CachedMetaClass, TreeContext)):
api_type = 'module'
parent_context = None
def __init__(self, evaluator, module_node, path):
super(ModuleContext, self).__init__(evaluator, parent_context=None)
self.tree_node = module_node
self._path = path
def get_filters(self, search_global, until_position=None, origin_scope=None):
yield ParserTreeFilter(
self.evaluator,
context=self,
until_position=until_position,
origin_scope=origin_scope
)
yield GlobalNameFilter(self, self.tree_node)
yield DictFilter(self._sub_modules_dict())
yield DictFilter(self._module_attributes_dict())
for star_module in self.star_imports():
yield next(star_module.get_filters(search_global))
# I'm not sure if the star import cache is really that effective anymore
# with all the other really fast import caches. Recheck. Also we would need
# to push the star imports into Evaluator.modules, if we reenable this.
@evaluator_method_cache([])
def star_imports(self):
modules = []
for i in self.tree_node.iter_imports():
if i.is_star_import():
name = i.get_paths()[-1][-1]
new = infer_import(self, name)
for module in new:
if isinstance(module, ModuleContext):
modules += module.star_imports()
modules += new
return modules
@evaluator_method_cache()
def _module_attributes_dict(self):
names = ['__file__', '__package__', '__doc__', '__name__']
# All the additional module attributes are strings.
return dict((n, _ModuleAttributeName(self, n)) for n in names)
@property
def _string_name(self):
""" This is used for the goto functions. """
if self._path is None:
return '' # no path -> empty name
else:
sep = (re.escape(os.path.sep),) * 2
r = re.search(r'([^%s]*?)(%s__init__)?(\.py|\.so)?$' % sep, self._path)
# Remove PEP 3149 names
            return re.sub(r'\.[a-z]+-\d{2}[mud]{0,3}$', '', r.group(1))
@property
@evaluator_method_cache()
def name(self):
return ModuleName(self, self._string_name)
def _get_init_directory(self):
"""
:return: The path to the directory of a package. None in case it's not
a package.
"""
for suffix, _, _ in imp.get_suffixes():
ending = '__init__' + suffix
py__file__ = self.py__file__()
if py__file__ is not None and py__file__.endswith(ending):
# Remove the ending, including the separator.
return self.py__file__()[:-len(ending) - 1]
return None
def py__name__(self):
for name, module in self.evaluator.modules.items():
if module == self and name != '':
return name
return '__main__'
def py__file__(self):
"""
        In contrast to Python's ``__file__``, this can be None.
"""
if self._path is None:
return None
return os.path.abspath(self._path)
def py__package__(self):
if self._get_init_directory() is None:
return re.sub(r'\.?[^\.]+$', '', self.py__name__())
else:
return self.py__name__()
def _py__path__(self):
search_path = self.evaluator.project.sys_path
init_path = self.py__file__()
if os.path.basename(init_path) == '__init__.py':
with open(init_path, 'rb') as f:
content = python_bytes_to_unicode(f.read(), errors='replace')
                # These are strings that mark namespace packages; the first
                # one is used by ``pkg_resources``, the second by ``pkgutil``.
options = ('declare_namespace(__name__)', 'extend_path(__path__')
if options[0] in content or options[1] in content:
# It is a namespace, now try to find the rest of the
# modules on sys_path or whatever the search_path is.
paths = set()
for s in search_path:
other = os.path.join(s, self.name.string_name)
if os.path.isdir(other):
paths.add(other)
if paths:
return list(paths)
# TODO I'm not sure if this is how nested namespace
# packages work. The tests are not really good enough to
# show that.
# Default to this.
return [self._get_init_directory()]
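    # Editor's note (illustrative): the two ``__init__.py`` idioms matched by
    # the ``options`` strings above are, respectively:
    #
    #     __import__('pkg_resources').declare_namespace(__name__)
    #
    #     from pkgutil import extend_path
    #     __path__ = extend_path(__path__, __name__)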
@property
def py__path__(self):
"""
Not seen here, since it's a property. The callback actually uses a
variable, so use it like::
foo.py__path__(sys_path)
In case of a package, this returns Python's __path__ attribute, which
is a list of paths (strings).
Raises an AttributeError if the module is not a package.
"""
path = self._get_init_directory()
if path is None:
raise AttributeError('Only packages have __path__ attributes.')
else:
return self._py__path__
@evaluator_method_cache()
def _sub_modules_dict(self):
"""
Lists modules in the directory of this module (if this module is a
package).
"""
path = self._path
names = {}
if path is not None and path.endswith(os.path.sep + '__init__.py'):
mods = pkgutil.iter_modules([os.path.dirname(path)])
for module_loader, name, is_pkg in mods:
# It's obviously a relative import to the current module.
names[name] = SubModuleName(self, name)
        # TODO add something like this in the future; it's cleaner than the
        # import hacks.
# ``os.path`` is a hardcoded exception, because it's a
# ``sys.modules`` modification.
# if str(self.name) == 'os':
# names.append(Name('path', parent_context=self))
return names
def py__class__(self):
return compiled.get_special_object(self.evaluator, 'MODULE_CLASS')
def __repr__(self):
return "<%s: %s@%s-%s>" % (
self.__class__.__name__, self._string_name,
self.tree_node.start_pos[0], self.tree_node.end_pos[0])

@@ -0,0 +1,74 @@
import os
from itertools import chain
from jedi._compatibility import use_metaclass
from jedi.evaluate.cache import evaluator_method_cache, CachedMetaClass
from jedi.evaluate import imports
from jedi.evaluate.filters import DictFilter, AbstractNameDefinition
from jedi.evaluate.base_context import NO_CONTEXTS, TreeContext
class ImplicitNSName(AbstractNameDefinition):
"""
Accessing names for implicit namespace packages should infer to nothing.
    This object will prevent Jedi from raising exceptions.
"""
def __init__(self, implicit_ns_context, string_name):
self.implicit_ns_context = implicit_ns_context
self.string_name = string_name
def infer(self):
return NO_CONTEXTS
def get_root_context(self):
return self.implicit_ns_context
class ImplicitNamespaceContext(use_metaclass(CachedMetaClass, TreeContext)):
"""
Provides support for implicit namespace packages
"""
api_type = 'module'
parent_context = None
def __init__(self, evaluator, fullname):
super(ImplicitNamespaceContext, self).__init__(evaluator, parent_context=None)
self.evaluator = evaluator
self.fullname = fullname
def get_filters(self, search_global, until_position=None, origin_scope=None):
yield DictFilter(self._sub_modules_dict())
@property
@evaluator_method_cache()
def name(self):
string_name = self.py__package__().rpartition('.')[-1]
return ImplicitNSName(self, string_name)
def py__file__(self):
return None
def py__package__(self):
"""Return the fullname
"""
return self.fullname
@property
def py__path__(self):
return lambda: [self.paths]
@evaluator_method_cache()
def _sub_modules_dict(self):
names = {}
paths = self.paths
file_names = chain.from_iterable(os.listdir(path) for path in paths)
mods = [
file_name.rpartition('.')[0] if '.' in file_name else file_name
for file_name in file_names
if file_name != '__pycache__'
]
for name in mods:
names[name] = imports.SubModuleName(self, name)
return names

@@ -1,11 +1,12 @@
"""
Docstrings are another source of information for functions and classes.
:mod:`jedi.evaluate.dynamic` tries to find all executions of functions, while
the docstring parsing is much easier. There are two different types of
the docstring parsing is much easier. There are three different types of
docstrings that |jedi| understands:
- `Sphinx <http://sphinx-doc.org/markup/desc.html#info-field-lists>`_
- `Epydoc <http://epydoc.sourceforge.net/manual-fields.html>`_
- `Numpydoc <https://github.com/numpy/numpy/blob/master/doc/HOWTO_DOCUMENT.rst.txt>`_
For example, the sphinx annotation ``:type foo: str`` clearly states that the
type of ``foo`` is ``str``.
@@ -15,15 +16,21 @@ annotations.
"""
import re
from itertools import chain
from textwrap import dedent
from jedi.evaluate.cache import memoize_default
from jedi.parser import Parser
from jedi.common import indent_block
from parso import parse
from jedi._compatibility import u
from jedi.evaluate.utils import indent_block
from jedi.evaluate.cache import evaluator_method_cache
from jedi.evaluate.base_context import iterator_to_context_set, ContextSet, \
NO_CONTEXTS
from jedi.evaluate.lazy_context import LazyKnownContexts
DOCSTRING_PARAM_PATTERNS = [
r'\s*:type\s+%s:\s*([^\n]+)', # Sphinx
r'\s*:param\s+(\w+)\s+%s:[^\n]*', # Sphinx param with type
r'\s*@type\s+%s:\s*([^\n]+)', # Epydoc
]
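# Hedged illustration (editor's addition): docstring dialects the patterns
# above (and the numpydoc support below) are aimed at:
#
#     def f(foo):
#         """
#         :type foo: str          <- Sphinx
#         :param str foo: ...     <- Sphinx with inline type
#         @type foo: str          <- Epydoc
#
#         Parameters              <- Numpydoc
#         ----------
#         foo : str
#         """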
@@ -35,26 +42,99 @@ DOCSTRING_RETURN_PATTERNS = [
REST_ROLE_PATTERN = re.compile(r':[^`]+:`([^`]+)`')
@memoize_default(None, evaluator_is_first_arg=True)
def follow_param(evaluator, param):
func = param.parent_function
param_str = _search_param_in_docstr(func.raw_doc, str(param.get_name()))
return _evaluate_for_statement_string(evaluator, param_str, param.get_parent_until())
try:
from numpydoc.docscrape import NumpyDocString
except ImportError:
def _search_param_in_numpydocstr(docstr, param_str):
return []
def _search_return_in_numpydocstr(docstr):
return []
else:
def _search_param_in_numpydocstr(docstr, param_str):
"""Search `docstr` (in numpydoc format) for type(-s) of `param_str`."""
try:
# This is a non-public API. If it ever changes we should be
# prepared and return gracefully.
params = NumpyDocString(docstr)._parsed_data['Parameters']
except (KeyError, AttributeError):
return []
for p_name, p_type, p_descr in params:
if p_name == param_str:
m = re.match('([^,]+(,[^,]+)*?)(,[ ]*optional)?$', p_type)
if m:
p_type = m.group(1)
return list(_expand_typestr(p_type))
return []
def _search_return_in_numpydocstr(docstr):
"""
Search `docstr` (in numpydoc format) for type(-s) of function returns.
"""
doc = NumpyDocString(docstr)
try:
# This is a non-public API. If it ever changes we should be
# prepared and return gracefully.
returns = doc._parsed_data['Returns']
returns += doc._parsed_data['Yields']
except (KeyError, AttributeError):
            return  # PEP 479: StopIteration must not escape a generator.
for r_name, r_type, r_descr in returns:
            # Return names are optional; if omitted, the type is in the name.
if not r_type:
r_type = r_name
for type_ in _expand_typestr(r_type):
yield type_
def _expand_typestr(type_str):
"""
Attempts to interpret the possible types in `type_str`
"""
# Check if alternative types are specified with 'or'
if re.search('\\bor\\b', type_str):
for t in type_str.split('or'):
yield t.split('of')[0].strip()
    # Check for 'list of <type>'-style strings and keep the container type.
elif re.search('\\bof\\b', type_str):
yield type_str.split('of')[0]
    # Check if the type is a set of valid literal values, e.g. {'C', 'F', 'A'}.
elif type_str.startswith('{'):
node = parse(type_str, version='3.6').children[0]
if node.type == 'atom':
for leaf in node.children[1].children:
if leaf.type == 'number':
if '.' in leaf.value:
yield 'float'
else:
yield 'int'
elif leaf.type == 'string':
if 'b' in leaf.string_prefix.lower():
yield 'bytes'
else:
yield 'str'
# Ignore everything else.
# Otherwise just work with what we have.
else:
yield type_str
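# Hedged examples (editor's addition) of what ``_expand_typestr`` yields:
#
#     'int or str'       -> 'int', 'str'
#     'list of int'      -> 'list '          (container only, not stripped)
#     "{'C', 'F', 'A'}"  -> 'str', 'str', 'str'   (when parsed as an atom)
#     'numpy.ndarray'    -> 'numpy.ndarray'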
def _search_param_in_docstr(docstr, param_str):
"""
Search `docstr` for a type of `param_str`.
Search `docstr` for type(-s) of `param_str`.
>>> _search_param_in_docstr(':type param: int', 'param')
'int'
['int']
>>> _search_param_in_docstr('@type param: int', 'param')
'int'
['int']
>>> _search_param_in_docstr(
... ':type param: :class:`threading.Thread`', 'param')
'threading.Thread'
>>> _search_param_in_docstr('no document', 'param') is None
True
['threading.Thread']
>>> bool(_search_param_in_docstr('no document', 'param'))
False
>>> _search_param_in_docstr(':param int param: some description', 'param')
['int']
"""
# look at #40 to see definitions of those params
@@ -63,9 +143,10 @@ def _search_param_in_docstr(docstr, param_str):
for pattern in patterns:
match = pattern.search(docstr)
if match:
return _strip_rst_role(match.group(1))
return [_strip_rst_role(match.group(1))]
return None
return (_search_param_in_numpydocstr(docstr, param_str) or
[])
def _strip_rst_role(type_str):
@@ -90,12 +171,16 @@ def _strip_rst_role(type_str):
return type_str
def _evaluate_for_statement_string(evaluator, string, module):
code = dedent("""
def _evaluate_for_statement_string(module_context, string):
code = dedent(u("""
def pseudo_docstring_stuff():
'''Create a pseudo function for docstring statements.'''
%s
""")
'''
Create a pseudo function for docstring statements.
Need this docstring so that if the below part is not valid Python this
is still a function.
'''
{0}
"""))
if string is None:
return []
@@ -104,35 +189,101 @@ def _evaluate_for_statement_string(evaluator, string, module):
# (e.g., 'threading' in 'threading.Thread').
string = 'import %s\n' % element + string
p = Parser(code % indent_block(string), no_docstr=True)
pseudo_cls = p.module.subscopes[0]
    # Take the default grammar here: if we loaded the Python 2.7 grammar, it
    # would be impossible to use `...` (Ellipsis) as a token. Docstring types
    # don't need to conform with the current grammar.
grammar = module_context.evaluator.latest_grammar
module = grammar.parse(code.format(indent_block(string)))
try:
stmt = pseudo_cls.statements[-1]
except IndexError:
funcdef = next(module.iter_funcdefs())
# First pick suite, then simple_stmt and then the node,
# which is also not the last item, because there's a newline.
stmt = funcdef.children[-1].children[-1].children[-2]
except (AttributeError, IndexError):
return []
from jedi.evaluate.context import FunctionContext
function_context = FunctionContext(
module_context.evaluator,
module_context,
funcdef
)
func_execution_context = function_context.get_function_execution()
# Use the module of the param.
# TODO this module is not the module of the param in case of a function
    # call. In that case it's the module of the function call. It can be
    # stuffed with content from a function call.
pseudo_cls.parent = module
definitions = evaluator.eval_statement(stmt)
it = (evaluator.execute(d) for d in definitions)
# TODO Executing tuples does not make sense, people tend to say
# `(str, int)` in a type annotation, which means that it returns a tuple
# with both types.
# At this point we just return the classes if executing wasn't possible,
# i.e. is a tuple.
return list(chain.from_iterable(it)) or definitions
return list(_execute_types_in_stmt(func_execution_context, stmt))
@memoize_default(None, evaluator_is_first_arg=True)
def find_return_types(evaluator, func):
def _execute_types_in_stmt(module_context, stmt):
"""
Executing all types or general elements that we find in a statement. This
doesn't include tuple, list and dict literals, because the stuff they
contain is executed. (Used as type information).
"""
definitions = module_context.eval_node(stmt)
return ContextSet.from_sets(
_execute_array_values(module_context.evaluator, d)
for d in definitions
)
def _execute_array_values(evaluator, array):
"""
Tuples indicate that there's not just one return value, but the listed
ones. `(str, int)` means that it returns a tuple with both types.
"""
from jedi.evaluate.context.iterable import SequenceLiteralContext, FakeSequence
if isinstance(array, SequenceLiteralContext):
values = []
for lazy_context in array.py__iter__():
objects = ContextSet.from_sets(
_execute_array_values(evaluator, typ)
for typ in lazy_context.infer()
)
values.append(LazyKnownContexts(objects))
return set([FakeSequence(evaluator, array.array_type, values)])
else:
return array.execute_evaluated()
@evaluator_method_cache()
def infer_param(execution_context, param):
from jedi.evaluate.context.instance import AnonymousInstanceFunctionExecution
def eval_docstring(docstring):
return ContextSet.from_iterable(
p
for param_str in _search_param_in_docstr(docstring, param.name.value)
for p in _evaluate_for_statement_string(module_context, param_str)
)
module_context = execution_context.get_root_context()
func = param.get_parent_function()
if func.type == 'lambdef':
return NO_CONTEXTS
types = eval_docstring(execution_context.py__doc__())
if isinstance(execution_context, AnonymousInstanceFunctionExecution) and \
execution_context.function_context.name.string_name == '__init__':
class_context = execution_context.instance.class_context
types |= eval_docstring(class_context.py__doc__())
return types
@evaluator_method_cache()
@iterator_to_context_set
def infer_return_types(function_context):
def search_return_in_docstr(code):
for p in DOCSTRING_RETURN_PATTERNS:
match = p.search(code)
if match:
return _strip_rst_role(match.group(1))
yield _strip_rst_role(match.group(1))
# Check for numpy style return hint
for type_ in _search_return_in_numpydocstr(code):
yield type_
type_str = search_return_in_docstr(func.raw_doc)
return _evaluate_for_statement_string(evaluator, type_str, func.get_parent_until())
for type_str in search_return_in_docstr(function_context.py__doc__()):
for type_eval in _evaluate_for_statement_string(function_context.get_root_context(), type_str):
yield type_eval

@@ -14,140 +14,190 @@ It works as follows:
- |Jedi| sees a param
- search for function calls named ``foo``
- execute these calls and check the input. This work with a ``ParamListener``.
- execute these calls and check the input.
"""
from jedi._compatibility import unicode
from jedi.parser import representation as pr
from parso.python import tree
from jedi import settings
from jedi.evaluate import helpers
from jedi.evaluate.cache import memoize_default
from jedi import debug
from jedi.evaluate.cache import evaluator_function_cache
from jedi.evaluate import imports
# This is something like the sys.path, but only for searching params. It means
# that this is the order in which Jedi searches params.
search_param_modules = ['.']
from jedi.evaluate.arguments import TreeArguments
from jedi.evaluate.param import create_default_params
from jedi.evaluate.helpers import is_stdlib_path
from jedi.evaluate.utils import to_list
from jedi.parser_utils import get_parent_scope
from jedi.evaluate.context import ModuleContext, instance
from jedi.evaluate.base_context import ContextSet
class ParamListener(object):
MAX_PARAM_SEARCHES = 20
class MergedExecutedParams(object):
"""
This listener is used to get the params for a function.
Simulates being a parameter while actually just being multiple params.
"""
def __init__(self):
self.param_possibilities = []
def __init__(self, executed_params):
self._executed_params = executed_params
def execute(self, params):
self.param_possibilities.append(params)
def infer(self):
return ContextSet.from_sets(p.infer() for p in self._executed_params)
@memoize_default([], evaluator_is_first_arg=True)
def search_params(evaluator, param):
@debug.increase_indent
def search_params(evaluator, execution_context, funcdef):
"""
This is a dynamic search for params. If you try to complete a type:
A dynamic search for param values. If you try to complete a type:
>>> def func(foo):
... foo
>>> func(1)
>>> func("")
It is not known what the type is, because it cannot be guessed with
recursive madness. Therefore one has to analyse the statements that are
calling the function, as well as analyzing the incoming params.
    It is not known what the type of ``foo`` is without analysing the whole
    code. You have to look for all calls to ``func`` to find out what ``foo``
    possibly is.
"""
if not settings.dynamic_params:
return []
return create_default_params(execution_context, funcdef)
def get_params_for_module(module):
"""
Returns the values of a param, or an empty array.
"""
@memoize_default([], evaluator_is_first_arg=True)
def get_posibilities(evaluator, module, func_name):
try:
possible_stmts = module.used_names[func_name]
except KeyError:
return []
evaluator.dynamic_params_depth += 1
try:
path = execution_context.get_root_context().py__file__()
if path is not None and is_stdlib_path(path):
# We don't want to search for usages in the stdlib. Usually people
# don't work with it (except if you are a core maintainer, sorry).
# This makes everything slower. Just disable it and run the tests,
# you will see the slowdown, especially in 3.6.
return create_default_params(execution_context, funcdef)
for stmt in possible_stmts:
if isinstance(stmt, pr.Import):
continue
calls = helpers.scan_statement_for_calls(stmt, func_name)
for c in calls:
# no execution means that params cannot be set
call_path = list(c.generate_call_path())
pos = c.start_pos
scope = stmt.parent
debug.dbg('Dynamic param search in %s.', funcdef.name.value, color='MAGENTA')
# this whole stuff is just to not execute certain parts
# (speed improvement), basically we could just call
# ``eval_call_path`` on the call_path and it would
# also work.
def listRightIndex(lst, value):
return len(lst) - lst[-1::-1].index(value) - 1
module_context = execution_context.get_root_context()
function_executions = _search_function_executions(
evaluator,
module_context,
funcdef
)
if function_executions:
zipped_params = zip(*list(
function_execution.get_params()
for function_execution in function_executions
))
params = [MergedExecutedParams(executed_params) for executed_params in zipped_params]
# Evaluate the ExecutedParams to types.
else:
return create_default_params(execution_context, funcdef)
debug.dbg('Dynamic param result finished', color='MAGENTA')
return params
finally:
evaluator.dynamic_params_depth -= 1
# Need to take right index, because there could be a
# func usage before.
call_path_simple = [unicode(d) if isinstance(d, pr.NamePart)
else d for d in call_path]
i = listRightIndex(call_path_simple, func_name)
first, last = call_path[:i], call_path[i + 1:]
if not last and not call_path_simple.index(func_name) != i:
continue
scopes = [scope]
if first:
scopes = evaluator.eval_call_path(iter(first), c.parent, pos)
pos = None
from jedi.evaluate import representation as er
for scope in scopes:
s = evaluator.find_types(scope, func_name, position=pos,
search_global=not first,
resolve_decorator=False)
c = [getattr(escope, 'base_func', None) or escope.base
for escope in s
if escope.isinstance(er.Function, er.Class)]
if compare in c:
# only if we have the correct function we execute
# it, otherwise just ignore it.
evaluator.follow_path(iter(last), s, scope)
return listener.param_possibilities
@evaluator_function_cache(default=None)
@to_list
def _search_function_executions(evaluator, module_context, funcdef):
"""
Returns a list of param names.
"""
func_string_name = funcdef.name.value
compare_node = funcdef
if func_string_name == '__init__':
cls = get_parent_scope(funcdef)
if isinstance(cls, tree.Class):
func_string_name = cls.name.value
compare_node = cls
result = []
for params in get_posibilities(evaluator, module, func_name):
for p in params:
if str(p) == param_name:
result += evaluator.eval_statement(p.parent)
return result
found_executions = False
i = 0
for for_mod_context in imports.get_modules_containing_name(
evaluator, [module_context], func_string_name):
if not isinstance(module_context, ModuleContext):
return
for name, trailer in _get_possible_nodes(for_mod_context, func_string_name):
i += 1
func = param.get_parent_until(pr.Function)
current_module = param.get_parent_until()
func_name = unicode(func.name)
compare = func
if func_name == '__init__' and isinstance(func.parent, pr.Class):
func_name = unicode(func.parent.name)
compare = func.parent
# This is a simple way to stop Jedi's dynamic param recursion
            # from going wild: the deeper Jedi is in the recursion, the less
# code should be evaluated.
if i * evaluator.dynamic_params_depth > MAX_PARAM_SEARCHES:
return
# get the param name
if param.assignment_details:
# first assignment details, others would be a syntax error
expression_list, op = param.assignment_details[0]
else:
expression_list = param.expression_list()
offset = 1 if expression_list[0] in ['*', '**'] else 0
param_name = str(expression_list[offset].name)
random_context = evaluator.create_context(for_mod_context, name)
for function_execution in _check_name_for_execution(
evaluator, random_context, compare_node, name, trailer):
found_executions = True
yield function_execution
# add the listener
listener = ParamListener()
func.listeners.add(listener)
# If there are results after processing a module, we're probably
# good to process. This is a speed optimization.
if found_executions:
return
result = []
# This is like backtracking: Get the first possible result.
for mod in imports.get_modules_containing_name([current_module], func_name):
result = get_params_for_module(mod)
if result:
break
# cleanup: remove the listener; important: should not stick.
func.listeners.remove(listener)
def _get_possible_nodes(module_context, func_string_name):
try:
names = module_context.tree_node.get_used_names()[func_string_name]
except KeyError:
return
return result
for name in names:
bracket = name.get_next_leaf()
trailer = bracket.parent
if trailer.type == 'trailer' and bracket == '(':
yield name, trailer
def _check_name_for_execution(evaluator, context, compare_node, name, trailer):
from jedi.evaluate.context.function import FunctionExecutionContext
def create_func_excs():
arglist = trailer.children[1]
if arglist == ')':
arglist = ()
args = TreeArguments(evaluator, context, arglist, trailer)
if value_node.type == 'funcdef':
yield value.get_function_execution(args)
else:
created_instance = instance.TreeInstance(
evaluator,
value.parent_context,
value,
args
)
for execution in created_instance.create_init_executions():
yield execution
for value in evaluator.goto_definitions(context, name):
value_node = value.tree_node
if compare_node == value_node:
for func_execution in create_func_excs():
yield func_execution
elif isinstance(value.parent_context, FunctionExecutionContext) and \
compare_node.type == 'funcdef':
# Here we're trying to find decorators by checking the first
# parameter. It's not very generic though. Should find a better
# solution that also applies to nested decorators.
params = value.parent_context.get_params()
if len(params) != 1:
continue
values = params[0].infer()
nodes = [v.tree_node for v in values]
if nodes == [compare_node]:
# Found a decorator.
module_context = context.get_root_context()
execution_context = next(create_func_excs())
for name, trailer in _get_possible_nodes(module_context, params[0].string_name):
if value_node.start_pos < name.start_pos < value_node.end_pos:
random_context = evaluator.create_context(execution_context, name)
iterator = _check_name_for_execution(
evaluator,
random_context,
compare_node,
name,
trailer
)
for function_execution in iterator:
yield function_execution

jedi/evaluate/filters.py
@@ -0,0 +1,434 @@
"""
Filters are objects that you can use to filter names in different scopes. They
are needed for name resolution.
"""
from abc import abstractmethod
from parso.tree import search_ancestor
from jedi._compatibility import is_py3
from jedi.evaluate import flow_analysis
from jedi.evaluate.base_context import ContextSet, Context
from jedi.parser_utils import get_parent_scope
from jedi.evaluate.utils import to_list
class AbstractNameDefinition(object):
start_pos = None
string_name = None
parent_context = None
tree_name = None
@abstractmethod
def infer(self):
raise NotImplementedError
@abstractmethod
def goto(self):
# Typically names are already definitions and therefore a goto on that
# name will always result on itself.
return set([self])
def get_root_context(self):
return self.parent_context.get_root_context()
def __repr__(self):
if self.start_pos is None:
return '<%s: %s>' % (self.__class__.__name__, self.string_name)
return '<%s: %s@%s>' % (self.__class__.__name__, self.string_name, self.start_pos)
def execute(self, arguments):
return self.infer().execute(arguments)
def execute_evaluated(self, *args, **kwargs):
return self.infer().execute_evaluated(*args, **kwargs)
@property
def api_type(self):
return self.parent_context.api_type
class AbstractTreeName(AbstractNameDefinition):
def __init__(self, parent_context, tree_name):
self.parent_context = parent_context
self.tree_name = tree_name
def goto(self):
return self.parent_context.evaluator.goto(self.parent_context, self.tree_name)
@property
def string_name(self):
return self.tree_name.value
@property
def start_pos(self):
return self.tree_name.start_pos
class ContextNameMixin(object):
def infer(self):
return ContextSet(self._context)
def get_root_context(self):
if self.parent_context is None:
return self._context
return super(ContextNameMixin, self).get_root_context()
@property
def api_type(self):
return self._context.api_type
class ContextName(ContextNameMixin, AbstractTreeName):
def __init__(self, context, tree_name):
super(ContextName, self).__init__(context.parent_context, tree_name)
self._context = context
class TreeNameDefinition(AbstractTreeName):
_API_TYPES = dict(
import_name='module',
import_from='module',
funcdef='function',
param='param',
classdef='class',
)
def infer(self):
# Refactor this, should probably be here.
from jedi.evaluate.syntax_tree import tree_name_to_contexts
return tree_name_to_contexts(self.parent_context.evaluator, self.parent_context, self.tree_name)
@property
def api_type(self):
definition = self.tree_name.get_definition(import_name_always=True)
if definition is None:
return 'statement'
return self._API_TYPES.get(definition.type, 'statement')
class ParamName(AbstractTreeName):
api_type = 'param'
def __init__(self, parent_context, tree_name):
self.parent_context = parent_context
self.tree_name = tree_name
def infer(self):
return self.get_param().infer()
def get_param(self):
params = self.parent_context.get_params()
param_node = search_ancestor(self.tree_name, 'param')
return params[param_node.position_index]
class AnonymousInstanceParamName(ParamName):
def infer(self):
param_node = search_ancestor(self.tree_name, 'param')
        # TODO I think this doesn't belong here. It's not even really true,
        # because classmethod and other descriptors can change it.
if param_node.position_index == 0:
# This is a speed optimization, to return the self param (because
# it's known). This only affects anonymous instances.
return ContextSet(self.parent_context.instance)
else:
return self.get_param().infer()
class AbstractFilter(object):
_until_position = None
def _filter(self, names):
if self._until_position is not None:
return [n for n in names if n.start_pos < self._until_position]
return names
@abstractmethod
def get(self, name):
raise NotImplementedError
@abstractmethod
def values(self):
raise NotImplementedError
class AbstractUsedNamesFilter(AbstractFilter):
name_class = TreeNameDefinition
def __init__(self, context, parser_scope):
self._parser_scope = parser_scope
self._used_names = self._parser_scope.get_root_node().get_used_names()
self.context = context
def get(self, name):
try:
names = self._used_names[str(name)]
except KeyError:
return []
return self._convert_names(self._filter(names))
def _convert_names(self, names):
return [self.name_class(self.context, name) for name in names]
def values(self):
return self._convert_names(name for name_list in self._used_names.values()
for name in self._filter(name_list))
def __repr__(self):
return '<%s: %s>' % (self.__class__.__name__, self.context)
class ParserTreeFilter(AbstractUsedNamesFilter):
def __init__(self, evaluator, context, node_context=None, until_position=None,
origin_scope=None):
"""
node_context is an option to specify a second context for use cases
like the class mro where the parent class of a new name would be the
context, but for some type inference it's important to have a local
context of the other classes.
"""
if node_context is None:
node_context = context
super(ParserTreeFilter, self).__init__(context, node_context.tree_node)
self._node_context = node_context
self._origin_scope = origin_scope
self._until_position = until_position
def _filter(self, names):
names = super(ParserTreeFilter, self)._filter(names)
names = [n for n in names if self._is_name_reachable(n)]
return list(self._check_flows(names))
def _is_name_reachable(self, name):
if not name.is_definition():
return False
parent = name.parent
if parent.type == 'trailer':
return False
base_node = parent if parent.type in ('classdef', 'funcdef') else name
return get_parent_scope(base_node) == self._parser_scope
def _check_flows(self, names):
for name in sorted(names, key=lambda name: name.start_pos, reverse=True):
check = flow_analysis.reachability_check(
self._node_context, self._parser_scope, name, self._origin_scope
)
if check is not flow_analysis.UNREACHABLE:
yield name
if check is flow_analysis.REACHABLE:
break
class FunctionExecutionFilter(ParserTreeFilter):
param_name = ParamName
def __init__(self, evaluator, context, node_context=None,
until_position=None, origin_scope=None):
super(FunctionExecutionFilter, self).__init__(
evaluator,
context,
node_context,
until_position,
origin_scope
)
@to_list
def _convert_names(self, names):
for name in names:
param = search_ancestor(name, 'param')
if param:
yield self.param_name(self.context, name)
else:
yield TreeNameDefinition(self.context, name)
class AnonymousInstanceFunctionExecutionFilter(FunctionExecutionFilter):
param_name = AnonymousInstanceParamName
class GlobalNameFilter(AbstractUsedNamesFilter):
def __init__(self, context, parser_scope):
super(GlobalNameFilter, self).__init__(context, parser_scope)
@to_list
def _filter(self, names):
for name in names:
if name.parent.type == 'global_stmt':
yield name
class DictFilter(AbstractFilter):
def __init__(self, dct):
self._dct = dct
def get(self, name):
try:
value = self._convert(name, self._dct[str(name)])
except KeyError:
return []
return list(self._filter([value]))
def values(self):
return self._filter(self._convert(*item) for item in self._dct.items())
def _convert(self, name, value):
return value
class _BuiltinMappedMethod(Context):
"""``Generator.__next__`` ``dict.values`` methods and so on."""
api_type = 'function'
def __init__(self, builtin_context, method, builtin_func):
super(_BuiltinMappedMethod, self).__init__(
builtin_context.evaluator,
parent_context=builtin_context
)
self._method = method
self._builtin_func = builtin_func
def py__call__(self, params):
return self._method(self.parent_context)
def __getattr__(self, name):
return getattr(self._builtin_func, name)
class SpecialMethodFilter(DictFilter):
"""
A filter for methods that are defined in this module on the corresponding
classes like Generator (for __next__, etc).
"""
class SpecialMethodName(AbstractNameDefinition):
api_type = 'function'
def __init__(self, parent_context, string_name, callable_, builtin_context):
self.parent_context = parent_context
self.string_name = string_name
self._callable = callable_
self._builtin_context = builtin_context
def infer(self):
filter = next(self._builtin_context.get_filters())
# We can take the first index, because on builtin methods there's
# always only going to be one name. The same is true for the
# inferred values.
builtin_func = next(iter(filter.get(self.string_name)[0].infer()))
return ContextSet(_BuiltinMappedMethod(self.parent_context, self._callable, builtin_func))
def __init__(self, context, dct, builtin_context):
super(SpecialMethodFilter, self).__init__(dct)
self.context = context
self._builtin_context = builtin_context
"""
This context is what will be used to introspect the name, whereas the
other context will be used to execute the function.
We distinguish, because we have to.
"""
def _convert(self, name, value):
return self.SpecialMethodName(self.context, name, value, self._builtin_context)
def has_builtin_methods(cls):
base_dct = {}
# Inheritance has to be handled properly here: builtin methods must not
# get lost just because they are not mentioned in a subclass.
for base_cls in reversed(cls.__bases__):
try:
base_dct.update(base_cls.builtin_methods)
except AttributeError:
pass
cls.builtin_methods = base_dct
for func in cls.__dict__.values():
try:
cls.builtin_methods.update(func.registered_builtin_methods)
except AttributeError:
pass
return cls
def register_builtin_method(method_name, python_version_match=None):
def wrapper(func):
if python_version_match and python_version_match != 2 + int(is_py3):
# Some functions do only apply to certain versions.
return func
dct = func.__dict__.setdefault('registered_builtin_methods', {})
dct[method_name] = func
return func
return wrapper
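# Illustrative sketch (not part of the original file): the two helpers above
# form a registration-decorator pair -- methods tag themselves via
# ``register_builtin_method`` and the class decorator collects the tags,
# including those inherited from base classes. A standalone miniature of the
# same pattern:

def _register(name):
    def wrapper(func):
        func.__dict__.setdefault('_registered', {})[name] = func
        return func
    return wrapper

def _collect(cls):
    table = {}
    for base in reversed(cls.__bases__):
        table.update(getattr(base, 'table', {}))
    cls.table = table
    for func in cls.__dict__.values():
        table.update(getattr(func, '_registered', {}))
    return cls

@_collect
class _Base(object):
    @_register('values')
    def _values(self):
        return 'base values'

@_collect
class _Child(_Base):
    @_register('items')
    def _items(self):
        return 'child items'

assert set(_Child.table) == {'values', 'items'}  # 'values' survives inheritance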
def get_global_filters(evaluator, context, until_position, origin_scope):
"""
Returns all filters in order of priority for global name lookups. The
filters will handle name resolution themselves, but here we gather the
possible filters downwards.
>>> from jedi._compatibility import u, no_unicode_pprint
>>> from jedi import Script
>>> script = Script(u('''
... x = ['a', 'b', 'c']
... def func():
... y = None
... '''))
>>> module_node = script._get_module_node()
>>> scope = next(module_node.iter_funcdefs())
>>> scope
<Function: func@3-5>
>>> context = script._get_module().create_context(scope)
>>> filters = list(get_global_filters(context.evaluator, context, (4, 0), None))
First we get the names from the function scope.
>>> no_unicode_pprint(filters[0])
<ParserTreeFilter: <ModuleContext: @2-5>>
>>> sorted(str(n) for n in filters[0].values())
['<TreeNameDefinition: func@(3, 4)>', '<TreeNameDefinition: x@(2, 0)>']
>>> filters[0]._until_position
(4, 0)
Then it yields the names from one level "lower". In this example, this is
the module scope. As a side note, you can see that the position in the
filter is now None, because typically the whole module is loaded before the
function is called.
>>> filters[1].values() # global names -> there are none in our example.
[]
>>> list(filters[2].values()) # package modules -> Also empty.
[]
>>> sorted(name.string_name for name in filters[3].values()) # Module attributes
['__doc__', '__file__', '__name__', '__package__']
>>> print(filters[1]._until_position)
None
Finally, it yields the builtin filter; builtins are always included.
>>> filters[4].values() #doctest: +ELLIPSIS
[<CompiledName: ...>, ...]
"""
from jedi.evaluate.context.function import FunctionExecutionContext
while context is not None:
# Names in methods cannot be resolved within the class.
for filter in context.get_filters(
search_global=True,
until_position=until_position,
origin_scope=origin_scope):
yield filter
if isinstance(context, FunctionExecutionContext):
# The position should be reset if the current scope is a function.
until_position = None
context = context.parent_context
# Add builtins to the global scope.
for filter in evaluator.BUILTINS.get_filters(search_global=True):
yield filter
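# Illustrative sketch (not part of the original file): the core of the walk
# above -- climb the parent contexts, and once a function execution has been
# crossed, stop filtering by position, because a module is fully loaded
# before its functions run. A standalone model using plain dicts as
# "contexts":

def _sketch_global_walk(context, until_position):
    while context is not None:
        yield context['name'], until_position
        if context['is_function']:
            until_position = None
        context = context['parent']

_module = {'name': 'module', 'is_function': False, 'parent': None}
_func = {'name': 'func', 'is_function': True, 'parent': _module}
assert list(_sketch_global_walk(_func, (4, 0))) == [('func', (4, 0)),
                                                    ('module', None)]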


@@ -1,8 +1,11 @@
"""
Searcjing for names with given scope and name. This is very central in Jedi and
Searching for names with given scope and name. This is very central in Jedi and
Python. The name resolution is quite complicated with descriptors,
``__getattribute__``, ``__getattr__``, ``global``, etc.
If you want to understand name resolution, please read the first few chapters
in http://blog.ionelmc.ro/2015/02/09/understanding-python-metaclasses/.
Flow checks
+++++++++++
@@ -11,342 +14,172 @@ would check whether a flow has the form of ``if isinstance(a, type_or_tuple)``.
Unfortunately almost everything else is ignored (e.g. a == '' would be easy to
check for -> a is a string). There's big potential in these checks.
"""
from itertools import chain
from jedi._compatibility import hasattr, unicode, u
from jedi.parser import representation as pr, tokenize
from jedi.parser import fast
from parso.python import tree
from parso.tree import search_ancestor
from jedi import debug
from jedi import common
from jedi import settings
from jedi.evaluate import representation as er
from jedi.evaluate import dynamic
from jedi.evaluate.context import AbstractInstanceContext
from jedi.evaluate import compiled
from jedi.evaluate import docstrings
from jedi.evaluate import iterable
from jedi.evaluate import imports
from jedi.evaluate import analysis
from jedi.evaluate import precedence
from jedi.evaluate import flow_analysis
from jedi.evaluate.arguments import TreeArguments
from jedi.evaluate import helpers
from jedi.evaluate.context import iterable
from jedi.evaluate.filters import get_global_filters, TreeNameDefinition
from jedi.evaluate.base_context import ContextSet
from jedi.parser_utils import is_scope, get_parent_scope
class NameFinder(object):
def __init__(self, evaluator, scope, name_str, position=None):
def __init__(self, evaluator, context, name_context, name_or_str,
position=None, analysis_errors=True):
self._evaluator = evaluator
self.scope = scope
self.name_str = name_str
self.position = position
# Make sure that it's not just a syntax tree node.
self._context = context
self._name_context = name_context
self._name = name_or_str
if isinstance(name_or_str, tree.Name):
self._string_name = name_or_str.value
else:
self._string_name = name_or_str
self._position = position
self._found_predefined_types = None
self._analysis_errors = analysis_errors
@debug.increase_indent
def find(self, scopes, resolve_decorator=True, search_global=False):
if unicode(self.name_str) == 'None':
# Filter None, because it's really just a keyword, nobody wants to
# access it.
return []
def find(self, filters, attribute_lookup):
"""
:param bool attribute_lookup: Tells the logic whether we're accessing an
attribute or the contents of e.g. a function.
"""
names = self.filter_name(filters)
if self._found_predefined_types is not None and names:
check = flow_analysis.reachability_check(
self._context, self._context.tree_node, self._name)
if check is flow_analysis.UNREACHABLE:
return ContextSet()
return self._found_predefined_types
names = self.filter_name(scopes)
types = self._names_to_types(names, resolve_decorator)
types = self._names_to_types(names, attribute_lookup)
if not names and not types \
and not (isinstance(self.name_str, pr.NamePart)
and isinstance(self.name_str.parent.parent, pr.Param)):
if not isinstance(self.name_str, (str, unicode)): # TODO Remove?
if search_global:
message = ("NameError: name '%s' is not defined."
% self.name_str)
analysis.add(self._evaluator, 'name-error', self.name_str,
message)
if not names and self._analysis_errors and not types \
and not (isinstance(self._name, tree.Name) and
isinstance(self._name.parent.parent, tree.Param)):
if isinstance(self._name, tree.Name):
if attribute_lookup:
analysis.add_attribute_error(
self._name_context, self._context, self._name)
else:
analysis.add_attribute_error(self._evaluator,
self.scope, self.name_str)
message = ("NameError: name '%s' is not defined."
% self._string_name)
analysis.add(self._name_context, 'name-error', self._name, message)
debug.dbg('finder._names_to_types: %s -> %s', names, types)
return self._resolve_descriptors(types)
return types
def scopes(self, search_global=False):
if search_global:
return get_names_of_scope(self._evaluator, self.scope, self.position)
def _get_origin_scope(self):
if isinstance(self._name, tree.Name):
scope = self._name
while scope.parent is not None:
# TODO why if classes?
if not isinstance(scope, tree.Scope):
break
scope = scope.parent
return scope
else:
return self.scope.scope_names_generator(self.position)
return None
def filter_name(self, scope_names_generator):
def get_filters(self, search_global=False):
origin_scope = self._get_origin_scope()
if search_global:
return get_global_filters(self._evaluator, self._context, self._position, origin_scope)
else:
return self._context.get_filters(search_global, self._position, origin_scope=origin_scope)
def filter_name(self, filters):
"""
Filters all variables of a scope (which are defined in the
`scope_names_generator`), until the name fits.
Searches names that are defined in a scope (the different
``filters``), until a name fits.
"""
result = []
for name_list_scope, name_list in scope_names_generator:
break_scopes = []
if not isinstance(name_list_scope, compiled.CompiledObject):
# Here is the position stuff happening (sorting of variables).
# Compiled objects don't need that, because there's only one
# reference.
name_list = sorted(name_list, key=lambda n: n.start_pos, reverse=True)
for name in name_list:
if unicode(self.name_str) != name.get_code():
continue
scope = name.parent.parent
if scope in break_scopes:
continue
# Exclude `arr[1] =` from the result set.
if not self._name_is_array_assignment(name):
result.append(name)
if result and self._is_name_break_scope(name):
if self._does_scope_break_immediately(scope, name_list_scope):
break
names = []
if self._context.predefined_names:
# TODO is this ok? node might not always be a tree.Name
node = self._name
while node is not None and not is_scope(node):
node = node.parent
if node.type in ("if_stmt", "for_stmt", "comp_for"):
try:
name_dict = self._context.predefined_names[node]
types = name_dict[self._string_name]
except KeyError:
continue
else:
break_scopes.append(scope)
if result:
self._found_predefined_types = types
break
for filter in filters:
names = filter.get(self._string_name)
if names:
if len(names) == 1:
n, = names
if isinstance(n, TreeNameDefinition):
# Something somewhere went terribly wrong. This
# typically happens when using goto on an import in an
# __init__ file. I think we need a better solution, but
# it's kind of hard, because it's not clear to Jedi
# that the name has not been defined yet.
if n.tree_name == self._name:
if self._name.get_definition().type == 'import_from':
continue
break
scope_txt = (self.scope if self.scope == name_list_scope
else '%s-%s' % (self.scope, name_list_scope))
debug.dbg('finder.filter_name "%s" in (%s): %s@%s', self.name_str,
scope_txt, u(result), self.position)
return result
debug.dbg('finder.filter_name "%s" in (%s): %s@%s', self._string_name,
self._context, names, self._position)
return list(names)
def _check_getattr(self, inst):
"""Checks for both __getattr__ and __getattribute__ methods"""
result = []
# str is important to lose the NamePart!
name = compiled.create(self._evaluator, str(self.name_str))
with common.ignored(KeyError):
result = inst.execute_subscope_by_name('__getattr__', [name])
if not result:
# this is a little bit special. `__getattribute__` is executed
# before anything else. But: I know no use case, where this
# could be practical and the jedi would return wrong types. If
# you ever have something, let me know!
with common.ignored(KeyError):
result = inst.execute_subscope_by_name('__getattribute__', [name])
return result
# str is important, because it shouldn't be `Name`!
name = compiled.create(self._evaluator, self._string_name)
def _is_name_break_scope(self, name):
"""
Returns True except for nested imports and instance variables.
"""
par = name.parent
if par.isinstance(pr.Statement):
if isinstance(name, er.InstanceElement) and not name.is_class_var:
return False
elif isinstance(par, pr.Import) and par.is_nested():
return False
return True
# This is a little bit special. `__getattribute__` is in Python
# executed before `__getattr__`. But: I know no use case, where
# this could be practical and where Jedi would return wrong types.
# If you ever find something, let me know!
# We are inverting this, because a hand-crafted `__getattribute__`
# could still call another hand-crafted `__getattr__`, but not the
# other way around.
names = (inst.get_function_slot_names('__getattr__') or
inst.get_function_slot_names('__getattribute__'))
return inst.execute_function_slots(names, name)
def _does_scope_break_immediately(self, scope, name_list_scope):
"""
In comparison to everthing else, if/while/etc doesn't break directly,
because there are multiple different places in which a variable can be
defined.
"""
if isinstance(scope, pr.Flow) \
or isinstance(scope, pr.KeywordStatement) and scope.name == 'global':
def _names_to_types(self, names, attribute_lookup):
contexts = ContextSet.from_sets(name.infer() for name in names)
# Check for `if foo is not None`, because Jedi is not interested in
# None values, so this is the only branch we actually care about.
# ATM it carries the same issue as the isinstance checks. It
# doesn't work with instance variables (self.foo).
if isinstance(scope, pr.Flow) and scope.command in ('if', 'while'):
try:
expression_list = scope.inputs[0].expression_list()
except IndexError:
pass
else:
p = precedence.create_precedence(expression_list)
if (isinstance(p, precedence.Precedence)
and p.operator.string == 'is not'
and p.right.get_code() == 'None'
and p.left.get_code() == unicode(self.name_str)):
return True
if isinstance(name_list_scope, er.Class):
name_list_scope = name_list_scope.base
return scope == name_list_scope
else:
return True
def _name_is_array_assignment(self, name):
if name.parent.isinstance(pr.Statement):
def is_execution(calls):
for c in calls:
if isinstance(c, (unicode, str, tokenize.Token)):
continue
if c.isinstance(pr.Array):
if is_execution(c):
return True
elif c.isinstance(pr.Call):
# Compare start_pos, because names may be different
# because of executions.
if c.name.start_pos == name.start_pos \
and c.execution:
return True
return False
is_exe = False
for assignee, op in name.parent.assignment_details:
is_exe |= is_execution(assignee)
if is_exe:
# filter array[3] = ...
# TODO check executions for dict contents
return True
return False
def _names_to_types(self, names, resolve_decorator):
types = []
# Add isinstance and other if/assert knowledge.
flow_scope = self.scope
evaluator = self._evaluator
while flow_scope:
# TODO check if result is in scope -> no evaluation necessary
n = check_flow_information(evaluator, flow_scope,
self.name_str, self.position)
if n:
return n
flow_scope = flow_scope.parent
for name in names:
typ = name.parent
if typ.isinstance(pr.ForFlow):
types += self._handle_for_loops(typ)
elif isinstance(typ, pr.Param):
types += self._eval_param(typ)
elif typ.isinstance(pr.Statement):
if typ.is_global():
# global keyword handling.
types += evaluator.find_types(typ.parent.parent, str(name))
else:
types += self._remove_statements(typ, name)
else:
if isinstance(typ, pr.Class):
typ = er.Class(evaluator, typ)
elif isinstance(typ, pr.Function):
typ = er.Function(evaluator, typ)
elif isinstance(typ, pr.Module):
typ = er.ModuleWrapper(evaluator, typ)
if typ.isinstance(er.Function) and resolve_decorator:
typ = typ.get_decorated_func()
types.append(typ)
if not names and isinstance(self.scope, er.Instance):
debug.dbg('finder._names_to_types: %s -> %s', names, contexts)
if not names and isinstance(self._context, AbstractInstanceContext):
# handling __getattr__ / __getattribute__
types = self._check_getattr(self.scope)
return self._check_getattr(self._context)
return types
def _remove_statements(self, stmt, name):
"""
This is the part where statements are being stripped.
Due to lazy evaluation, statements like a = func; b = a; b() have to be
evaluated.
"""
evaluator = self._evaluator
types = []
# Remove the statement docstr stuff for now, that has to be
# implemented with the evaluator class.
#if stmt.docstr:
#res_new.append(stmt)
check_instance = None
if isinstance(stmt, er.InstanceElement) and stmt.is_class_var:
check_instance = stmt.instance
stmt = stmt.var
types += evaluator.eval_statement(stmt, seek_name=unicode(self.name_str))
# check for `except X as y` usages, because y needs to be instantiated.
p = stmt.parent
# TODO this looks really hacky, improve parser representation!
if isinstance(p, pr.Flow) and p.command == 'except' \
and p.inputs and p.inputs[0].as_names == [name]:
# TODO check for types that are not classes and add it to the
# static analysis report.
types = list(chain.from_iterable(
evaluator.execute(t) for t in types))
if check_instance is not None:
# class renames
types = [er.InstanceElement(evaluator, check_instance, a, True)
if isinstance(a, (er.Function, pr.Function))
else a for a in types]
return types
def _eval_param(self, param):
evaluator = self._evaluator
res_new = []
func = param.parent
cls = func.parent.get_parent_until((pr.Class, pr.Function))
from jedi.evaluate.param import ExecutedParam
if isinstance(cls, pr.Class) and param.position_nr == 0 \
and not isinstance(param, ExecutedParam):
# This is where we add self - if it has never been
# instantiated.
if isinstance(self.scope, er.InstanceElement):
res_new.append(self.scope.instance)
else:
for inst in evaluator.execute(er.Class(evaluator, cls)):
inst.is_generated = True
res_new.append(inst)
return res_new
# Instances are typically faked, if the instance is not called from
# outside. Here we check it for __init__ functions and return.
if isinstance(func, er.InstanceElement) \
and func.instance.is_generated and str(func.name) == '__init__':
param = func.var.params[param.position_nr]
# Add docstring knowledge.
doc_params = docstrings.follow_param(evaluator, param)
if doc_params:
return doc_params
if not param.is_generated:
# Param owns no information itself.
res_new += dynamic.search_params(evaluator, param)
if not res_new:
if param.stars:
t = 'tuple' if param.stars == 1 else 'dict'
typ = evaluator.find_types(compiled.builtin, t)[0]
res_new = evaluator.execute(typ)
if not param.assignment_details:
# this means that there are no default params,
# so just ignore it.
return res_new
return res_new + evaluator.eval_statement(param, seek_name=unicode(self.name_str))
def _handle_for_loops(self, loop):
# Take the first statement (for has always only one`in`).
if not loop.inputs:
return []
result = iterable.get_iterator_types(self._evaluator.eval_statement(loop.inputs[0]))
if len(loop.set_vars) > 1:
expression_list = loop.set_stmt.expression_list()
# loops with loop.set_vars > 0 only have one command
result = _assign_tuples(expression_list[0], result, unicode(self.name_str))
return result
def _resolve_descriptors(self, types):
"""Processes descriptors"""
result = []
for r in types:
if isinstance(self.scope, (er.Instance, er.Class)) \
and hasattr(r, 'get_descriptor_return'):
# handle descriptors
with common.ignored(KeyError):
result += r.get_descriptor_return(self.scope)
continue
result.append(r)
return result
# Add isinstance and other if/assert knowledge.
if not contexts and isinstance(self._name, tree.Name) and \
not isinstance(self._name_context, AbstractInstanceContext):
flow_scope = self._name
base_node = self._name_context.tree_node
if base_node.type == 'comp_for':
return contexts
while True:
flow_scope = get_parent_scope(flow_scope, include_flows=True)
n = _check_flow_information(self._name_context, flow_scope,
self._name, self._position)
if n is not None:
return n
if flow_scope == base_node:
break
return contexts
def check_flow_information(evaluator, flow, search_name_part, pos):
def _check_flow_information(context, flow, search_name, pos):
""" Try to find out the type of a variable just with the information that
is given by the flows: e.g. It is also responsible for assert checks.::
@@ -358,210 +191,68 @@ def check_flow_information(evaluator, flow, search_name_part, pos):
if not settings.dynamic_flow_information:
return None
result = []
if isinstance(flow, pr.IsScope) and not result:
for ass in reversed(flow.asserts):
if pos is None or ass.start_pos > pos:
continue
result = _check_isinstance_type(evaluator, ass, search_name_part)
if result:
break
result = None
if is_scope(flow):
# Check for asserts.
module_node = flow.get_root_node()
try:
names = module_node.get_used_names()[search_name.value]
except KeyError:
return None
names = reversed([
n for n in names
if flow.start_pos <= n.start_pos < (pos or flow.end_pos)
])
if isinstance(flow, pr.Flow) and not result:
if flow.command in ['if', 'while'] and len(flow.inputs) == 1:
result = _check_isinstance_type(evaluator, flow.inputs[0], search_name_part)
for name in names:
ass = search_ancestor(name, 'assert_stmt')
if ass is not None:
result = _check_isinstance_type(context, ass.assertion, search_name)
if result is not None:
return result
if flow.type in ('if_stmt', 'while_stmt'):
potential_ifs = [c for c in flow.children[1::4] if c != ':']
for if_test in reversed(potential_ifs):
if search_name.start_pos > if_test.end_pos:
return _check_isinstance_type(context, if_test, search_name)
return result
def _check_isinstance_type(evaluator, stmt, search_name_part):
def _check_isinstance_type(context, element, search_name):
try:
expression_list = stmt.expression_list()
assert element.type in ('power', 'atom_expr')
# this might be removed if we analyze and, etc
assert len(expression_list) == 1
call = expression_list[0]
assert isinstance(call, pr.Call) and str(call.name) == 'isinstance'
assert bool(call.execution)
assert len(element.children) == 2
first, trailer = element.children
assert first.type == 'name' and first.value == 'isinstance'
assert trailer.type == 'trailer' and trailer.children[0] == '('
assert len(trailer.children) == 3
# isinstance check
isinst = call.execution.values
assert len(isinst) == 2 # has two params
obj, classes = [statement.expression_list() for statement in isinst]
assert len(obj) == 1
assert len(classes) == 1
assert isinstance(obj[0], pr.Call)
# names fit?
assert unicode(obj[0].name) == unicode(search_name_part)
assert isinstance(classes[0], pr.StatementElement) # can be type or tuple
# arglist stuff
arglist = trailer.children[1]
args = TreeArguments(context.evaluator, context, arglist, trailer)
param_list = list(args.unpack())
# Disallow keyword arguments
assert len(param_list) == 2
(key1, lazy_context_object), (key2, lazy_context_cls) = param_list
assert key1 is None and key2 is None
call = helpers.call_of_leaf(search_name)
is_instance_call = helpers.call_of_leaf(lazy_context_object.data)
# Do a simple get_code comparison. They should just have the same code,
# and everything will be all right.
normalize = context.evaluator.grammar._normalize
assert normalize(is_instance_call) == normalize(call)
except AssertionError:
return []
return None
result = []
for c in evaluator.eval_call(classes[0]):
for typ in (c.values() if isinstance(c, iterable.Array) else [c]):
result += evaluator.execute(typ)
return result
def get_names_of_scope(evaluator, scope, position=None, star_search=True, include_builtin=True):
"""
Get all completions (names) possible for the current scope. The star search
option is only here to provide an optimization. Otherwise the whole thing
would probably start a little recursive madness.
This function is used to include names from outer scopes. For example, when
the current scope is function:
>>> from jedi._compatibility import u
>>> from jedi.parser import Parser
>>> parser = Parser(u('''
... x = ['a', 'b', 'c']
... def func():
... y = None
... '''))
>>> scope = parser.module.subscopes[0]
>>> scope
<Function: func@3-5>
`get_names_of_scope` is a generator. First it yields names from most inner
scope.
>>> from jedi.evaluate import Evaluator
>>> pairs = list(get_names_of_scope(Evaluator(), scope))
>>> pairs[0]
(<Function: func@3-5>, [<Name: y@4,4>])
Then it yield the names from one level outer scope. For this example, this
is the most outer scope.
>>> pairs[1]
(<ModuleWrapper: <SubModule: None@1-5>>, [<Name: x@2,0>, <Name: func@3,4>])
After that we have a few underscore names that have been defined
>>> pairs[2]
(<ModuleWrapper: <SubModule: None@1-5>>, [<FakeName: __file__@0,0>, ...])
Finally, it yields names from builtin, if `include_builtin` is
true (default).
>>> pairs[3] #doctest: +ELLIPSIS
(<Builtin: ...builtin...>, [<CompiledName: ...>, ...])
:rtype: [(pr.Scope, [pr.Name])]
:return: Return an generator that yields a pair of scope and names.
"""
if isinstance(scope, pr.ListComprehension):
position = scope.parent.start_pos
in_func_scope = scope
non_flow = scope.get_parent_until(pr.Flow, reverse=True)
while scope:
# We don't want submodules to report if we have modules.
# As well as some non-scopes, which are parents of list comprehensions.
if isinstance(scope, pr.SubModule) and scope.parent or not scope.is_scope():
scope = scope.parent
continue
# `pr.Class` is used, because the parent is never `Class`.
# Ignore the Flows, because the classes and functions care for that.
# InstanceElement of Class is ignored, if it is not the start scope.
if not (scope != non_flow and scope.isinstance(pr.Class)
or scope.isinstance(pr.Flow)
or scope.isinstance(er.Instance)
and non_flow.isinstance(er.Function)
or isinstance(scope, compiled.CompiledObject)
and scope.type() == 'class' and in_func_scope != scope):
if isinstance(scope, (pr.SubModule, fast.Module)):
scope = er.ModuleWrapper(evaluator, scope)
for g in scope.scope_names_generator(position):
yield g
if scope.isinstance(pr.ListComprehension):
# is a list comprehension
yield scope, scope.get_defined_names(is_internal_call=True)
scope = scope.parent
# This is used, because subscopes (Flow scopes) would distort the
# results.
if scope and scope.isinstance(er.Function, pr.Function, er.FunctionExecution):
in_func_scope = scope
if in_func_scope != scope \
and isinstance(in_func_scope, (pr.Function, er.FunctionExecution)):
position = None
# Add star imports.
if star_search:
for s in imports.remove_star_imports(evaluator, non_flow.get_parent_until()):
for g in get_names_of_scope(evaluator, s, star_search=False):
yield g
# Add builtins to the global scope.
if include_builtin:
yield compiled.builtin, compiled.builtin.get_defined_names()
def _assign_tuples(tup, results, seek_name):
"""
This is a normal assignment checker. In python functions and other things
can return tuples:
>>> a, b = 1, ""
>>> a, (b, c) = 1, ("", 1.0)
Here, if `seek_name` is "a", the number type will be returned.
The first part (before `=`) is the param tuples, the second one result.
:type tup: pr.Array
"""
def eval_results(index):
types = []
for r in results:
try:
func = r.get_exact_index_types
except AttributeError:
debug.warning("invalid tuple lookup %s of result %s in %s",
tup, results, seek_name)
else:
with common.ignored(IndexError):
types += func(index)
return types
result = []
for i, stmt in enumerate(tup):
# Used in assignments. There is just one call and no other things,
# therefore we can just assume, that the first part is important.
command = stmt.expression_list()[0]
if tup.type == pr.Array.NOARRAY:
# unnessecary braces -> just remove.
r = results
context_set = ContextSet()
for cls_or_tup in lazy_context_cls.infer():
if isinstance(cls_or_tup, iterable.AbstractIterable) and \
cls_or_tup.array_type == 'tuple':
for lazy_context in cls_or_tup.py__iter__():
for context in lazy_context.infer():
context_set |= context.execute_evaluated()
else:
r = eval_results(i)
# LHS of tuples can be nested, so resolve it recursively
result += find_assignments(command, r, seek_name)
return result
def find_assignments(lhs, results, seek_name):
"""
Check if `seek_name` is in the left hand side `lhs` of assignment.
`lhs` can simply be a variable (`pr.Call`) or a tuple/list (`pr.Array`)
representing the following cases::
a = 1 # lhs is pr.Call
(a, b) = 2 # lhs is pr.Array
:type lhs: pr.Call
:type results: list
:type seek_name: str
"""
if isinstance(lhs, pr.Array):
return _assign_tuples(lhs, results, seek_name)
elif unicode(lhs.name.names[-1]) == seek_name:
return results
else:
return []
context_set |= cls_or_tup.execute_evaluated()
return context_set
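# Illustrative sketch (not part of the original file): the assert cascade in
# _check_isinstance_type only accepts tests of exactly this shape, guarding
# the position of ``search_name``:
#
#     if isinstance(a, (int, str)):
#         a  # <- here ``a`` is narrowed to instances of int or str
#
# Keyword arguments, a different callee or a non-matching name make one of
# the asserts fail, and the check falls back to returning None.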


@@ -0,0 +1,112 @@
from jedi.parser_utils import get_flow_branch_keyword, is_scope, get_parent_scope
class Status(object):
lookup_table = {}
def __init__(self, value, name):
self._value = value
self._name = name
Status.lookup_table[value] = self
def invert(self):
if self is REACHABLE:
return UNREACHABLE
elif self is UNREACHABLE:
return REACHABLE
else:
return UNSURE
def __and__(self, other):
if UNSURE in (self, other):
return UNSURE
else:
return REACHABLE if self._value and other._value else UNREACHABLE
def __repr__(self):
return '<%s: %s>' % (type(self).__name__, self._name)
REACHABLE = Status(True, 'reachable')
UNREACHABLE = Status(False, 'unreachable')
UNSURE = Status(None, 'unsure')
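# Illustrative sketch (not part of the original file): the three statuses
# act as a small three-valued logic in which UNSURE absorbs under ``&``:
#
#     >>> REACHABLE & UNREACHABLE
#     <Status: unreachable>
#     >>> (UNSURE & REACHABLE) is UNSURE
#     True
#     >>> UNREACHABLE.invert() is REACHABLE
#     True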
def _get_flow_scopes(node):
while True:
node = get_parent_scope(node, include_flows=True)
if node is None or is_scope(node):
return
yield node
def reachability_check(context, context_scope, node, origin_scope=None):
first_flow_scope = get_parent_scope(node, include_flows=True)
if origin_scope is not None:
origin_flow_scopes = list(_get_flow_scopes(origin_scope))
node_flow_scopes = list(_get_flow_scopes(node))
branch_matches = True
for flow_scope in origin_flow_scopes:
if flow_scope in node_flow_scopes:
node_keyword = get_flow_branch_keyword(flow_scope, node)
origin_keyword = get_flow_branch_keyword(flow_scope, origin_scope)
branch_matches = node_keyword == origin_keyword
if flow_scope.type == 'if_stmt':
if not branch_matches:
return UNREACHABLE
elif flow_scope.type == 'try_stmt':
if not branch_matches and origin_keyword == 'else' \
and node_keyword == 'except':
return UNREACHABLE
break
# Direct parents get resolved, we filter scopes that are separate
# branches. This makes sense for autocompletion and static analysis.
# For actual Python it doesn't matter, because we're talking about
# potentially unreachable code.
# e.g. `if 0:` would make every name lookup within that flow
# inaccessible. This is not a "problem" in Python, because the code is
# never called. In Jedi though, we still want to infer types.
while origin_scope is not None:
if first_flow_scope == origin_scope and branch_matches:
return REACHABLE
origin_scope = origin_scope.parent
return _break_check(context, context_scope, first_flow_scope, node)
def _break_check(context, context_scope, flow_scope, node):
reachable = REACHABLE
if flow_scope.type == 'if_stmt':
if flow_scope.is_node_after_else(node):
for check_node in flow_scope.get_test_nodes():
reachable = _check_if(context, check_node)
if reachable in (REACHABLE, UNSURE):
break
reachable = reachable.invert()
else:
flow_node = flow_scope.get_corresponding_test_node(node)
if flow_node is not None:
reachable = _check_if(context, flow_node)
elif flow_scope.type in ('try_stmt', 'while_stmt'):
return UNSURE
# Only reachable branches need to be examined further.
if reachable in (UNREACHABLE, UNSURE):
return reachable
if context_scope != flow_scope and context_scope != flow_scope.parent:
flow_scope = get_parent_scope(flow_scope, include_flows=True)
return reachable & _break_check(context, context_scope, flow_scope, node)
else:
return reachable
def _check_if(context, node):
types = context.eval_node(node)
values = set(x.py__bool__() for x in types)
if len(values) == 1:
return Status.lookup_table[values.pop()]
else:
return UNSURE
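# Illustrative sketch (not part of the original file): _check_if in
# isolation -- when every inferred value of a test agrees on its truthiness
# (as for the literal ``0``), the branch is decided; disagreement or unknown
# truth values give UNSURE:

def _sketch_check_if(inferred_bools):
    # inferred_bools: the py__bool__() results of the inferred values.
    values = set(inferred_bools)
    if len(values) == 1:
        return Status.lookup_table[values.pop()]
    return UNSURE

assert _sketch_check_if([False]) is UNREACHABLE     # e.g. ``if 0:``
assert _sketch_check_if([True, False]) is UNSURE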


@@ -1,216 +1,201 @@
import copy
import sys
import re
import os
from itertools import chain
from contextlib import contextmanager
from jedi import common
from jedi.parser import representation as pr
from jedi import debug
from parso.python import tree
from jedi._compatibility import unicode
from jedi.parser_utils import get_parent_scope
from jedi.evaluate.compiled import CompiledObject
def fast_parent_copy(obj):
def is_stdlib_path(path):
# Python standard library paths look like this:
# /usr/lib/python3.5/...
# TODO The implementation below is probably incorrect and not complete.
if 'dist-packages' in path or 'site-packages' in path:
return False
base_path = os.path.join(sys.prefix, 'lib', 'python')
return bool(re.match(re.escape(base_path) + r'\d\.\d', path))
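# Illustrative usage sketch (hypothetical paths, assuming
# sys.prefix == '/usr'):
#
#     >>> is_stdlib_path('/usr/lib/python3.5/os.py')
#     True
#     >>> is_stdlib_path('/usr/lib/python3.5/site-packages/foo.py')
#     False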
def deep_ast_copy(obj):
"""
Much, much faster than copy.deepcopy, but just for certain elements.
Much, much faster than copy.deepcopy, but just for parser tree nodes.
"""
new_elements = {}
# If it's already in the cache, just return it.
new_obj = copy.copy(obj)
def recursion(obj):
if isinstance(obj, pr.Statement):
# Need to set _set_vars, otherwise the cache is not working
# correctly, don't know why.
obj.get_defined_names()
new_obj = copy.copy(obj)
new_elements[obj] = new_obj
try:
items = list(new_obj.__dict__.items())
except AttributeError:
# __dict__ not available, because of __slots__
items = []
before = ()
for cls in new_obj.__class__.__mro__:
with common.ignored(AttributeError):
if before == cls.__slots__:
continue
before = cls.__slots__
items += [(n, getattr(new_obj, n)) for n in before]
for key, value in items:
# replace parent (first try _parent and then parent)
if key in ['parent', '_parent'] and value is not None:
if key == 'parent' and '_parent' in items:
# parent can be a property
continue
with common.ignored(KeyError):
setattr(new_obj, key, new_elements[value])
elif key in ['parent_function', 'use_as_parent', '_sub_module']:
continue
elif isinstance(value, list):
setattr(new_obj, key, list_rec(value))
elif isinstance(value, pr.Simple):
setattr(new_obj, key, recursion(value))
return new_obj
def list_rec(list_obj):
copied_list = list_obj[:] # lists, tuples, strings, unicode
for i, el in enumerate(copied_list):
if isinstance(el, pr.Simple):
copied_list[i] = recursion(el)
elif isinstance(el, list):
copied_list[i] = list_rec(el)
return copied_list
return recursion(obj)
def call_signature_array_for_pos(stmt, pos):
"""
Searches for the array and position of a tuple.
"""
def search_array(arr, pos):
accepted_types = pr.Array.TUPLE, pr.Array.NOARRAY
if arr.type == 'dict':
for stmt in arr.values + arr.keys:
new_arr, index = call_signature_array_for_pos(stmt, pos)
if new_arr is not None:
return new_arr, index
# Copy children
new_children = []
for child in obj.children:
if isinstance(child, tree.Leaf):
new_child = copy.copy(child)
new_child.parent = new_obj
else:
for i, stmt in enumerate(arr):
new_arr, index = call_signature_array_for_pos(stmt, pos)
if new_arr is not None:
return new_arr, index
new_child = deep_ast_copy(child)
new_child.parent = new_obj
new_children.append(new_child)
new_obj.children = new_children
if arr.start_pos < pos <= stmt.end_pos:
if arr.type in accepted_types and isinstance(arr.parent, pr.Call):
return arr, i
if len(arr) == 0 and arr.start_pos < pos < arr.end_pos:
if arr.type in accepted_types and isinstance(arr.parent, pr.Call):
return arr, 0
return None, 0
def search_call(call, pos):
arr, index = None, 0
if call.next is not None:
if isinstance(call.next, pr.Array):
arr, index = search_array(call.next, pos)
else:
arr, index = search_call(call.next, pos)
if not arr and call.execution is not None:
arr, index = search_array(call.execution, pos)
return arr, index
if stmt.start_pos >= pos >= stmt.end_pos:
return None, 0
for command in stmt.expression_list():
arr = None
if isinstance(command, pr.Array):
arr, index = search_array(command, pos)
elif isinstance(command, pr.StatementElement):
arr, index = search_call(command, pos)
if arr is not None:
return arr, index
return None, 0
return new_obj
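# Illustrative sketch (not part of the original file): the copy strategy of
# deep_ast_copy in miniature -- leaves are shallow-copied, inner nodes are
# rebuilt recursively, and every copy is reparented. Standalone stand-ins
# for parso's Leaf/Node:

import copy as _copy

class _Leaf(object):
    def __init__(self, value):
        self.value = value
        self.parent = None

class _Node(object):
    def __init__(self, children):
        self.children = children
        self.parent = None

def _sketch_copy(node):
    new_node = _copy.copy(node)
    if isinstance(node, _Node):
        new_children = []
        for child in node.children:
            new_child = _sketch_copy(child)
            new_child.parent = new_node   # reparent onto the copied node
            new_children.append(new_child)
        new_node.children = new_children
    return new_node

_tree = _Node([_Leaf('a'), _Node([_Leaf('b')])])
_copied = _sketch_copy(_tree)
assert _copied is not _tree and _copied.children[1].children[0].value == 'b'
assert _copied.children[0].parent is _copied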
def search_call_signatures(user_stmt, position):
def evaluate_call_of_leaf(context, leaf, cut_own_trailer=False):
"""
Returns the function Call that matches the position before.
Creates a "call" node that consist of all ``trailer`` and ``power``
objects. E.g. if you call it with ``append``::
list([]).append(3) or None
You would get a node with the content ``list([]).append`` back.
This generates a copy of the original ast node.
If you're using the leaf, e.g. the bracket `)` it will return ``list([])``.
We use this function for two purposes. Given an expression ``bar.foo``,
we may want to
- infer the type of ``foo`` to offer completions after foo
- infer the type of ``bar`` to be able to jump to the definition of foo
The option ``cut_own_trailer`` must be set to true for the second purpose.
"""
debug.speed('func_call start')
call, index = None, 0
if user_stmt is not None and isinstance(user_stmt, pr.Statement):
# some parts will of the statement will be removed
user_stmt = fast_parent_copy(user_stmt)
arr, index = call_signature_array_for_pos(user_stmt, position)
if arr is not None:
call = arr.parent
trailer = leaf.parent
# The leaf may not be the last or first child, because there exist three
# different trailers: `( x )`, `[ x ]` and `.x`. In the first two examples
# we should not match anything more than x.
if trailer.type != 'trailer' or leaf not in (trailer.children[0], trailer.children[-1]):
if trailer.type == 'atom':
return context.eval_node(trailer)
return context.eval_node(leaf)
debug.speed('func_call parsed')
return call, index
power = trailer.parent
index = power.children.index(trailer)
if cut_own_trailer:
cut = index
else:
cut = index + 1
if power.type == 'error_node':
start = index
while True:
start -= 1
base = power.children[start]
if base.type != 'trailer':
break
trailers = power.children[start + 1: index + 1]
else:
base = power.children[0]
trailers = power.children[1:cut]
if base == 'await':
base = trailers[0]
trailers = trailers[1:]
values = context.eval_node(base)
from jedi.evaluate.syntax_tree import eval_trailer
for trailer in trailers:
values = eval_trailer(context, values, trailer)
return values
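# Illustrative usage sketch (not part of the original file), for the
# expression ``bar.foo`` with ``leaf`` being the name ``foo``:
#
#     evaluate_call_of_leaf(context, leaf)                        # infers bar.foo
#     evaluate_call_of_leaf(context, leaf, cut_own_trailer=True)  # infers bar
#
# The second form drops the trailer the leaf itself lives in, which is what
# goto-style operations need in order to resolve the definition of ``foo``.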
def scan_statement_for_calls(stmt, search_name, assignment_details=False):
""" Returns the function Calls that match search_name in an Array. """
def scan_array(arr, search_name):
result = []
if arr.type == pr.Array.DICT:
for key_stmt, value_stmt in arr.items():
result += scan_statement_for_calls(key_stmt, search_name)
result += scan_statement_for_calls(value_stmt, search_name)
def call_of_leaf(leaf):
"""
Creates a "call" node that consist of all ``trailer`` and ``power``
objects. E.g. if you call it with ``append``::
list([]).append(3) or None
You would get a node with the content ``list([]).append`` back.
This generates a copy of the original ast node.
If you're using the leaf, e.g. the bracket `)` it will return ``list([])``.
"""
# TODO this is the old version of this call. Try to remove it.
trailer = leaf.parent
# The leaf may not be the last or first child, because there exist three
# different trailers: `( x )`, `[ x ]` and `.x`. In the first two examples
# we should not match anything more than x.
if trailer.type != 'trailer' or leaf not in (trailer.children[0], trailer.children[-1]):
if trailer.type == 'atom':
return trailer
return leaf
power = trailer.parent
index = power.children.index(trailer)
new_power = copy.copy(power)
new_power.children = list(new_power.children)
new_power.children[index + 1:] = []
if power.type == 'error_node':
start = index
while True:
start -= 1
if power.children[start].type != 'trailer':
break
transformed = tree.Node('power', power.children[start:])
transformed.parent = power.parent
return transformed
return power
def get_names_of_node(node):
try:
children = node.children
except AttributeError:
if node.type == 'name':
return [node]
else:
for stmt in arr:
result += scan_statement_for_calls(stmt, search_name)
return result
check = list(stmt.expression_list())
if assignment_details:
for expression_list, op in stmt.assignment_details:
check += expression_list
result = []
for c in check:
if isinstance(c, pr.Array):
result += scan_array(c, search_name)
elif isinstance(c, pr.Call):
s_new = c
while s_new is not None:
n = s_new.name
if isinstance(n, pr.Name) \
and search_name in [str(x) for x in n.names]:
result.append(c)
if s_new.execution is not None:
result += scan_array(s_new.execution, search_name)
s_new = s_new.next
elif isinstance(c, pr.ListComprehension):
for s in c.stmt, c.middle, c.input:
result += scan_statement_for_calls(s, search_name)
return result
return []
else:
return list(chain.from_iterable(get_names_of_node(c) for c in children))
class FakeSubModule():
line_offset = 0
class FakeArray(pr.Array):
def __init__(self, values, parent=None, arr_type=pr.Array.LIST):
p = (0, 0)
super(FakeArray, self).__init__(FakeSubModule, p, arr_type, parent)
self.values = values
class FakeStatement(pr.Statement):
def __init__(self, expression_list, start_pos=(0, 0), parent=None):
p = start_pos
super(FakeStatement, self).__init__(FakeSubModule, expression_list, p, p)
self.set_expression_list(expression_list)
self.parent = parent
class FakeImport(pr.Import):
def __init__(self, name, parent, level=0):
p = 0, 0
super(FakeImport, self).__init__(FakeSubModule, p, p, name,
relative_count=level)
self.parent = parent
class FakeName(pr.Name):
def __init__(self, name_or_names, parent=None):
p = 0, 0
if isinstance(name_or_names, list):
names = [(n, p) for n in name_or_names]
else:
names = [(name_or_names, p)]
super(FakeName, self).__init__(FakeSubModule, names, p, p, parent)
def stmts_to_stmt(statements):
def get_module_names(module, all_scopes):
"""
Sometimes we want to have something like a result_set and unite some
statements in one.
Returns the ``tree.Name`` leaves used in the module. If ``all_scopes`` is
False, only names at the module level are returned.
"""
if len(statements) == 1:
return statements[0]
array = FakeArray(statements, arr_type=pr.Array.NOARRAY)
return FakeStatement([array])
names = chain.from_iterable(module.get_used_names().values())
if not all_scopes:
# We have to filter all the names that don't have the module as a
# parent_scope. There's None as a parent, because nodes in the module
# node have the parent module and not suite as all the others.
# Therefore it's important to catch that case.
names = [n for n in names if get_parent_scope(n).parent in (module, None)]
return names
@contextmanager
def predefine_names(context, flow_scope, dct):
predefined = context.predefined_names
if flow_scope in predefined:
raise NotImplementedError('Why does this happen?')
predefined[flow_scope] = dct
try:
yield
finally:
del predefined[flow_scope]
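# Illustrative usage sketch (names are hypothetical): while inferring a
# comprehension or for-loop body, the surrounding flow node temporarily
# predefines its loop variable:
#
#     with predefine_names(context, for_stmt_node, {'x': inferred_contexts}):
#         ...  # name lookups for ``x`` inside this block hit the dict first
#
# On exit the entry is removed again, so the predefinition never leaks.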
def is_compiled(context):
return isinstance(context, CompiledObject)
def is_string(context):
return is_compiled(context) and isinstance(context.obj, (str, unicode))
def is_literal(context):
return is_number(context) or is_string(context)
def is_number(context):
return is_compiled(context) and isinstance(context.obj, (int, float))


@@ -11,207 +11,96 @@ correct implementation is delegated to _compatibility.
This module also supports import autocompletion, which means completing
statements like ``from datetim`` (cursor at the end would return ``datetime``).
"""
import imp
import os
import pkgutil
import sys
from itertools import chain
from jedi._compatibility import find_module, unicode
from jedi import common
from parso.python import tree
from parso.tree import search_ancestor
from parso.cache import parser_cache
from parso import python_bytes_to_unicode
from jedi._compatibility import find_module, unicode, ImplicitNSInfo
from jedi import debug
from jedi import cache
from jedi.parser import fast
from jedi.parser import representation as pr
from jedi.evaluate.sys_path import get_sys_path, sys_path_with_modifications
from jedi.evaluate import helpers
from jedi import settings
from jedi.common import source_to_unicode
from jedi.evaluate import sys_path
from jedi.evaluate import helpers
from jedi.evaluate import compiled
from jedi.evaluate import analysis
from jedi.evaluate.cache import memoize_default, NO_DEFAULT
from jedi.evaluate.utils import unite
from jedi.evaluate.cache import evaluator_method_cache
from jedi.evaluate.filters import AbstractNameDefinition
from jedi.evaluate.base_context import ContextSet, NO_CONTEXTS
class ModuleNotFound(Exception):
def __init__(self, name_part):
super(ModuleNotFound, self).__init__()
self.name_part = name_part
# This memoization is needed, because otherwise we will infinitely loop on
# certain imports.
@evaluator_method_cache(default=NO_CONTEXTS)
def infer_import(context, tree_name, is_goto=False):
module_context = context.get_root_context()
import_node = search_ancestor(tree_name, 'import_name', 'import_from')
import_path = import_node.get_path_for_name(tree_name)
from_import_name = None
evaluator = context.evaluator
try:
from_names = import_node.get_from_names()
except AttributeError:
# Is an import_name
pass
else:
if len(from_names) + 1 == len(import_path):
# We have to fetch the from_names part first and then check
# if from_names exists in the modules.
from_import_name = import_path[-1]
import_path = from_names
importer = Importer(evaluator, tuple(import_path),
module_context, import_node.level)
types = importer.follow()
#if import_node.is_nested() and not self.nested_resolve:
# scopes = [NestedImportModule(module, import_node)]
if not types:
return NO_CONTEXTS
if from_import_name is not None:
types = unite(
t.py__getattribute__(
from_import_name,
name_context=context,
is_goto=is_goto,
analysis_errors=False
)
for t in types
)
if not is_goto:
types = ContextSet.from_set(types)
if not types:
path = import_path + [from_import_name]
importer = Importer(evaluator, tuple(path),
module_context, import_node.level)
types = importer.follow()
# goto only accepts `Name`
if is_goto:
types = set(s.name for s in types)
else:
# goto only accepts `Name`
if is_goto:
types = set(s.name for s in types)
debug.dbg('after import: %s', types)
return types
class ImportWrapper(pr.Base):
class NestedImportModule(tree.Module):
"""
An ImportWrapper is the path of a `pr.Import` object.
TODO while there's no use case for nested import module right now, we might
be able to use them for static analysis checks later on.
"""
class GlobalNamespace(object):
def __init__(self):
self.line_offset = 0
GlobalNamespace = GlobalNamespace()
def __init__(self, evaluator, import_stmt, is_like_search=False, kill_count=0,
nested_resolve=False, is_just_from=False):
"""
:param is_like_search: If the wrapper is used for autocompletion.
:param kill_count: Placement of the import, sometimes we only want to
resole a part of the import.
:param nested_resolve: Resolves nested imports fully.
:param is_just_from: Bool if the second part is missing.
"""
self._evaluator = evaluator
self.import_stmt = import_stmt
self.is_like_search = is_like_search
self.nested_resolve = nested_resolve
self.is_just_from = is_just_from
self.is_partial_import = bool(max(0, kill_count))
# rest is import_path resolution
import_path = []
if import_stmt.from_ns:
import_path += import_stmt.from_ns.names
if import_stmt.namespace:
if self.import_stmt.is_nested() and not nested_resolve:
import_path.append(import_stmt.namespace.names[0])
else:
import_path += import_stmt.namespace.names
for i in range(kill_count + int(is_like_search)):
if import_path:
import_path.pop()
module = import_stmt.get_parent_until()
self._importer = get_importer(self._evaluator, tuple(import_path), module,
import_stmt.relative_count)
def __repr__(self):
return '<%s: %s>' % (type(self).__name__, self.import_stmt)
@property
def import_path(self):
return self._importer.str_import_path()
def get_defined_names(self, on_import_stmt=False):
names = []
for scope in self.follow():
if scope is ImportWrapper.GlobalNamespace:
if not self._is_relative_import():
names += self._get_module_names()
if self._importer.file_path is not None:
path = os.path.abspath(self._importer.file_path)
for i in range(self.import_stmt.relative_count - 1):
path = os.path.dirname(path)
names += self._get_module_names([path])
if self._is_relative_import():
rel_path = os.path.join(self._importer.get_relative_path(),
'__init__.py')
if os.path.exists(rel_path):
m = _load_module(rel_path)
names += m.get_defined_names()
else:
if on_import_stmt and isinstance(scope, pr.Module) \
and scope.path.endswith('__init__.py'):
pkg_path = os.path.dirname(scope.path)
paths = self._importer.namespace_packages(pkg_path,
self.import_path)
names += self._get_module_names([pkg_path] + paths)
if self.is_just_from:
# In the case of an import like `from x.` we don't need to
# add all the variables.
if ('os',) == self.import_path and not self._is_relative_import():
# os.path is a hardcoded exception, because it's a
# ``sys.modules`` modification.
names.append(self._generate_name('path'))
continue
from jedi.evaluate import finder
for s, scope_names in finder.get_names_of_scope(self._evaluator,
scope, include_builtin=False):
for n in scope_names:
if self.import_stmt.from_ns is None \
or self.is_partial_import:
# from_ns must be defined to access module
# values plus a partial import means that there
# is something after the import, which
# automatically implies that there must not be
# any non-module scope.
continue
names.append(n)
return names
def _generate_name(self, name):
return helpers.FakeName(name, parent=self.import_stmt)
def _get_module_names(self, search_path=None):
"""
Get the names of all modules in the search_path. This means file names
and not names defined in the files.
"""
names = []
# add builtin module names
if search_path is None:
names += [self._generate_name(name) for name in sys.builtin_module_names]
if search_path is None:
search_path = self._importer.sys_path_with_modifications()
for module_loader, name, is_pkg in pkgutil.iter_modules(search_path):
names.append(self._generate_name(name))
return names
def _is_relative_import(self):
return bool(self.import_stmt.relative_count)
def follow(self, is_goto=False):
if self._evaluator.recursion_detector.push_stmt(self.import_stmt):
# check recursion
return []
if self.import_path:
try:
module, rest = self._importer.follow_file_system()
except ModuleNotFound as e:
analysis.add(self._evaluator, 'import-error', e.name_part)
return []
if self.import_stmt.is_nested() and not self.nested_resolve:
scopes = [NestedImportModule(module, self.import_stmt)]
else:
scopes = [module]
star_imports = remove_star_imports(self._evaluator, module)
if star_imports:
scopes = [StarImportModule(scopes[0], star_imports)]
# follow the rest of the import (not FS -> classes, functions)
if len(rest) > 1 or rest and self.is_like_search:
scopes = []
if ('os', 'path') == self.import_path[:2] \
and not self._is_relative_import():
# This is a huge exception, we follow a nested import
# ``os.path``, because it's a very important one in Python
# that is being achieved by messing with ``sys.modules`` in
# ``os``.
scopes = self._evaluator.follow_path(iter(rest), [module], module)
elif rest:
if is_goto:
scopes = list(chain.from_iterable(
self._evaluator.find_types(s, rest[0], is_goto=True)
for s in scopes))
else:
scopes = list(chain.from_iterable(
self._evaluator.follow_path(iter(rest), [s], s)
for s in scopes))
else:
scopes = [ImportWrapper.GlobalNamespace]
debug.dbg('after import: %s', scopes)
if not scopes:
analysis.add(self._evaluator, 'import-error',
self._importer.import_path[-1])
self._evaluator.recursion_detector.pop_stmt()
return scopes
class NestedImportModule(pr.Module):
def __init__(self, module, nested_import):
self._module = module
self._nested_import = nested_import
@@ -224,21 +113,12 @@ class NestedImportModule(pr.Module):
# This is not an existing Import statement. Therefore, set position to
# 0 (0 is not a valid line number).
zero = (0, 0)
names = [unicode(name_part) for name_part in i.namespace.names[1:]]
names = [unicode(name) for name in i.namespace_names[1:]]
name = helpers.FakeName(names, self._nested_import)
new = pr.Import(i._sub_module, zero, zero, name)
new = tree.Import(i._sub_module, zero, zero, name)
new.parent = self._module
debug.dbg('Generated a nested import: %s', new)
return helpers.FakeName(str(i.namespace.names[1]), new)
def _get_defined_names(self):
"""
NesteImportModule don't seem to be actively used, right now.
However, they might in the future. If we do more sophisticated static
analysis checks.
"""
nested = self._get_nested_import_name()
return self._module.get_defined_names() + [nested]
return helpers.FakeName(str(i.namespace_names[1]), new)
def __getattr__(self, name):
return getattr(self._module, name)
@@ -248,48 +128,60 @@ class NestedImportModule(pr.Module):
self._nested_import)
class StarImportModule(pr.Module):
def _add_error(context, name, message=None):
# Should be a name, not a string!
if hasattr(name, 'parent'):
analysis.add(context, 'import-error', name, message)
def get_init_path(directory_path):
"""
Used if a module contains star imports.
Search a directory for an ``__init__`` file; return its path if found,
else None.
"""
def __init__(self, module, star_import_modules):
self._module = module
self.star_import_modules = star_import_modules
def scope_names_generator(self, position=None):
for module, names in self._module.scope_names_generator(position):
yield module, names
for s in self.star_import_modules:
yield s, s.get_defined_names()
def __getattr__(self, name):
return getattr(self._module, name)
def __repr__(self):
return "<%s: %s>" % (self.__class__.__name__, self._module)
for suffix, _, _ in imp.get_suffixes():
path = os.path.join(directory_path, '__init__' + suffix)
if os.path.exists(path):
return path
return None
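# Illustrative usage sketch (hypothetical directory): imp.get_suffixes()
# yields the recognized module suffixes (C extensions, '.py', '.pyc'), so
# get_init_path('/some/pkg') probes /some/pkg/__init__<suffix> for each of
# them and returns the first path that exists, or None if the directory is
# not a package.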
def get_importer(evaluator, import_path, module, level=0):
"""
Checks the evaluator caches first, which resembles the ``sys.modules``
cache and speeds up libraries like ``numpy``.
"""
if level != 0:
# Only absolute imports should be cached. Otherwise we have a mess.
# TODO Maybe calculate the absolute import and save it here?
return _Importer(evaluator, import_path, module, level)
try:
return evaluator.import_cache[import_path]
except KeyError:
importer = _Importer(evaluator, import_path, module, level)
evaluator.import_cache[import_path] = importer
return importer
class ImportName(AbstractNameDefinition):
start_pos = (1, 0)
_level = 0
def __init__(self, parent_context, string_name):
self.parent_context = parent_context
self.string_name = string_name
def infer(self):
return Importer(
self.parent_context.evaluator,
[self.string_name],
self.parent_context,
level=self._level,
).follow()
def goto(self):
return [m.name for m in self.infer()]
def get_root_context(self):
# Not sure if this is correct.
return self.parent_context.get_root_context()
@property
def api_type(self):
return 'module'
class _Importer(object):
def __init__(self, evaluator, import_path, module, level=0):
class SubModuleName(ImportName):
_level = 1
class Importer(object):
def __init__(self, evaluator, import_path, module_context, level=0):
"""
An implementation similar to ``__import__``. Use `follow_file_system`
An implementation similar to ``__import__``. Use `follow`
to actually follow the imports.
*level* specifies whether to use absolute or relative imports. 0 (the
@@ -298,273 +190,381 @@ class _Importer(object):
directory of the module calling ``__import__()`` (see PEP 328 for the
details).
:param import_path: List of namespaces (strings).
:param import_path: List of namespaces (strings or Names).
"""
debug.speed('import %s' % (import_path,))
self._evaluator = evaluator
self.import_path = import_path
self.level = level
self.module = module
path = module.path
# TODO abspath
self.file_path = os.path.dirname(path) if path is not None else None
self.module_context = module_context
try:
self.file_path = module_context.py__file__()
except AttributeError:
# Can be None for certain compiled modules like 'builtins'.
self.file_path = None
if level:
base = module_context.py__package__().split('.')
if base == ['']:
base = []
if level > len(base):
path = module_context.py__file__()
if path is not None:
import_path = list(import_path)
p = path
for i in range(level):
p = os.path.dirname(p)
dir_name = os.path.basename(p)
# This is not the proper way to do relative imports. However, since
# Jedi cannot be sure about the entry point, we just calculate an
# absolute path here.
if dir_name:
# TODO those sys.modules modifications are getting
# really stupid. this is the 3rd time that we're using
# this. We should probably refactor.
if path.endswith(os.path.sep + 'os.py'):
import_path.insert(0, 'os')
else:
import_path.insert(0, dir_name)
else:
_add_error(module_context, import_path[-1])
import_path = []
# TODO add import error.
debug.warning('Attempted relative import beyond top-level package.')
# If no path is defined in the module we have no idea where we
# are in the file system. Therefore we cannot know what to do.
# In this case we just let the path there and ignore that it's
# a relative path. Not sure if that's a good idea.
else:
# Here we basically rewrite the level to 0.
base = tuple(base)
if level > 1:
base = base[:-level + 1]
import_path = base + tuple(import_path)
self.import_path = import_path
@property
def str_import_path(self):
"""Returns the import path as pure strings instead of NameParts."""
return tuple(str(name_part) for name_part in self.import_path)
"""Returns the import path as pure strings instead of `Name`."""
return tuple(
name.value if isinstance(name, tree.Name) else name
for name in self.import_path)
def get_relative_path(self):
path = self.file_path
for i in range(self.level - 1):
path = os.path.dirname(path)
return path
@memoize_default()
def sys_path_with_modifications(self):
# If you edit e.g. gunicorn, there will be imports like this:
# `from gunicorn import something`. But gunicorn is not in the
# sys.path. Therefore look if gunicorn is a parent directory, #56.
in_path = []
if self.import_path:
parts = self.file_path.split(os.path.sep)
for i, p in enumerate(parts):
if p == unicode(self.import_path[0]):
new = os.path.sep.join(parts[:i])
in_path.append(new)
sys_path_mod = self._evaluator.project.sys_path \
+ sys_path.check_sys_path_modifications(self.module_context)
if self.file_path is not None:
# If you edit e.g. gunicorn, there will be imports like this:
# `from gunicorn import something`. But gunicorn is not in the
# sys.path. Therefore look if gunicorn is a parent directory, #56.
if self.import_path: # TODO is this check really needed?
for path in sys_path.traverse_parents(self.file_path):
if os.path.basename(path) == self.str_import_path[0]:
in_path.append(os.path.dirname(path))
return in_path + sys_path_with_modifications(self._evaluator, self.module)
# Since we know nothing about the call location of the sys.path,
# it's a possibility that the current directory is the origin of
# the Python execution.
sys_path_mod.insert(0, os.path.dirname(self.file_path))
def follow(self, evaluator):
scope, rest = self.follow_file_system()
if rest:
# follow the rest of the import (not FS -> classes, functions)
return evaluator.follow_path(iter(rest), [scope], scope)
return [scope]
return in_path + sys_path_mod
@memoize_default(NO_DEFAULT)
def follow_file_system(self):
if self.file_path:
sys_path_mod = list(self.sys_path_with_modifications())
if not self.module.has_explicit_absolute_import:
# If the module explicitly asks for absolute imports,
# there's probably a bogus local one.
sys_path_mod.insert(0, self.file_path)
def follow(self):
if not self.import_path:
return NO_CONTEXTS
return self._do_import(self.import_path, self.sys_path_with_modifications())
# First the sys path is searched normally and if that doesn't
# succeed, try to search the parent directories, because sometimes
# Jedi doesn't recognize sys.path modifications (like py.test
# stuff).
old_path, temp_path = self.file_path, os.path.dirname(self.file_path)
while old_path != temp_path:
sys_path_mod.append(temp_path)
old_path, temp_path = temp_path, os.path.dirname(temp_path)
else:
sys_path_mod = list(get_sys_path())
from jedi.evaluate.representation import ModuleWrapper
module, rest = self._follow_sys_path(sys_path_mod)
if isinstance(module, pr.Module):
return ModuleWrapper(self._evaluator, module), rest
return module, rest
def namespace_packages(self, found_path, import_path):
def _do_import(self, import_path, sys_path):
"""
Returns a list of paths of possible ``pkgutil``/``pkg_resources``
namespaces. If the package is no "namespace package", an empty list is
returned.
This method is very similar to importlib's `_gcd_import`.
"""
def follow_path(directories, paths):
import_parts = [
i.value if isinstance(i, tree.Name) else i
for i in import_path
]
# Handle "magic" Flask extension imports:
# ``flask.ext.foo`` is really ``flask_foo`` or ``flaskext.foo``.
if len(import_path) > 2 and import_parts[:2] == ['flask', 'ext']:
# New style.
ipath = ('flask_' + str(import_parts[2]),) + import_path[3:]
modules = self._do_import(ipath, sys_path)
if modules:
return modules
else:
# Old style
return self._do_import(('flaskext',) + import_path[2:], sys_path)
module_name = '.'.join(import_parts)
try:
return ContextSet(self._evaluator.modules[module_name])
except KeyError:
pass
if len(import_path) > 1:
# This is a recursive way of importing that works great with
# the module cache.
bases = self._do_import(import_path[:-1], sys_path)
if not bases:
return NO_CONTEXTS
# We can take the first element, because only the os special
# case yields multiple modules, which is not important for
# further imports.
parent_module = list(bases)[0]
# This is a huge exception, we follow a nested import
# ``os.path``, because it's a very important one in Python
# that is being achieved by messing with ``sys.modules`` in
# ``os``.
if import_parts == ['os', 'path']:
return parent_module.py__getattribute__('path')
try:
directory = next(directories)
except StopIteration:
return paths
method = parent_module.py__path__
except AttributeError:
# The module is not a package.
_add_error(self.module_context, import_path[-1])
return NO_CONTEXTS
else:
deeper_paths = []
for p in paths:
new = os.path.join(p, directory)
if os.path.isdir(new) and new != found_path:
deeper_paths.append(new)
return follow_path(directories, deeper_paths)
with open(os.path.join(found_path, '__init__.py'), 'rb') as f:
content = common.source_to_unicode(f.read())
# these are strings that need to be used for namespace packages,
# the first one is ``pkgutil``, the second ``pkg_resources``.
options = ('declare_namespace(__name__)', 'extend_path(__path__')
if options[0] in content or options[1] in content:
# It is a namespace, now try to find the rest of the modules.
return follow_path(iter(import_path), sys.path)
return []
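# Illustrative sketch (not part of jedi): the namespace-package check in
# namespace_packages() above boils down to grepping __init__.py for the
# two marker calls used by pkgutil and pkg_resources respectively.
def is_namespace_package_sketch(init_py_path):
    with open(init_py_path, 'rb') as f:
        content = f.read().decode('utf-8', errors='replace')
    options = ('declare_namespace(__name__)', 'extend_path(__path__')
    return options[0] in content or options[1] in content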
def _follow_sys_path(self, sys_path):
"""
Find a module with a path (of the module, like usb.backend.libusb10).
"""
def follow_str(ns_path, string):
debug.dbg('follow_module %s %s', ns_path, string)
path = None
if ns_path:
path = ns_path
elif self.level > 0: # is a relative import
path = self.get_relative_path()
if path is not None:
importing = find_module(string, [path])
else:
debug.dbg('search_module %s %s', string, self.file_path)
paths = method()
debug.dbg('search_module %s in paths %s', module_name, paths)
for path in paths:
# At the moment we are only using one path, so this does
# not need to be fully correct.
try:
if not isinstance(path, list):
path = [path]
module_file, module_path, is_pkg = \
find_module(import_parts[-1], path, fullname=module_name)
break
except ImportError:
module_path = None
if module_path is None:
_add_error(self.module_context, import_path[-1])
return NO_CONTEXTS
else:
parent_module = None
try:
debug.dbg('search_module %s in %s', import_parts[-1], self.file_path)
# Override sys.path. It only works well that way.
# Injecting the path directly into `find_module` did not work.
sys.path, temp = sys_path, sys.path
try:
importing = find_module(string)
module_file, module_path, is_pkg = \
find_module(import_parts[-1], fullname=module_name)
finally:
sys.path = temp
return importing
current_namespace = (None, None, None)
# now execute those paths
rest = []
for i, s in enumerate(self.import_path):
try:
current_namespace = follow_str(current_namespace[1], unicode(s))
except ImportError:
_continue = False
if self.level >= 1 and len(self.import_path) == 1:
# follow `from . import some_variable`
rel_path = self.get_relative_path()
with common.ignored(ImportError):
current_namespace = follow_str(rel_path, '__init__')
elif current_namespace[2]: # is a package
path = self.str_import_path()[:i]
for n in self.namespace_packages(current_namespace[1], path):
try:
current_namespace = follow_str(n, unicode(s))
if current_namespace[1]:
_continue = True
break
except ImportError:
pass
# The module is not a package.
_add_error(self.module_context, import_path[-1])
return NO_CONTEXTS
if not _continue:
if current_namespace[1]:
rest = self.str_import_path()[i:]
break
else:
raise ModuleNotFound(s)
path = current_namespace[1]
is_package_directory = current_namespace[2]
f = None
if is_package_directory or current_namespace[0]:
# is a directory module
if is_package_directory:
path = os.path.join(path, '__init__.py')
with open(path, 'rb') as f:
source = f.read()
code = None
if is_pkg:
# In this case, we don't have a file yet. Search for the
# __init__ file.
if module_path.endswith(('.zip', '.egg')):
code = module_file.loader.get_source(module_name)
else:
source = current_namespace[0].read()
current_namespace[0].close()
return _load_module(path, source, sys_path=sys_path), rest
module_path = get_init_path(module_path)
elif module_file:
code = module_file.read()
module_file.close()
if isinstance(module_path, ImplicitNSInfo):
from jedi.evaluate.context.namespace import ImplicitNamespaceContext
fullname, paths = module_path.name, module_path.paths
module = ImplicitNamespaceContext(self._evaluator, fullname=fullname)
module.paths = paths
elif module_file is None and not module_path.endswith(('.py', '.zip', '.egg')):
module = compiled.load_module(self._evaluator, module_path)
else:
return _load_module(name=path, sys_path=sys_path), rest
module = _load_module(self._evaluator, module_path, code, sys_path, parent_module)
if module is None:
# The file might, for example, raise an ImportError and therefore
# not be importable.
return NO_CONTEXTS
def follow_imports(evaluator, scopes):
"""
Here we strip the imports - they don't necessarily get resolved.
Really used anymore? Merge with remove_star_imports?
"""
result = []
for s in scopes:
if isinstance(s, pr.Import):
for r in ImportWrapper(evaluator, s).follow():
result.append(r)
self._evaluator.modules[module_name] = module
return ContextSet(module)
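# Illustrative sketch (not part of jedi): the recursive scheme of
# _do_import() above. 'a.b' is resolved by first resolving 'a', and every
# result lands in a module cache, mirroring how sys.modules works.
def do_import_sketch(import_path, modules, find_module_sketch):
    module_name = '.'.join(import_path)
    if module_name in modules:
        return modules[module_name]
    parent = None
    if len(import_path) > 1:
        parent = do_import_sketch(import_path[:-1], modules, find_module_sketch)
        if parent is None:
            return None
    module = find_module_sketch(import_path[-1], parent)
    if module is not None:
        modules[module_name] = module
    return module

cache_sketch = {}
assert do_import_sketch(('a', 'b'), cache_sketch, lambda n, p: n) == 'b'
assert set(cache_sketch) == {'a', 'a.b'}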
def _generate_name(self, name, in_module=None):
# Create a pseudo import to be able to follow them.
if in_module is None:
return ImportName(self.module_context, name)
return SubModuleName(in_module, name)
def _get_module_names(self, search_path=None, in_module=None):
"""
Get the names of all modules in the search_path. This means file names
and not names defined in the files.
"""
names = []
# add builtin module names
if search_path is None and in_module is None:
names += [self._generate_name(name) for name in sys.builtin_module_names]
if search_path is None:
search_path = self.sys_path_with_modifications()
for module_loader, name, is_pkg in pkgutil.iter_modules(search_path):
names.append(self._generate_name(name, in_module=in_module))
return names
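# Illustrative: pkgutil.iter_modules() is the stdlib API used above; it
# lists importable module names from file names without importing them.
import pkgutil
module_names_sketch = [name for _, name, _ in pkgutil.iter_modules()]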
def completion_names(self, evaluator, only_modules=False):
"""
:param only_modules: Indicates whether it's possible to import a
definition that is not defined in a module.
"""
from jedi.evaluate.context import ModuleContext
from jedi.evaluate.context.namespace import ImplicitNamespaceContext
names = []
if self.import_path:
# flask
if self.str_import_path == ('flask', 'ext'):
# List Flask extensions like ``flask_foo``
for mod in self._get_module_names():
modname = mod.string_name
if modname.startswith('flask_'):
extname = modname[len('flask_'):]
names.append(self._generate_name(extname))
# Now the old style: ``flaskext.foo``
for dir in self.sys_path_with_modifications():
flaskext = os.path.join(dir, 'flaskext')
if os.path.isdir(flaskext):
names += self._get_module_names([flaskext])
for context in self.follow():
# Non-modules are not completable.
if context.api_type != 'module': # not a module
continue
# namespace packages
if isinstance(context, ModuleContext) and context.py__file__().endswith('__init__.py'):
paths = context.py__path__()
names += self._get_module_names(paths, in_module=context)
# implicit namespace packages
elif isinstance(context, ImplicitNamespaceContext):
paths = context.paths
names += self._get_module_names(paths)
if only_modules:
# In the case of an import like `from x.` we don't need to
# add all the variables.
if ('os',) == self.str_import_path and not self.level:
# os.path is a hardcoded exception, because it's a
# ``sys.modules`` modification.
names.append(self._generate_name('path', context))
continue
for filter in context.get_filters(search_global=False):
names += filter.values()
else:
result.append(s)
return result
# Empty import path = completion right after `import`
if not self.level:
names += self._get_module_names()
if self.file_path is not None:
path = os.path.abspath(self.file_path)
for i in range(self.level - 1):
path = os.path.dirname(path)
names += self._get_module_names([path])
return names
@cache.cache_star_import
def remove_star_imports(evaluator, scope, ignored_modules=()):
"""
Check a module for star imports::
def _load_module(evaluator, path=None, code=None, sys_path=None, parent_module=None):
if sys_path is None:
sys_path = evaluator.project.sys_path
from module import *
dotted_path = path and compiled.dotted_from_fs_path(path, sys_path)
if path is not None and path.endswith(('.py', '.zip', '.egg')) \
and dotted_path not in settings.auto_import_modules:
and follow these modules.
"""
if isinstance(scope, StarImportModule):
return scope.star_import_modules
modules = follow_imports(evaluator, (i for i in scope.get_imports() if i.star))
new = []
for m in modules:
if m not in ignored_modules:
new += remove_star_imports(evaluator, m, modules)
modules += new
module_node = evaluator.grammar.parse(
code=code, path=path, cache=True, diff_cache=True,
cache_path=settings.cache_directory)
# Filter duplicate modules.
return set(modules)
from jedi.evaluate.context import ModuleContext
return ModuleContext(evaluator, module_node, path=path)
else:
return compiled.load_module(evaluator, path)
def _load_module(path=None, source=None, name=None, sys_path=None):
def load(source):
dotted_path = path and compiled.dotted_from_fs_path(path, sys_path)
if path is not None and path.endswith('.py') \
and not dotted_path in settings.auto_import_modules:
if source is None:
with open(path, 'rb') as f:
source = f.read()
else:
return compiled.load_module(path, name)
p = path or name
p = fast.FastParser(common.source_to_unicode(source), p)
cache.save_parser(path, name, p)
return p.module
cached = cache.load_parser(path, name)
return load(source) if cached is None else cached.module
def add_module(evaluator, module_name, module):
if '.' not in module_name:
# We cannot add paths with dots, because that would collide with
# the separator dots for nested packages. Therefore we return
# `__main__` in ModuleWrapper.py__name__(), which is similar to
# Python behavior.
evaluator.modules[module_name] = module
def get_modules_containing_name(mods, name):
def get_modules_containing_name(evaluator, modules, name):
"""
Search a name in the directories of modules.
"""
from jedi.evaluate.context import ModuleContext
def check_directories(paths):
for p in paths:
if p is not None:
# We need abspath, because the settings paths might not already
# have been converted to absolute paths.
d = os.path.dirname(os.path.abspath(p))
for file_name in os.listdir(d):
path = os.path.join(d, file_name)
if file_name.endswith('.py'):
yield path
def check_python_file(path):
try:
return cache.parser_cache[path].parser.module
# TODO I don't think we should use the cache here?!
node_cache_item = parser_cache[evaluator.grammar._hashed][path]
except KeyError:
try:
return check_fs(path)
except IOError:
return None
else:
module_node = node_cache_item.node
return ModuleContext(evaluator, module_node, path=path)
def check_fs(path):
with open(path, 'rb') as f:
source = source_to_unicode(f.read())
if name in source:
return _load_module(path, source)
code = python_bytes_to_unicode(f.read(), errors='replace')
if name in code:
module = _load_module(evaluator, path, code)
module_name = sys_path.dotted_path_in_sys_path(evaluator.project.sys_path, path)
if module_name is not None:
add_module(evaluator, module_name, module)
return module
# skip non python modules
mods = set(m for m in mods if not isinstance(m, compiled.CompiledObject))
mod_paths = set()
for m in mods:
mod_paths.add(m.path)
used_mod_paths = set()
for m in modules:
try:
path = m.py__file__()
except AttributeError:
pass
else:
used_mod_paths.add(path)
yield m
if settings.dynamic_params_for_other_modules:
paths = set(settings.additional_dynamic_modules)
for p in mod_paths:
if p is not None:
d = os.path.dirname(p)
for entry in os.listdir(d):
if entry not in mod_paths:
if entry.endswith('.py'):
paths.add(d + os.path.sep + entry)
if not settings.dynamic_params_for_other_modules:
return
for p in sorted(paths):
# make testing easier, sort it - same results on every interpreter
c = check_python_file(p)
if c is not None and c not in mods:
yield c
additional = set(os.path.abspath(p) for p in settings.additional_dynamic_modules)
# Check the directories of used modules.
paths = (additional | set(check_directories(used_mod_paths))) \
- used_mod_paths
# Sort here to make issues less random.
for p in sorted(paths):
m = check_python_file(p)
if m is not None and not isinstance(m, compiled.CompiledObject):
yield m
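# Illustrative sketch (not part of jedi): the directory scan performed by
# get_modules_containing_name() above, reduced to its essence: look at
# the *.py siblings of each used module and keep files mentioning `name`.
import os

def files_mentioning_name_sketch(module_paths, name):
    for module_path in module_paths:
        directory = os.path.dirname(os.path.abspath(module_path))
        for file_name in os.listdir(directory):
            if not file_name.endswith('.py'):
                continue
            path = os.path.join(directory, file_name)
            with open(path, 'rb') as f:
                code = f.read().decode('utf-8', errors='replace')
            if name in code:
                yield path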


@@ -1,518 +0,0 @@
"""
Contains all classes and functions to deal with lists, dicts, generators and
iterators in general.
Array modifications
*******************
If the content of an array (``set``/``list``) is requested somewhere, the
current module will be checked for appearances of ``arr.append``,
``arr.insert``, etc. If the ``arr`` name points to an actual array, the
content will be added
This can be really CPU intensive, as you can imagine, because |jedi| has to
follow **every** ``append`` and check whether it's the right array. However,
this works pretty well, because in *slow* cases the recursion detector and
other settings will stop this process.
It is important to note that:
1. Array modifications work only in the current module.
2. Jedi only checks Array additions; ``list.pop``, etc are ignored.
"""
from itertools import chain
from jedi import common
from jedi import debug
from jedi import settings
from jedi._compatibility import use_metaclass, is_py3, unicode
from jedi.parser import representation as pr
from jedi.evaluate import compiled
from jedi.evaluate import helpers
from jedi.evaluate import precedence
from jedi.evaluate.cache import CachedMetaClass, memoize_default, NO_DEFAULT
from jedi.cache import underscore_memoization
from jedi.evaluate import analysis
class Generator(use_metaclass(CachedMetaClass, pr.Base)):
"""Handling of `yield` functions."""
def __init__(self, evaluator, func, var_args):
super(Generator, self).__init__()
self._evaluator = evaluator
self.func = func
self.var_args = var_args
@underscore_memoization
def _get_defined_names(self):
"""
Returns a list of names that define a generator, which can return the
content of a generator.
"""
executes_generator = '__next__', 'send', 'next'
for name in compiled.generator_obj.get_defined_names():
if name.name in executes_generator:
parent = GeneratorMethod(self, name.parent)
yield helpers.FakeName(name.name, parent)
else:
yield name
def scope_names_generator(self, position=None):
yield self, self._get_defined_names()
def iter_content(self):
""" returns the content of __iter__ """
return self._evaluator.execute(self.func, self.var_args, True)
def get_index_types(self, index_array):
#debug.warning('Tried to get array access on a generator: %s', self)
analysis.add(self._evaluator, 'type-error-generator', index_array)
return []
def get_exact_index_types(self, index):
"""
Exact lookups are used for tuple lookups, which are perfectly fine if
used with generators.
"""
return [self.iter_content()[index]]
def __getattr__(self, name):
if name not in ['start_pos', 'end_pos', 'parent', 'get_imports',
'asserts', 'doc', 'docstr', 'get_parent_until',
'get_code', 'subscopes']:
raise AttributeError("Accessing %s of %s is not allowed."
% (self, name))
return getattr(self.func, name)
def __repr__(self):
return "<%s of %s>" % (type(self).__name__, self.func)
class GeneratorMethod(object):
"""``__next__`` and ``send`` methods."""
def __init__(self, generator, builtin_func):
self._builtin_func = builtin_func
self._generator = generator
def execute(self):
return self._generator.iter_content()
def __getattr__(self, name):
return getattr(self._builtin_func, name)
class GeneratorComprehension(Generator):
def __init__(self, evaluator, comprehension):
super(GeneratorComprehension, self).__init__(evaluator, comprehension, None)
self.comprehension = comprehension
def iter_content(self):
return self._evaluator.eval_statement_element(self.comprehension)
class Array(use_metaclass(CachedMetaClass, pr.Base)):
"""
Used as a mirror to pr.Array, if needed. It defines some getter
methods which are important in this module.
"""
def __init__(self, evaluator, array):
self._evaluator = evaluator
self._array = array
@memoize_default(NO_DEFAULT)
def get_index_types(self, index_array=()):
"""
Get the types of a specific index or all, if not given.
:param indexes: The index input types.
"""
indexes = create_indexes_or_slices(self._evaluator, index_array)
if [index for index in indexes if isinstance(index, Slice)]:
return [self]
lookup_done = False
types = []
for index in indexes:
if isinstance(index, compiled.CompiledObject) \
and isinstance(index.obj, (int, str, unicode)):
with common.ignored(KeyError, IndexError, TypeError):
types += self.get_exact_index_types(index.obj)
lookup_done = True
return types if lookup_done else self.values()
@memoize_default(NO_DEFAULT)
def values(self):
result = list(_follow_values(self._evaluator, self._array.values))
result += check_array_additions(self._evaluator, self)
return result
def get_exact_index_types(self, mixed_index):
""" Here the index is an int/str. Raises IndexError/KeyError """
index = mixed_index
if self.type == pr.Array.DICT:
index = None
for i, key_statement in enumerate(self._array.keys):
# Because we only want the key to be a string.
key_expression_list = key_statement.expression_list()
if len(key_expression_list) != 1: # cannot deal with complex strings
continue
key = key_expression_list[0]
if isinstance(key, pr.Literal):
key = key.value
elif isinstance(key, pr.Name):
key = str(key)
else:
continue
if mixed_index == key:
index = i
break
if index is None:
raise KeyError('No key found in dictionary')
# Can raise an IndexError
values = [self._array.values[index]]
return _follow_values(self._evaluator, values)
def scope_names_generator(self, position=None):
"""
This method generates all `ArrayMethod` for one pr.Array.
It returns e.g. for a list: append, pop, ...
"""
# `array.type` is a string with the type, e.g. 'list'.
scope = self._evaluator.find_types(compiled.builtin, self._array.type)[0]
scope = self._evaluator.execute(scope)[0] # builtins only have one class
for _, names in scope.scope_names_generator():
yield self, [ArrayMethod(n) for n in names]
@common.safe_property
def parent(self):
return compiled.builtin
def get_parent_until(self):
return compiled.builtin
def __getattr__(self, name):
if name not in ['type', 'start_pos', 'get_only_subelement', 'parent',
'get_parent_until', 'items']:
raise AttributeError('Strange access on %s: %s.' % (self, name))
return getattr(self._array, name)
def __iter__(self):
return iter(self._array)
def __len__(self):
return len(self._array)
def __repr__(self):
return "<e%s of %s>" % (type(self).__name__, self._array)
class ArrayMethod(object):
"""
A name, e.g. `list.append`, it is used to access the original array
methods.
"""
def __init__(self, name):
super(ArrayMethod, self).__init__()
self.name = name
def __getattr__(self, name):
# Set access privileges:
if name not in ['parent', 'names', 'start_pos', 'end_pos', 'get_code']:
raise AttributeError('Strange access on %s: %s.' % (self, name))
return getattr(self.name, name)
def get_parent_until(self):
return compiled.builtin
def __repr__(self):
return "<%s of %s>" % (type(self).__name__, self.name)
class MergedArray(Array):
def __init__(self, evaluator, arrays):
super(MergedArray, self).__init__(evaluator, arrays[-1]._array)
self._arrays = arrays
def get_index_types(self, mixed_index):
return list(chain(*(a.values() for a in self._arrays)))
def get_exact_index_types(self, mixed_index):
raise IndexError
def __iter__(self):
for array in self._arrays:
for a in array:
yield a
def __len__(self):
return sum(len(a) for a in self._arrays)
def get_iterator_types(inputs):
"""Returns the types of any iterator (arrays, yields, __iter__, etc)."""
iterators = []
# Take the first statement (a `for` loop always has only
# one, remember `in`) and follow it.
for it in inputs:
if isinstance(it, (Generator, Array, ArrayInstance)):
iterators.append(it)
else:
if not hasattr(it, 'execute_subscope_by_name'):
debug.warning('iterator/for loop input wrong: %s', it)
continue
try:
iterators += it.execute_subscope_by_name('__iter__')
except KeyError:
debug.warning('iterators: No __iter__ method found.')
result = []
from jedi.evaluate.representation import Instance
for it in iterators:
if isinstance(it, Array):
# Array is a little bit special, since this is an internal
# array, but there's also the list builtin, which is
# another thing.
result += it.values()
elif isinstance(it, Instance):
# __iter__ returned an instance.
name = '__next__' if is_py3 else 'next'
try:
result += it.execute_subscope_by_name(name)
except KeyError:
debug.warning('Instance has no __next__ function in %s.', it)
else:
# Is a generator.
result += it.iter_content()
return result
def check_array_additions(evaluator, array):
""" Just a mapper function for the internal _check_array_additions """
if not pr.Array.is_type(array._array, pr.Array.LIST, pr.Array.SET):
# TODO also check for dict updates
return []
is_list = array._array.type == 'list'
current_module = array._array.get_parent_until()
res = _check_array_additions(evaluator, array, current_module, is_list)
return res
@memoize_default([], evaluator_is_first_arg=True)
def _check_array_additions(evaluator, compare_array, module, is_list):
"""
Checks if a `pr.Array` has "add" statements:
>>> a = [""]
>>> a.append(1)
"""
if not settings.dynamic_array_additions or isinstance(module, compiled.CompiledObject):
return []
def check_calls(calls, add_name):
"""
Calls are processed here. The part before the call is searched and
compared with the original Array.
"""
result = []
for c in calls:
call_path = list(c.generate_call_path())
call_path_simple = [unicode(n) if isinstance(n, pr.NamePart) else n
for n in call_path]
separate_index = call_path_simple.index(add_name)
if add_name == call_path_simple[-1] or separate_index == 0:
# this means that there is no execution -> [].append
# or the keyword is at the start -> append()
continue
backtrack_path = iter(call_path[:separate_index])
position = c.start_pos
scope = c.get_parent_until(pr.IsScope)
found = evaluator.eval_call_path(backtrack_path, scope, position)
if not compare_array in found:
continue
params = call_path[separate_index + 1]
if not params.values:
continue # no params: just ignore it
if add_name in ['append', 'add']:
for param in params:
result += evaluator.eval_statement(param)
elif add_name in ['insert']:
try:
second_param = params[1]
except IndexError:
continue
else:
result += evaluator.eval_statement(second_param)
elif add_name in ['extend', 'update']:
for param in params:
iterators = evaluator.eval_statement(param)
result += get_iterator_types(iterators)
return result
from jedi.evaluate import representation as er
def get_execution_parent(element, *stop_classes):
""" Used to get an Instance/FunctionExecution parent """
if isinstance(element, Array):
stmt = element._array.parent
else:
# is an Instance with an ArrayInstance inside
stmt = element.var_args[0].var_args.parent
if isinstance(stmt, er.InstanceElement):
stop_classes = list(stop_classes) + [er.Function]
return stmt.get_parent_until(stop_classes)
temp_param_add = settings.dynamic_params_for_other_modules
settings.dynamic_params_for_other_modules = False
search_names = ['append', 'extend', 'insert'] if is_list else \
['add', 'update']
comp_arr_parent = get_execution_parent(compare_array, er.FunctionExecution)
possible_stmts = []
res = []
for n in search_names:
try:
possible_stmts += module.used_names[n]
except KeyError:
continue
for stmt in possible_stmts:
# Check if the original scope is an execution. If it is, one
# can search for the same statement, that is in the module
# dict. Executions are somewhat special in jedi, since they
# literally copy the contents of a function.
if isinstance(comp_arr_parent, er.FunctionExecution):
stmt = comp_arr_parent. \
get_statement_for_position(stmt.start_pos)
if stmt is None:
continue
# InstanceElements are special, because they don't get copied,
# but have this wrapper around them.
if isinstance(comp_arr_parent, er.InstanceElement):
stmt = er.InstanceElement(comp_arr_parent.instance, stmt)
if evaluator.recursion_detector.push_stmt(stmt):
# check recursion
continue
res += check_calls(helpers.scan_statement_for_calls(stmt, n), n)
evaluator.recursion_detector.pop_stmt()
# reset settings
settings.dynamic_params_for_other_modules = temp_param_add
return res
def check_array_instances(evaluator, instance):
"""Used for set() and list() instances."""
if not settings.dynamic_arrays_instances:
return instance.var_args
ai = ArrayInstance(evaluator, instance)
return [ai]
class ArrayInstance(pr.Base):
"""
Used for the usage of set() and list().
This is definitely a hack, but a good one :-)
It makes it possible to use set/list conversions.
"""
def __init__(self, evaluator, instance):
self._evaluator = evaluator
self.instance = instance
self.var_args = instance.var_args
def iter_content(self):
"""
The index is just ignored here; because of all the appends etc.,
lists/sets are too complicated to handle it.
"""
items = []
from jedi.evaluate.representation import Instance
for stmt in self.var_args:
for typ in self._evaluator.eval_statement(stmt):
if isinstance(typ, Instance) and len(typ.var_args):
array = typ.var_args[0]
if isinstance(array, ArrayInstance):
# Certain combinations can cause recursions, see tests.
if not self._evaluator.recursion_detector.push_stmt(self.var_args):
items += array.iter_content()
self._evaluator.recursion_detector.pop_stmt()
items += get_iterator_types([typ])
# TODO check if exclusion of tuple is a problem here.
if isinstance(self.var_args, tuple) or self.var_args.parent is None:
return [] # generated var_args should not be checked for arrays
module = self.var_args.get_parent_until()
is_list = str(self.instance.name) == 'list'
items += _check_array_additions(self._evaluator, self.instance, module, is_list)
return items
def _follow_values(evaluator, values):
""" helper function for the index getters """
return list(chain.from_iterable(evaluator.eval_statement(v) for v in values))
class Slice(object):
def __init__(self, evaluator, start, stop, step):
self._evaluator = evaluator
# all of them are either a Precedence or None.
self._start = start
self._stop = stop
self._step = step
@property
def obj(self):
"""
Imitate CompiledObject.obj behavior and return a ``builtin.slice()``
object.
"""
def get(element):
if element is None:
return None
result = self._evaluator.process_precedence_element(element)
if len(result) != 1:
# We want slices to be clearly defined with just one type.
# Otherwise we will return an empty slice object.
raise IndexError
try:
return result[0].obj
except AttributeError:
return None
try:
return slice(get(self._start), get(self._stop), get(self._step))
except IndexError:
return slice(None, None, None)
def create_indexes_or_slices(evaluator, index_array):
if not index_array:
return ()
# Just take the first part of the "array", because this is Python stdlib
# behavior. Numpy et al. perform differently, but Jedi won't understand
# that anyway.
expression_list = index_array[0].expression_list()
prec = precedence.create_precedence(expression_list)
# check for slices
if isinstance(prec, precedence.Precedence) and prec.operator == ':':
start = prec.left
if isinstance(start, precedence.Precedence) and start.operator == ':':
stop = start.right
start = start.left
step = prec.right
else:
stop = prec.right
step = None
return (Slice(evaluator, start, stop, step),)
else:
return tuple(evaluator.process_precedence_element(prec))


@@ -0,0 +1,100 @@
"""
This module is not intended to be used in jedi, rather it will be fed to the
jedi-parser to replace classes in the typing module
"""
try:
from collections import abc
except ImportError:
# python 2
import collections as abc
def factory(typing_name, indextypes):
class Iterable(abc.Iterable):
def __iter__(self):
while True:
yield indextypes[0]()
class Iterator(Iterable, abc.Iterator):
def next(self):
""" needed for python 2 """
return self.__next__()
def __next__(self):
return indextypes[0]()
class Sequence(abc.Sequence):
def __getitem__(self, index):
return indextypes[0]()
class MutableSequence(Sequence, abc.MutableSequence):
pass
class List(MutableSequence, list):
pass
class Tuple(Sequence, tuple):
def __getitem__(self, index):
if indextypes[1] == Ellipsis:
# https://www.python.org/dev/peps/pep-0484/#the-typing-module
# Tuple[int, ...] means a tuple of ints of undetermined length
return indextypes[0]()
else:
return indextypes[index]()
class AbstractSet(Iterable, abc.Set):
pass
class MutableSet(AbstractSet, abc.MutableSet):
pass
class KeysView(Iterable, abc.KeysView):
pass
class ValuesView(abc.ValuesView):
def __iter__(self):
while True:
yield indextypes[1]()
class ItemsView(abc.ItemsView):
def __iter__(self):
while True:
yield (indextypes[0](), indextypes[1]())
class Mapping(Iterable, abc.Mapping):
def __getitem__(self, item):
return indextypes[1]()
def keys(self):
return KeysView()
def values(self):
return ValuesView()
def items(self):
return ItemsView()
class MutableMapping(Mapping, abc.MutableMapping):
pass
class Dict(MutableMapping, dict):
pass
dct = {
"Sequence": Sequence,
"MutableSequence": MutableSequence,
"List": List,
"Iterable": Iterable,
"Iterator": Iterator,
"AbstractSet": AbstractSet,
"MutableSet": MutableSet,
"Mapping": Mapping,
"MutableMapping": MutableMapping,
"Tuple": Tuple,
"KeysView": KeysView,
"ItemsView": ItemsView,
"ValuesView": ValuesView,
"Dict": Dict,
}
return dct[typing_name]
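# Illustrative usage of factory() above: build the replacement class for
# typing.List[int]. Indexing any instance yields an int instance, no
# matter what the list would really contain.
IntList = factory("List", (int,))
assert IntList()[0] == 0  # int() -- the index type, not real content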


@@ -0,0 +1,61 @@
from jedi.evaluate.base_context import ContextSet, NO_CONTEXTS
class AbstractLazyContext(object):
def __init__(self, data):
self.data = data
def __repr__(self):
return '<%s: %s>' % (self.__class__.__name__, self.data)
def infer(self):
raise NotImplementedError
class LazyKnownContext(AbstractLazyContext):
"""data is a context."""
def infer(self):
return ContextSet(self.data)
class LazyKnownContexts(AbstractLazyContext):
"""data is a ContextSet."""
def infer(self):
return self.data
class LazyUnknownContext(AbstractLazyContext):
def __init__(self):
super(LazyUnknownContext, self).__init__(None)
def infer(self):
return NO_CONTEXTS
class LazyTreeContext(AbstractLazyContext):
def __init__(self, context, node):
super(LazyTreeContext, self).__init__(node)
self._context = context
# We need to save the predefined names. It's an unfortunate side effect
# that needs to be tracked, otherwise results will be wrong.
self._predefined_names = dict(context.predefined_names)
def infer(self):
old, self._context.predefined_names = \
self._context.predefined_names, self._predefined_names
try:
return self._context.eval_node(self.data)
finally:
self._context.predefined_names = old
def get_merged_lazy_context(lazy_contexts):
if len(lazy_contexts) > 1:
return MergedLazyContexts(lazy_contexts)
else:
return lazy_contexts[0]
class MergedLazyContexts(AbstractLazyContext):
"""data is a list of lazy contexts."""
def infer(self):
return ContextSet.from_sets(l.infer() for l in self.data)
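# Illustrative sketch (not part of jedi): the lazy-context pattern above
# in miniature. Work is deferred until infer() is called, and merged lazy
# contexts simply union the results of their parts.
class _LazySketch(object):
    def __init__(self, thunk):
        self._thunk = thunk

    def infer(self):
        return self._thunk()

parts = [_LazySketch(lambda: {'int'}), _LazySketch(lambda: set())]
assert set().union(*(p.infer() for p in parts)) == {'int'}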


@@ -1,375 +1,195 @@
import copy
from collections import defaultdict
from jedi._compatibility import unicode, zip_longest
from jedi.parser import representation as pr
from jedi.evaluate import iterable
from jedi import common
from jedi.evaluate import helpers
from jedi.evaluate.utils import PushBackIterator
from jedi.evaluate import analysis
from jedi.evaluate.compiled import CompiledObject
from jedi.evaluate.lazy_context import LazyKnownContext, \
LazyTreeContext, LazyUnknownContext
from jedi.evaluate import docstrings
from jedi.evaluate import pep0484
from jedi.evaluate.context import iterable
class ExecutedParam(pr.Param):
def __init__(self):
"""Don't use this method, it's just here to overwrite the old one."""
pass
@classmethod
def from_param(cls, param, parent, var_args):
instance = cls()
before = ()
for cls in param.__class__.__mro__:
with common.ignored(AttributeError):
if before == cls.__slots__:
continue
before = cls.__slots__
for name in before:
setattr(instance, name, getattr(param, name))
instance.original_param = param
instance.is_generated = True
instance.parent = parent
instance.var_args = var_args
return instance
def _add_argument_issue(parent_context, error_name, lazy_context, message):
if isinstance(lazy_context, LazyTreeContext):
node = lazy_context.data
if node.parent.type == 'argument':
node = node.parent
analysis.add(parent_context, error_name, node, message)
def _get_calling_var_args(evaluator, var_args):
old_var_args = None
while var_args != old_var_args:
old_var_args = var_args
for argument in reversed(var_args):
if not isinstance(argument, pr.Statement):
continue
exp_list = argument.expression_list()
if len(exp_list) != 2 or exp_list[0] not in ('*', '**'):
continue
class ExecutedParam(object):
"""Fake a param and give it values."""
def __init__(self, execution_context, param_node, lazy_context):
self._execution_context = execution_context
self._param_node = param_node
self._lazy_context = lazy_context
self.string_name = param_node.name.value
names, _ = evaluator.goto(argument, [exp_list[1].get_code()])
if len(names) != 1:
break
param = names[0].parent
if not isinstance(param, ExecutedParam):
if isinstance(param, pr.Param):
# There is no calling var_args in this case - there's just
# a param without any input.
return None
break
# We never want var_args to be a tuple. This should be enough for
# now, we can change it later, if we need to.
if isinstance(param.var_args, pr.Array):
var_args = param.var_args
return var_args
def infer(self):
pep0484_hints = pep0484.infer_param(self._execution_context, self._param_node)
doc_params = docstrings.infer_param(self._execution_context, self._param_node)
if pep0484_hints or doc_params:
return pep0484_hints | doc_params
return self._lazy_context.infer()
@property
def var_args(self):
return self._execution_context.var_args
def __repr__(self):
return '<%s: %s>' % (self.__class__.__name__, self.string_name)
def get_params(evaluator, func, var_args):
result = []
def get_params(execution_context, var_args):
result_params = []
param_dict = {}
for param in func.params:
param_dict[str(param.get_name())] = param
# There may be calls, which don't fit all the params, this just ignores it.
unpacked_va = _unpack_var_args(evaluator, var_args, func)
var_arg_iterator = common.PushBackIterator(iter(unpacked_va))
funcdef = execution_context.tree_node
parent_context = execution_context.parent_context
non_matching_keys = []
keys_used = set()
for param in funcdef.get_params():
param_dict[param.name.value] = param
unpacked_va = list(var_args.unpack(funcdef))
var_arg_iterator = PushBackIterator(iter(unpacked_va))
non_matching_keys = defaultdict(lambda: [])
keys_used = {}
keys_only = False
va_values = None
had_multiple_value_error = False
for param in func.params:
for param in funcdef.get_params():
# The value and key can both be null. There, the defaults apply.
# args / kwargs will just be empty arrays / dicts, respectively.
# Wrong value count is just ignored. If you try to test cases that are
# not allowed in Python, Jedi may not show any completions.
key, argument = next(var_arg_iterator, (None, None))
while key is not None:
keys_only = True
k = unicode(key)
try:
key_param = param_dict[key]
except KeyError:
non_matching_keys[key] = argument
else:
result.append(_gen_param_name_copy(func, var_args, key_param,
values=va_values))
if key in keys_used:
had_multiple_value_error = True
m = ("TypeError: %s() got multiple values for keyword argument '%s'."
% (funcdef.name, key))
for node in var_args.get_calling_nodes():
analysis.add(parent_context, 'type-error-multiple-values',
node, message=m)
else:
keys_used[key] = ExecutedParam(execution_context, key_param, argument)
key, argument = next(var_arg_iterator, (None, None))
if k in keys_used:
had_multiple_value_error = True
m = ("TypeError: %s() got multiple values for keyword argument '%s'."
% (func.name, k))
calling_va = _get_calling_var_args(evaluator, var_args)
if calling_va is not None:
analysis.add(evaluator, 'type-error-multiple-values',
calling_va, message=m)
else:
keys_used.add(k)
key, va_values = next(var_arg_iterator, (None, []))
try:
result_params.append(keys_used[param.name.value])
continue
except KeyError:
pass
keys = []
values = []
array_type = None
has_default_value = False
if param.stars == 1:
if param.star_count == 1:
# *args param
array_type = pr.Array.TUPLE
lst_values = [va_values]
for key, va_values in var_arg_iterator:
# Iterate until a key argument is found.
if key:
var_arg_iterator.push_back((key, va_values))
break
lst_values.append(va_values)
if lst_values[0]:
values = [helpers.stmts_to_stmt(v) for v in lst_values]
elif param.stars == 2:
lazy_context_list = []
if argument is not None:
lazy_context_list.append(argument)
for key, argument in var_arg_iterator:
# Iterate until a key argument is found.
if key:
var_arg_iterator.push_back((key, argument))
break
lazy_context_list.append(argument)
seq = iterable.FakeSequence(execution_context.evaluator, 'tuple', lazy_context_list)
result_arg = LazyKnownContext(seq)
elif param.star_count == 2:
# **kwargs param
array_type = pr.Array.DICT
if non_matching_keys:
keys, values = zip(*non_matching_keys)
values = [helpers.stmts_to_stmt(list(v)) for v in values]
non_matching_keys = []
dct = iterable.FakeDict(execution_context.evaluator, dict(non_matching_keys))
result_arg = LazyKnownContext(dct)
non_matching_keys = {}
else:
# normal param
if va_values:
values = va_values
else:
if param.assignment_details:
# No value: Return the default values.
has_default_value = True
result.append(param.get_name())
# TODO is this allowed? It changes the param permanently.
param.is_generated = True
if argument is None:
# No value: Return an empty container
if param.default is None:
result_arg = LazyUnknownContext()
if not keys_only:
for node in var_args.get_calling_nodes():
m = _error_argument_count(funcdef, len(unpacked_va))
analysis.add(parent_context, 'type-error-too-few-arguments',
node, message=m)
else:
# No value: Return an empty container
values = []
if not keys_only and isinstance(var_args, pr.Array):
calling_va = _get_calling_var_args(evaluator, var_args)
if calling_va is not None:
m = _error_argument_count(func, len(unpacked_va))
analysis.add(evaluator, 'type-error-too-few-arguments',
calling_va, message=m)
result_arg = LazyTreeContext(parent_context, param.default)
else:
result_arg = argument
# Now add to result if it's not one of the previously covered cases.
if not has_default_value and (not keys_only or param.stars == 2):
keys_used.add(unicode(param.get_name()))
result.append(_gen_param_name_copy(func, var_args, param,
keys=keys, values=values,
array_type=array_type))
result_params.append(ExecutedParam(execution_context, param, result_arg))
if not isinstance(result_arg, LazyUnknownContext):
keys_used[param.name.value] = result_params[-1]
if keys_only:
# All arguments should be handed over to the next function. It's not
# about the values inside, it's about the names. Jedi needs to know that
# there's nothing to find for certain names.
for k in set(param_dict) - keys_used:
for k in set(param_dict) - set(keys_used):
param = param_dict[k]
result.append(_gen_param_name_copy(func, var_args, param))
if not (non_matching_keys or had_multiple_value_error
or param.stars or param.assignment_details):
if not (non_matching_keys or had_multiple_value_error or
param.star_count or param.default):
# add a warning only if there's not another one.
calling_va = _get_calling_var_args(evaluator, var_args)
if calling_va is not None:
m = _error_argument_count(func, len(unpacked_va))
analysis.add(evaluator, 'type-error-too-few-arguments',
calling_va, message=m)
for node in var_args.get_calling_nodes():
m = _error_argument_count(funcdef, len(unpacked_va))
analysis.add(parent_context, 'type-error-too-few-arguments',
node, message=m)
for key, va_values in non_matching_keys:
for key, lazy_context in non_matching_keys.items():
m = "TypeError: %s() got an unexpected keyword argument '%s'." \
% (func.name, key)
for value in va_values:
analysis.add(evaluator, 'type-error-keyword-argument', value, message=m)
% (funcdef.name, key)
_add_argument_issue(
parent_context,
'type-error-keyword-argument',
lazy_context,
message=m
)
remaining_params = list(var_arg_iterator)
if remaining_params:
m = _error_argument_count(func, len(unpacked_va))
for p in remaining_params[0][1]:
analysis.add(evaluator, 'type-error-too-many-arguments',
p, message=m)
return result
remaining_arguments = list(var_arg_iterator)
if remaining_arguments:
m = _error_argument_count(funcdef, len(unpacked_va))
# Just report an error for the first param that is not needed (like
# cPython).
first_key, lazy_context = remaining_arguments[0]
if var_args.get_calling_nodes():
# There might not be a valid calling node so check for that first.
_add_argument_issue(parent_context, 'type-error-too-many-arguments', lazy_context, message=m)
return result_params
def _unpack_var_args(evaluator, var_args, func):
"""
Yields key/value pairs; the key is None if it's not a named arg.
"""
argument_list = []
from jedi.evaluate.representation import InstanceElement
if isinstance(func, InstanceElement):
# Include self at this place.
argument_list.append((None, [helpers.FakeStatement([func.instance])]))
# `var_args` is typically an Array, and not a list.
for stmt in _reorder_var_args(var_args):
if not isinstance(stmt, pr.Statement):
if stmt is None:
argument_list.append((None, []))
# TODO generate warning?
continue
old = stmt
# generate a statement if it's not already one.
stmt = helpers.FakeStatement([old])
expression_list = stmt.expression_list()
if not len(expression_list):
continue
# *args
if expression_list[0] == '*':
arrays = evaluator.eval_expression_list(expression_list[1:])
iterators = [_iterate_star_args(evaluator, a, expression_list[1:], func)
for a in arrays]
for values in list(zip_longest(*iterators)):
argument_list.append((None, [v for v in values if v is not None]))
# **kwargs
elif expression_list[0] == '**':
dct = {}
for array in evaluator.eval_expression_list(expression_list[1:]):
# Merge multiple kwargs dictionaries, if used with dynamic
# parameters.
s = _star_star_dict(evaluator, array, expression_list[1:], func)
for name, (key, value) in s.items():
try:
dct[name][1].add(value)
except KeyError:
dct[name] = key, set([value])
for key, values in dct.values():
# merge **kwargs/*args also for dynamic parameters
for i, p in enumerate(func.params):
if str(p.get_name()) == str(key) and not p.stars:
try:
k, vs = argument_list[i]
except IndexError:
pass
else:
if k is None: # k would imply a named argument
# Don't merge if they originate at the same
# place. -> type-error-multiple-values
if [v.parent for v in values] != [v.parent for v in vs]:
vs.extend(values)
break
else:
# default is to merge
argument_list.append((key, values))
# Normal arguments (including key arguments).
else:
if stmt.assignment_details:
key_arr, op = stmt.assignment_details[0]
# Filter error tokens
key_arr = [x for x in key_arr if isinstance(x, pr.Call)]
# named parameter
if key_arr and isinstance(key_arr[0], pr.Call):
argument_list.append((key_arr[0].name, [stmt]))
else:
argument_list.append((None, [stmt]))
return argument_list
def _reorder_var_args(var_args):
"""
Reordering var_args is necessary, because star args sometimes appear after
a named argument, but in the actual order they are prepended.
"""
named_index = None
new_args = []
for i, stmt in enumerate(var_args):
if isinstance(stmt, pr.Statement):
if named_index is None and stmt.assignment_details:
named_index = i
if named_index is not None:
expression_list = stmt.expression_list()
if expression_list and expression_list[0] == '*':
new_args.insert(named_index, stmt)
named_index += 1
continue
new_args.append(stmt)
return new_args
def _iterate_star_args(evaluator, array, expression_list, func):
from jedi.evaluate.representation import Instance
if isinstance(array, iterable.Array):
for field_stmt in array: # yield from plz!
yield field_stmt
elif isinstance(array, iterable.Generator):
for field_stmt in array.iter_content():
yield helpers.FakeStatement([field_stmt])
elif isinstance(array, Instance) and array.name == 'tuple':
pass
else:
if expression_list:
m = "TypeError: %s() argument after * must be a sequence, not %s" \
% (func.name, array)
analysis.add(evaluator, 'type-error-star',
expression_list[0], message=m)
def _star_star_dict(evaluator, array, expression_list, func):
dct = {}
from jedi.evaluate.representation import Instance
if isinstance(array, Instance) and array.name == 'dict':
# For now ignore this case. In the future add proper iterators and just
# make one call without crazy isinstance checks.
return {}
if isinstance(array, iterable.Array) and array.type == pr.Array.DICT:
for key_stmt, value_stmt in array.items():
# The first index is the key, if syntactically correct.
call = key_stmt.expression_list()[0]
if isinstance(call, pr.Name):
key = call
elif isinstance(call, pr.Call):
key = call.name
else:
continue # We ignore complicated statements here, for now.
# If the string is a duplicate, we don't care; it's illegal Python
# anyway.
dct[str(key)] = key, value_stmt
else:
if expression_list:
m = "TypeError: %s argument after ** must be a mapping, not %s" \
% (func.name, array)
analysis.add(evaluator, 'type-error-star-star',
expression_list[0], message=m)
return dct
def _gen_param_name_copy(func, var_args, param, keys=(), values=(), array_type=None):
"""
Create a param with the original scope (of varargs) as parent.
"""
if isinstance(var_args, pr.Array):
parent = var_args.parent
start_pos = var_args.start_pos
else:
parent = func
start_pos = 0, 0
new_param = ExecutedParam.from_param(param, parent, var_args)
# create an Array (-> needed for *args/**kwargs tuples/dicts)
arr = pr.Array(helpers.FakeSubModule, start_pos, array_type, parent)
arr.values = list(values) # Arrays only work with list.
key_stmts = []
for key in keys:
key_stmts.append(helpers.FakeStatement([key], start_pos))
arr.keys = key_stmts
arr.type = array_type
new_param.set_expression_list([arr])
name = copy.copy(param.get_name())
name.parent = new_param
return name
def _error_argument_count(funcdef, actual_count):
params = funcdef.get_params()
default_arguments = sum(1 for p in params if p.default or p.star_count)
if default_arguments == 0:
before = 'exactly '
else:
before = 'from %s to ' % (len(params) - default_arguments)
return ('TypeError: %s() takes %s%s arguments (%s given).'
% (funcdef.name, before, len(params), actual_count))
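# Illustrative, with plain counts standing in for the funcdef node: how
# the message built above reads for `def f(a, b, c=0)` called with one
# positional argument.
def error_argument_count_sketch(name, n_params, n_defaults, actual_count):
    if n_defaults == 0:
        before = 'exactly '
    else:
        before = 'from %s to ' % (n_params - n_defaults)
    return ('TypeError: %s() takes %s%s arguments (%s given).'
            % (name, before, n_params, actual_count))

assert error_argument_count_sketch('f', 3, 1, 1) == \
    'TypeError: f() takes from 2 to 3 arguments (1 given).'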
def _create_default_param(execution_context, param):
if param.star_count == 1:
result_arg = LazyKnownContext(
iterable.FakeSequence(execution_context.evaluator, 'tuple', [])
)
elif param.star_count == 2:
result_arg = LazyKnownContext(
iterable.FakeDict(execution_context.evaluator, {})
)
elif param.default is None:
result_arg = LazyUnknownContext()
else:
result_arg = LazyTreeContext(execution_context.parent_context, param.default)
return ExecutedParam(execution_context, param, result_arg)
def create_default_params(execution_context, funcdef):
return [_create_default_param(execution_context, p)
for p in funcdef.get_params()]


@@ -0,0 +1,6 @@
from jedi.evaluate.cache import evaluator_function_cache
@evaluator_function_cache()
def get_yield_exprs(evaluator, funcdef):
return list(funcdef.iter_yield_exprs())

jedi/evaluate/pep0484.py

@@ -0,0 +1,222 @@
"""
PEP 0484 ( https://www.python.org/dev/peps/pep-0484/ ) describes type hints
through function annotations. There is a strong suggestion in this document
that only the type of type hinting defined in PEP0484 should be allowed
as annotations in future python versions.
The (initial / probably incomplete) implementation todo list for pep-0484:
v Function parameter annotations with builtin/custom type classes
v Function returntype annotations with builtin/custom type classes
v Function parameter annotations with strings (forward reference)
v Function return type annotations with strings (forward reference)
v Local variable type hints
v Assigned types: `Url = str\ndef get(url:Url) -> str:`
v Type hints in `with` statements
x Stub files support
x support `@no_type_check` and `@no_type_check_decorator`
x support for typing.cast() operator
x support for type hint comments for functions, `# type: (int, str) -> int`.
See comment from Guido https://github.com/davidhalter/jedi/issues/662
"""
import os
import re
from parso import ParserSyntaxError
from parso.python import tree
from jedi.evaluate.cache import evaluator_method_cache
from jedi.evaluate import compiled
from jedi.evaluate.base_context import NO_CONTEXTS, ContextSet
from jedi.evaluate.lazy_context import LazyTreeContext
from jedi.evaluate.context import ModuleContext
from jedi import debug
from jedi import _compatibility
from jedi import parser_utils
def _evaluate_for_annotation(context, annotation, index=None):
"""
Evaluates a string-node, looking for an annotation.
If index is not None, the annotation is expected to be a tuple
and we're interested in that index
"""
if annotation is not None:
context_set = context.eval_node(_fix_forward_reference(context, annotation))
if index is not None:
context_set = context_set.filter(
lambda context: context.array_type == 'tuple' \
and len(list(context.py__iter__())) >= index
).py__getitem__(index)
return context_set.execute_evaluated()
else:
return NO_CONTEXTS
def _fix_forward_reference(context, node):
evaled_nodes = context.eval_node(node)
if len(evaled_nodes) != 1:
debug.warning("Eval'ed typing index %s should lead to 1 object, "
" not %s" % (node, evaled_nodes))
return node
evaled_node = list(evaled_nodes)[0]
if isinstance(evaled_node, compiled.CompiledObject) and \
isinstance(evaled_node.obj, str):
try:
new_node = context.evaluator.grammar.parse(
_compatibility.unicode(evaled_node.obj),
start_symbol='eval_input',
error_recovery=False
)
except ParserSyntaxError:
debug.warning('Annotation not parsed: %s' % evaled_node.obj)
return node
else:
module = node.get_root_node()
parser_utils.move(new_node, module.end_pos[0])
new_node.parent = context.tree_node
return new_node
else:
return node
@evaluator_method_cache()
def infer_param(execution_context, param):
annotation = param.annotation
module_context = execution_context.get_root_context()
return _evaluate_for_annotation(module_context, annotation)
def py__annotations__(funcdef):
return_annotation = funcdef.annotation
if return_annotation:
dct = {'return': return_annotation}
else:
dct = {}
for function_param in funcdef.get_params():
param_annotation = function_param.annotation
if param_annotation is not None:
dct[function_param.name.value] = param_annotation
return dct
@evaluator_method_cache()
def infer_return_types(function_context):
annotation = py__annotations__(function_context.tree_node).get("return", None)
module_context = function_context.get_root_context()
return _evaluate_for_annotation(module_context, annotation)
_typing_module = None
def _get_typing_replacement_module(grammar):
"""
The idea is to return our jedi replacement for the PEP-0484 typing module
as discussed at https://github.com/davidhalter/jedi/issues/663
"""
global _typing_module
if _typing_module is None:
typing_path = \
os.path.abspath(os.path.join(__file__, "../jedi_typing.py"))
with open(typing_path) as f:
code = _compatibility.unicode(f.read())
_typing_module = grammar.parse(code)
return _typing_module
def py__getitem__(context, typ, node):
if not typ.get_root_context().name.string_name == "typing":
return None
# we assume that any class using [] in a module called
# "typing" with a name for which we have a replacement
# should be replaced by that class. This is not 100%
# airtight but I don't have a better idea to check that it's
# actually the PEP-0484 typing module and not some other one.
if node.type == "subscriptlist":
nodes = node.children[::2] # skip the commas
else:
nodes = [node]
del node
nodes = [_fix_forward_reference(context, node) for node in nodes]
type_name = typ.name.string_name
# hacked in Union and Optional, since it's hard to do nicely in parsed code
if type_name in ("Union", '_Union'):
# In Python 3.6 it's still called typing.Union but it's an instance
# called _Union.
return ContextSet.from_sets(context.eval_node(node) for node in nodes)
if type_name in ("Optional", '_Optional'):
# Here we have the same issue like in Union. Therefore we also need to
# check for the instance typing._Optional (Python 3.6).
return context.eval_node(nodes[0])
typing = ModuleContext(
context.evaluator,
module_node=_get_typing_replacement_module(context.evaluator.latest_grammar),
path=None
)
factories = typing.py__getattribute__("factory")
assert len(factories) == 1
factory = list(factories)[0]
assert factory
function_body_nodes = factory.tree_node.children[4].children
valid_classnames = set(child.name.value
for child in function_body_nodes
if isinstance(child, tree.Class))
if type_name not in valid_classnames:
return None
compiled_classname = compiled.create(context.evaluator, type_name)
from jedi.evaluate.context.iterable import FakeSequence
args = FakeSequence(
context.evaluator,
"tuple",
[LazyTreeContext(context, n) for n in nodes]
)
result = factory.execute_evaluated(compiled_classname, args)
return result
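To make the special-casing concrete, hypothetical user code for each branch above (Union evaluates all members, Optional degrades to its first argument, everything else goes through the parsed jedi_typing factory):

from typing import List, Optional, Union

u = Union[int, str]   # evaluated as the union of int and str
o = Optional[int]     # evaluated like plain int; the None side is dropped
lst = List[int]       # resolved via the jedi_typing replacement module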
def find_type_from_comment_hint_for(context, node, name):
return _find_type_from_comment_hint(context, node, node.children[1], name)
def find_type_from_comment_hint_with(context, node, name):
assert len(node.children[1].children) == 3, \
"Can only be here when children[1] is 'foo() as f'"
varlist = node.children[1].children[2]
return _find_type_from_comment_hint(context, node, varlist, name)
def find_type_from_comment_hint_assign(context, node, name):
return _find_type_from_comment_hint(context, node, node.children[0], name)
def _find_type_from_comment_hint(context, node, varlist, name):
index = None
if varlist.type in ("testlist_star_expr", "exprlist", "testlist"):
# something like "a, b = 1, 2"
index = 0
for child in varlist.children:
if child == name:
break
if child.type == "operator":
continue
index += 1
else:
return []
comment = parser_utils.get_following_comment_same_line(node)
if comment is None:
return []
match = re.match(r"^#\s*type:\s*([^#]*)", comment)
if not match:
return []
annotation = tree.String(
repr(str(match.group(1).strip())),
node.start_pos)
annotation.parent = node.parent
return _evaluate_for_annotation(context, annotation, index)
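The three comment-hint shapes the find_type_from_comment_hint_* functions target, as hypothetical user code (only the text after `# type:` is read, per the regex above):

# assign hint -> find_type_from_comment_hint_assign
x = 1                      # type: int
# for hint -> ..._for; the tuple index selects the member per target name
for a, b in [('k', 1)]:    # type: (str, int)
    pass
# with hint -> ..._with; requires the 'foo() as f' shape asserted above
with open(__file__) as f:  # type: typing.IO
    pass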


@@ -1,297 +0,0 @@
"""
Handles operator precedence.
"""
from jedi._compatibility import unicode
from jedi.parser import representation as pr
from jedi import debug
from jedi.common import PushBackIterator
from jedi.evaluate.compiled import CompiledObject, create, builtin
from jedi.evaluate import analysis
class PythonGrammar(object):
"""
Some kind of mirror of http://docs.python.org/3/reference/grammar.html.
"""
class MultiPart(str):
def __new__(cls, first, second):
self = str.__new__(cls, first)
self.second = second
return self
def __str__(self):
return str.__str__(self) + ' ' + self.second
FACTOR = '+', '-', '~'
POWER = '**',
TERM = '*', '/', '%', '//'
ARITH_EXPR = '+', '-'
SHIFT_EXPR = '<<', '>>'
AND_EXPR = '&',
XOR_EXPR = '^',
EXPR = '|',
COMPARISON = ('<', '>', '==', '>=', '<=', '!=', 'in',
MultiPart('not', 'in'), MultiPart('is', 'not'), 'is')
NOT_TEST = 'not',
AND_TEST = 'and',
OR_TEST = 'or',
#TEST = or_test ['if' or_test 'else' test] | lambdef
TERNARY = 'if',
SLICE = ':',
ORDER = (POWER, TERM, ARITH_EXPR, SHIFT_EXPR, AND_EXPR, XOR_EXPR,
EXPR, COMPARISON, AND_TEST, OR_TEST, TERNARY, SLICE)
FACTOR_PRIORITY = 0 # highest priority
LOWEST_PRIORITY = len(ORDER)
NOT_TEST_PRIORITY = LOWEST_PRIORITY - 4 # priority only lower for `and`/`or`
SLICE_PRIORITY = LOWEST_PRIORITY - 1 # priority only lower for `and`/`or`
class Precedence(object):
def __init__(self, left, operator, right):
self.left = left
self.operator = operator
self.right = right
def parse_tree(self, strip_literals=False):
def process(which):
try:
which = which.parse_tree(strip_literals)
except AttributeError:
pass
if strip_literals and isinstance(which, pr.Literal):
which = which.value
return which
return (process(self.left), self.operator.string, process(self.right))
def __repr__(self):
return '(%s %s %s)' % (self.left, self.operator, self.right)
class TernaryPrecedence(Precedence):
def __init__(self, left, operator, right, check):
super(TernaryPrecedence, self).__init__(left, operator, right)
self.check = check
def create_precedence(expression_list):
iterator = PushBackIterator(iter(expression_list))
return _check_operator(iterator)
def _syntax_error(element, msg='SyntaxError in precedence'):
debug.warning('%s: %s, %s' % (msg, element, element.start_pos))
def _get_number(iterator, priority=PythonGrammar.LOWEST_PRIORITY):
el = next(iterator)
if isinstance(el, pr.Operator):
if el in PythonGrammar.FACTOR:
right = _get_number(iterator, PythonGrammar.FACTOR_PRIORITY)
elif el in PythonGrammar.NOT_TEST \
and priority >= PythonGrammar.NOT_TEST_PRIORITY:
right = _get_number(iterator, PythonGrammar.NOT_TEST_PRIORITY)
elif el in PythonGrammar.SLICE \
and priority >= PythonGrammar.SLICE_PRIORITY:
iterator.push_back(el)
return None
else:
_syntax_error(el)
return _get_number(iterator, priority)
return Precedence(None, el, right)
elif isinstance(el, pr.tokenize.Token):
return _get_number(iterator, priority)
else:
return el
class MergedOperator(pr.Operator):
"""
A way to merge the two operators `is not` and `not in`, which are two
words instead of one.
Maybe there's a better way (directly in the tokenizer/parser?), but for
now this is fine.
"""
def __init__(self, first, second):
string = first.string + ' ' + second.string
super(MergedOperator, self).__init__(first._sub_module, string,
first.parent, first.start_pos)
self.first = first
self.second = second
def _check_operator(iterator, priority=PythonGrammar.LOWEST_PRIORITY):
try:
left = _get_number(iterator, priority)
except StopIteration:
return None
for el in iterator:
if not isinstance(el, pr.Operator):
_syntax_error(el)
continue
operator = None
for check_prio, check in enumerate(PythonGrammar.ORDER):
if check_prio >= priority:
# respect priorities.
iterator.push_back(el)
return left
try:
match_index = check.index(el)
except ValueError:
continue
match = check[match_index]
if isinstance(match, PythonGrammar.MultiPart):
next_tok = next(iterator)
if next_tok == match.second:
el = MergedOperator(el, next_tok)
else:
iterator.push_back(next_tok)
if el == 'not':
continue
operator = el
break
if operator is None:
_syntax_error(el)
continue
if operator in PythonGrammar.POWER:
check_prio += 1 # the power operator is right-associative
elif operator in PythonGrammar.TERNARY:
try:
middle = []
for each in iterator:
if each == 'else':
break
middle.append(each)
middle = create_precedence(middle)
except StopIteration:
_syntax_error(operator, 'SyntaxError ternary incomplete')
right = _check_operator(iterator, check_prio)
if right is None and not operator in PythonGrammar.SLICE:
_syntax_error(iterator.current, 'SyntaxError operand missing')
else:
if operator in PythonGrammar.TERNARY:
left = TernaryPrecedence(left, operator, right, middle)
else:
left = Precedence(left, operator, right)
return left
def _literals_to_types(evaluator, result):
# Changes literals ('a', 1, 1.0, etc) to their type instances (str(),
# int(), float(), etc).
for i, r in enumerate(result):
if is_literal(r):
# Literals are only valid as long as the operations are
# correct. Otherwise add a value-free instance.
cls = builtin.get_by_name(r.name)
result[i] = evaluator.execute(cls)[0]
return list(set(result))
def calculate(evaluator, left_result, operator, right_result):
result = []
if left_result is None and right_result:
# cases like `-1` or `1 + ~1`
for right in right_result:
result.append(_factor_calculate(evaluator, operator, right))
return result
else:
if not left_result or not right_result:
# illegal slices e.g. cause left/right_result to be None
result = (left_result or []) + (right_result or [])
result = _literals_to_types(evaluator, result)
else:
# I don't think there's a reasonable chance that a string
# operation is still correct, once we pass something like six
# objects.
if len(left_result) * len(right_result) > 6:
result = _literals_to_types(evaluator, left_result + right_result)
else:
for left in left_result:
for right in right_result:
result += _element_calculate(evaluator, left, operator, right)
return result
def _factor_calculate(evaluator, operator, right):
if _is_number(right):
if operator == '-':
return create(evaluator, -right.obj)
return right
def _is_number(obj):
return isinstance(obj, CompiledObject) \
and isinstance(obj.obj, (int, float))
def _is_string(obj):
return isinstance(obj, CompiledObject) \
and isinstance(obj.obj, (str, unicode))
def is_literal(obj):
return _is_number(obj) or _is_string(obj)
def _is_tuple(obj):
from jedi.evaluate import iterable
return isinstance(obj, iterable.Array) and obj.type == pr.Array.TUPLE
def _is_list(obj):
from jedi.evaluate import iterable
return isinstance(obj, iterable.Array) and obj.type == pr.Array.LIST
def _element_calculate(evaluator, left, operator, right):
from jedi.evaluate import iterable, representation as er
l_is_num = _is_number(left)
r_is_num = _is_number(right)
if operator == '*':
# for iterables, ignore * operations
if isinstance(left, iterable.Array) or _is_string(left):
return [left]
elif isinstance(right, iterable.Array) or _is_string(right):
return [right]
elif operator == '+':
if l_is_num and r_is_num or _is_string(left) and _is_string(right):
return [create(evaluator, left.obj + right.obj)]
elif _is_tuple(left) and _is_tuple(right) or _is_list(left) and _is_list(right):
return [iterable.MergedArray(evaluator, (left, right))]
elif operator == '-':
if l_is_num and r_is_num:
return [create(evaluator, left.obj - right.obj)]
elif operator == '%':
# With strings and numbers the left type typically remains. Except for
# `int() % float()`.
return [left]
def check(obj):
"""Checks if a Jedi object is either a float or an int."""
return isinstance(obj, er.Instance) and obj.name in ('int', 'float')
# Static analysis, one is a number, the other one is not.
if operator in ('+', '-') and l_is_num != r_is_num \
and not (check(left) or check(right)):
message = "TypeError: unsupported operand type(s) for +: %s and %s"
analysis.add(evaluator, 'type-error-operation', operator,
message % (left, right))
return [left, right]

jedi/evaluate/project.py

@@ -0,0 +1,40 @@
import os
import sys
from jedi.evaluate.sys_path import get_venv_path, detect_additional_paths
from jedi.cache import underscore_memoization
class Project(object):
def __init__(self, sys_path=None):
if sys_path is not None:
self._sys_path = sys_path
venv = os.getenv('VIRTUAL_ENV')
if venv:
sys_path = get_venv_path(venv)
if sys_path is None:
sys_path = sys.path
base_sys_path = list(sys_path)
try:
base_sys_path.remove('')
except ValueError:
pass
self._base_sys_path = base_sys_path
def add_script_path(self, script_path):
self._script_path = script_path
def add_evaluator(self, evaluator):
self._evaluator = evaluator
@property
@underscore_memoization
def sys_path(self):
if self._script_path is None:
return self._base_sys_path
return self._base_sys_path + detect_additional_paths(self._evaluator, self._script_path)
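A usage sketch under stated assumptions: this revision of jedi is importable, Project is internal API, and add_script_path()/add_evaluator() are normally called by jedi itself before sys_path is read.

import os
from jedi.evaluate.project import Project

os.environ.pop('VIRTUAL_ENV', None)   # no venv, so we fall back to sys.path
project = Project()                   # base paths = sys.path without ''
assert '' not in project._base_sys_path
# Once jedi has called add_script_path(...) and add_evaluator(...), reading
# project.sys_path also runs detect_additional_paths() exactly once, thanks
# to @underscore_memoization.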


@@ -4,162 +4,132 @@ must stop recursions going mad. Some settings are here to make |jedi| stop at
the right time. You can read more about them :ref:`here <settings-recursion>`.
Next to :mod:`jedi.evaluate.cache` this module also makes |jedi| not
thread-safe. Why? ``execution_recursion_decorator`` uses class variables to
count the function calls.
.. _settings-recursion:
Settings
~~~~~~~~~~
Recursion settings are important if you don't want extremely
recursive Python code to go absolutely crazy.
The default values are based on experiments while completing the |jedi| library
itself (inception!). But I don't think there's any other Python library that
uses recursion in a similarly extreme way. Completion should also be fast and
therefore the quality might not always be maximal.
.. autodata:: recursion_limit
.. autodata:: total_function_execution_limit
.. autodata:: per_function_execution_limit
.. autodata:: per_function_recursion_limit
"""
from jedi.parser import representation as pr
from contextlib import contextmanager
from jedi import debug
from jedi import settings
from jedi.evaluate import compiled
from jedi.evaluate import iterable
from jedi.evaluate.base_context import NO_CONTEXTS
def recursion_decorator(func):
def run(evaluator, stmt, *args, **kwargs):
rec_detect = evaluator.recursion_detector
# print stmt, len(self.node_statements())
if rec_detect.push_stmt(stmt):
return []
else:
result = func(evaluator, stmt, *args, **kwargs)
rec_detect.pop_stmt()
return result
return run
recursion_limit = 15
"""
Like ``sys.getrecursionlimit()``, just for |jedi|.
"""
total_function_execution_limit = 200
"""
This is a hard limit of how many non-builtin functions can be executed.
"""
per_function_execution_limit = 6
"""
The maximal number of times a specific function may be executed.
"""
per_function_recursion_limit = 2
"""
A function may not be executed more than this number of times recursively.
"""
class RecursionDetector(object):
def __init__(self):
self.pushed_nodes = []
@contextmanager
def execution_allowed(evaluator, node):
"""
A context manager to detect recursions in statements. In a recursion, a
statement at the same place in the same module may not be executed a
second time.
"""
def __init__(self):
self.top = None
self.current = None
pushed_nodes = evaluator.recursion_detector.pushed_nodes
def push_stmt(self, stmt):
self.current = _RecursionNode(stmt, self.current)
check = self._check_recursion()
if check:
debug.warning('caught stmt recursion: %s against %s @%s', stmt,
check.stmt, stmt.start_pos)
self.pop_stmt()
return True
return False
def pop_stmt(self):
if self.current is not None:
# I don't know how current can be None, but sometimes it happens
# with Python3.
self.current = self.current.parent
def _check_recursion(self):
test = self.current
while True:
test = test.parent
if self.current == test:
return test
if not test:
return False
def node_statements(self):
result = []
n = self.current
while n:
result.insert(0, n.stmt)
n = n.parent
return result
if node in pushed_nodes:
debug.warning('caught stmt recursion: %s @%s', node,
node.start_pos)
yield False
else:
pushed_nodes.append(node)
yield True
pushed_nodes.pop()
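The intended call pattern is the one eval_expr_stmt in the new syntax_tree.py (further down) uses. Here is a self-contained re-creation of the mechanism with no jedi imports; note the try/finally is a safety addition of this sketch, while the version above pops unconditionally after the yield:

from contextlib import contextmanager

@contextmanager
def execution_allowed(pushed, node):
    # Refuse re-entry for a node that is already on the stack.
    if node in pushed:
        yield False
    else:
        pushed.append(node)
        try:
            yield True
        finally:
            pushed.pop()

pushed = []
with execution_allowed(pushed, 'stmt@3,0') as allowed:
    print(allowed)                       # True: first visit
    with execution_allowed(pushed, 'stmt@3,0') as again:
        print(again)                     # False: recursion detected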
class _RecursionNode(object):
""" A node of the RecursionDecorator. """
def __init__(self, stmt, parent):
self.script = stmt.get_parent_until()
self.position = stmt.start_pos
self.parent = parent
self.stmt = stmt
# Don't check param instances, they are not causing recursions
# The same's true for the builtins, because the builtins are really
# simple.
self.is_ignored = isinstance(stmt, pr.Param) \
or (self.script == compiled.builtin)
def __eq__(self, other):
if not other:
return None
# List comprehensions start on the same line as their statement.
# Therefore we have the unfortunate situation of the same start_pos for
# two statements.
is_list_comp = lambda x: isinstance(x, pr.ListComprehension)
return self.script == other.script \
and self.position == other.position \
and not is_list_comp(self.stmt.parent) \
and not is_list_comp(other.parent) \
and not self.is_ignored and not other.is_ignored
def execution_recursion_decorator(func):
def run(execution, evaluate_generator=False):
detector = execution._evaluator.execution_recursion_detector
if detector.push_execution(execution, evaluate_generator):
result = []
else:
result = func(execution, evaluate_generator)
detector.pop_execution()
return result
return run
def execution_recursion_decorator(default=NO_CONTEXTS):
def decorator(func):
def wrapper(execution, **kwargs):
detector = execution.evaluator.execution_recursion_detector
limit_reached = detector.push_execution(execution)
try:
if limit_reached:
result = default
else:
result = func(execution, **kwargs)
finally:
detector.pop_execution()
return result
return wrapper
return decorator
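A stripped-down, self-contained model of the decorator factory above (illustrative names and thresholds; real jedi tracks the counts on a per-evaluator detector object):

def recursion_guarded(default, per_node_limit=2, depth_limit=15):
    def decorator(func):
        stack = []
        def wrapper(node, **kwargs):
            # Refuse when this node already recursed enough or we are too deep.
            refused = stack.count(node) >= per_node_limit or len(stack) >= depth_limit
            stack.append(node)
            try:
                return default if refused else func(node, **kwargs)
            finally:
                stack.pop()
        return wrapper
    return decorator

@recursion_guarded(default=frozenset())
def infer(node):
    return infer(node) or {node}   # would recurse forever without the guard

print(infer('funcdef@1'))          # {'funcdef@1'}: cut off, still a result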
class ExecutionRecursionDetector(object):
"""
Catches recursions of executions.
It is designed like a Singleton. Only one instance should exist.
"""
def __init__(self):
self.recursion_level = 0
self.parent_execution_funcs = []
self.execution_funcs = set()
self.execution_count = 0
def __init__(self, evaluator):
self._evaluator = evaluator
def __call__(self, execution, evaluate_generator=False):
debug.dbg('Execution recursions: %s', execution, self.recursion_level,
self.execution_count, len(self.execution_funcs))
if self.check_recursion(execution, evaluate_generator):
result = []
else:
result = self.func(execution, evaluate_generator)
self.pop_execution()
return result
self._recursion_level = 0
self._parent_execution_funcs = []
self._funcdef_execution_counts = {}
self._execution_count = 0
def pop_execution(cls):
cls.parent_execution_funcs.pop()
cls.recursion_level -= 1
def pop_execution(self):
self._parent_execution_funcs.pop()
self._recursion_level -= 1
def push_execution(cls, execution, evaluate_generator):
in_par_execution_funcs = execution.base in cls.parent_execution_funcs
in_execution_funcs = execution.base in cls.execution_funcs
cls.recursion_level += 1
cls.execution_count += 1
cls.execution_funcs.add(execution.base)
cls.parent_execution_funcs.append(execution.base)
def push_execution(self, execution):
funcdef = execution.tree_node
if cls.execution_count > settings.max_executions:
return True
# These two will be undone in pop_execution.
self._recursion_level += 1
self._parent_execution_funcs.append(funcdef)
if isinstance(execution.base, (iterable.Array, iterable.Generator)):
return False
module = execution.get_parent_until()
if evaluate_generator or module == compiled.builtin:
module = execution.get_root_context()
if module == self._evaluator.BUILTINS:
# We have control over the builtins, so we know they are not recursing
# like crazy. Therefore we always let them execute, because they
# usually help a lot with getting good results.
return False
if in_par_execution_funcs:
if cls.recursion_level > settings.max_function_recursion_level:
return True
if in_execution_funcs and \
len(cls.execution_funcs) > settings.max_until_execution_unique:
if self._recursion_level > recursion_limit:
return True
if cls.execution_count > settings.max_executions_without_builtins:
if self._execution_count >= total_function_execution_limit:
return True
self._execution_count += 1
if self._funcdef_execution_counts.setdefault(funcdef, 0) >= per_function_execution_limit:
return True
self._funcdef_execution_counts[funcdef] += 1
if self._parent_execution_funcs.count(funcdef) > per_function_recursion_limit:
return True
return False


@@ -1,569 +0,0 @@
"""
As described in the :mod:`jedi.evaluate.parsing_representation` module,
there's a need for an AST-like module to represent the states of parsed
modules.
But now there are also structures in Python that need a little bit more than
that. An ``Instance`` for example is only a ``Class`` before it is
instantiated. The classes in this module represent these cases.
So, why is there also a ``Class`` class here? Well, there are decorators and
they change classes in Python 3.
"""
import copy
import os
import pkgutil
from jedi._compatibility import use_metaclass, unicode
from jedi.parser import representation as pr
from jedi.parser.tokenize import Token
from jedi import debug
from jedi import common
from jedi.evaluate.cache import memoize_default, CachedMetaClass
from jedi.evaluate import compiled
from jedi.evaluate import recursion
from jedi.evaluate import iterable
from jedi.evaluate import docstrings
from jedi.evaluate import helpers
from jedi.evaluate import param
class Executable(pr.IsScope):
"""
An instance is also an executable - because __init__ is called
:param var_args: The param input array, consisting of a `pr.Array` or list.
"""
def __init__(self, evaluator, base, var_args=()):
self._evaluator = evaluator
self.base = base
self.var_args = var_args
def get_parent_until(self, *args, **kwargs):
return self.base.get_parent_until(*args, **kwargs)
@common.safe_property
def parent(self):
return self.base.parent
class Instance(use_metaclass(CachedMetaClass, Executable)):
"""
This class is used to evaluate instances.
"""
def __init__(self, evaluator, base, var_args=()):
super(Instance, self).__init__(evaluator, base, var_args)
if str(base.name) in ['list', 'set'] \
and compiled.builtin == base.get_parent_until():
# compare the module path with the builtin name.
self.var_args = iterable.check_array_instances(evaluator, self)
else:
# need to execute the __init__ function, because the dynamic param
# searching needs it.
with common.ignored(KeyError):
self.execute_subscope_by_name('__init__', self.var_args)
# Generated instances are classes that are just generated by self
# (no var_args used).
self.is_generated = False
@memoize_default()
def _get_method_execution(self, func):
func = InstanceElement(self._evaluator, self, func, True)
return FunctionExecution(self._evaluator, func, self.var_args)
def _get_func_self_name(self, func):
"""
Returns the name of the first param in a class method (which is
normally ``self``).
"""
try:
return str(func.params[0].get_name())
except IndexError:
return None
@memoize_default([])
def get_self_attributes(self):
def add_self_dot_name(name):
"""
Need to copy and rewrite the name, because names are now
``instance_usage.variable`` instead of ``self.variable``.
"""
n = copy.copy(name)
n.names = n.names[1:]
n._get_code = unicode(n.names[-1])
names.append(InstanceElement(self._evaluator, self, n))
names = []
# This loop adds the names of the self object, copies them and removes
# the self.
for sub in self.base.subscopes:
if isinstance(sub, pr.Class):
continue
# Get the self name, if there's one.
self_name = self._get_func_self_name(sub)
if not self_name:
continue
if sub.name.get_code() == '__init__':
# ``__init__`` is special because the params need to be injected
# this way. Therefore an execution is necessary.
if not sub.decorators:
# __init__ decorators should generally just be ignored,
# because to follow them and their self variables is too
# complicated.
sub = self._get_method_execution(sub)
for n in sub.get_defined_names():
# Only names with the self name are added.
# It is also important that they have a len() of 2,
# because otherwise they are just something else.
if unicode(n.names[0]) == self_name and len(n.names) == 2:
add_self_dot_name(n)
if not isinstance(self.base, compiled.CompiledObject):
for s in self.base.get_super_classes():
for inst in self._evaluator.execute(s):
names += inst.get_self_attributes()
return names
def get_subscope_by_name(self, name):
sub = self.base.get_subscope_by_name(name)
return InstanceElement(self._evaluator, self, sub, True)
def execute_subscope_by_name(self, name, args=()):
method = self.get_subscope_by_name(name)
return self._evaluator.execute(method, args)
def get_descriptor_return(self, obj):
""" Throws a KeyError if there's no method. """
# Arguments in __get__ descriptors are obj, class.
# `method` is the new parent of the array, don't know if that's good.
args = [obj, obj.base] if isinstance(obj, Instance) else [None, obj]
return self.execute_subscope_by_name('__get__', args)
def scope_names_generator(self, position=None):
"""
An Instance has two scopes: The scope with self names and the class
scope. Instance variables have priority over the class scope.
"""
yield self, self.get_self_attributes()
names = []
for var in self.base.instance_names():
names.append(InstanceElement(self._evaluator, self, var, True))
yield self, names
def is_callable(self):
try:
self.get_subscope_by_name('__call__')
return True
except KeyError:
return False
def get_index_types(self, index_array):
indexes = iterable.create_indexes_or_slices(self._evaluator, index_array)
if any([isinstance(i, iterable.Slice) for i in indexes]):
# Slice support in Jedi is very marginal at the moment, so just
# ignore them in case of __getitem__.
# TODO support slices in a more general way.
indexes = []
index = helpers.FakeStatement(indexes, parent=compiled.builtin)
try:
return self.execute_subscope_by_name('__getitem__', [index])
except KeyError:
debug.warning('No __getitem__, cannot access the array.')
return []
def __getattr__(self, name):
if name not in ['start_pos', 'end_pos', 'name', 'get_imports',
'doc', 'raw_doc', 'asserts']:
raise AttributeError("Instance %s: Don't touch this (%s)!"
% (self, name))
return getattr(self.base, name)
def __repr__(self):
return "<e%s of %s (var_args: %s)>" % \
(type(self).__name__, self.base, len(self.var_args or []))
class InstanceElement(use_metaclass(CachedMetaClass, pr.Base)):
"""
InstanceElement is a wrapper for any object that is used as an instance
variable (e.g. self.variable or class methods).
"""
def __init__(self, evaluator, instance, var, is_class_var=False):
self._evaluator = evaluator
if isinstance(var, pr.Function):
var = Function(evaluator, var)
elif isinstance(var, pr.Class):
var = Class(evaluator, var)
self.instance = instance
self.var = var
self.is_class_var = is_class_var
@common.safe_property
@memoize_default()
def parent(self):
par = self.var.parent
if isinstance(par, Class) and par == self.instance.base \
or isinstance(par, pr.Class) \
and par == self.instance.base.base:
par = self.instance
elif not isinstance(par, (pr.Module, compiled.CompiledObject)):
par = InstanceElement(self.instance._evaluator, self.instance, par, self.is_class_var)
return par
def get_parent_until(self, *args, **kwargs):
return pr.Simple.get_parent_until(self, *args, **kwargs)
def get_decorated_func(self):
""" Needed because the InstanceElement should not be stripped """
func = self.var.get_decorated_func()
func = InstanceElement(self._evaluator, self.instance, func)
return func
def expression_list(self):
# Copy and modify the array.
return [InstanceElement(self._evaluator, self.instance, command, self.is_class_var)
if not isinstance(command, (pr.Operator, Token)) else command
for command in self.var.expression_list()]
def __iter__(self):
for el in self.var.__iter__():
yield InstanceElement(self.instance._evaluator, self.instance, el,
self.is_class_var)
def __getitem__(self, index):
return InstanceElement(self._evaluator, self.instance, self.var[index],
self.is_class_var)
def __getattr__(self, name):
return getattr(self.var, name)
def isinstance(self, *cls):
return isinstance(self.var, cls)
def is_callable(self):
return self.var.is_callable()
def __repr__(self):
return "<%s of %s>" % (type(self).__name__, self.var)
class Class(use_metaclass(CachedMetaClass, pr.IsScope)):
"""
This class is not only important for extending `pr.Class`, it is also
important for descriptors (whether the descriptor methods are evaluated or not).
"""
def __init__(self, evaluator, base):
self._evaluator = evaluator
self.base = base
@memoize_default(default=())
def get_super_classes(self):
supers = []
# TODO care for mro stuff (multiple super classes).
for s in self.base.supers:
# Super classes are statements.
for cls in self._evaluator.eval_statement(s):
if not isinstance(cls, (Class, compiled.CompiledObject)):
debug.warning('Received non class as a super class.')
continue # Just ignore other stuff (user input error).
supers.append(cls)
if not supers and self.base.parent != compiled.builtin:
# add `object` to classes
supers += self._evaluator.find_types(compiled.builtin, 'object')
return supers
@memoize_default(default=())
def instance_names(self):
def in_iterable(name, iterable):
""" checks if the name is in the variable 'iterable'. """
for i in iterable:
# Only the last name is important, because these names have a
# maximal length of 2, with the first one being `self`.
if unicode(i.names[-1]) == unicode(name.names[-1]):
return True
return False
result = self.base.get_defined_names()
super_result = []
# TODO mro!
for cls in self.get_super_classes():
# Get the inherited names.
for i in cls.instance_names():
if not in_iterable(i, result):
super_result.append(i)
result += super_result
return result
def scope_names_generator(self, position=None):
yield self, self.instance_names()
yield self, compiled.type_names
def get_subscope_by_name(self, name):
for s in [self] + self.get_super_classes():
for sub in reversed(s.subscopes):
if sub.name.get_code() == name:
return sub
raise KeyError("Couldn't find subscope.")
def is_callable(self):
return True
@common.safe_property
def name(self):
return self.base.name
def __getattr__(self, name):
if name not in ['start_pos', 'end_pos', 'parent', 'asserts', 'raw_doc',
'doc', 'get_imports', 'get_parent_until', 'get_code',
'subscopes']:
raise AttributeError("Don't touch this: %s of %s !" % (name, self))
return getattr(self.base, name)
def __repr__(self):
return "<e%s of %s>" % (type(self).__name__, self.base)
class Function(use_metaclass(CachedMetaClass, pr.IsScope)):
"""
Needed because of decorators. Decorators are evaluated here.
"""
def __init__(self, evaluator, func, is_decorated=False):
""" This should not be called directly """
self._evaluator = evaluator
self.base_func = func
self.is_decorated = is_decorated
@memoize_default()
def _decorated_func(self):
"""
Returns the function that is to be executed in the end.
This is also the place where the decorators are processed.
"""
f = self.base_func
# Only enter it if it has not already been processed.
if not self.is_decorated:
for dec in reversed(self.base_func.decorators):
debug.dbg('decorator: %s %s', dec, f)
dec_results = self._evaluator.eval_statement(dec)
if not len(dec_results):
debug.warning('decorator not found: %s on %s', dec, self.base_func)
return None
decorator = dec_results.pop()
if dec_results:
debug.warning('multiple decorators found %s %s',
self.base_func, dec_results)
# Create param array.
old_func = Function(self._evaluator, f, is_decorated=True)
wrappers = self._evaluator.execute(decorator, (old_func,))
if not len(wrappers):
debug.warning('no wrappers found %s', self.base_func)
return None
if len(wrappers) > 1:
# TODO resolve issue with multiple wrappers -> multiple types
debug.warning('multiple wrappers found %s %s',
self.base_func, wrappers)
f = wrappers[0]
debug.dbg('decorator end %s', f)
if isinstance(f, pr.Function):
f = Function(self._evaluator, f, True)
return f
def get_decorated_func(self):
"""
This function exists for the sole purpose of returning itself if the
decorator doesn't turn out to "work".
We just ignore the decorator here, because sometimes decorators are
just really complicated and Jedi cannot understand them.
"""
return self._decorated_func() \
or Function(self._evaluator, self.base_func, True)
def get_magic_function_names(self):
return compiled.magic_function_class.get_defined_names()
def get_magic_function_scope(self):
return compiled.magic_function_class
def is_callable(self):
return True
def __getattr__(self, name):
return getattr(self.base_func, name)
def __repr__(self):
dec_func = self._decorated_func()
dec = ''
if not self.is_decorated and self.base_func.decorators:
dec = " is " + repr(dec_func)
return "<e%s of %s%s>" % (type(self).__name__, self.base_func, dec)
class FunctionExecution(Executable):
"""
This class is used to evaluate functions and their returns.
This is the most complicated class, because it contains the logic to
transfer parameters. It is even more complicated, because there may be
multiple calls to functions and recursion has to be avoided. But this is
the responsibility of the decorators.
"""
@memoize_default(default=())
@recursion.execution_recursion_decorator
def get_return_types(self, evaluate_generator=False):
func = self.base
# Feed the listeners, with the params.
for listener in func.listeners:
listener.execute(self._get_params())
if func.listeners:
# If we do have listeners, that means that there's not a regular
# execution ongoing. In this case Jedi is interested in the
# inserted params, not in the actual execution of the function.
return []
if func.is_generator and not evaluate_generator:
return [iterable.Generator(self._evaluator, func, self.var_args)]
else:
stmts = list(docstrings.find_return_types(self._evaluator, func))
for r in self.returns:
if r is not None:
stmts += self._evaluator.eval_statement(r)
return stmts
@memoize_default(default=())
def _get_params(self):
"""
This returns the params for a TODO and is injected as a
'hack' into the pr.Function class.
This needs to be here, because Instance can have __init__ functions,
which act the same way as normal functions.
"""
return param.get_params(self._evaluator, self.base, self.var_args)
def get_defined_names(self):
"""
Call the default method with our own instance (self implements all
the necessary functions). Also add the params.
"""
return self._get_params() + pr.Scope.get_defined_names(self)
def scope_names_generator(self, position=None):
names = pr.filter_after_position(pr.Scope.get_defined_names(self), position)
yield self, self._get_params() + names
def _copy_properties(self, prop):
"""
Literally copies a property of a Function. Copying is very expensive,
because it is something like `copy.deepcopy`. However, these copied
objects can be used for the executions, as if they were in the
execution.
"""
# Copy all these lists into this local function.
attr = getattr(self.base, prop)
objects = []
for element in attr:
if element is None:
copied = element
else:
copied = helpers.fast_parent_copy(element)
copied.parent = self._scope_copy(copied.parent)
if isinstance(copied, pr.Function):
copied = Function(self._evaluator, copied)
objects.append(copied)
return objects
def __getattr__(self, name):
if name not in ['start_pos', 'end_pos', 'imports', '_sub_module']:
raise AttributeError('Tried to access %s: %s. Why?' % (name, self))
return getattr(self.base, name)
@memoize_default()
def _scope_copy(self, scope):
""" Copies a scope (e.g. if) in an execution """
# TODO method uses different scopes than the subscopes property.
# just check the start_pos, sometimes it's difficult with closures
# to compare the scopes directly.
if scope.start_pos == self.start_pos:
return self
else:
copied = helpers.fast_parent_copy(scope)
copied.parent = self._scope_copy(copied.parent)
return copied
@common.safe_property
@memoize_default([])
def returns(self):
return self._copy_properties('returns')
@common.safe_property
@memoize_default([])
def asserts(self):
return self._copy_properties('asserts')
@common.safe_property
@memoize_default([])
def statements(self):
return self._copy_properties('statements')
@common.safe_property
@memoize_default([])
def subscopes(self):
return self._copy_properties('subscopes')
def get_statement_for_position(self, pos):
return pr.Scope.get_statement_for_position(self, pos)
def __repr__(self):
return "<%s of %s>" % (type(self).__name__, self.base)
class ModuleWrapper(use_metaclass(CachedMetaClass, pr.Module)):
def __init__(self, evaluator, module):
self._evaluator = evaluator
self._module = module
def scope_names_generator(self, position=None):
yield self, pr.filter_after_position(self._module.get_defined_names(), position)
yield self, self._module_attributes()
sub_modules = self._sub_modules()
if sub_modules:
yield self, self._sub_modules()
@memoize_default()
def _module_attributes(self):
names = ['__file__', '__package__', '__doc__', '__name__', '__version__']
# All the additional module attributes are strings.
parent = Instance(self._evaluator, compiled.create(self._evaluator, str))
return [helpers.FakeName(n, parent) for n in names]
@memoize_default()
def _sub_modules(self):
"""
Lists modules in the directory of this module (if this module is a
package).
"""
path = self._module.path
names = []
if path is not None and path.endswith(os.path.sep + '__init__.py'):
mods = pkgutil.iter_modules([os.path.dirname(path)])
for module_loader, name, is_pkg in mods:
name = helpers.FakeName(name)
# It's obviously a relative import to the current module.
imp = helpers.FakeImport(name, self, level=1)
name.parent = imp
names.append(name)
return names
def __getattr__(self, name):
return getattr(self._module, name)
def __repr__(self):
return "<%s: %s>" % (type(self).__name__, self._module)

jedi/evaluate/site.py

@@ -0,0 +1,110 @@
"""An adapted copy of relevant site-packages functionality from Python stdlib.
This file contains some functions related to handling site-packages in Python
with jedi-specific modifications:
- the functions operate on sys_path argument rather than global sys.path
- in .pth files "import ..." lines that allow execution of arbitrary code are
skipped to prevent code injection into jedi interpreter
"""
# Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010,
# 2011, 2012, 2013, 2014, 2015 Python Software Foundation; All Rights Reserved
from __future__ import print_function
import sys
import os
def makepath(*paths):
dir = os.path.join(*paths)
try:
dir = os.path.abspath(dir)
except OSError:
pass
return dir, os.path.normcase(dir)
def _init_pathinfo(sys_path):
"""Return a set containing all existing directory entries from sys_path"""
d = set()
for dir in sys_path:
try:
if os.path.isdir(dir):
dir, dircase = makepath(dir)
d.add(dircase)
except TypeError:
continue
return d
def addpackage(sys_path, sitedir, name, known_paths):
"""Process a .pth file within the site-packages directory:
For each line in the file, either combine it with sitedir to a path
and add that to known_paths, or skip it if it starts with 'import '
(the stdlib version would execute it; see the module docstring).
"""
if known_paths is None:
known_paths = _init_pathinfo(sys_path)
reset = 1
else:
reset = 0
fullname = os.path.join(sitedir, name)
try:
f = open(fullname, "r")
except OSError:
return
with f:
for n, line in enumerate(f):
if line.startswith("#"):
continue
try:
if line.startswith(("import ", "import\t")):
# Change by immerrr: don't evaluate import lines to prevent
# code injection into jedi through pth files.
#
# exec(line)
continue
line = line.rstrip()
dir, dircase = makepath(sitedir, line)
if not dircase in known_paths and os.path.exists(dir):
sys_path.append(dir)
known_paths.add(dircase)
except Exception:
print("Error processing line {:d} of {}:\n".format(n+1, fullname),
file=sys.stderr)
import traceback
for record in traceback.format_exception(*sys.exc_info()):
for line in record.splitlines():
print(' '+line, file=sys.stderr)
print("\nRemainder of file ignored", file=sys.stderr)
break
if reset:
known_paths = None
return known_paths
def addsitedir(sys_path, sitedir, known_paths=None):
"""Add 'sitedir' argument to sys_path if missing and handle .pth files in
'sitedir'"""
if known_paths is None:
known_paths = _init_pathinfo(sys_path)
reset = 1
else:
reset = 0
sitedir, sitedircase = makepath(sitedir)
if not sitedircase in known_paths:
sys_path.append(sitedir) # Add path component
known_paths.add(sitedircase)
try:
names = os.listdir(sitedir)
except OSError:
return
names = [name for name in names if name.endswith(".pth")]
for name in sorted(names):
addpackage(sys_path, sitedir, name, known_paths)
if reset:
known_paths = None
return known_paths
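A hedged end-to-end example of the jedi-specific behavior, using the addsitedir/addpackage defined above with a throwaway directory: path lines from a .pth file are appended, while 'import ...' lines are skipped instead of executed.

import os
import tempfile

site = tempfile.mkdtemp()
os.mkdir(os.path.join(site, 'my_pkg-1.0'))
with open(os.path.join(site, 'extras.pth'), 'w') as f:
    f.write('# a comment, ignored\n')
    f.write('import antigravity\n')   # stdlib site would exec() this line
    f.write('my_pkg-1.0\n')           # a plain path line

sys_path = []
addsitedir(sys_path, site)            # mutates the given list, not sys.path
print(sys_path)                       # [<site>, <site>/my_pkg-1.0]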


@@ -1,116 +1,306 @@
"""
Implementations of standard library functions, because it's not possible to
understand them with Jedi.
To add a new implementation, create a function and add it to the
``_implemented`` dict at the bottom of this module.
Note that this module exists only to implement very specific functionality in
the standard library. The usual way to understand the standard library is the
compiled module that returns the types for C-builtins.
"""
from jedi._compatibility import unicode
from jedi.evaluate import compiled
from jedi.evaluate import representation as er
from jedi.evaluate import iterable
from jedi.evaluate.helpers import FakeArray, FakeStatement
from jedi.parser import representation as pr
import collections
import re
from jedi import debug
from jedi.evaluate.arguments import ValuesArguments
from jedi.evaluate import analysis
from jedi.evaluate import compiled
from jedi.evaluate.context.instance import InstanceFunctionExecution, \
AbstractInstanceContext, CompiledInstance, BoundMethod, \
AnonymousInstanceFunctionExecution
from jedi.evaluate.base_context import ContextualizedNode, \
NO_CONTEXTS, ContextSet
from jedi.evaluate.context import ClassContext, ModuleContext
from jedi.evaluate.context import iterable
from jedi.evaluate.lazy_context import LazyTreeContext
from jedi.evaluate.syntax_tree import is_string
# Now this is all part of fake tuples in Jedi. However, super doesn't work
# on __init__, and __new__ doesn't work at all. So adding this to namedtuples
# is just the easiest way.
_NAMEDTUPLE_INIT = """
def __init__(_cls, {arg_list}):
'A helper function for namedtuple.'
self.__iterable = ({arg_list})
def __iter__(self):
for i in self.__iterable:
yield i
def __getitem__(self, y):
return self.__iterable[y]
"""
class NotInStdLib(LookupError):
pass
def execute(evaluator, obj, params):
def execute(evaluator, obj, arguments):
if isinstance(obj, BoundMethod):
raise NotInStdLib()
try:
obj_name = str(obj.name)
obj_name = obj.name.string_name
except AttributeError:
pass
else:
if obj.parent == compiled.builtin:
if obj.parent_context == evaluator.BUILTINS:
module_name = 'builtins'
elif isinstance(obj.parent, pr.Module):
module_name = str(obj.parent.name)
elif isinstance(obj.parent_context, ModuleContext):
module_name = obj.parent_context.name.string_name
else:
module_name = ''
# for now we just support builtin functions.
try:
return _implemented[module_name][obj_name](evaluator, obj, params)
func = _implemented[module_name][obj_name]
except KeyError:
pass
else:
return func(evaluator, obj, arguments)
raise NotInStdLib()
def _follow_param(evaluator, params, index):
def _follow_param(evaluator, arguments, index):
try:
stmt = params[index]
key, lazy_context = list(arguments.unpack())[index]
except IndexError:
return []
return NO_CONTEXTS
else:
if isinstance(stmt, pr.Statement):
return evaluator.eval_statement(stmt)
else:
return [stmt] # just some arbitrary object
return lazy_context.infer()
def builtins_getattr(evaluator, obj, params):
stmts = []
# follow the first param
objects = _follow_param(evaluator, params, 0)
names = _follow_param(evaluator, params, 1)
for obj in objects:
if not isinstance(obj, (er.Instance, er.Class, pr.Module, compiled.CompiledObject)):
debug.warning('getattr called without instance')
def argument_clinic(string, want_obj=False, want_context=False, want_arguments=False):
"""
Works like Argument Clinic (PEP 436) to validate function params.
"""
clinic_args = []
allow_kwargs = False
optional = False
while string:
# Optional arguments have to begin with a bracket and should always be
# at the end of the arguments. This is therefore not a proper argument
# clinic implementation. `range()` for example allows an optional start
# value at the beginning.
match = re.match('(?:(?:(\[),? ?|, ?|)(\w+)|, ?/)\]*', string)
string = string[len(match.group(0)):]
if not match.group(2): # A slash -> allow named arguments
allow_kwargs = True
continue
optional = optional or bool(match.group(1))
word = match.group(2)
clinic_args.append((word, optional, allow_kwargs))
def f(func):
def wrapper(evaluator, obj, arguments):
debug.dbg('builtin start %s' % obj, color='MAGENTA')
try:
lst = list(arguments.eval_argument_clinic(clinic_args))
except ValueError:
return NO_CONTEXTS
else:
kwargs = {}
if want_context:
kwargs['context'] = arguments.context
if want_obj:
kwargs['obj'] = obj
if want_arguments:
kwargs['arguments'] = arguments
return func(evaluator, *lst, **kwargs)
finally:
debug.dbg('builtin end', color='MAGENTA')
return wrapper
return f
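To make the clinic-string mini-grammar concrete, here is the tokenizing loop above run standalone on one of the strings used below (only the parsing half, without the evaluation wrapper):

import re

string = 'object, name[, default], /'
clinic_args, optional, allow_kwargs = [], False, False
while string:
    match = re.match(r'(?:(?:(\[),? ?|, ?|)(\w+)|, ?/)\]*', string)
    string = string[len(match.group(0)):]
    if not match.group(2):            # matched the '/' token
        allow_kwargs = True
        continue
    optional = optional or bool(match.group(1))
    clinic_args.append((match.group(2), optional, allow_kwargs))
print(clinic_args)
# [('object', False, False), ('name', False, False), ('default', True, False)]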
@argument_clinic('iterator[, default], /')
def builtins_next(evaluator, iterators, defaults):
"""
TODO this function is currently not used. It's a stab at implementing next
in a different way than fake objects. This would be a bit more flexible.
"""
if evaluator.python_version[0] == 2:
name = 'next'
else:
name = '__next__'
context_set = NO_CONTEXTS
for iterator in iterators:
if isinstance(iterator, AbstractInstanceContext):
context_set = ContextSet.from_sets(
n.infer()
for filter in iterator.get_filters(include_self_names=True)
for n in filter.get(name)
).execute_evaluated()
if context_set:
return context_set
return defaults
@argument_clinic('object, name[, default], /')
def builtins_getattr(evaluator, objects, names, defaults=None):
# follow the first param
for obj in objects:
for name in names:
s = unicode, str
if isinstance(name, compiled.CompiledObject) and isinstance(name.obj, s):
stmts += evaluator.follow_path(iter([name.obj]), [obj], obj)
if is_string(name):
return obj.py__getattribute__(name.obj)
else:
debug.warning('getattr called without str')
continue
return stmts
return NO_CONTEXTS
def builtins_type(evaluator, obj, params):
if len(params) == 1:
# otherwise it would be a metaclass... maybe someday...
objects = _follow_param(evaluator, params, 0)
return [o.base for o in objects if isinstance(o, er.Instance)]
return []
@argument_clinic('object[, bases, dict], /')
def builtins_type(evaluator, objects, bases, dicts):
if bases or dicts:
# It's a type creation... maybe someday...
return NO_CONTEXTS
else:
return objects.py__class__()
def builtins_super(evaluator, obj, params):
class SuperInstance(AbstractInstanceContext):
"""To be used like the object ``super`` returns."""
def __init__(self, evaluator, cls):
su = cls.py_mro()[1]
super().__init__(evaluator, su and su[0] or self)
@argument_clinic('[type[, obj]], /', want_context=True)
def builtins_super(evaluator, types, objects, context):
# TODO make this able to detect multiple inheritance super
accept = (pr.Function,)
func = params.get_parent_until(accept)
if func.isinstance(*accept):
cls = func.get_parent_until(accept + (pr.Class,),
include_current=False)
if isinstance(cls, pr.Class):
cls = er.Class(evaluator, cls)
su = cls.get_super_classes()
if su:
return evaluator.execute(su[0])
return []
if isinstance(context, (InstanceFunctionExecution,
AnonymousInstanceFunctionExecution)):
su = context.instance.py__class__().py__bases__()
return su[0].infer().execute_evaluated()
return NO_CONTEXTS
def builtins_reversed(evaluator, obj, params):
objects = tuple(_follow_param(evaluator, params, 0))
if objects:
# unpack the iterator values
objects = tuple(iterable.get_iterator_types(objects))
if objects:
rev = reversed(objects)
# Repack iterator values and then run it the normal way. This is
# necessary, because `reversed` is a function and autocompletion
# would fail in certain cases like `reversed(x).__iter__` if we
# just returned the result directly.
stmts = [FakeStatement([r]) for r in rev]
objects = (iterable.Array(evaluator, FakeArray(stmts, objects[0].parent)),)
return [er.Instance(evaluator, obj, objects)]
@argument_clinic('sequence, /', want_obj=True, want_arguments=True)
def builtins_reversed(evaluator, sequences, obj, arguments):
# While we could do without this variable (just by using sequences), we
# want static analysis to work well. Therefore we need to generate the
# values again.
key, lazy_context = next(arguments.unpack())
cn = None
if isinstance(lazy_context, LazyTreeContext):
# TODO access private
cn = ContextualizedNode(lazy_context._context, lazy_context.data)
ordered = list(sequences.iterate(cn))
rev = list(reversed(ordered))
# Repack iterator values and then run it the normal way. This is
# necessary, because `reversed` is a function and autocompletion
# would fail in certain cases like `reversed(x).__iter__` if we
# just returned the result directly.
seq = iterable.FakeSequence(evaluator, 'list', rev)
arguments = ValuesArguments([ContextSet(seq)])
return ContextSet(CompiledInstance(evaluator, evaluator.BUILTINS, obj, arguments))
def _return_first_param(evaluator, obj, params):
if len(params) == 1:
return _follow_param(evaluator, params, 0)
return []
@argument_clinic('obj, type, /', want_arguments=True)
def builtins_isinstance(evaluator, objects, types, arguments):
bool_results = set()
for o in objects:
try:
mro_func = o.py__class__().py__mro__
except AttributeError:
# This is temporary. Everything should have a class attribute in
# Python?! Maybe we'll leave it here, because some numpy objects or
# whatever might not.
return ContextSet(compiled.create(True), compiled.create(False))
mro = mro_func()
for cls_or_tup in types:
if cls_or_tup.is_class():
bool_results.add(cls_or_tup in mro)
elif cls_or_tup.name.string_name == 'tuple' \
and cls_or_tup.get_root_context() == evaluator.BUILTINS:
# Check for tuples.
classes = ContextSet.from_sets(
lazy_context.infer()
for lazy_context in cls_or_tup.iterate()
)
bool_results.add(any(cls in mro for cls in classes))
else:
_, lazy_context = list(arguments.unpack())[1]
if isinstance(lazy_context, LazyTreeContext):
node = lazy_context.data
message = 'TypeError: isinstance() arg 2 must be a ' \
'class, type, or tuple of classes and types, ' \
'not %s.' % cls_or_tup
analysis.add(lazy_context._context, 'type-error-isinstance', node, message)
return ContextSet.from_iterable(compiled.create(evaluator, x) for x in bool_results)
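The user-code shapes this analysis distinguishes, as a hypothetical example: class arguments are checked against the MRO, tuple arguments member-wise, and anything else is flagged.

x = 3.14
print(isinstance(x, (int, str)))   # False; each tuple member checked vs the MRO
print(isinstance(x, float))        # True; float is in type(x).__mro__
# isinstance(x, [int, str]) raises TypeError at runtime; the last branch
# above reports exactly this case statically as 'type-error-isinstance'.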
def collections_namedtuple(evaluator, obj, arguments):
"""
Implementation of the namedtuple function.
This has to be done by processing the namedtuple class template and
evaluating the result.
.. note:: |jedi| only supports namedtuples on Python >2.6.
"""
# Namedtuples are not supported on Python 2.6
if not hasattr(collections, '_class_template'):
return NO_CONTEXTS
# Process arguments
# TODO here we only use one of the types, we should use all.
name = list(_follow_param(evaluator, arguments, 0))[0].obj
_fields = list(_follow_param(evaluator, arguments, 1))[0]
if isinstance(_fields, compiled.CompiledObject):
fields = _fields.obj.replace(',', ' ').split()
elif isinstance(_fields, iterable.AbstractIterable):
fields = [
v.obj
for lazy_context in _fields.py__iter__()
for v in lazy_context.infer() if hasattr(v, 'obj')
]
else:
return NO_CONTEXTS
base = collections._class_template
base += _NAMEDTUPLE_INIT
# Build source
source = base.format(
typename=name,
field_names=tuple(fields),
num_fields=len(fields),
arg_list=repr(tuple(fields)).replace("'", "")[1:-1],
repr_fmt=', '.join(collections._repr_template.format(name=name) for name in fields),
field_defs='\n'.join(collections._field_template.format(index=index, name=name)
for index, name in enumerate(fields))
)
# Parse source
module = evaluator.grammar.parse(source)
generated_class = next(module.iter_classdefs())
parent_context = ModuleContext(evaluator, module, '')
return ContextSet(ClassContext(evaluator, parent_context, generated_class))
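Field parsing in isolation, mirroring the _fields.obj.replace(',', ' ').split() line above, next to the hypothetical user call the whole function intercepts (collections._class_template is a CPython-private attribute that later Python versions removed, hence the hasattr guard):

import collections

Point = collections.namedtuple('Point', 'x, y')   # the call jedi intercepts
fields = 'x, y'.replace(',', ' ').split()         # -> ['x', 'y']
print(fields, Point(1, 2).x)                      # ['x', 'y'] 1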
@argument_clinic('first, /')
def _return_first_param(evaluator, firsts):
return firsts
_implemented = {
@@ -119,13 +309,17 @@ _implemented = {
'type': builtins_type,
'super': builtins_super,
'reversed': builtins_reversed,
'isinstance': builtins_isinstance,
},
'copy': {
'copy': _return_first_param,
'deepcopy': _return_first_param,
},
'json': {
'load': lambda *args: [],
'loads': lambda *args: [],
'load': lambda *args: NO_CONTEXTS,
'loads': lambda *args: NO_CONTEXTS,
},
'collections': {
'namedtuple': collections_namedtuple,
},
}


@@ -0,0 +1,588 @@
"""
Functions evaluating the syntax tree.
"""
import copy
import operator as op
from parso.python import tree
from jedi import debug
from jedi import parser_utils
from jedi.evaluate.base_context import ContextSet, NO_CONTEXTS, ContextualizedNode, \
ContextualizedName, iterator_to_context_set, iterate_contexts
from jedi.evaluate import compiled
from jedi.evaluate import pep0484
from jedi.evaluate import recursion
from jedi.evaluate import helpers
from jedi.evaluate import analysis
from jedi.evaluate import imports
from jedi.evaluate import arguments
from jedi.evaluate.context import ClassContext, FunctionContext
from jedi.evaluate.context import iterable
from jedi.evaluate.context import TreeInstance, CompiledInstance
from jedi.evaluate.finder import NameFinder
from jedi.evaluate.helpers import is_string, is_literal, is_number, is_compiled
def _limit_context_infers(func):
"""
This is, for now, how we keep type inference from going wild. There are
other ways to ensure recursion limits as well. This is mostly necessary
because of instance (self) access that can be quite tricky to limit.
I'm still not sure this is the way to go, but it looks okay for now and we
can still go another way in the future. Tests are there. ~ dave
"""
def wrapper(context, *args, **kwargs):
n = context.tree_node
evaluator = context.evaluator
try:
evaluator.inferred_element_counts[n] += 1
if evaluator.inferred_element_counts[n] > 300:
debug.warning('In context %s there were too many inferences.', n)
return NO_CONTEXTS
except KeyError:
evaluator.inferred_element_counts[n] = 1
return func(context, *args, **kwargs)
return wrapper
@debug.increase_indent
@_limit_context_infers
def eval_node(context, element):
debug.dbg('eval_element %s@%s', element, element.start_pos)
evaluator = context.evaluator
typ = element.type
if typ in ('name', 'number', 'string', 'atom'):
return eval_atom(context, element)
elif typ == 'keyword':
# For False/True/None
if element.value in ('False', 'True', 'None'):
return ContextSet(compiled.builtin_from_name(evaluator, element.value))
# else: print e.g. could be evaluated like this in Python 2.7
return NO_CONTEXTS
elif typ == 'lambdef':
return ContextSet(FunctionContext(evaluator, context, element))
elif typ == 'expr_stmt':
return eval_expr_stmt(context, element)
elif typ in ('power', 'atom_expr'):
first_child = element.children[0]
if not (first_child.type == 'keyword' and first_child.value == 'await'):
context_set = eval_atom(context, first_child)
for trailer in element.children[1:]:
if trailer == '**': # has a power operation.
right = evaluator.eval_element(context, element.children[2])
context_set = _eval_comparison(
evaluator,
context,
context_set,
trailer,
right
)
break
context_set = eval_trailer(context, context_set, trailer)
return context_set
return NO_CONTEXTS
elif typ in ('testlist_star_expr', 'testlist',):
# The implicit tuple in statements.
return ContextSet(iterable.SequenceLiteralContext(evaluator, context, element))
elif typ in ('not_test', 'factor'):
context_set = context.eval_node(element.children[-1])
for operator in element.children[:-1]:
context_set = eval_factor(context_set, operator)
return context_set
elif typ == 'test':
# `x if foo else y` case.
return (context.eval_node(element.children[0]) |
context.eval_node(element.children[-1]))
elif typ == 'operator':
# Must be an ellipsis, other operators are not evaluated.
# In Python 2 the ellipsis is coded as three single-dot tokens, not
# as one three-dot token.
assert element.value in ('.', '...')
return ContextSet(compiled.create(evaluator, Ellipsis))
elif typ == 'dotted_name':
context_set = eval_atom(context, element.children[0])
for next_name in element.children[2::2]:
# TODO add search_global=True?
context_set = context_set.py__getattribute__(next_name, name_context=context)
return context_set
elif typ == 'eval_input':
return eval_node(context, element.children[0])
elif typ == 'annassign':
return pep0484._evaluate_for_annotation(context, element.children[1])
else:
return eval_or_test(context, element)
def eval_trailer(context, base_contexts, trailer):
trailer_op, node = trailer.children[:2]
if node == ')': # `arglist` is optional.
node = ()
if trailer_op == '[':
trailer_op, node, _ = trailer.children
# TODO It's kind of stupid to cast this from a context set to a set.
foo = set(base_contexts)
# special case: PEP0484 typing module, see
# https://github.com/davidhalter/jedi/issues/663
result = ContextSet()
for typ in list(foo):
if isinstance(typ, (ClassContext, TreeInstance)):
typing_module_types = pep0484.py__getitem__(context, typ, node)
if typing_module_types is not None:
foo.remove(typ)
result |= typing_module_types
return result | base_contexts.get_item(
eval_subscript_list(context.evaluator, context, node),
ContextualizedNode(context, trailer)
)
else:
debug.dbg('eval_trailer: %s in %s', trailer, base_contexts)
if trailer_op == '.':
return base_contexts.py__getattribute__(
name_context=context,
name_or_str=node
)
else:
assert trailer_op == '('
args = arguments.TreeArguments(context.evaluator, context, node, trailer)
return base_contexts.execute(args)
def eval_atom(context, atom):
"""
Basically to process ``atom`` nodes. The parser sometimes doesn't
generate the node (because it has just one child). In that case an atom
might be a name or a literal as well.
"""
if atom.type == 'name':
# This is the first global lookup.
stmt = tree.search_ancestor(
atom, 'expr_stmt', 'lambdef'
) or atom
if stmt.type == 'lambdef':
stmt = atom
return context.py__getattribute__(
name_or_str=atom,
position=stmt.start_pos,
search_global=True
)
elif isinstance(atom, tree.Literal):
string = parser_utils.safe_literal_eval(atom.value)
return ContextSet(compiled.create(context.evaluator, string))
else:
c = atom.children
if c[0].type == 'string':
# Adjacent string literals; they concatenate into one string.
context_set = eval_atom(context, c[0])
for string in c[1:]:
right = eval_atom(context, string)
context_set = _eval_comparison(context.evaluator, context, context_set, '+', right)
return context_set
# Parentheses without commas are not tuples.
elif c[0] == '(' and not len(c) == 2 \
and not(c[1].type == 'testlist_comp' and
len(c[1].children) > 1):
return context.eval_node(c[1])
try:
comp_for = c[1].children[1]
except (IndexError, AttributeError):
pass
else:
if comp_for == ':':
# Dict comprehensions have a colon at the 3rd index.
try:
comp_for = c[1].children[3]
except IndexError:
pass
if comp_for.type == 'comp_for':
return ContextSet(iterable.Comprehension.from_atom(context.evaluator, context, atom))
# It's a dict/list/tuple literal.
array_node = c[1]
try:
array_node_c = array_node.children
except AttributeError:
array_node_c = []
if c[0] == '{' and (array_node == '}' or ':' in array_node_c):
context = iterable.DictLiteralContext(context.evaluator, context, atom)
else:
context = iterable.SequenceLiteralContext(context.evaluator, context, atom)
return ContextSet(context)
@_limit_context_infers
def eval_expr_stmt(context, stmt, seek_name=None):
with recursion.execution_allowed(context.evaluator, stmt) as allowed:
if allowed or context.get_root_context() == context.evaluator.BUILTINS:
return _eval_expr_stmt(context, stmt, seek_name)
return NO_CONTEXTS
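# Illustrative note (assumption about intent): the execution_allowed guard
# above is what stops self-referential statements, e.g. ``x = x + 1``
# re-evaluated while inferring ``x``, from recursing forever; on re-entry
# the statement is denied and NO_CONTEXTS is returned instead.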
@debug.increase_indent
def _eval_expr_stmt(context, stmt, seek_name=None):
"""
The starting point of the completion. A statement always owns a call
list, which contains the calls the statement makes. If multiple names
are defined in the statement, `seek_name` limits the result to that
name.
:param stmt: A `tree.ExprStmt`.
"""
debug.dbg('eval_expr_stmt %s (%s)', stmt, seek_name)
rhs = stmt.get_rhs()
context_set = context.eval_node(rhs)
if seek_name:
c_node = ContextualizedName(context, seek_name)
context_set = check_tuple_assignments(context.evaluator, c_node, context_set)
first_operator = next(stmt.yield_operators(), None)
if first_operator not in ('=', None) and first_operator.type == 'operator':
# `=` is always the last character in aug assignments -> -1
operator = copy.copy(first_operator)
operator.value = operator.value[:-1]
name = stmt.get_defined_names()[0].value
left = context.py__getattribute__(
name, position=stmt.start_pos, search_global=True)
for_stmt = tree.search_ancestor(stmt, 'for_stmt')
if for_stmt is not None and for_stmt.type == 'for_stmt' and context_set \
and parser_utils.for_stmt_defines_one_name(for_stmt):
# Iterate through the result and fold in the values. This is only
# possible in simple for loops, because those are predictable. Also
# only do it if the variable is not a tuple.
node = for_stmt.get_testlist()
cn = ContextualizedNode(context, node)
ordered = list(cn.infer().iterate(cn))
for lazy_context in ordered:
dct = {for_stmt.children[1].value: lazy_context.infer()}
with helpers.predefine_names(context, for_stmt, dct):
t = context.eval_node(rhs)
left = _eval_comparison(context.evaluator, context, left, operator, t)
context_set = left
else:
context_set = _eval_comparison(context.evaluator, context, left, operator, context_set)
debug.dbg('eval_expr_stmt result %s', context_set)
return context_set
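# Worked example (illustrative): for the source
#     x = 0
#     for i in [1, 2, 3]:
#         x += i
# the branch above strips ``+=`` down to the ``+`` operator, predefines
# ``i`` as 1, 2 and 3 in turn, and folds each addition into ``left``,
# instead of treating the loop variable as one big unknown union.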
def eval_or_test(context, or_test):
iterator = iter(or_test.children)
types = context.eval_node(next(iterator))
for operator in iterator:
right = next(iterator)
if operator.type == 'comp_op': # not in / is not
operator = ' '.join(c.value for c in operator.children)
# handle lazy evaluation of and/or here.
if operator in ('and', 'or'):
left_bools = set(left.py__bool__() for left in types)
if left_bools == set([True]):
if operator == 'and':
types = context.eval_node(right)
elif left_bools == set([False]):
if operator != 'and':
types = context.eval_node(right)
# Otherwise continue, because of uncertainty.
else:
types = _eval_comparison(context.evaluator, context, types, operator,
context.eval_node(right))
debug.dbg('eval_or_test types %s', types)
return types
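# Plain-Python sketch (stdlib only, not jedi code) of the short-circuit rule
# modelled above: with a known-truthy left side, ``and`` takes the right
# operand and ``or`` the left; with an unknown truth value both sides must
# be kept.
assert (1 and 'right') == 'right'
assert (0 or 'right') == 'right'
assert (1 or 'never') == 1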
@iterator_to_context_set
def eval_factor(context_set, operator):
"""
Calculates `+`, `-`, `~` and `not` prefixes.
"""
for context in context_set:
if operator == '-':
if is_number(context):
yield compiled.create(context.evaluator, -context.obj)
elif operator == 'not':
value = context.py__bool__()
if value is None: # Uncertainty.
return
yield compiled.create(context.evaluator, not value)
else:
yield context
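# Quick stdlib illustration (not jedi code) of the prefixes handled above:
# ``-`` folds known numbers and ``not`` folds known truth values, while an
# uncertain ``not`` (py__bool__() is None) yields nothing at all.
assert -(5) == -5
assert (not 0) is True
assert (not 'x') is False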
# Maps Python syntax to the operator module.
COMPARISON_OPERATORS = {
'==': op.eq,
'!=': op.ne,
'is': op.is_,
'is not': op.is_not,
'<': op.lt,
'<=': op.le,
'>': op.gt,
'>=': op.ge,
}
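# Minimal stdlib demonstration (independent of jedi) of how the table is
# used: evaluating a comparison is a dict lookup followed by a call.
import operator as _op_demo
assert {'==': _op_demo.eq, '<': _op_demo.lt}['<'](1, 2) is True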
def _literals_to_types(evaluator, result):
# Changes literals ('a', 1, 1.0, etc.) to their type instances (str(),
# int(), float(), etc.).
new_result = NO_CONTEXTS
for typ in result:
if is_literal(typ):
# Literals are only valid as long as the operations are
# correct. Otherwise add a value-free instance.
cls = compiled.builtin_from_name(evaluator, typ.name.string_name)
new_result |= cls.execute_evaluated()
else:
new_result |= ContextSet(typ)
return new_result
def _eval_comparison(evaluator, context, left_contexts, operator, right_contexts):
if not left_contexts or not right_contexts:
# Illegal slices, for example, cause left/right_contexts to be None.
result = (left_contexts or NO_CONTEXTS) | (right_contexts or NO_CONTEXTS)
return _literals_to_types(evaluator, result)
else:
# Once there are more than six operand combinations, it's very
# unlikely that a string operation is still meaningful, so fall
# back to plain types.
if len(left_contexts) * len(right_contexts) > 6:
return _literals_to_types(evaluator, left_contexts | right_contexts)
else:
return ContextSet.from_sets(
_eval_comparison_part(evaluator, context, left, operator, right)
for left in left_contexts
for right in right_contexts
)
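# Worked check of the heuristic above (illustrative): with 3 inferred left
# values and 3 inferred right values there are 3 * 3 = 9 > 6 combinations,
# so the result degrades to plain types instead of 9 folded constants.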
def _is_tuple(context):
return isinstance(context, iterable.AbstractIterable) and context.array_type == 'tuple'
def _is_list(context):
return isinstance(context, iterable.AbstractIterable) and context.array_type == 'list'
def _eval_comparison_part(evaluator, context, left, operator, right):
l_is_num = is_number(left)
r_is_num = is_number(right)
if operator == '*':
# for iterables, ignore * operations
if isinstance(left, iterable.AbstractIterable) or is_string(left):
return ContextSet(left)
elif isinstance(right, iterable.AbstractIterable) or is_string(right):
return ContextSet(right)
elif operator == '+':
if l_is_num and r_is_num or is_string(left) and is_string(right):
return ContextSet(compiled.create(evaluator, left.obj + right.obj))
elif _is_tuple(left) and _is_tuple(right) or _is_list(left) and _is_list(right):
return ContextSet(iterable.MergedArray(evaluator, (left, right)))
elif operator == '-':
if l_is_num and r_is_num:
return ContextSet(compiled.create(evaluator, left.obj - right.obj))
elif operator == '%':
# With strings and numbers the left type typically remains. Except for
# `int() % float()`.
return ContextSet(left)
elif operator in COMPARISON_OPERATORS:
operation = COMPARISON_OPERATORS[operator]
if is_compiled(left) and is_compiled(right):
# Both sides are compiled objects, so we can just compare the
# underlying Python objects directly.
left = left.obj
right = right.obj
try:
result = operation(left, right)
except TypeError:
# Could be True or False.
return ContextSet(compiled.create(evaluator, True), compiled.create(evaluator, False))
else:
return ContextSet(compiled.create(evaluator, result))
elif operator == 'in':
return NO_CONTEXTS
def check(obj):
"""Checks if a Jedi object is either a float or an int."""
return isinstance(obj, CompiledInstance) and \
obj.name.string_name in ('int', 'float')
# Static analysis, one is a number, the other one is not.
if operator in ('+', '-') and l_is_num != r_is_num \
and not (check(left) or check(right)):
message = "TypeError: unsupported operand type(s) for +: %s and %s"
analysis.add(context, 'type-error-operation', operator,
message % (left, right))
return ContextSet(left, right)
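# Stdlib illustration (assumption: mirrors the intent, not jedi code) of the
# folding rules above for concrete operands: numbers and strings fold under
# ``+``, and ``%`` keeps the left-hand type.
assert 1 + 2 == 3
assert 'a' + 'b' == 'ab'
assert type('x=%s' % 1.0) is str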
def _remove_statements(evaluator, context, stmt, name):
"""
This is the part where statements are being stripped.
Due to lazy evaluation, statements like a = func; b = a; b() have to be
evaluated.
"""
pep0484_contexts = \
pep0484.find_type_from_comment_hint_assign(context, stmt, name)
if pep0484_contexts:
return pep0484_contexts
return eval_expr_stmt(context, stmt, seek_name=name)
def tree_name_to_contexts(evaluator, context, tree_name):
types = []
node = tree_name.get_definition(import_name_always=True)
if node is None:
node = tree_name.parent
if node.type == 'global_stmt':
context = evaluator.create_context(context, tree_name)
finder = NameFinder(evaluator, context, context, tree_name.value)
filters = finder.get_filters(search_global=True)
# For global_stmt lookups, we only need the first possible scope,
# which means the function itself.
filters = [next(filters)]
return finder.find(filters, attribute_lookup=False)
elif node.type not in ('import_from', 'import_name'):
raise ValueError("Should not happen.")
typ = node.type
if typ == 'for_stmt':
types = pep0484.find_type_from_comment_hint_for(context, node, tree_name)
if types:
return types
if typ == 'with_stmt':
types = pep0484.find_type_from_comment_hint_with(context, node, tree_name)
if types:
return types
if typ in ('for_stmt', 'comp_for'):
try:
types = context.predefined_names[node][tree_name.value]
except KeyError:
cn = ContextualizedNode(context, node.children[3])
for_types = iterate_contexts(cn.infer(), cn)
c_node = ContextualizedName(context, tree_name)
types = check_tuple_assignments(evaluator, c_node, for_types)
elif typ == 'expr_stmt':
types = _remove_statements(evaluator, context, node, tree_name)
elif typ == 'with_stmt':
context_managers = context.eval_node(node.get_test_node_from_name(tree_name))
enter_methods = context_managers.py__getattribute__('__enter__')
return enter_methods.execute_evaluated()
elif typ in ('import_from', 'import_name'):
types = imports.infer_import(context, tree_name)
elif typ in ('funcdef', 'classdef'):
types = _apply_decorators(context, node)
elif typ == 'try_stmt':
# TODO an exception can also be a tuple. Check for those.
# TODO check for types that are not classes and add it to
# the static analysis report.
exceptions = context.eval_node(tree_name.get_previous_sibling().get_previous_sibling())
types = exceptions.execute_evaluated()
else:
raise ValueError("Should not happen.")
return types
def _apply_decorators(context, node):
"""
Returns the function that should be executed in the end.
This is also the place where the decorators are processed.
"""
if node.type == 'classdef':
decoratee_context = ClassContext(
context.evaluator,
parent_context=context,
classdef=node
)
else:
decoratee_context = FunctionContext(
context.evaluator,
parent_context=context,
funcdef=node
)
initial = values = ContextSet(decoratee_context)
for dec in reversed(node.get_decorators()):
debug.dbg('decorator: %s %s', dec, values)
dec_values = context.eval_node(dec.children[1])
trailer_nodes = dec.children[2:-1]
if trailer_nodes:
# Create a trailer and evaluate it.
trailer = tree.PythonNode('trailer', trailer_nodes)
trailer.parent = dec
dec_values = eval_trailer(context, dec_values, trailer)
if not len(dec_values):
debug.warning('decorator not found: %s on %s', dec, node)
return initial
values = dec_values.execute(arguments.ValuesArguments([values]))
if not len(values):
debug.warning('not possible to resolve wrappers found %s', node)
return initial
debug.dbg('decorator end %s', values)
return values
def check_tuple_assignments(evaluator, contextualized_name, context_set):
"""
Checks if tuples are assigned.
"""
lazy_context = None
for index, node in contextualized_name.assignment_indexes():
cn = ContextualizedNode(contextualized_name.context, node)
iterated = context_set.iterate(cn)
for _ in range(index + 1):
try:
lazy_context = next(iterated)
except StopIteration:
# We could do this with the default param in next. But that
# would allow this loop to run for a very long time if the
# index number is high. Therefore return early if the iterator
# is exhausted.
return ContextSet()
context_set = lazy_context.infer()
return context_set
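# Plain-Python intuition (illustrative only): for ``x, (y, z) = value`` the
# name ``z`` is reached by iterating ``value`` up to index 1 and then
# iterating that element up to index 1 again, which is exactly what the
# nested loop above does per assignment index.
_value = (1, (2, 3))
_outer = list(_value)[1]
assert list(_outer)[1] == 3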
def eval_subscript_list(evaluator, context, index):
"""
Handles slices in subscript nodes.
"""
if index == ':':
# Like array[:]
return ContextSet(iterable.Slice(context, None, None, None))
elif index.type == 'subscript' and not index.children[0] == '.':
# subscript basically implies a slice operation, except for Python 2's
# Ellipsis.
# e.g. array[:3]
result = []
for el in index.children:
if el == ':':
if not result:
result.append(None)
elif el.type == 'sliceop':
if len(el.children) == 2:
result.append(el.children[1])
else:
result.append(el)
result += [None] * (3 - len(result))
return ContextSet(iterable.Slice(context, *result))
# No slices
return context.eval_node(index)
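# Rough stdlib analogue (assumption about intent, not jedi code): a
# subscript such as ``a[1:10]`` carries up to three parts and missing parts
# are padded with None, just like the result list above.
_parts = [1, 10]
_parts += [None] * (3 - len(_parts))
assert slice(*_parts) == slice(1, 10, None)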


@@ -1,156 +1,225 @@
import glob
import os
import sys
import imp
from jedi.evaluate.site import addsitedir
from jedi._compatibility import exec_function, unicode
from jedi.parser import representation as pr
from jedi.parser import Parser
from jedi.evaluate.cache import memoize_default
from jedi._compatibility import unicode
from jedi.evaluate.cache import evaluator_method_cache
from jedi.evaluate.base_context import ContextualizedNode
from jedi.evaluate.helpers import is_string
from jedi import settings
from jedi import debug
from jedi import common
from jedi.evaluate.utils import ignored
def get_sys_path():
def check_virtual_env(sys_path):
""" Add virtualenv's site-packages to the `sys.path`."""
venv = os.getenv('VIRTUAL_ENV')
if not venv:
return
venv = os.path.abspath(venv)
if os.name == 'nt':
p = os.path.join(venv, 'lib', 'site-packages')
else:
p = os.path.join(venv, 'lib', 'python%d.%d' % sys.version_info[:2],
'site-packages')
if p not in sys_path:
sys_path.insert(0, p)
check_virtual_env(sys.path)
return [p for p in sys.path if p != ""]
def get_venv_path(venv):
"""Get sys.path for specified virtual environment."""
sys_path = _get_venv_path_dirs(venv)
with ignored(ValueError):
sys_path.remove('')
sys_path = _get_sys_path_with_egglinks(sys_path)
# As of now, _get_venv_path_dirs does not scan the built-in pythonpath
# and user-local site-packages, so let's approximate them using the
# path from the Jedi interpreter.
return sys_path + sys.path
def _execute_code(module_path, code):
c = "import os; from os.path import *; result=%s"
variables = {'__file__': module_path}
try:
exec_function(c % code, variables)
except Exception:
debug.warning('sys.path manipulation detected, but failed to evaluate.')
return None
try:
res = variables['result']
if isinstance(res, str):
return os.path.abspath(res)
else:
return None
except KeyError:
return None
def _get_sys_path_with_egglinks(sys_path):
"""Find all paths including those referenced by egg-links.
def _paths_from_assignment(statement):
Egg-link-referenced directories are inserted into path immediately before
the directory on which their links were found. Such directories are not
taken into consideration by normal import mechanism, but they are traversed
when doing pkg_resources.require.
"""
extracts the assigned strings from an assignment that looks as follows::
>>> sys.path[0:0] = ['module/path', 'another/module/path']
"""
names = statement.get_defined_names()
if len(names) != 1:
return []
if [unicode(x) for x in names[0].names] != ['sys', 'path']:
return []
expressions = statement.expression_list()
if len(expressions) != 1 or not isinstance(expressions[0], pr.Array):
return
stmts = (s for s in expressions[0].values if isinstance(s, pr.Statement))
expression_lists = (s.expression_list() for s in stmts)
return [e.value for exprs in expression_lists for e in exprs
if isinstance(e, pr.Literal) and e.value]
result = []
for p in sys_path:
# pkg_resources does not define a specific order for egg-link files
# using os.listdir to enumerate them, we're sorting them to have
# reproducible tests.
for egg_link in sorted(glob.glob(os.path.join(p, '*.egg-link'))):
with open(egg_link) as fd:
for line in fd:
line = line.strip()
if line:
result.append(os.path.join(p, line))
# pkg_resources package only interprets the first
# non-empty line in egg-link files.
break
result.append(p)
return result
def _paths_from_insert(module_path, exe):
""" extract the inserted module path from an "sys.path.insert" statement
"""
exe_type, exe.type = exe.type, pr.Array.NOARRAY
exe_pop = exe.values.pop(0)
res = _execute_code(module_path, exe.get_code())
exe.type = exe_type
exe.values.insert(0, exe_pop)
return res
def _paths_from_call_expression(module_path, call):
""" extract the path from either "sys.path.append" or "sys.path.insert" """
if call.execution is None:
return
n = call.name
if not isinstance(n, pr.Name) or len(n.names) != 3:
return
names = [unicode(x) for x in n.names]
if names[:2] != ['sys', 'path']:
return
cmd = names[2]
exe = call.execution
if cmd == 'insert' and len(exe) == 2:
path = _paths_from_insert(module_path, exe)
elif cmd == 'append' and len(exe) == 1:
path = _execute_code(module_path, exe.get_code())
return path and [path] or []
def _check_module(module):
try:
possible_stmts = module.used_names['path']
except KeyError:
return get_sys_path()
sys_path = list(get_sys_path()) # copy
statements = (p for p in possible_stmts if isinstance(p, pr.Statement))
for stmt in statements:
expressions = stmt.expression_list()
if len(expressions) == 1 and isinstance(expressions[0], pr.Call):
sys_path.extend(
_paths_from_call_expression(module.path, expressions[0]) or [])
elif (
hasattr(stmt, 'assignment_details') and
len(stmt.assignment_details) == 1
):
sys_path.extend(_paths_from_assignment(stmt) or [])
def _get_venv_path_dirs(venv):
"""Get sys.path for venv without starting up the interpreter."""
venv = os.path.abspath(venv)
sitedir = _get_venv_sitepackages(venv)
sys_path = []
addsitedir(sys_path, sitedir)
return sys_path
@memoize_default(evaluator_is_first_arg=True)
def sys_path_with_modifications(evaluator, module):
if module.path is None:
# Support for modules without a path is bad, therefore return the
# normal path.
return list(get_sys_path())
curdir = os.path.abspath(os.curdir)
with common.ignored(OSError):
os.chdir(os.path.dirname(module.path))
result = _check_module(module)
result += _detect_django_path(module.path)
# buildout scripts often contain the same sys.path modifications
# the set here is used to avoid duplicate sys.path entries
buildout_paths = set()
for module_path in _get_buildout_scripts(module.path):
try:
with open(module_path, 'rb') as f:
source = f.read()
except IOError:
pass
else:
p = Parser(common.source_to_unicode(source), module_path)
for path in _check_module(p.module):
if path not in buildout_paths:
buildout_paths.add(path)
result.append(path)
# cleanup, back to old directory
os.chdir(curdir)
return list(result)
def _get_venv_sitepackages(venv):
if os.name == 'nt':
p = os.path.join(venv, 'lib', 'site-packages')
else:
p = os.path.join(venv, 'lib', 'python%d.%d' % sys.version_info[:2],
'site-packages')
return p
def _abs_path(module_context, path):
module_path = module_context.py__file__()
if os.path.isabs(path):
return path
if module_path is None:
# In this case we have no idea where we actually are in the file
# system.
return None
base_dir = os.path.dirname(module_path)
return os.path.abspath(os.path.join(base_dir, path))
def _paths_from_assignment(module_context, expr_stmt):
"""
Extracts the assigned strings from an assignment that looks as follows::
>>> sys.path[0:0] = ['module/path', 'another/module/path']
This function is in general pretty tolerant (and therefore 'buggy').
However, adding too many paths to Jedi's sys_path is usually not a big
problem, because it only affects Jedi in rare situations, and extra
paths tend to benefit the general user.
"""
for assignee, operator in zip(expr_stmt.children[::2], expr_stmt.children[1::2]):
try:
assert operator in ['=', '+=']
assert assignee.type in ('power', 'atom_expr') and \
len(assignee.children) > 1
c = assignee.children
assert c[0].type == 'name' and c[0].value == 'sys'
trailer = c[1]
assert trailer.children[0] == '.' and trailer.children[1].value == 'path'
# TODO Essentially we're not checking details on sys.path
# manipulation. Both assignment of the sys.path and changing/adding
# parts of the sys.path are the same: They get added to the end of
# the current sys.path.
"""
execution = c[2]
assert execution.children[0] == '['
subscript = execution.children[1]
assert subscript.type == 'subscript'
assert ':' in subscript.children
"""
except AssertionError:
continue
cn = ContextualizedNode(module_context.create_context(expr_stmt), expr_stmt)
for lazy_context in cn.infer().iterate(cn):
for context in lazy_context.infer():
if is_string(context):
abs_path = _abs_path(module_context, context.obj)
if abs_path is not None:
yield abs_path
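# Hypothetical input (illustrative, paths made up): inside a module at
# /project/pkg/mod.py, the statement
#     sys.path[0:0] = ['../libs', '/abs/dir']
# yields '/project/libs' (made absolute relative to the module via
# _abs_path) and '/abs/dir' unchanged.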
def _traverse_parents(path):
def _paths_from_list_modifications(module_context, trailer1, trailer2):
""" extract the path from either "sys.path.append" or "sys.path.insert" """
# Guarantee that both are trailers, the first one a name and the second one
# a function execution with at least one param.
if not (trailer1.type == 'trailer' and trailer1.children[0] == '.'
and trailer2.type == 'trailer' and trailer2.children[0] == '('
and len(trailer2.children) == 3):
return
name = trailer1.children[1].value
if name not in ['insert', 'append']:
return
arg = trailer2.children[1]
if name == 'insert' and len(arg.children) in (3, 4): # Possible trailing comma.
arg = arg.children[2]
for context in module_context.create_context(arg).eval_node(arg):
if is_string(context):
abs_path = _abs_path(module_context, context.obj)
if abs_path is not None:
yield abs_path
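# Illustrative parse shapes (assumption): for ``sys.path.append('x')`` the
# arglist child is the string node itself, while for
# ``sys.path.insert(0, 'x')`` the arglist has children ``[0, ',', 'x']``,
# so the path sits at index 2, which is why the insert branch above
# re-selects arg.children[2].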
@evaluator_method_cache(default=[])
def check_sys_path_modifications(module_context):
"""
Detect sys.path modifications within module.
"""
def get_sys_path_powers(names):
for name in names:
power = name.parent.parent
if power.type in ('power', 'atom_expr'):
c = power.children
if c[0].type == 'name' and c[0].value == 'sys' \
and c[1].type == 'trailer':
n = c[1].children[1]
if n.type == 'name' and n.value == 'path':
yield name, power
if module_context.tree_node is None:
return []
added = []
try:
possible_names = module_context.tree_node.get_used_names()['path']
except KeyError:
pass
else:
for name, power in get_sys_path_powers(possible_names):
expr_stmt = power.parent
if len(power.children) >= 4:
added.extend(
_paths_from_list_modifications(
module_context, *power.children[2:4]
)
)
elif expr_stmt is not None and expr_stmt.type == 'expr_stmt':
added.extend(_paths_from_assignment(module_context, expr_stmt))
return added
def sys_path_with_modifications(evaluator, module_context):
return evaluator.project.sys_path + check_sys_path_modifications(module_context)
def detect_additional_paths(evaluator, script_path):
django_paths = _detect_django_path(script_path)
buildout_script_paths = set()
for buildout_script_path in _get_buildout_script_paths(script_path):
for path in _get_paths_from_buildout_script(evaluator, buildout_script_path):
buildout_script_paths.add(path)
return django_paths + list(buildout_script_paths)
def _get_paths_from_buildout_script(evaluator, buildout_script_path):
try:
module_node = evaluator.grammar.parse(
path=buildout_script_path,
cache=True,
cache_path=settings.cache_directory
)
except IOError:
debug.warning('Error trying to read buildout_script: %s', buildout_script_path)
return
from jedi.evaluate.context import ModuleContext
module = ModuleContext(evaluator, module_node, buildout_script_path)
for path in check_sys_path_modifications(module):
yield path
def traverse_parents(path):
while True:
new = os.path.dirname(path)
if new == path:
@@ -160,7 +229,7 @@ def _traverse_parents(path):
def _get_parent_dir_with_file(path, filename):
for parent in _traverse_parents(path):
for parent in traverse_parents(path):
if os.path.isfile(os.path.join(parent, filename)):
return parent
return None
@@ -170,15 +239,15 @@ def _detect_django_path(module_path):
""" Detects the path of the very well known Django library (if used) """
result = []
for parent in _traverse_parents(module_path):
with common.ignored(IOError):
for parent in traverse_parents(module_path):
with ignored(IOError):
with open(parent + os.path.sep + 'manage.py'):
debug.dbg('Found django path: %s', module_path)
result.append(parent)
return result
def _get_buildout_scripts(module_path):
def _get_buildout_script_paths(module_path):
"""
if there is a 'buildout.cfg' file in one of the parent directories of the
given module it will return a list of all files in the buildout bin
@@ -201,9 +270,39 @@ def _get_buildout_scripts(module_path):
firstline = f.readline()
if firstline.startswith('#!') and 'python' in firstline:
extra_module_paths.append(filepath)
except IOError as e:
# either permission error or race cond. because file got deleted
except (UnicodeDecodeError, IOError) as e:
# Probably a binary file; permission error or race cond. because file got deleted
# ignore
debug.warning(unicode(e))
continue
return extra_module_paths
def dotted_path_in_sys_path(sys_path, module_path):
"""
Returns the dotted path inside a sys.path.
"""
# First remove the suffix.
for suffix, _, _ in imp.get_suffixes():
if module_path.endswith(suffix):
module_path = module_path[:-len(suffix)]
break
else:
# There should always be a suffix in a valid Python file on the path.
return None
if module_path.startswith(os.path.sep):
# The paths in sys.path most of the time don't end with a slash.
module_path = module_path[1:]
for p in sys_path:
if module_path.startswith(p):
rest = module_path[len(p):]
if rest:
split = rest.split(os.path.sep)
for string in split:
if not string or '.' in string:
return None
return '.'.join(split)
return None
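# Runnable sketch of the transformation above (hypothetical paths, POSIX
# separators assumed):
import os as _os_demo
_p = '/site-packages/foo/bar.py'[:-len('.py')]
_rest = _p[len('/site-packages'):].lstrip(_os_demo.path.sep)
assert '.'.join(_rest.split(_os_demo.path.sep)) == 'foo.bar'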

jedi/evaluate/usages.py Normal file

@@ -0,0 +1,62 @@
from jedi.evaluate import imports
from jedi.evaluate.filters import TreeNameDefinition
from jedi.evaluate.context import ModuleContext
def _resolve_names(definition_names, avoid_names=()):
for name in definition_names:
if name in avoid_names:
# Avoiding recursions here, because goto on a module name lands
# on the same module.
continue
if not isinstance(name, imports.SubModuleName):
# SubModuleNames are not actually existing names but created
# names when importing something like `import foo.bar.baz`.
yield name
if name.api_type == 'module':
for name in _resolve_names(name.goto(), definition_names):
yield name
def _dictionarize(names):
return dict(
(n if n.tree_name is None else n.tree_name, n)
for n in names
)
def _find_names(module_context, tree_name):
context = module_context.create_context(tree_name)
name = TreeNameDefinition(context, tree_name)
found_names = set(name.goto())
found_names.add(name)
return _dictionarize(_resolve_names(found_names))
def usages(module_context, tree_name):
search_name = tree_name.value
found_names = _find_names(module_context, tree_name)
modules = set(d.get_root_context() for d in found_names.values())
modules = set(m for m in modules if isinstance(m, ModuleContext))
non_matching_usage_maps = {}
for m in imports.get_modules_containing_name(module_context.evaluator, modules, search_name):
for name_leaf in m.tree_node.get_used_names().get(search_name, []):
new = _find_names(m, name_leaf)
if any(tree_name in found_names for tree_name in new):
found_names.update(new)
for tree_name in new:
for dct in non_matching_usage_maps.get(tree_name, []):
# A usage that was previously searched for matches with
# a now found name. Merge.
found_names.update(dct)
try:
del non_matching_usage_maps[tree_name]
except KeyError:
pass
else:
for name in new:
non_matching_usage_maps.setdefault(name, []).append(new)
return found_names.values()
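# Walkthrough (illustrative): when searching usages of ``foo`` defined in
# a.py and imported in b.py, _find_names first collects the definition
# names via goto, then every module whose source mentions ``foo`` is
# scanned; a hit whose own goto chain reaches a known name is merged into
# found_names, and any hits parked in non_matching_usage_maps are merged as
# soon as one of their names becomes known.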


@@ -2,17 +2,19 @@
import sys
import contextlib
import functools
import re
from ast import literal_eval
from jedi._compatibility import unicode, next, reraise
from jedi import settings
from jedi._compatibility import reraise
class MultiLevelStopIteration(Exception):
"""
StopIterations get caught pretty easily by for loops; this lets errors propagate.
"""
def to_list(func):
def wrapper(*args, **kwargs):
return list(func(*args, **kwargs))
return wrapper
def unite(iterable):
"""Turns a two dimensional array into a one dimensional."""
return set(typ for types in iterable for typ in types)
class UncaughtAttributeError(Exception):
@@ -86,27 +88,6 @@ class PushBackIterator(object):
return self.current
@contextlib.contextmanager
def scale_speed_settings(factor):
a = settings.max_executions
b = settings.max_until_execution_unique
settings.max_executions *= factor
settings.max_until_execution_unique *= factor
yield
settings.max_executions = a
settings.max_until_execution_unique = b
def indent_block(text, indention='    '):
"""This function indents a text block with a default of four spaces."""
temp = ''
while text and text[-1] == '\n':
temp += text[-1]
text = text[:-1]
lines = text.split('\n')
return '\n'.join(map(lambda s: indention + s, lines)) + temp
@contextlib.contextmanager
def ignored(*exceptions):
"""
@@ -119,40 +100,11 @@ def ignored(*exceptions):
pass
def source_to_unicode(source, encoding=None):
def detect_encoding():
"""
For the implementation of encoding definitions in Python, look at:
- http://www.python.org/dev/peps/pep-0263/
- http://docs.python.org/2/reference/lexical_analysis.html#encoding-declarations
"""
byte_mark = literal_eval(r"b'\xef\xbb\xbf'")
if source.startswith(byte_mark):
# UTF-8 byte-order mark
return 'utf-8'
first_two_lines = re.match(r'(?:[^\n]*\n){0,2}', str(source)).group(0)
possible_encoding = re.search(r"coding[=:]\s*([-\w.]+)",
first_two_lines)
if possible_encoding:
return possible_encoding.group(1)
else:
# the default if nothing else has been set -> PEP 263
return encoding if encoding is not None else 'iso-8859-1'
if isinstance(source, unicode):
# only cast str/bytes
return source
# cast to unicode by default
return unicode(source, detect_encoding(), 'replace')
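# Hedged, runnable example of the PEP 263 lookup above (stdlib only): an
# encoding comment within the first two lines is matched by the same regex.
import re as _re_demo
_src = '# -*- coding: latin-1 -*-\nx = 1\n'
_head = _re_demo.match(r'(?:[^\n]*\n){0,2}', _src).group(0)
assert _re_demo.search(r"coding[=:]\s*([-\w.]+)", _head).group(1) == 'latin-1'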
def splitlines(string):
"""
A splitlines for Python code. In contrast to Python's ``str.splitlines``,
it treats form feeds and other special characters as normal text and only
splits on ``\n`` and ``\r\n``.
Also different: Returns ``['']`` for an empty string input.
"""
return re.split('\n|\r\n', string)
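# Quick check of the contract stated above (illustrative): a form feed stays
# inside its line, and the empty string yields [''].
import re as _re_demo2
assert _re_demo2.split('\n|\r\n', 'a\x0cb\nc') == ['a\x0cb', 'c']
assert _re_demo2.split('\n|\r\n', '') == ['']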
def indent_block(text, indention='    '):
"""This function indents a text block with a default of four spaces."""
temp = ''
while text and text[-1] == '\n':
temp += text[-1]
text = text[:-1]
lines = text.split('\n')
return '\n'.join(map(lambda s: indention + s, lines)) + temp


@@ -1,639 +0,0 @@
"""
The ``Parser`` tries to convert the available Python code into an easy-to-read
format, something like an abstract syntax tree. The classes that represent this
tree sit in the :mod:`jedi.parser.representation` module.
The Python module ``tokenize`` is a very important part of the ``Parser``,
because it splits the code into different words (tokens). Sometimes it looks a
bit messy. Sorry for that! You might ask now: "Why didn't you use the ``ast``
module for this?" Well, ``ast`` does a very good job understanding proper
Python code, but fails to work as soon as there's a single line of broken code.
There's one important optimization that needs to be known: Statements are not
being parsed completely. ``Statement`` is just a representation of the tokens
within the statement. This lowers memory usage and CPU time and reduces the
complexity of the ``Parser`` (there's another parser sitting inside
``Statement``, which produces ``Array`` and ``Call``).
"""
import keyword
from jedi._compatibility import next, unicode
from jedi import debug
from jedi import common
from jedi.parser import representation as pr
from jedi.parser import tokenize
OPERATOR_KEYWORDS = 'and', 'for', 'if', 'else', 'in', 'is', 'lambda', 'not', 'or'
# Not used yet. In the future I intend to add something like KeywordStatement
STATEMENT_KEYWORDS = 'assert', 'del', 'global', 'nonlocal', 'raise', \
'return', 'yield', 'pass', 'continue', 'break'
class Parser(object):
"""
This class is used to parse a Python file; it then divides it into a
class structure of different scopes.
:param source: The codebase for the parser.
:type source: str
:param module_path: The path of the module in the file system, may be None.
:type module_path: str
:param no_docstr: If True, a string at the beginning is not a docstr.
:param top_module: Use this module as a parent instead of `self.module`.
"""
def __init__(self, source, module_path=None, no_docstr=False,
tokenizer=None, top_module=None):
self.no_docstr = no_docstr
tokenizer = tokenizer or tokenize.source_tokens(source)
self._gen = PushBackTokenizer(tokenizer)
# initialize global Scope
start_pos = next(self._gen).start_pos
self._gen.push_last_back()
self.module = pr.SubModule(module_path, start_pos, top_module)
self._scope = self.module
self._top_module = top_module or self.module
try:
self._parse()
except (common.MultiLevelStopIteration, StopIteration):
# StopIteration needs to be added as well, because Python 2 has a
# strange way of handling StopIterations.
# Sometimes StopIteration isn't caught. Just ignore it.
# On finish, set end_pos correctly.
pass
s = self._scope
while s is not None:
s.end_pos = self._gen.current.end_pos
s = s.parent
# clean up unused decorators
for d in self._decorators:
# set a parent for unused decorators, avoid NullPointerException
# because of `self.module.used_names`.
d.parent = self.module
self.module.end_pos = self._gen.current.end_pos
if self._gen.current.type == tokenize.NEWLINE:
# This case is only relevant with the FastTokenizer, because
# otherwise there's always an ENDMARKER.
# we added a newline before, so we need to "remove" it again.
#
# NOTE: We should keep end_pos as-is if the last token of
# a source is a NEWLINE, otherwise the newline at the end of
# a source is not included in a ParserNode.code.
if self._gen.previous.type != tokenize.NEWLINE:
self.module.end_pos = self._gen.previous.end_pos
del self._gen
def __repr__(self):
return "<%s: %s>" % (type(self).__name__, self.module)
def _check_user_stmt(self, simple):
# this is not user checking, just update the used_names
for tok_name in self.module.temp_used_names:
try:
self.module.used_names[tok_name].add(simple)
except KeyError:
self.module.used_names[tok_name] = set([simple])
self.module.temp_used_names = []
def _parse_dot_name(self, pre_used_token=None):
"""
The dot-name parser parses a name, variable or function and returns
the resulting Name.
:return: tuple of Name, next_token
"""
def append(el):
names.append(el)
self.module.temp_used_names.append(el[0])
names = []
tok = next(self._gen) if pre_used_token is None else pre_used_token
if tok.type != tokenize.NAME and tok.string != '*':
return None, tok
first_pos = tok.start_pos
append((tok.string, first_pos))
while True:
end_pos = tok.end_pos
tok = next(self._gen)
if tok.string != '.':
break
tok = next(self._gen)
if tok.type != tokenize.NAME:
break
append((tok.string, tok.start_pos))
n = pr.Name(self.module, names, first_pos, end_pos) if names else None
return n, tok
def _parse_import_list(self):
"""
The parser for the imports. Unlike the class and function parse
function, this returns no Import class, but rather an import list,
which is then added later on.
The reason why this is not done in the same class lies in the nature
of imports. There are two ways to write them:
- from ... import ...
- import ...
To distinguish, this has to be processed after the parser.
:return: List of imports.
:rtype: list
"""
imports = []
brackets = False
continue_kw = [",", ";", "\n", '\r\n', ')'] \
+ list(set(keyword.kwlist) - set(['as']))
while True:
defunct = False
tok = next(self._gen)
if tok.string == '(': # python allows only one `(` in the statement.
brackets = True
tok = next(self._gen)
if brackets and tok.type == tokenize.NEWLINE:
tok = next(self._gen)
i, tok = self._parse_dot_name(tok)
if not i:
defunct = True
name2 = None
if tok.string == 'as':
name2, tok = self._parse_dot_name()
imports.append((i, name2, defunct))
while tok.string not in continue_kw:
tok = next(self._gen)
if not (tok.string == "," or brackets and tok.type == tokenize.NEWLINE):
break
return imports
def _parse_parentheses(self):
"""
Functions and Classes have params (which for classes means
super-classes). They are parsed here and returned as Statements.
:return: List of Statements
:rtype: list
"""
names = []
tok = None
pos = 0
breaks = [',', ':']
while tok is None or tok.string not in (')', ':'):
param, tok = self._parse_statement(added_breaks=breaks,
stmt_class=pr.Param)
if param and tok.string == ':':
# parse annotations
annotation, tok = self._parse_statement(added_breaks=breaks)
if annotation:
param.add_annotation(annotation)
# params without vars are usually syntax errors.
if param and (param.get_defined_names()):
param.position_nr = pos
names.append(param)
pos += 1
return names
def _parse_function(self):
"""
The parser for a function. Processes the tokens that follow a
function definition.
:return: Return a Scope representation of the tokens.
:rtype: Function
"""
first_pos = self._gen.current.start_pos
tok = next(self._gen)
if tok.type != tokenize.NAME:
return None
fname = pr.Name(self.module, [(tok.string, tok.start_pos)], tok.start_pos,
tok.end_pos)
tok = next(self._gen)
if tok.string != '(':
return None
params = self._parse_parentheses()
colon = next(self._gen)
annotation = None
if colon.string in ('-', '->'):
# parse annotations
if colon.string == '-':
# The Python 2 tokenizer doesn't understand this
colon = next(self._gen)
if colon.string != '>':
return None
annotation, colon = self._parse_statement(added_breaks=[':'])
if colon.string != ':':
return None
# because of 2 line func param definitions
return pr.Function(self.module, fname, params, first_pos, annotation)
def _parse_class(self):
"""
The parser for a class. Processes the tokens that follow a
class definition.
:return: Return a Scope representation of the tokens.
:rtype: Class
"""
first_pos = self._gen.current.start_pos
cname = next(self._gen)
if cname.type != tokenize.NAME:
debug.warning("class: syntax err, token is not a name@%s (%s: %s)",
cname.start_pos[0], tokenize.tok_name[cname.type], cname.string)
return None
cname = pr.Name(self.module, [(cname.string, cname.start_pos)],
cname.start_pos, cname.end_pos)
super = []
_next = next(self._gen)
if _next.string == '(':
super = self._parse_parentheses()
_next = next(self._gen)
if _next.string != ':':
debug.warning("class syntax: %s@%s", cname, _next.start_pos[0])
return None
return pr.Class(self.module, cname, super, first_pos)
def _parse_statement(self, pre_used_token=None, added_breaks=None,
stmt_class=pr.Statement, names_are_set_vars=False):
"""
Parses statements like::
a = test(b)
a += 3 - 2 or b
and so on. One line at a time.
:param pre_used_token: The pre parsed token.
:type pre_used_token: set
:return: Statement + last parsed token.
:rtype: (Statement, str)
"""
set_vars = []
level = 0 # The level of parentheses
if pre_used_token:
tok = pre_used_token
else:
tok = next(self._gen)
while tok.type == tokenize.COMMENT:
# remove newline and comment
next(self._gen)
tok = next(self._gen)
first_pos = tok.start_pos
opening_brackets = ['{', '(', '[']
closing_brackets = ['}', ')', ']']
# the difference between "break" and "always break" is that the latter
# will even break in parentheses. This is true for typical flow
# commands like def and class and the imports, which will never be used
# in a statement.
breaks = set(['\n', '\r\n', ':', ')'])
always_break = [';', 'import', 'from', 'class', 'def', 'try', 'except',
'finally', 'while', 'return', 'yield']
not_first_break = ['del', 'raise']
if added_breaks:
breaks |= set(added_breaks)
tok_list = []
as_names = []
in_lambda_param = False
while not (tok.string in always_break
or tok.string in not_first_break and not tok_list
or tok.string in breaks and level <= 0
and not (in_lambda_param and tok.string in ',:')):
try:
# print 'parse_stmt', tok, tokenize.tok_name[token_type]
is_kw = tok.string in OPERATOR_KEYWORDS
if tok.type == tokenize.OP or is_kw:
tok_list.append(
pr.Operator(self.module, tok.string, self._scope, tok.start_pos)
)
else:
tok_list.append(tok)
if tok.string == 'as':
tok = next(self._gen)
if tok.type == tokenize.NAME:
n, tok = self._parse_dot_name(self._gen.current)
if n:
set_vars.append(n)
as_names.append(n)
tok_list.append(n)
continue
elif tok.string == 'lambda':
breaks.discard(':')
in_lambda_param = True
elif in_lambda_param and tok.string == ':':
in_lambda_param = False
elif tok.type == tokenize.NAME and not is_kw:
n, tok = self._parse_dot_name(self._gen.current)
# removed last entry, because we add Name
tok_list.pop()
if n:
tok_list.append(n)
continue
elif tok.string in opening_brackets:
level += 1
elif tok.string in closing_brackets:
level -= 1
tok = next(self._gen)
except (StopIteration, common.MultiLevelStopIteration):
# comes from tokenizer
break
if not tok_list:
return None, tok
first_tok = tok_list[0]
# docstrings
if len(tok_list) == 1 and isinstance(first_tok, tokenize.Token) \
and first_tok.type == tokenize.STRING:
# Normal docstring check
if self.freshscope and not self.no_docstr:
self._scope.add_docstr(first_tok)
return None, tok
# Attribute docstring (PEP 224) support (sphinx uses it, e.g.)
# If string literal is being parsed...
elif first_tok.type == tokenize.STRING:
with common.ignored(IndexError, AttributeError):
# ...then set it as a docstring
self._scope.statements[-1].add_docstr(first_tok)
return None, tok
stmt = stmt_class(self.module, tok_list, first_pos, tok.end_pos,
as_names=as_names,
names_are_set_vars=names_are_set_vars)
stmt.parent = self._top_module
self._check_user_stmt(stmt)
if tok.string in always_break + not_first_break:
self._gen.push_last_back()
return stmt, tok
def _parse(self):
"""
The main part of the program. It analyzes the given code-text and
returns a tree-like scope. For a more detailed description, see the
class description.
:param text: The code which should be parsed.
:type text: str
:raises: IndentationError
"""
extended_flow = ['else', 'elif', 'except', 'finally']
statement_toks = ['{', '[', '(', '`']
self._decorators = []
self.freshscope = True
for tok in self._gen:
token_type = tok.type
tok_str = tok.string
first_pos = tok.start_pos
self.module.temp_used_names = []
# debug.dbg('main: tok=[%s] type=[%s] indent=[%s]', \
# tok, tokenize.tok_name[token_type], start_position[0])
# Check again for unindented stuff. This is true for syntax
# errors. Only check for names, because that's relevant here. If
# some docstrings are not indented, I don't care.
while first_pos[1] <= self._scope.start_pos[1] \
and (token_type == tokenize.NAME or tok_str in ('(', '['))\
and self._scope != self.module:
self._scope.end_pos = first_pos
self._scope = self._scope.parent
if isinstance(self._scope, pr.Module) \
and not isinstance(self._scope, pr.SubModule):
self._scope = self.module
if isinstance(self._scope, pr.SubModule):
use_as_parent_scope = self._top_module
else:
use_as_parent_scope = self._scope
if tok_str == 'def':
func = self._parse_function()
if func is None:
debug.warning("function: syntax error@%s", first_pos[0])
continue
self.freshscope = True
self._scope = self._scope.add_scope(func, self._decorators)
self._decorators = []
elif tok_str == 'class':
cls = self._parse_class()
if cls is None:
debug.warning("class: syntax error@%s" % first_pos[0])
continue
self.freshscope = True
self._scope = self._scope.add_scope(cls, self._decorators)
self._decorators = []
# import stuff
elif tok_str == 'import':
imports = self._parse_import_list()
for count, (m, alias, defunct) in enumerate(imports):
e = (alias or m or self._gen.previous).end_pos
end_pos = self._gen.previous.end_pos if count + 1 == len(imports) else e
i = pr.Import(self.module, first_pos, end_pos, m,
alias, defunct=defunct)
self._check_user_stmt(i)
self._scope.add_import(i)
if not imports:
i = pr.Import(self.module, first_pos, self._gen.current.end_pos,
None, defunct=True)
self._check_user_stmt(i)
self.freshscope = False
elif tok_str == 'from':
defunct = False
# take care for relative imports
relative_count = 0
while True:
tok = next(self._gen)
if tok.string != '.':
break
relative_count += 1
# the from import
mod, tok = self._parse_dot_name(self._gen.current)
tok_str = tok.string
if str(mod) == 'import' and relative_count:
self._gen.push_last_back()
tok_str = 'import'
mod = None
if not mod and not relative_count or tok_str != "import":
debug.warning("from: syntax error@%s", tok.start_pos[0])
defunct = True
if tok_str != 'import':
self._gen.push_last_back()
names = self._parse_import_list()
for count, (name, alias, defunct2) in enumerate(names):
star = name is not None and unicode(name.names[0]) == '*'
if star:
name = None
e = (alias or name or self._gen.previous).end_pos
end_pos = self._gen.previous.end_pos if count + 1 == len(names) else e
i = pr.Import(self.module, first_pos, end_pos, name,
alias, mod, star, relative_count,
defunct=defunct or defunct2)
self._check_user_stmt(i)
self._scope.add_import(i)
self.freshscope = False
# loops
elif tok_str == 'for':
set_stmt, tok = self._parse_statement(added_breaks=['in'],
names_are_set_vars=True)
if tok.string != 'in':
debug.warning('syntax err, for flow incomplete @%s', tok.start_pos[0])
try:
statement, tok = self._parse_statement()
except StopIteration:
statement, tok = None, None
s = [] if statement is None else [statement]
f = pr.ForFlow(self.module, s, first_pos, set_stmt)
self._scope = self._scope.add_statement(f)
if tok is None or tok.string != ':':
debug.warning('syntax err, for flow started @%s', first_pos[0])
elif tok_str in ['if', 'while', 'try', 'with'] + extended_flow:
added_breaks = []
command = tok_str
if command in ('except', 'with'):
added_breaks.append(',')
# multiple inputs because of with
inputs = []
first = True
while first or command == 'with' and tok.string not in (':', '\n', '\r\n'):
statement, tok = \
self._parse_statement(added_breaks=added_breaks)
if command == 'except' and tok.string == ',':
# the except statement defines a var
# this is only true for python 2
n, tok = self._parse_dot_name()
if n:
n.parent = statement
statement.as_names.append(n)
if statement:
inputs.append(statement)
first = False
f = pr.Flow(self.module, command, inputs, first_pos)
if command in extended_flow:
# the last statement has to be another part of
# the flow statement, because a dedent releases the
# main scope, so just take the last statement.
try:
s = self._scope.statements[-1].set_next(f)
except (AttributeError, IndexError):
# If set_next doesn't exist, just add it.
s = self._scope.add_statement(f)
else:
s = self._scope.add_statement(f)
self._scope = s
if tok.string != ':':
debug.warning('syntax err, flow started @%s', tok.start_pos[0])
# returns
elif tok_str in ('return', 'yield'):
s = tok.start_pos
self.freshscope = False
# add returns to the scope
func = self._scope.get_parent_until(pr.Function)
if tok_str == 'yield':
func.is_generator = True
stmt, tok = self._parse_statement()
if stmt is not None:
stmt.parent = use_as_parent_scope
try:
func.statements.append(pr.KeywordStatement(tok_str, s,
use_as_parent_scope, stmt))
func.returns.append(stmt)
# start_pos is the one of the return statement
stmt.start_pos = s
except AttributeError:
debug.warning('return in non-function')
elif tok_str == 'assert':
stmt, tok = self._parse_statement()
if stmt is not None:
stmt.parent = use_as_parent_scope
self._scope.statements.append(stmt)
self._scope.asserts.append(stmt)
elif tok_str in STATEMENT_KEYWORDS:
stmt, _ = self._parse_statement()
kw = pr.KeywordStatement(tok_str, tok.start_pos,
use_as_parent_scope, stmt)
self._scope.add_statement(kw)
if stmt is not None and tok_str == 'global':
for t in stmt._token_list:
if isinstance(t, pr.Name):
# Add the global to the top module, it counts there.
self.module.add_global(t)
# decorator
elif tok_str == '@':
stmt, tok = self._parse_statement()
if stmt is not None:
self._decorators.append(stmt)
elif tok_str == 'pass':
continue
# default
elif token_type in (tokenize.NAME, tokenize.STRING,
tokenize.NUMBER, tokenize.OP) \
or tok_str in statement_toks:
# this is the main part - a name can be a function or a
# normal var, which can follow anything. but this is done
# by the statement parser.
stmt, tok = self._parse_statement(self._gen.current)
if stmt:
self._scope.add_statement(stmt)
self.freshscope = False
else:
if token_type not in (tokenize.COMMENT, tokenize.NEWLINE, tokenize.ENDMARKER):
debug.warning('Token not used: %s %s %s', tok_str,
tokenize.tok_name[token_type], first_pos)
continue
self.no_docstr = False
class PushBackTokenizer(object):
def __init__(self, tokenizer):
self._tokenizer = tokenizer
self._push_backs = []
self.current = self.previous = tokenize.Token(None, '', (0, 0))
def push_last_back(self):
self._push_backs.append(self.current)
def next(self):
""" Python 2 Compatibility """
return self.__next__()
def __next__(self):
if self._push_backs:
return self._push_backs.pop(0)
previous = self.current
self.current = next(self._tokenizer)
self.previous = previous
return self.current
def __iter__(self):
return self


@@ -1,466 +0,0 @@
"""
Basically a parser that is faster, because it tries to parse only parts and if
anything changes, it only reparses the changed parts. But because it's not
finished (and still not working as I want), I won't document it any further.
"""
import re
from jedi._compatibility import use_metaclass, unicode
from jedi import settings
from jedi import common
from jedi.parser import Parser
from jedi.parser import representation as pr
from jedi.parser import tokenize
from jedi import cache
from jedi.parser.tokenize import (source_tokens, Token, FLOWS, NEWLINE,
COMMENT, ENDMARKER)
class Module(pr.Module, pr.Simple):
def __init__(self, parsers):
super(Module, self).__init__(self, (1, 0))
self.parsers = parsers
self.reset_caches()
self.start_pos = 1, 0
self.end_pos = None, None
def reset_caches(self):
""" This module does a whole lot of caching, because it uses different
parsers. """
with common.ignored(AttributeError):
del self._used_names
def __getattr__(self, name):
if name.startswith('__'):
raise AttributeError('Not available!')
else:
return getattr(self.parsers[0].module, name)
@property
@cache.underscore_memoization
def used_names(self):
used_names = {}
for p in self.parsers:
for k, statement_set in p.module.used_names.items():
if k in used_names:
used_names[k] |= statement_set
else:
used_names[k] = set(statement_set)
return used_names
def __repr__(self):
return "<fast.%s: %s@%s-%s>" % (type(self).__name__, self.name,
self.start_pos[0], self.end_pos[0])
class CachedFastParser(type):
""" This is a metaclass for caching `FastParser`. """
def __call__(self, source, module_path=None):
if not settings.fast_parser:
return Parser(source, module_path)
pi = cache.parser_cache.get(module_path, None)
if pi is None or isinstance(pi.parser, Parser):
p = super(CachedFastParser, self).__call__(source, module_path)
else:
p = pi.parser # pi is a `cache.ParserCacheItem`
p.update(source)
return p
class ParserNode(object):
def __init__(self, parser, code, parent=None):
self.parent = parent
self.children = []
# must be created before new things are added to it.
self.save_contents(parser, code)
def save_contents(self, parser, code):
self.code = code
self.hash = hash(code)
self.parser = parser
try:
# with fast_parser we have either 1 subscope or only statements.
self.content_scope = parser.module.subscopes[0]
except IndexError:
self.content_scope = parser.module
scope = self.content_scope
self._contents = {}
for c in pr.SCOPE_CONTENTS:
self._contents[c] = list(getattr(scope, c))
self._is_generator = scope.is_generator
self.old_children = self.children
self.children = []
def reset_contents(self):
scope = self.content_scope
for key, c in self._contents.items():
setattr(scope, key, list(c))
scope.is_generator = self._is_generator
if self.parent is None:
# Global vars of the first one can be deleted, in the global scope
# they make no sense.
self.parser.module.global_vars = []
for c in self.children:
c.reset_contents()
def parent_until_indent(self, indent=None):
if indent is None or self.indent >= indent and self.parent:
self.old_children = []
if self.parent is not None:
return self.parent.parent_until_indent(indent)
return self
@property
def indent(self):
if not self.parent:
return 0
module = self.parser.module
try:
el = module.subscopes[0]
except IndexError:
try:
el = module.statements[0]
except IndexError:
try:
el = module.imports[0]
except IndexError:
try:
el = [r for r in module.returns if r is not None][0]
except IndexError:
return self.parent.indent + 1
return el.start_pos[1]
def _set_items(self, parser, set_parent=False):
# insert parser objects into current structure
scope = self.content_scope
for c in pr.SCOPE_CONTENTS:
content = getattr(scope, c)
items = getattr(parser.module, c)
if set_parent:
for i in items:
if i is None:
continue # happens with empty returns
i.parent = scope.use_as_parent
if isinstance(i, (pr.Function, pr.Class)):
for d in i.decorators:
d.parent = scope.use_as_parent
content += items
# global_vars
cur = self
while cur.parent is not None:
cur = cur.parent
cur.parser.module.global_vars += parser.module.global_vars
scope.is_generator |= parser.module.is_generator
def add_node(self, node, set_parent=False):
"""Adding a node means adding a node that was already added earlier"""
self.children.append(node)
self._set_items(node.parser, set_parent=set_parent)
node.old_children = node.children # TODO potential memory leak?
node.children = []
scope = self.content_scope
while scope is not None:
#print('x',scope)
if not isinstance(scope, pr.SubModule):
# TODO This seems like a strange thing. Check again.
scope.end_pos = node.content_scope.end_pos
scope = scope.parent
return node
def add_parser(self, parser, code):
return self.add_node(ParserNode(parser, code, self), True)
class FastParser(use_metaclass(CachedFastParser)):
_keyword_re = re.compile('^[ \t]*(def|class|@|%s)' % '|'.join(tokenize.FLOWS))
def __init__(self, code, module_path=None):
# set values like `pr.Module`.
self.module_path = module_path
self.current_node = None
self.parsers = []
self.module = Module(self.parsers)
self.reset_caches()
try:
self._parse(code)
except:
# FastParser is cached, be careful with exceptions
del self.parsers[:]
raise
def update(self, code):
self.reset_caches()
try:
self._parse(code)
except:
# FastParser is cached, be careful with exceptions
del self.parsers[:]
raise
def _split_parts(self, code):
"""
Split the code into different parts. This makes it possible to parse
each part separately and therefore cache parts of the file and not
everything.
"""
def gen_part():
text = '\n'.join(current_lines)
del current_lines[:]
return text
# Split only on newlines. Distinguishing \r\n from \n is the
# tokenizer's job.
self._lines = code.split('\n')
current_lines = []
is_decorator = False
current_indent = 0
old_indent = 0
new_indent = False
in_flow = False
# All things within flows are simply being ignored.
for l in self._lines:
# check for dedents
s = l.lstrip('\t ')
indent = len(l) - len(s)
if not s or s[0] in ('#', '\r'):
current_lines.append(l) # just ignore comments and blank lines
continue
if indent < current_indent: # -> dedent
current_indent = indent
new_indent = False
if not in_flow or indent < old_indent:
if current_lines:
yield gen_part()
in_flow = False
elif new_indent:
current_indent = indent
new_indent = False
# Check lines for functions/classes and split the code there.
if not in_flow:
m = self._keyword_re.match(l)
if m:
in_flow = m.group(1) in tokenize.FLOWS
if not is_decorator and not in_flow:
if current_lines:
yield gen_part()
is_decorator = '@' == m.group(1)
if not is_decorator:
old_indent = current_indent
current_indent += 1 # it must be higher
new_indent = True
elif is_decorator:
is_decorator = False
current_lines.append(l)
if current_lines:
yield gen_part()
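# Illustrative split (assumption about behaviour): a source such as
#     def a(): pass
#     x = 1
#     def b(): pass
# is yielded as three parts, one per top-level def/statement run, so a later
# edit inside ``b`` only forces a reparse of that part.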
def _parse(self, code):
""" :type code: str """
def empty_parser():
new, temp = self._get_parser(unicode(''), unicode(''), 0, [], False)
return new
del self.parsers[:]
line_offset = 0
start = 0
p = None
is_first = True
for code_part in self._split_parts(code):
if is_first or line_offset >= p.module.end_pos[0]:
indent = len(code_part) - len(code_part.lstrip('\t '))
if is_first and self.current_node is not None:
nodes = [self.current_node]
else:
nodes = []
if self.current_node is not None:
self.current_node = \
self.current_node.parent_until_indent(indent)
nodes += self.current_node.old_children
# check if code_part has already been parsed
# print '#'*45,line_offset, p and p.module.end_pos, '\n', code_part
p, node = self._get_parser(code_part, code[start:],
line_offset, nodes, not is_first)
# The actual used code_part is different from the given code
# part, because of docstrings for example there's a chance that
# splits are wrong.
used_lines = self._lines[line_offset:p.module.end_pos[0]]
code_part_actually_used = '\n'.join(used_lines)
if is_first and p.module.subscopes:
# Special case: we cannot use a function subscope as a
# base scope; subscopes would swallow all the other contents.
new = empty_parser()
if self.current_node is None:
self.current_node = ParserNode(new, '')
else:
self.current_node.save_contents(new, '')
self.parsers.append(new)
is_first = False
if is_first:
if self.current_node is None:
self.current_node = ParserNode(p, code_part_actually_used)
else:
self.current_node.save_contents(p, code_part_actually_used)
else:
if node is None:
self.current_node = \
self.current_node.add_parser(p, code_part_actually_used)
else:
self.current_node = self.current_node.add_node(node)
self.parsers.append(p)
is_first = False
#else:
#print '#'*45, line_offset, p.module.end_pos, 'theheck\n', repr(code_part)
line_offset += code_part.count('\n') + 1
start += len(code_part) + 1 # +1 for newline
if self.parsers:
self.current_node = self.current_node.parent_until_indent()
else:
self.parsers.append(empty_parser())
self.module.end_pos = self.parsers[-1].module.end_pos
# print(self.parsers[0].module.get_code())
def _get_parser(self, code, parser_code, line_offset, nodes, no_docstr):
h = hash(code)
for index, node in enumerate(nodes):
if node.hash != h or node.code != code:
continue
if node != self.current_node:
offset = int(nodes[0] == self.current_node)
self.current_node.old_children.pop(index - offset)
p = node.parser
m = p.module
m.line_offset += line_offset + 1 - m.start_pos[0]
break
else:
tokenizer = FastTokenizer(parser_code, line_offset)
p = Parser(parser_code, self.module_path, tokenizer=tokenizer,
top_module=self.module, no_docstr=no_docstr)
p.module.parent = self.module
node = None
return p, node
def reset_caches(self):
self.module.reset_caches()
if self.current_node is not None:
self.current_node.reset_contents()
class FastTokenizer(object):
"""
Breaks when certain conditions are met, i.e. a new function or class opens.
"""
def __init__(self, source, line_offset=0):
self.source = source
self.gen = source_tokens(source, line_offset)
self.closed = False
# fast parser options
self.current = self.previous = Token(None, '', (0, 0))
self.in_flow = False
self.new_indent = False
self.parser_indent = self.old_parser_indent = 0
self.is_decorator = False
self.first_stmt = True
self.parentheses_level = 0
def next(self):
""" Python 2 Compatibility """
return self.__next__()
def __next__(self):
if self.closed:
raise common.MultiLevelStopIteration()
current = next(self.gen)
tok_type = current.type
tok_str = current.string
if tok_type == ENDMARKER:
raise common.MultiLevelStopIteration()
self.previous = self.current
self.current = current
# this is exactly the same check as in fast_parser, but this time with
# tokenize and therefore precise.
breaks = ['def', 'class', '@']
def close():
if not self.first_stmt:
self.closed = True
raise common.MultiLevelStopIteration()
# Ignore comments/newlines, irrelevant for indentation.
if self.previous.type in (None, NEWLINE) \
and tok_type not in (COMMENT, NEWLINE):
# print c, tok_name[c[0]]
indent = current.start_pos[1]
if self.parentheses_level:
# parentheses ignore the indentation rules.
pass
elif indent < self.parser_indent: # -> dedent
self.parser_indent = indent
self.new_indent = False
if not self.in_flow or indent < self.old_parser_indent:
close()
self.in_flow = False
elif self.new_indent:
self.parser_indent = indent
self.new_indent = False
if not self.in_flow:
if tok_str in FLOWS or tok_str in breaks:
self.in_flow = tok_str in FLOWS
if not self.is_decorator and not self.in_flow:
close()
self.is_decorator = '@' == tok_str
if not self.is_decorator:
self.old_parser_indent = self.parser_indent
self.parser_indent += 1 # new scope: must be higher
self.new_indent = True
if tok_str != '@':
if self.first_stmt and not self.new_indent:
self.parser_indent = indent
self.first_stmt = False
# Ignore closing parentheses, because they are all
# irrelevant for the indentation.
if tok_str in '([{':
self.parentheses_level += 1
elif tok_str in ')]}':
self.parentheses_level = max(self.parentheses_level - 1, 0)
return current

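For illustration, the split-and-cache idea above can be sketched in a few lines — a minimal sketch only: TOP_LEVEL_RE, split_top_level and parse_part are hypothetical names, not jedi's API, and compile() stands in for the real parser:

import re

# Hypothetical sketch of the FastParser caching idea, not jedi's API.
TOP_LEVEL_RE = re.compile(r'^(?:def|class|@)', re.MULTILINE)

def split_top_level(code):
    # Split the source at unindented def/class/decorator lines.
    positions = [m.start() for m in TOP_LEVEL_RE.finditer(code)]
    if not positions or positions[0] != 0:
        positions.insert(0, 0)
    positions.append(len(code))
    return [code[a:b] for a, b in zip(positions, positions[1:])]

_cache = {}

def parse_part(part):
    # Re-parse only parts that have not been seen before (keyed by content).
    if part not in _cache:
        _cache[part] = compile(part, '<part>', 'exec')  # stand-in parser
    return _cache[part]

code = "def a():\n    return 1\n\ndef b():\n    return 2\n"
parsed = [parse_part(p) for p in split_top_level(code)]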
File diff suppressed because it is too large


@@ -1,299 +0,0 @@
# -*- coding: utf-8 -*-
"""
This tokenizer has been copied from the ``tokenize.py`` standard library
tokenizer. The reason was simple: The standard library tokenizer fails
if the indentation is not right. The fast parser of jedi however requires
"wrong" indentation.
Basically this is a stripped down version of the standard library module, so
you can read the documentation there. Additionally we included some speed and
memory optimizations here.
"""
from __future__ import absolute_import
import string
import re
from io import StringIO
from token import (tok_name, N_TOKENS, ENDMARKER, STRING, NUMBER, NAME, OP,
ERRORTOKEN, NEWLINE)
from jedi._compatibility import u
cookie_re = re.compile(r"coding[:=]\s*([-\w.]+)")
# From here on we have custom stuff (everything before was originally Python
# internal code).
FLOWS = ['if', 'else', 'elif', 'while', 'with', 'try', 'except', 'finally']
namechars = string.ascii_letters + '_'
COMMENT = N_TOKENS
tok_name[COMMENT] = 'COMMENT'
class Token(object):
"""
The token object is an efficient representation of the structure
(type, token, (start_pos_line, start_pos_col)). It has indexer
methods that maintain compatibility with existing code that expects the above
structure.
>>> repr(Token(1, "test", (1, 1)))
"<Token: ('NAME', 'test', (1, 1))>"
>>> Token(1, 'bar', (3, 4)).__getstate__()
(1, 'bar', 3, 4)
>>> a = Token(0, 'baz', (0, 0))
>>> a.__setstate__((1, 'foo', 3, 4))
>>> a
<Token: ('NAME', 'foo', (3, 4))>
>>> a.start_pos
(3, 4)
>>> a.string
'foo'
>>> a._start_pos_col
4
>>> Token(1, u("😷"), (1 ,1)).string + "p" == u("😷p")
True
"""
__slots__ = ("type", "string", "_start_pos_line", "_start_pos_col")
def __init__(self, type, string, start_pos):
self.type = type
self.string = string
self._start_pos_line = start_pos[0]
self._start_pos_col = start_pos[1]
def __repr__(self):
typ = tok_name[self.type]
content = typ, self.string, (self._start_pos_line, self._start_pos_col)
return "<%s: %s>" % (type(self).__name__, content)
@property
def start_pos(self):
return (self._start_pos_line, self._start_pos_col)
@property
def end_pos(self):
"""Returns end position respecting multiline tokens."""
end_pos_line = self._start_pos_line
lines = self.string.split('\n')
if self.string.endswith('\n'):
lines = lines[:-1]
lines[-1] += '\n'
end_pos_line += len(lines) - 1
end_pos_col = self._start_pos_col
# Check for multiline token
if self._start_pos_line == end_pos_line:
end_pos_col += len(lines[-1])
else:
end_pos_col = len(lines[-1])
return (end_pos_line, end_pos_col)
# Make cache footprint smaller for faster unpickling
def __getstate__(self):
return (self.type, self.string, self._start_pos_line, self._start_pos_col)
def __setstate__(self, state):
self.type = state[0]
self.string = state[1]
self._start_pos_line = state[2]
self._start_pos_col = state[3]
def group(*choices):
return '(' + '|'.join(choices) + ')'
def maybe(*choices):
return group(*choices) + '?'
# Note: we use unicode matching for names ("\w") but ascii matching for
# number literals.
whitespace = r'[ \f\t]*'
comment = r'#[^\r\n]*'
name = r'\w+'
hex_number = r'0[xX][0-9a-fA-F]+'
bin_number = r'0[bB][01]+'
oct_number = r'0[oO][0-7]+'
dec_number = r'(?:0+|[1-9][0-9]*)'
int_number = group(hex_number, bin_number, oct_number, dec_number)
exponent = r'[eE][-+]?[0-9]+'
point_float = group(r'[0-9]+\.[0-9]*', r'\.[0-9]+') + maybe(exponent)
Expfloat = r'[0-9]+' + exponent
float_number = group(point_float, Expfloat)
imag_number = group(r'[0-9]+[jJ]', float_number + r'[jJ]')
number = group(imag_number, float_number, int_number)
# Tail end of ' string.
single = r"[^'\\]*(?:\\.[^'\\]*)*'"
# Tail end of " string.
double = r'[^"\\]*(?:\\.[^"\\]*)*"'
# Tail end of ''' string.
single3 = r"[^'\\]*(?:(?:\\.|'(?!''))[^'\\]*)*'''"
# Tail end of """ string.
double3 = r'[^"\\]*(?:(?:\\.|"(?!""))[^"\\]*)*"""'
triple = group("[bB]?[rR]?'''", '[bB]?[rR]?"""')
# Single-line ' or " string.
# Because of leftmost-then-longest match semantics, be sure to put the
# longest operators first (e.g., if = came before ==, == would get
# recognized as two instances of =).
operator = group(r"\*\*=?", r">>=?", r"<<=?", r"!=",
r"//=?", r"->",
r"[+\-*/%&|^=<>]=?",
r"~")
bracket = '[][(){}]'
special = group(r'\r?\n', r'\.\.\.', r'[:;.,@]')
funny = group(operator, bracket, special)
# First (or only) line of ' or " string.
cont_str = group(r"[bBuU]?[rR]?'[^\n'\\]*(?:\\.[^\n'\\]*)*" +
group("'", r'\\\r?\n'),
r'[bBuU]?[rR]?"[^\n"\\]*(?:\\.[^\n"\\]*)*' +
group('"', r'\\\r?\n'))
pseudo_extras = group(r'\\\r?\n', comment, triple)
pseudo_token = whitespace + group(pseudo_extras, number, funny, cont_str, name)
def _compile(expr):
return re.compile(expr, re.UNICODE)
pseudoprog, single3prog, double3prog = map(
_compile, (pseudo_token, single3, double3))
endprogs = {"'": _compile(single), '"': _compile(double),
"'''": single3prog, '"""': double3prog,
"r'''": single3prog, 'r"""': double3prog,
"b'''": single3prog, 'b"""': double3prog,
"u'''": single3prog, 'u"""': double3prog,
"br'''": single3prog, 'br"""': double3prog,
"R'''": single3prog, 'R"""': double3prog,
"B'''": single3prog, 'B"""': double3prog,
"U'''": single3prog, 'U"""': double3prog,
"bR'''": single3prog, 'bR"""': double3prog,
"Br'''": single3prog, 'Br"""': double3prog,
"BR'''": single3prog, 'BR"""': double3prog,
'r': None, 'R': None, 'b': None, 'B': None}
triple_quoted = {}
for t in ("'''", '"""',
"r'''", 'r"""', "R'''", 'R"""',
"b'''", 'b"""', "B'''", 'B"""',
"u'''", 'u"""', "U'''", 'U"""',
"br'''", 'br"""', "Br'''", 'Br"""',
"bR'''", 'bR"""', "BR'''", 'BR"""'):
triple_quoted[t] = t
single_quoted = {}
for t in ("'", '"',
"r'", 'r"', "R'", 'R"',
"b'", 'b"', "B'", 'B"',
"u'", 'u""', "U'", 'U"',
"br'", 'br"', "Br'", 'Br"',
"bR'", 'bR"', "BR'", 'BR"'):
single_quoted[t] = t
del _compile
tabsize = 8
def source_tokens(source, line_offset=0):
"""Generate tokens from a the source code (string)."""
source = source + '\n' # end with \n, because the parser needs it
readline = StringIO(source).readline
return generate_tokens(readline, line_offset)
def generate_tokens(readline, line_offset=0):
"""
The original stdlib Python version with minor modifications.
Modified to not care about dedents.
"""
lnum = line_offset
numchars = '0123456789'
contstr = ''
contline = None
while True: # loop over lines in stream
line = readline() # readline returns empty if it's finished. See StringIO
if not line:
if contstr:
yield Token(ERRORTOKEN, contstr, contstr_start)
break
lnum += 1
pos, max = 0, len(line)
if contstr: # continued string
endmatch = endprog.match(line)
if endmatch:
pos = endmatch.end(0)
yield Token(STRING, contstr + line[:pos], contstr_start)
contstr = ''
contline = None
else:
contstr = contstr + line
contline = contline + line
continue
while pos < max:
pseudomatch = pseudoprog.match(line, pos)
if not pseudomatch: # scan for tokens
txt = line[pos]
if line[pos] in '"\'':
# If a literal starts but doesn't end, the whole rest of the
# line is an error token.
txt = line[pos:]
yield Token(ERRORTOKEN, txt, (lnum, pos))
pos += 1
continue
start, pos = pseudomatch.span(1)
spos = (lnum, start)
token, initial = line[start:pos], line[start]
if (initial in numchars or # ordinary number
(initial == '.' and token != '.' and token != '...')):
yield Token(NUMBER, token, spos)
elif initial in '\r\n':
yield Token(NEWLINE, token, spos)
elif initial == '#':
assert not token.endswith("\n")
yield Token(COMMENT, token, spos)
elif token in triple_quoted:
endprog = endprogs[token]
endmatch = endprog.match(line, pos)
if endmatch: # all on one line
pos = endmatch.end(0)
token = line[start:pos]
yield Token(STRING, token, spos)
else:
contstr_start = (lnum, start) # multiple lines
contstr = line[start:]
contline = line
break
elif initial in single_quoted or \
token[:2] in single_quoted or \
token[:3] in single_quoted:
if token[-1] == '\n': # continued string
contstr_start = lnum, start
endprog = (endprogs[initial] or endprogs[token[1]] or
endprogs[token[2]])
contstr = line[start:]
contline = line
break
else: # ordinary string
yield Token(STRING, token, spos)
elif initial in namechars: # ordinary name
yield Token(NAME, token, spos)
elif initial == '\\' and line[start:] == '\\\n': # continued stmt
continue
else:
yield Token(OP, token, spos)
yield Token(ENDMARKER, '', (lnum, 0))

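For reference, the removed tokenizer was driven roughly like this — a sketch assuming the pre-parso module layout shown in this diff, where it was importable as jedi.parser.tokenize:

from jedi.parser.tokenize import source_tokens

for tok in source_tokens("x = 1\n"):
    print(tok.type, repr(tok.string), tok.start_pos)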

@@ -1,255 +0,0 @@
import re
import os
from jedi import cache
from jedi import common
from jedi.parser import tokenize
from jedi._compatibility import u
from jedi.parser.fast import FastParser
from jedi.parser import representation
from jedi import debug
from jedi.common import PushBackIterator
class UserContext(object):
"""
:param source: The source code of the file.
:param position: The position, the user is currently in. Only important \
for the main file.
"""
def __init__(self, source, position):
self.source = source
self.position = position
self._line_cache = None
# these two are only used because there is no nonlocal in Python 2
self._line_temp = None
self._relevant_temp = None
@cache.underscore_memoization
def get_path_until_cursor(self):
""" Get the path under the cursor. """
path, self._start_cursor_pos = self._calc_path_until_cursor(self.position)
return path
def _calc_path_until_cursor(self, start_pos=None):
"""
Something like a reverse tokenizer that tokenizes the reversed strings.
"""
def fetch_line():
if self._is_first:
self._is_first = False
self._line_length = self._column_temp
line = first_line
else:
line = self.get_line(self._line_temp)
self._line_length = len(line)
line = '\n' + line
# add lines with a backslash at the end
while True:
self._line_temp -= 1
last_line = self.get_line(self._line_temp)
if last_line and last_line[-1] == '\\':
line = last_line[:-1] + ' ' + line
self._line_length = len(last_line)
else:
break
return line[::-1]
self._is_first = True
self._line_temp, self._column_temp = start_cursor = start_pos
first_line = self.get_line(self._line_temp)[:self._column_temp]
open_brackets = ['(', '[', '{']
close_brackets = [')', ']', '}']
gen = PushBackIterator(tokenize.generate_tokens(fetch_line))
string = u('')
level = 0
force_point = False
last_type = None
is_first = True
for tok in gen:
tok_type = tok.type
tok_str = tok.string
end = tok.end_pos
self._column_temp = self._line_length - end[1]
if is_first:
if tok.start_pos != (1, 0): # whitespace is not a path
return u(''), start_cursor
is_first = False
# print 'tok', token_type, tok_str, force_point
if last_type == tok_type == tokenize.NAME:
string += ' '
if level > 0:
if tok_str in close_brackets:
level += 1
if tok_str in open_brackets:
level -= 1
elif tok_str == '.':
force_point = False
elif force_point:
# Reversed tokenizing, therefore a number is recognized as a
# floating point number.
# The same is true for string prefixes -> represented as a
# combination of string and name.
if tok_type == tokenize.NUMBER and tok_str[0] == '.' \
or tok_type == tokenize.NAME and last_type == tokenize.STRING:
force_point = False
else:
break
elif tok_str in close_brackets:
level += 1
elif tok_type in [tokenize.NAME, tokenize.STRING]:
force_point = True
elif tok_type == tokenize.NUMBER:
pass
else:
if tok_str == '-':
next_tok = next(gen)
if next_tok.string == 'e':
gen.push_back(next_tok)
else:
break
else:
break
x = start_pos[0] - end[0] + 1
l = self.get_line(x)
l = first_line if x == start_pos[0] else l
start_cursor = x, len(l) - end[1]
string += tok_str
last_type = tok_type
# string can still contain spaces at the end
return string[::-1].strip(), start_cursor
def get_path_under_cursor(self):
"""
Return the path under the cursor. If there is a rest of the path left,
it will be added to the stuff before it.
"""
return self.get_path_until_cursor() + self.get_path_after_cursor()
def get_path_after_cursor(self):
line = self.get_line(self.position[0])
return re.search("[\w\d]*", line[self.position[1]:]).group(0)
def get_operator_under_cursor(self):
line = self.get_line(self.position[0])
after = re.match("[^\w\s]+", line[self.position[1]:])
before = re.match("[^\w\s]+", line[:self.position[1]][::-1])
return (before.group(0) if before is not None else '') \
+ (after.group(0) if after is not None else '')
def get_context(self, yield_positions=False):
self.get_path_until_cursor() # In case _start_cursor_pos is undefined.
pos = self._start_cursor_pos
while True:
# remove non important white space
line = self.get_line(pos[0])
while True:
if pos[1] == 0:
line = self.get_line(pos[0] - 1)
if line and line[-1] == '\\':
pos = pos[0] - 1, len(line) - 1
continue
else:
break
if line[pos[1] - 1].isspace():
pos = pos[0], pos[1] - 1
else:
break
try:
result, pos = self._calc_path_until_cursor(start_pos=pos)
if yield_positions:
yield pos
else:
yield result
except StopIteration:
if yield_positions:
yield None
else:
yield ''
def get_line(self, line_nr):
if not self._line_cache:
self._line_cache = common.splitlines(self.source)
if line_nr == 0:
# This is a fix for the zeroth line. We need a newline there, for
# the backwards parser.
return u('')
if line_nr < 0:
raise StopIteration()
try:
return self._line_cache[line_nr - 1]
except IndexError:
raise StopIteration()
def get_position_line(self):
return self.get_line(self.position[0])[:self.position[1]]
class UserContextParser(object):
def __init__(self, source, path, position, user_context):
self._source = source
self._path = path and os.path.abspath(path)
self._position = position
self._user_context = user_context
@cache.underscore_memoization
def _parser(self):
cache.invalidate_star_import_cache(self._path)
parser = FastParser(self._source, self._path)
# Don't pickle that module, because the main module is changing quickly
cache.save_parser(self._path, None, parser, pickling=False)
return parser
@cache.underscore_memoization
def user_stmt(self):
module = self.module()
debug.speed('parsed')
return module.get_statement_for_position(self._position, include_imports=True)
@cache.underscore_memoization
def user_stmt_with_whitespace(self):
"""
Returns the statement under the cursor even if the statement lies
before the cursor.
"""
user_stmt = self.user_stmt()
if not user_stmt:
# for statements like `from x import ` (cursor not in statement)
# or `abs( ` where the cursor is out in the whitespace.
if self._user_context.get_path_under_cursor():
# We really should have a user_stmt, but the parser couldn't
# process it - probably a Syntax Error (or in a comment).
debug.warning('No statement under the cursor.')
return
pos = next(self._user_context.get_context(yield_positions=True))
user_stmt = self.module().get_statement_for_position(pos, include_imports=True)
return user_stmt
@cache.underscore_memoization
def user_scope(self):
user_stmt = self.user_stmt()
if user_stmt is None:
def scan(scope):
for s in scope.statements + scope.subscopes:
if isinstance(s, representation.Scope):
if s.start_pos <= self._position <= s.end_pos:
return scan(s) or s
return scan(self.module()) or self.module()
else:
return user_stmt.parent
def module(self):
return self._parser().module

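A sketch of how this (likewise removed) class was used: construct it with the source and a (line, column) cursor position, then ask for the dotted path that ends at the cursor:

from jedi.parser.user_context import UserContext

ctx = UserContext("import os\nos.path.jo", (2, 10))
ctx.get_path_until_cursor()  # expected: 'os.path.jo'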
jedi/parser_utils.py Normal file

@@ -0,0 +1,241 @@
import textwrap
from inspect import cleandoc
from jedi._compatibility import literal_eval, is_py3
from parso.python import tree
_EXECUTE_NODES = set([
'funcdef', 'classdef', 'import_from', 'import_name', 'test', 'or_test',
'and_test', 'not_test', 'comparison', 'expr', 'xor_expr', 'and_expr',
'shift_expr', 'arith_expr', 'atom_expr', 'term', 'factor', 'power', 'atom'
])
_FLOW_KEYWORDS = (
'try', 'except', 'finally', 'else', 'if', 'elif', 'with', 'for', 'while'
)
def get_executable_nodes(node, last_added=False):
"""
For static analysis.
"""
result = []
typ = node.type
if typ == 'name':
next_leaf = node.get_next_leaf()
if last_added is False and node.parent.type != 'param' and next_leaf != '=':
result.append(node)
elif typ == 'expr_stmt':
# I think evaluating the statement (and possibly returned arrays),
# should be enough for static analysis.
result.append(node)
for child in node.children:
result += get_executable_nodes(child, last_added=True)
elif typ == 'decorator':
# decorator
if node.children[-2] == ')':
node = node.children[-3]
if node != '(':
result += get_executable_nodes(node)
else:
try:
children = node.children
except AttributeError:
pass
else:
if node.type in _EXECUTE_NODES and not last_added:
result.append(node)
for child in children:
result += get_executable_nodes(child, last_added)
return result
def get_comp_fors(comp_for):
yield comp_for
last = comp_for.children[-1]
while True:
if last.type == 'comp_for':
yield last
elif not last.type == 'comp_if':
break
last = last.children[-1]
def for_stmt_defines_one_name(for_stmt):
"""
Returns True if only one name is returned: ``for x in y``.
Returns False if the for loop is more complicated: ``for x, z in y``.
:returns: bool
"""
return for_stmt.children[1].type == 'name'
def get_flow_branch_keyword(flow_node, node):
start_pos = node.start_pos
if not (flow_node.start_pos < start_pos <= flow_node.end_pos):
raise ValueError('The node is not part of the flow.')
keyword = None
for i, child in enumerate(flow_node.children):
if start_pos < child.start_pos:
return keyword
first_leaf = child.get_first_leaf()
if first_leaf in _FLOW_KEYWORDS:
keyword = first_leaf
return 0
def get_statement_of_position(node, pos):
for c in node.children:
if c.start_pos <= pos <= c.end_pos:
if c.type not in ('decorated', 'simple_stmt', 'suite') \
and not isinstance(c, (tree.Flow, tree.ClassOrFunc)):
return c
else:
try:
return get_statement_of_position(c, pos)
except AttributeError:
pass # Must be a non-scope
return None
def clean_scope_docstring(scope_node):
""" Returns a cleaned version of the docstring token. """
node = scope_node.get_doc_node()
if node is not None:
# TODO We have to check next leaves until there are no new
# leaves anymore that might be part of the docstring. A
# docstring can also look like this: ``'foo' 'bar'``
# Returns a literal cleaned version of the ``Token``.
cleaned = cleandoc(safe_literal_eval(node.value))
# Since we want the docstr output to be always unicode, just
# force it.
if is_py3 or isinstance(cleaned, unicode):
return cleaned
else:
return unicode(cleaned, 'UTF-8', 'replace')
return ''
def safe_literal_eval(value):
first_two = value[:2].lower()
if first_two[0] == 'f' or first_two in ('fr', 'rf'):
# literal_eval is not able to resolve f literals. We would have to do
# that manually, but that is not implemented right now.
return ''
try:
return literal_eval(value)
except SyntaxError:
# It's possible to create syntax errors with literals like rb'' in
# Python 2. This should not be possible and in that case just return an
# empty string.
# Before Python 3.3 there was a more strict definition in which order
# you could define literals.
return ''
def get_call_signature(funcdef, width=72, call_string=None):
"""
Generate call signature of this function.
:param width: Fold lines if a line is longer than this value.
:type width: int
:param call_string: Override the function name when given.
:type call_string: str
:rtype: str
"""
# Lambdas have no name.
if call_string is None:
if funcdef.type == 'lambdef':
call_string = '<lambda>'
else:
call_string = funcdef.name.value
if funcdef.type == 'lambdef':
p = '(' + ''.join(param.get_code() for param in funcdef.get_params()).strip() + ')'
else:
p = funcdef.children[2].get_code()
code = call_string + p
return '\n'.join(textwrap.wrap(code, width))
def get_doc_with_call_signature(scope_node):
"""
Return a document string including call signature.
"""
call_signature = None
if scope_node.type == 'classdef':
for funcdef in scope_node.iter_funcdefs():
if funcdef.name.value == '__init__':
call_signature = \
get_call_signature(funcdef, call_string=scope_node.name.value)
elif scope_node.type in ('funcdef', 'lambdef'):
call_signature = get_call_signature(scope_node)
doc = clean_scope_docstring(scope_node)
if call_signature is None:
return doc
return '%s\n\n%s' % (call_signature, doc)
def move(node, line_offset):
"""
Move the `Node` start_pos.
"""
try:
children = node.children
except AttributeError:
node.line += line_offset
else:
for c in children:
move(c, line_offset)
def get_following_comment_same_line(node):
"""
Returns (as a string) any comment that appears on the same line after
the node, including the leading ``#``.
"""
try:
if node.type == 'for_stmt':
whitespace = node.children[5].get_first_leaf().prefix
elif node.type == 'with_stmt':
whitespace = node.children[3].get_first_leaf().prefix
else:
whitespace = node.get_last_leaf().get_next_leaf().prefix
except AttributeError:
return None
except ValueError:
# TODO in some particular cases, the tree doesn't seem to be linked
# correctly
return None
if "#" not in whitespace:
return None
comment = whitespace[whitespace.index("#"):]
if "\r" in comment:
comment = comment[:comment.index("\r")]
if "\n" in comment:
comment = comment[:comment.index("\n")]
return comment
def is_scope(node):
return node.type in ('file_input', 'classdef', 'funcdef', 'lambdef', 'comp_for')
def get_parent_scope(node, include_flows=False):
"""
Returns the underlying scope.
"""
scope = node.parent
while scope is not None:
if include_flows and isinstance(scope, tree.Flow):
return scope
if is_scope(scope):
break
scope = scope.parent
return scope

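The helpers in this new module operate on parso trees. A minimal sketch of get_call_signature on a freshly parsed function, assuming parso 0.1.1 from requirements.txt:

import parso
from jedi import parser_utils

module = parso.parse("def add(a, b=1):\n    return a + b\n")
funcdef = next(module.iter_funcdefs())
parser_utils.get_call_signature(funcdef)  # expected: 'add(a, b=1)'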

@@ -14,9 +14,8 @@ following functions (sometimes bug-prone):
"""
import difflib
from jedi import common
from parso import python_bytes_to_unicode, split_lines
from jedi.evaluate import helpers
from jedi.parser import representation as pr
class Refactoring(object):
@@ -29,7 +28,7 @@ class Refactoring(object):
def old_files(self):
dct = {}
for old_path, (new_path, old_l, new_l) in self.change_dct.items():
dct[new_path] = '\n'.join(new_l)
dct[old_path] = '\n'.join(old_l)
return dct
def new_files(self):
@@ -83,7 +82,7 @@ def _rename(names, replace_str):
with open(current_path) as f:
source = f.read()
new_lines = common.splitlines(common.source_to_unicode(source))
new_lines = split_lines(python_bytes_to_unicode(source))
old_lines = new_lines[:]
nr, indent = name.line, name.column
@@ -101,7 +100,7 @@ def extract(script, new_name):
:type source: str
:return: list of changed lines/changed files
"""
new_lines = common.splitlines(common.source_to_unicode(script.source))
new_lines = split_lines(python_bytes_to_unicode(script.source))
old_lines = new_lines[:]
user_stmt = script._parser.user_stmt()
@@ -160,43 +159,42 @@ def inline(script):
"""
:type script: api.Script
"""
new_lines = common.splitlines(common.source_to_unicode(script.source))
new_lines = split_lines(python_bytes_to_unicode(script.source))
dct = {}
definitions = script.goto_assignments()
with common.ignored(AssertionError):
assert len(definitions) == 1
stmt = definitions[0]._definition
usages = script.usages()
inlines = [r for r in usages
if not stmt.start_pos <= (r.line, r.column) <= stmt.end_pos]
inlines = sorted(inlines, key=lambda x: (x.module_path, x.line, x.column),
reverse=True)
expression_list = stmt.expression_list()
# don't allow multiline refactorings for now.
assert stmt.start_pos[0] == stmt.end_pos[0]
index = stmt.start_pos[0] - 1
assert len(definitions) == 1
stmt = definitions[0]._definition
usages = script.usages()
inlines = [r for r in usages
if not stmt.start_pos <= (r.line, r.column) <= stmt.end_pos]
inlines = sorted(inlines, key=lambda x: (x.module_path, x.line, x.column),
reverse=True)
expression_list = stmt.expression_list()
# don't allow multiline refactorings for now.
assert stmt.start_pos[0] == stmt.end_pos[0]
index = stmt.start_pos[0] - 1
line = new_lines[index]
replace_str = line[expression_list[0].start_pos[1]:stmt.end_pos[1] + 1]
replace_str = replace_str.strip()
# tuples need parentheses
if expression_list and isinstance(expression_list[0], pr.Array):
arr = expression_list[0]
if replace_str[0] not in ['(', '[', '{'] and len(arr) > 1:
replace_str = '(%s)' % replace_str
line = new_lines[index]
replace_str = line[expression_list[0].start_pos[1]:stmt.end_pos[1] + 1]
replace_str = replace_str.strip()
# tuples need parentheses
if expression_list and isinstance(expression_list[0], pr.Array):
arr = expression_list[0]
if replace_str[0] not in ['(', '[', '{'] and len(arr) > 1:
replace_str = '(%s)' % replace_str
# if it's the only assignment, remove the statement
if len(stmt.get_defined_names()) == 1:
line = line[:stmt.start_pos[1]] + line[stmt.end_pos[1]:]
# if it's the only assignment, remove the statement
if len(stmt.get_defined_names()) == 1:
line = line[:stmt.start_pos[1]] + line[stmt.end_pos[1]:]
dct = _rename(inlines, replace_str)
# remove the empty line
new_lines = dct[script.path][2]
if line.strip():
new_lines[index] = line
else:
new_lines.pop(index)
dct = _rename(inlines, replace_str)
# remove the empty line
new_lines = dct[script.path][2]
if line.strip():
new_lines[index] = line
else:
new_lines.pop(index)
return Refactoring(dct)


@@ -16,7 +16,6 @@ Completion output
~~~~~~~~~~~~~~~~~
.. autodata:: case_insensitive_completion
.. autodata:: add_dot_after_module
.. autodata:: add_bracket_after_function
.. autodata:: no_completion_duplicates
@@ -37,7 +36,6 @@ Parser
Dynamic stuff
~~~~~~~~~~~~~
.. autodata:: dynamic_arrays_instances
.. autodata:: dynamic_array_additions
.. autodata:: dynamic_params
.. autodata:: dynamic_params_for_other_modules
@@ -45,32 +43,9 @@ Dynamic stuff
.. autodata:: auto_import_modules
.. _settings-recursion:
Recursions
~~~~~~~~~~
Recursion settings are important if you don't want extremely recursive
Python code to go absolutely crazy. First off, there is a global limit,
:data:`max_executions`. This limit is important for capping the amount of
time a completion may use.
The default values are based on experiments while completing the |jedi| library
itself (inception!). But I don't think any other Python library uses recursion
in a similarly extreme way. These settings definitely make the completion worse
in some cases, but a completion should also be fast.
.. autodata:: max_until_execution_unique
.. autodata:: max_function_recursion_level
.. autodata:: max_executions_without_builtins
.. autodata:: max_executions
.. autodata:: scale_call_signatures
Caching
~~~~~~~
.. autodata:: star_import_cache_validity
.. autodata:: call_signatures_validity
@@ -87,13 +62,6 @@ case_insensitive_completion = True
The completion is by default case insensitive.
"""
add_dot_after_module = False
"""
Adds a dot after a module, because a module that is not accessed this way is
definitely not the normal case. However, in VIM this doesn't work, which is
why it isn't used at the moment.
"""
add_bracket_after_function = False
"""
Adds an opening bracket after a function, because that's normal behaviour.
@@ -125,7 +93,7 @@ else:
'jedi')
cache_directory = os.path.expanduser(_cache_directory)
"""
The path where all the caches can be found.
The path where the cache is stored.
On Linux, this defaults to ``~/.cache/jedi/``, on OS X to
``~/Library/Caches/Jedi/`` and on Windows to ``%APPDATA%\\Jedi\\Jedi\\``.
@@ -148,14 +116,9 @@ function is being reparsed.
# dynamic stuff
# ----------------
dynamic_arrays_instances = True
"""
Check for `append`, etc. on array instances like list()
"""
dynamic_array_additions = True
"""
check for `append`, etc. on arrays: [], {}, ()
check for `append`, etc. on arrays: [], {}, () as well as list/set calls.
"""
dynamic_params = True
@@ -189,55 +152,10 @@ This improves autocompletion for libraries that use ``setattr`` or
``globals()`` modifications a lot.
"""
# ----------------
# recursions
# ----------------
max_until_execution_unique = 50
"""
This limit is probably the most important one, because if it is exceeded,
functions can only be executed once. New functions will still be executed,
while complex recursions that hit the same functions again and again are
ignored.
"""
max_function_recursion_level = 5
"""
`max_function_recursion_level` is more about whether the recursions are
stopped in depth or in width. The ratio between this and
`max_until_execution_unique` is important here. It stops a recursion (after
that number of function calls in the recursion) if it was already used
earlier.
"""
max_executions_without_builtins = 200
"""
.. todo:: Document this.
"""
max_executions = 250
"""
A maximum amount of time the completion may use.
"""
scale_call_signatures = 0.1
"""
Because call_signatures is normally used on every single key hit, it has
to be faster than a normal completion. This is the factor that is used to
scale `max_executions` and `max_until_execution_unique`:
"""
# ----------------
# caching validity (time)
# ----------------
star_import_cache_validity = 60.0
"""
In huge packages like numpy, checking all star imports on every completion
might be slow; therefore we cache star imports for a certain time span
(in seconds).
"""
call_signatures_validity = 3.0
"""
Finding function calls might be slow (0.1-0.5s). This is not acceptable for

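Since all of these settings are plain module attributes, tuning them at runtime is a one-liner; a small sketch (the values are illustrative only):

import jedi

jedi.settings.case_insensitive_completion = False
jedi.settings.add_bracket_after_function = True
print(jedi.settings.cache_directory)  # platform-dependent default, see above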

@@ -5,13 +5,19 @@ Utilities for end-users.
from __future__ import absolute_import
import __main__
from collections import namedtuple
import logging
import traceback
import re
import os
import sys
from parso import split_lines
from jedi import Interpreter
from jedi.api.helpers import completion_parts
from jedi.parser.user_context import UserContext
from jedi.api.helpers import get_on_completion_name
READLINE_DEBUG = False
def setup_readline(namespace_module=__main__):
@@ -55,6 +61,13 @@ def setup_readline(namespace_module=__main__):
bash).
"""
if READLINE_DEBUG:
logging.basicConfig(
filename='/tmp/jedi.log',
filemode='a',
level=logging.DEBUG
)
class JediRL(object):
def complete(self, text, state):
"""
@@ -70,12 +83,21 @@ def setup_readline(namespace_module=__main__):
sys.path.insert(0, os.getcwd())
# Calling python doesn't have a path, so add to sys.path.
try:
logging.debug("Start REPL completion: " + repr(text))
interpreter = Interpreter(text, [namespace_module.__dict__])
path = UserContext(text, (1, len(text))).get_path_until_cursor()
path, dot, like = completion_parts(path)
before = text[:len(text) - len(like)]
lines = split_lines(text)
position = (len(lines), len(lines[-1]))
name = get_on_completion_name(
interpreter._get_module_node(),
lines,
position
)
before = text[:len(text) - len(name)]
completions = interpreter.completions()
except:
logging.error("REPL Completion error:\n" + traceback.format_exc())
raise
finally:
sys.path.pop(0)
@@ -88,7 +110,7 @@ def setup_readline(namespace_module=__main__):
try:
import readline
except ImportError:
print("Module readline not available.")
print("Jedi: Module readline not available.")
else:
readline.set_completer(JediRL().complete)
readline.parse_and_bind("tab: complete")
@@ -108,7 +130,7 @@ def version_info():
Returns a namedtuple of Jedi's version, similar to Python's
``sys.version_info``.
"""
Version = namedtuple('Version', 'major, minor, micro, releaselevel, serial')
Version = namedtuple('Version', 'major, minor, micro')
from jedi import __version__
tupl = re.findall('[a-z]+|\d+', __version__)
return Version(*[x if i == 3 else int(x) for i, x in enumerate(tupl)])

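A sketch of the intended hookup: call setup_readline() from the file that the PYTHONSTARTUP environment variable points to, so that every interactive interpreter gets jedi completion:

# Contents of the file referenced by $PYTHONSTARTUP (e.g. ~/.pythonrc.py):
import jedi.utils

jedi.utils.setup_readline()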

@@ -2,7 +2,7 @@
addopts = --doctest-modules
# Ignore broken files in blackbox test directories
norecursedirs = .* docs completion refactor absolute_import namespace_package scripts extensions speed static_analysis
norecursedirs = .* docs completion refactor absolute_import namespace_package scripts extensions speed static_analysis not_in_sys_path buildout_project sample_venvs init_extension_module simple_import
# Activate `clean_jedi_cache` fixture for all tests. This should be
# fine as long as we are using `clean_jedi_cache` as a session scoped

requirements.txt Normal file

@@ -0,0 +1 @@
parso==0.1.1

scripts/diff_parser_profile.py Executable file

@@ -0,0 +1,50 @@
#!/usr/bin/env python
"""
Profile a piece of Python code with ``cProfile`` that uses the diff parser.
Usage:
profile.py <file> [-d] [-s <sort>]
profile.py -h | --help
Options:
-h --help Show this screen.
-d --debug Enable Jedi internal debugging.
-s <sort> Sort the profile results, e.g. cumtime, name [default: time].
"""
import cProfile
from docopt import docopt
from jedi.parser.python import load_grammar
from jedi.parser.diff import DiffParser
from jedi.parser.python import ParserWithRecovery
from jedi._compatibility import u
from jedi.common import splitlines
import jedi
def run(parser, lines):
diff_parser = DiffParser(parser)
diff_parser.update(lines)
# Make sure used_names is loaded
parser.module.used_names
def main(args):
if args['--debug']:
jedi.set_debug_function(notices=True)
with open(args['<file>']) as f:
code = f.read()
grammar = load_grammar()
parser = ParserWithRecovery(grammar, u(code))
# Make sure used_names is loaded
parser.module.used_names
code = code + '\na\n' # Add something so the diff parser needs to run.
lines = splitlines(code, keepends=True)
cProfile.runctx('run(parser, lines)', globals(), locals(), sort=args['-s'])
if __name__ == '__main__':
args = docopt(__doc__)
main(args)


@@ -12,7 +12,9 @@ Note: This requires the psutil library, available on PyPI.
"""
import time
import sys
import os
import psutil
sys.path.insert(0, os.path.abspath(os.path.dirname(__file__) + '/..'))
import jedi
@@ -40,15 +42,17 @@ def main(mods):
for mod in mods:
elapsed, used = profile_preload(mod)
if used > 0:
print('%8.1f | %8d | %s' % (elapsed, used, mod))
print('%8.2f | %8d | %s' % (elapsed, used, mod))
print('------------------------------')
elapsed = time.time() - t0
used = used_memory() - baseline
print('%8.1f | %8d | %s' % (elapsed, used, 'Total'))
print('%8.2f | %8d | %s' % (elapsed, used, 'Total'))
if __name__ == '__main__':
mods = ['re', 'numpy', 'scipy', 'scipy.sparse', 'scipy.stats',
'wx', 'decimal', 'PyQt4.QtGui', 'PySide.QtGui', 'Tkinter']
mods += sys.argv[1:]
if sys.argv[1:]:
mods = sys.argv[1:]
else:
mods = ['re', 'numpy', 'scipy', 'scipy.sparse', 'scipy.stats',
'wx', 'decimal', 'PyQt4.QtGui', 'PySide.QtGui', 'Tkinter']
main(mods)


@@ -45,5 +45,5 @@ def main(args):
if __name__ == '__main__':
args = docopt(__doc__)
if args['<code>'] is None:
args['<code>'] = 'import numpy; numpy.array([0])'
args['<code>'] = 'import numpy; numpy.array([0]).'
main(args)

setup.cfg Normal file

@@ -0,0 +1,2 @@
[bdist_wheel]
universal=1


@@ -1,31 +1,40 @@
#!/usr/bin/env python
from __future__ import with_statement
try:
from setuptools import setup
except ImportError:
# Distribute is not actually required to install
from distutils.core import setup
from setuptools import setup, find_packages
import ast
import sys
__AUTHOR__ = 'David Halter'
__AUTHOR_EMAIL__ = 'davidhalter88@gmail.com'
readme = open('README.rst').read() + '\n\n' + open('CHANGELOG.rst').read()
# Get the version from within jedi. It's defined in exactly one place now.
with open('jedi/__init__.py') as f:
tree = ast.parse(f.read())
if sys.version_info > (3, 7):
version = tree.body[0].value.s
else:
version = tree.body[1].value.s
import jedi
readme = open('README.rst').read() + '\n\n' + open('CHANGELOG.rst').read()
with open('requirements.txt') as f:
install_requires = f.read().splitlines()
setup(name='jedi',
version=jedi.__version__,
version=version,
description='An autocompletion tool for Python that can be used for text editors.',
author=__AUTHOR__,
author_email=__AUTHOR_EMAIL__,
include_package_data=True,
maintainer=__AUTHOR__,
maintainer_email=__AUTHOR_EMAIL__,
url='https://github.com/davidhalter/jedi',
license='MIT',
keywords='python completion refactoring vim',
long_description=readme,
packages=['jedi', 'jedi.parser', 'jedi.evaluate', 'jedi.evaluate.compiled', 'jedi.api'],
packages=find_packages(exclude=['test']),
install_requires=install_requires,
extras_require={'dev': ['docopt']},
package_data={'jedi': ['evaluate/compiled/fake/*.pym']},
platforms=['any'],
classifiers=[
@@ -38,8 +47,11 @@ setup(name='jedi',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.2',
'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Topic :: Software Development :: Libraries :: Python Modules',
'Topic :: Text Editors :: Integrated Development Environments (IDE)',
'Topic :: Utilities',

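The fixed body index above depends on where the docstring lands in the parsed tree; an index-free sketch that scans the module body for the __version__ assignment instead — hypothetical, not what this setup.py does:

import ast

with open('jedi/__init__.py') as f:
    tree = ast.parse(f.read())

# Take the value of the `__version__ = '...'` assignment, wherever it is.
version = next(
    node.value.s
    for node in tree.body
    if isinstance(node, ast.Assign)
    and any(getattr(target, 'id', None) == '__version__'
            for target in node.targets)
)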
sith.py

@@ -118,7 +118,11 @@ class TestCase(object):
try:
with open(self.path) as f:
self.script = jedi.Script(f.read(), self.line, self.column, self.path)
self.objects = getattr(self.script, self.operation)()
kwargs = {}
if self.operation == 'goto_assignments':
kwargs['follow_imports'] = random.choice([False, True])
self.objects = getattr(self.script, self.operation)(**kwargs)
if print_result:
print("{path}: Line {line} column {column}".format(**self.__dict__))
self.show_location(self.line, self.column)
@@ -145,7 +149,7 @@ class TestCase(object):
# Three lines ought to be enough
lower = lineno - show if lineno - show > 0 else 0
prefix = ' |'
for i, line in enumerate(self.script.source.split('\n')[lower:lineno]):
for i, line in enumerate(self.script._source.split('\n')[lower:lineno]):
print(prefix, lower + i + 1, line)
print(prefix, ' ', ' ' * (column + len(str(lineno))), '^')
@@ -169,7 +173,7 @@ class TestCase(object):
self.show_location(completion.line, completion.column)
def show_errors(self):
print(self.traceback)
sys.stderr.write(self.traceback)
print(("Error with running Script(...).{operation}() with\n"
"\tpath: {path}\n"
"\tline: {line}\n"


@@ -0,0 +1,30 @@
# todo probably remove test_integration_keyword
def test_keyword_doc():
r = list(Script("or", 1, 1).goto_definitions())
assert len(r) == 1
assert len(r[0].doc) > 100
r = list(Script("asfdasfd", 1, 1).goto_definitions())
assert len(r) == 0
k = Script("fro").completions()[0]
imp_start = '\nThe ``import'
assert k.raw_doc.startswith(imp_start)
assert k.doc.startswith(imp_start)
def test_blablabla():
defs = Script("import").goto_definitions()
assert len(defs) == 1 and [1 for d in defs if d.doc]
# unrelated to #44
def test_operator_doc():
r = list(Script("a == b", 1, 3).goto_definitions())
assert len(r) == 1
assert len(r[0].doc) > 100
def test_lambda():
defs = Script('lambda x: x', column=0).goto_definitions()
assert [d.type for d in defs] == ['keyword']


@@ -39,13 +39,16 @@ b[8:]
#? list()
b[int():]
#? list()
b[:]
class _StrangeSlice():
def __getitem__(self, slice):
return slice
def __getitem__(self, sliced):
return sliced
# Should not result in an error, just because the slice itself is returned.
#? []
#? slice()
_StrangeSlice()[1:2]
@@ -125,6 +128,9 @@ f
# -----------------
# unnessecary braces
# -----------------
a = (1)
#? int()
a
#? int()
(1)
#? int()
@@ -204,9 +210,22 @@ dic2['asdf']
dic2[r'asdf']
#? int()
dic2[r'asdf']
#? int()
dic2[r'as' 'd' u'f']
#? int() str()
dic2['just_something']
# unpacking
a, b = dic2
#? str()
a
a, b = {1: 'x', 2.0: 1j}
#? int() float()
a
#? int() float()
b
def f():
""" github #83 """
r = {}
@@ -224,7 +243,7 @@ f()
d = dict({'a':''})
def y(a):
return a
#?
#?
y(**d)
# problem with more complicated casts
@@ -232,6 +251,11 @@ dic = {str(key): ''}
#? str()
dic['']
for x in {1: 3.0, '': 1j}:
#? int() str()
x
# -----------------
# with variable as index
# -----------------
@@ -287,6 +311,17 @@ for i in 0, 2:
#? int() str()
GetItemWithList()[i]
# With super
class SuperYeah(list):
def __getitem__(self, index):
return super()[index]
#?
SuperYeah([1])[0]
#?
SuperYeah()[0]
# -----------------
# conversions
# -----------------
@@ -297,12 +332,14 @@ list(a)[1]
#? int() str()
list(a)[0]
#?
#?
set(a)[0]
#? int() str()
list(set(a))[1]
#? int() str()
next(iter(set(a)))
#? int() str()
list(list(set(a)))[1]
# does not yet work, because the recursion catching is not good enough (catches # to much)
@@ -329,8 +366,6 @@ tuple(a)[1]
#? int() str()
tuple(list(set(a)))[1]
#? int()
tuple({1})[0]
#? int()
tuple((1,))[0]
@@ -338,17 +373,64 @@ tuple((1,))[0]
#? []
list().__iterable
# With a list comprehension.
for i in set(a for a in [1]):
#? int()
i
# -----------------
# Recursions
# Merged Arrays
# -----------------
def to_list(iterable):
return list(set(iterable))
for x in [1] + ['']:
#? int() str()
x
# -----------------
# For loops with attribute assignment.
# -----------------
def test_func():
x = 'asdf'
for x.something in [6,7,8]:
pass
#? str()
x
for x.something, b in [[6, 6.0]]:
pass
#? str()
x
def recursion1(foo):
return to_list(to_list(foo)) + recursion1(foo)
# python >= 2.7
# Set literals are not valid in 2.6.
#? int()
recursion1([1,2])[0]
tuple({1})[0]
# python >= 3.3
# -----------------
# PEP 3132 Extended Iterable Unpacking (star unpacking)
# -----------------
a, *b, c = [1, 'b', list, dict]
#? int()
a
#? str()
b
#? list
c
# Not valid syntax
a, *b, *c = [1, 'd', list]
#? int()
a
#? str()
b
#? list
c
lc = [x for a, *x in [(1, '', 1.0)]]
#?
lc[0][0]

test/completion/async_.py Normal file

@@ -0,0 +1,36 @@
"""
Tests for all async use cases.
Currently we're not supporting completion of them, but they should at least not
raise errors or return extremely strange results.
"""
async def x():
argh = await x()
#?
argh
return 2
#? int()
x()
a = await x()
#?
a
async def x2():
async with open('asdf') as f:
#? ['readlines']
f.readlines
class A():
@staticmethod
async def b(c=1, d=2):
return 1
#! 9 ['def b']
await A.b()
#! 11 ['param d=2']
await A.b(d=3)


@@ -23,18 +23,18 @@ a(0):.
# if/else/elif
# -----------------
if 1:
if (random.choice([0, 1])):
1
elif(3):
elif(random.choice([0, 1])):
a = 3
else:
a = ''
#? int() str()
a
def func():
if 1:
if random.choice([0, 1]):
1
elif(3):
elif(random.choice([0, 1])):
a = 3
else:
a = ''
@@ -51,7 +51,7 @@ func()
assert []
def focus_return():
#? list
#? list()
return []
@@ -105,6 +105,13 @@ for i in b:
#? float() str()
a[0]
for i in [1,2,3]:
#? int()
i
else:
i
# -----------------
# range()
# -----------------
@@ -112,115 +119,6 @@ for i in range(10):
#? int()
i
# -----------------
# list comprehensions
# -----------------
# basics:
a = ['' for a in [1]]
#? str()
a[0]
a = [a for a in [1]]
#? int()
a[0]
a = [a for a in 1,2]
#? int()
a[0]
a = [a for a,b in [(1,'')]]
#? int()
a[0]
arr = [1,'']
a = [a for a in arr]
#? int() str()
a[0]
a = [a if 1.0 else '' for a in [1] if [1.0]]
#? int() str()
a[0]
# name resolve should be correct
left, right = 'a', 'b'
left, right = [x for x in (left, right)]
#? str()
left
# with a dict literal
#? str()
[a for a in {1:'x'}][0]
##? str()
{a-1:b for a,b in {1:'a', 3:1.0}.items()}[0]
# list comprehensions should also work in combination with functions
def listen(arg):
for x in arg:
#? str()
x
listen(['' for x in [1]])
#? str()
([str for x in []])[0]
# -----------------
# nested list comprehensions
# -----------------
b = [a for arr in [[1]] for a in arr]
#? int()
b[0]
b = [a for arr in [[1]] if '' for a in arr if '']
#? int()
b[0]
b = [b for arr in [[[1.0]]] for a in arr for b in a]
#? float()
b[0]
# jedi issue #26
#? list()
a = [[int(v) for v in line.strip().split() if v] for line in ["123", "123", "123"] if line]
#? list()
a[0]
#? int()
a[0][0]
# -----------------
# generator comprehensions
# -----------------
left, right = (i for i in (1, ''))
#? int()
left
gen = (i for i in (1,))
#? int()
next(gen)
#?
gen[0]
gen = (a for arr in [[1.0]] for a in arr)
#? float()
next(gen)
#? int()
(i for i in (1,)).send()
# issues with different formats
left, right = (i for i in
('1', '2'))
#? str()
left
# -----------------
# ternary operator
# -----------------
@@ -238,21 +136,6 @@ ret(1)[0]
#? str() set()
ret()[0]
# -----------------
# with statements
# -----------------
with open('') as f:
#? ['closed']
f.closed
with open('') as f1, open('') as f2:
#? ['closed']
f1.closed
#? ['closed']
f2.closed
# -----------------
# global vars
# -----------------
@@ -264,6 +147,17 @@ def global_define():
#? int()
global_var_in_func
def funct1():
# From issue #610
global global_dict_var
global_dict_var = dict()
def funct2():
global global_dict_var
#? dict()
global_dict_var
# -----------------
# within docstrs
# -----------------
@@ -275,9 +169,19 @@ def a():
"""
pass
#?
#?
# str literals in comment """ upper
def completion_in_comment():
#? ['Exception']
# might fail because the comment is not a leaf: Exception
pass
some_word
#? ['Exception']
# Very simple comment completion: Exception
# Commment after it
# -----------------
# magic methods
# -----------------
@@ -314,6 +218,9 @@ if 1:
#? str()
xyz
#?
¹.
# -----------------
# exceptions
# -----------------
@@ -327,9 +234,10 @@ except ImportError as i_a:
try:
import math
except ImportError, i_b:
#? ['i_b']
# TODO check this only in Python2
##? ['i_b']
i_b
#? ImportError()
##? ImportError()
i_b
@@ -356,16 +264,6 @@ foo = \
#? int()
foo
# -----------------
# if `is not` checks
# -----------------
foo = ['a']
if foo is not None:
foo = ''.join(foo)
#? str()
foo
# -----------------
# module attributes
# -----------------
@@ -375,3 +273,23 @@ foo
__file__
#? ['__file__']
__file__
# -----------------
# with statements
# -----------------
with open('') as f:
#? ['closed']
f.closed
for line in f:
#? str()
line
# Nested with statements don't exist in Python 2.6.
# python >= 2.7
with open('') as f1, open('') as f2:
#? ['closed']
f1.closed
#? ['closed']
f2.closed


@@ -50,6 +50,8 @@ class TestClass(object):
self.var_local = 3
#? ['var_class', 'var_func', 'var_inst', 'var_local']
self.var_
#?
var_local
def ret(self, a1):
# should not know any class functions!
@@ -83,6 +85,9 @@ TestClass.var_local.
#? int()
TestClass().ret(1)
# Should not return int(), because we want the type before `.ret(1)`.
#? 11 TestClass()
TestClass().ret(1)
#? int()
inst.ret(1)
@@ -131,6 +136,8 @@ A().addition
A().addition = None
#? 8 int()
A(1).addition = None
#? 1 A
A(1).addition = None
a = A()
#? 8 int()
a.addition = None
@@ -155,6 +162,7 @@ class Mixin(SuperClass):
def method_mixin(self):
return int
#? 20 SuperClass
class SubClass(SuperClass):
class_sub = 3
def __init__(self):
@@ -191,6 +199,22 @@ Base.upper
#? ['upper']
Base().upper
# -----------------
# dynamic inheritance
# -----------------
class Angry(object):
def shout(self):
return 'THIS IS MALARKEY!'
def classgetter():
return Angry
class Dude(classgetter()):
def react(self):
#? ['shout']
self.s
# -----------------
# __call__
# -----------------
@@ -222,7 +246,10 @@ class V:
V(1).b()
#? int()
V(1).c()
#? []
#?
V(1).d()
# Only keywords should be possible to complete.
#? ['is', 'in', 'not', 'and', 'or', 'if']
V(1).d()
@@ -270,20 +297,6 @@ class A():
#? list()
A().b()
# -----------------
# recursions
# -----------------
def Recursion():
def recurse(self):
self.a = self.a
self.b = self.b.recurse()
#?
Recursion().a
#?
Recursion().b
# -----------------
# ducktyping
# -----------------
@@ -323,15 +336,15 @@ getattr(str(), 'upper')
getattr(str, 'upper')
# some strange getattr calls
#?
#?
getattr(str, 1)
#?
#?
getattr()
#?
#?
getattr(str)
#?
#?
getattr(getattr, 1)
#?
#?
getattr(str, [])
@@ -372,19 +385,46 @@ class PrivateVar():
self.__var = 1
#? int()
self.__var
#? ['__var']
self.__var
def __private_func(self):
return 1
def wrap_private(self):
return self.__private_func()
#? []
PrivateVar().__var
#?
#?
PrivateVar().__var
#? []
PrivateVar().__private_func
#? int()
PrivateVar().wrap_private()
class PrivateSub(PrivateVar):
def test(self):
#? []
self.__var
def wrap_private(self):
#? []
self.__var
#? []
PrivateSub().__var
# -----------------
# super
# -----------------
class Super(object):
a = 3
def return_sup(self):
return 1
class TestSuper(Super):
#?
#?
super()
def test(self):
#? Super()
@@ -395,9 +435,16 @@ class TestSuper(Super):
#? Super()
super()
def a():
#?
#?
super()
def return_sup(self):
#? int()
return super().return_sup()
#? int()
TestSuper().return_sup()
# -----------------
# if flow at class level
@@ -423,9 +470,66 @@ class TestX(object):
# -----------------
class A(object):
pass
a = 3
#? ['mro']
A.mro
#? []
A().mro
# -----------------
# mro resolution
# -----------------
class B(A()):
b = 3
#?
B.a
#?
B().a
#? int()
B.b
#? int()
B().b
# -----------------
# With import
# -----------------
from import_tree.classes import Config2, BaseClass
class Config(BaseClass):
"""#884"""
#? Config2()
Config.mode
#? int()
Config.mode2
# -----------------
# Nested class/def/class
# -----------------
class Foo(object):
a = 3
def create_class(self):
class X():
a = self.a
self.b = 3.0
return X
#? int()
Foo().create_class().a
#? float()
Foo().b
class Foo(object):
def comprehension_definition(self):
return [1 for self.b in [1]]
#? int()
Foo().b

Some files were not shown because too many files have changed in this diff