forked from VimPlug/jedi

2783 Commits

Author SHA1 Message Date
Dave Halter
3b7106ae71 Fix a typo 2020-07-17 21:56:13 +02:00
Dave Halter
74116fe2ea Prepare for 0.17.2 2020-07-17 21:39:36 +02:00
Dave Halter
1233caebdc Fix a Python 3.9 issue on travis 2020-07-17 16:13:23 +02:00
Dave Halter
7851dff915 Properly negate with Interpreter, fixes #1636 2020-07-17 15:57:32 +02:00
Dave Halter
e4987b3e7a Fix issues with generators, fixes #1624 2020-07-17 15:57:32 +02:00
Dave Halter
d1851c369c Introduce py__next__ to have more clear way to use __next__ 2020-07-17 15:57:32 +02:00
Dave Halter
d63fbd8624 Merge pull request #1633 from mrclary/mrclary-fix-wingkinl-patch-python-environ
Fix for #1630
2020-07-17 11:26:02 +02:00
Ryan Clary
b0f664ec94 * reflect default Popen behavior by inheriting os.environ
* without passing env_vars to create_environment, GeneralizedPopen behavior is the same as before the fix to issue #1540 (803c3cb271)
* env_vars allows explicit environment variables, per PR #1619 (f9183bbf64)
2020-07-16 19:04:33 -07:00
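A hedged sketch of how these environment-variable options are meant to be used from the public API; the interpreter path and variable values below are illustrative, and the env_vars keyword only exists in jedi versions that include this change:

```python
import jedi

# Illustrative interpreter path; point this at a real Python executable.
python_path = "/usr/bin/python3"

# By default the inference subprocess now inherits os.environ (normal Popen
# behavior). Passing env_vars supplies an explicit environment instead.
environment = jedi.create_environment(
    python_path,
    env_vars={"PYTHONPATH": "/opt/extra/libs"},
)
print(environment.get_sys_path()[:3])
```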
Dave Halter
9957374508 Fix dict completions for inherited dicts, fixes #1631 2020-07-14 17:50:12 +02:00
Dave Halter
7f3a7db7e6 Refactor Interpreter completions a bit 2020-07-12 22:26:57 +02:00
Dave Halter
3ffe8475b8 Make sure the interpreter completions work better in Jupyter Notebook, fixes #1628 2020-07-12 22:20:06 +02:00
Dave Halter
396d7df314 Fix an issue with interpreter completion, see also #1628 2020-07-12 22:02:00 +02:00
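The interpreter fixes above concern jedi's Interpreter class, which completes against live objects in a namespace (the mechanism REPLs and Jupyter kernels use). A minimal sketch with an illustrative namespace:

```python
import jedi

# In a REPL or notebook kernel this would be the user's live globals().
namespace = {"data": {"alpha": 1, "beta": 2}}

interpreter = jedi.Interpreter("data.ke", [namespace])
for completion in interpreter.complete():
    print(completion.name)  # e.g. "keys"
```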
Dave Halter
0c618a4456 Making sure to note that Python 2 will not be supported after 0.17.2 2020-07-12 21:22:36 +02:00
Dave Halter
c4c36d8e2e Mention in Changelog that 3.9 is now supported 2020-07-12 19:44:48 +02:00
Dave Halter
a3a9ae1a26 Add download badge 2020-06-27 15:15:34 +02:00
Dave Halter
e41b966283 Some test skips 2020-06-27 03:10:24 +02:00
Dave Halter
4188526e2d Revert some of the Decoratee changes 2020-06-27 02:18:31 +02:00
Dave Halter
804b0f0d06 Some more signature adjustments 2020-06-27 02:18:31 +02:00
Dave Halter
7b15f1736c Change Decoratee slightly 2020-06-27 02:18:31 +02:00
Dave Halter
4846848a1e Fix an issue with decoratee names 2020-06-27 02:18:31 +02:00
Dave Halter
344fef1e2f Add Project.path, fixes #1622 2020-06-27 02:18:31 +02:00
Dave Halter
bc23458164 Fix the handling of a signature with a decorator 2020-06-27 02:18:31 +02:00
Dave Halter
9a54e583e7 Fix docstrings for method decorators, fixes #1621 2020-06-27 02:18:31 +02:00
Dave Halter
59ccd2da93 Make partial use the __doc__ of its function, fixes #1621 2020-06-27 02:18:31 +02:00
Dave Halter
737c1e5792 Merge pull request #1614 from PeterJCLaw/fix-decorator-factory-passthrough
Support passing values through decorators from factories
2020-06-26 13:29:58 +02:00
Peter Law
f72adf0cbc Switch to much simpler solution for preserving unbound type vars
Co-Authored-By: Dave Halter <davidhalter88@gmail.com>
2020-06-26 11:23:35 +01:00
Peter Law
5184d0cb9c Support passing values through decorators from factories
This builds on the approach taken in https://github.com/davidhalter/jedi/pull/1613
but applies it to type vars themselves so that their type var
nature is preserved when a function returns Callable[[T], T] and
the T has an upper bound.
2020-06-26 11:22:19 +01:00
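A hedged illustration of the pattern this change targets: a decorator factory returning Callable[[T], T] with a bound type var, where the decorated function's original signature should survive inference:

```python
from typing import Any, Callable, TypeVar

T = TypeVar("T", bound=Callable[..., Any])

def register(name: str) -> Callable[[T], T]:
    # Identity decorator factory: the wrapped function is returned unchanged.
    def decorator(func: T) -> T:
        return func
    return decorator

@register("greeting")
def greet(who: str) -> str:
    return "hello " + who

# With the change above, inspecting `greet` should still yield the original
# signature (who: str) -> str instead of losing it behind Callable[[T], T].
```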
Peter Law
2d0258db1a Add tests for class-style decorator factories 2020-06-26 11:19:51 +01:00
Dave Halter
f5e6a25542 Merge pull request #1623 from mallamanis/master
Add __matmul__ to supported operators.
2020-06-26 12:10:00 +02:00
Miltos
bc5a8ddf87 Add __matmul__ to supported operators. 2020-06-25 17:35:07 +01:00
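Supporting __matmul__ means the @ operator participates in operator inference like the other arithmetic operators. An illustrative example:

```python
class Matrix:
    def __matmul__(self, other: "Matrix") -> "Matrix":
        # Illustrative only: `a @ b` dispatches to __matmul__.
        return Matrix()

a = Matrix()
b = Matrix()
# With __matmul__ in the supported-operator set, the type of `a @ b`
# can be inferred as Matrix, so attribute completion works on `result`.
result = a @ b
```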
Dave Halter
eabddb9698 Remove a print 2020-06-24 01:29:50 +02:00
Dave Halter
6fcdc44f3e Typeshed third party libraries should not be loaded if they don't actually exist in the environment, fixes #1620 2020-06-24 01:08:04 +02:00
Dave Halter
0d1a45ddc1 Add the env_vars change to CHANGELOG 2020-06-22 00:13:57 +02:00
Dave Halter
f9183bbf64 Merge pull request #1619 from mrclary/subprocess-env-vars
Provide option to pass explicit environment variables to Environment and CompiledSubprocess
2020-06-22 00:11:18 +02:00
Ryan Clary
7ec8454fc1 * Provide option to pass environment variables to Environment and CompiledSubprocess (subprocess.Popen)
* Extend this option to find_system_environments and get_system_environment without breaking the API
2020-06-21 08:08:32 -07:00
Dave Halter
a3410f124a Make sure that Callables are properly represented
See also comment of https://github.com/davidhalter/jedi/pull/1614#issuecomment-647054740
2020-06-21 01:31:58 +02:00
Peter Law
3488f6b61d Add Python 3.8 to the tox env list (#1618) 2020-06-20 16:18:32 +02:00
Dave Halter
3dad9cac6b Use Python 3 in the deployment script 2020-06-20 01:19:01 +02:00
Dave Halter
7aa13e35e9 Prepare release 0.17.1 2020-06-20 00:39:09 +02:00
Dave Halter
cf1b54cfe5 Make sure the current version doesn't install a parso version that is too new 2020-06-16 21:39:17 +02:00
Dave Halter
8669405a1c Small changelog improvement 2020-06-16 08:53:02 +02:00
Dave Halter
54775acc7a Mention Django Manager support for managers/querysets in changelog 2020-06-16 08:52:19 +02:00
Dave Halter
be184241fd Add SyntaxError.get_message 2020-06-16 08:51:54 +02:00
Dave Halter
61ad05d511 Mention 3.9 support better 2020-06-16 08:42:18 +02:00
Dave Halter
1872ad311b Fix decorator param completion 2020-06-15 00:34:55 +02:00
Dave Halter
364d33119c Merge branch 'django' 2020-06-14 22:24:31 +02:00
Dave Halter
1702a6340e Document a special case in Django a bit better 2020-06-14 22:23:08 +02:00
Dave Halter
4ab35cac7b Merge branch 'master' of github.com:davidhalter/jedi 2020-06-14 18:11:50 +02:00
Dave Halter
21f1df18b6 Fix some issues with sub class matching, fixes #1560 2020-06-14 18:10:00 +02:00
Dave Halter
8ea4c0589c Merge pull request #1613 from PeterJCLaw/fix-1425-1607-typevar-wrap-functions-and-classes
Handle passing functions and classes through a TypeVar
2020-06-14 18:01:48 +02:00
Dave Halter
1d1c0ec3af Better debugging output for is_sub_class_of 2020-06-14 17:55:53 +02:00
Peter Law
7e637c5e5e Python 2 compatible super() 2020-06-14 16:27:39 +01:00
Peter Law
4f11f20e1d Add a signature check for decorated functions
Specifically where the decorator is type annotated.
2020-06-14 16:24:42 +01:00
Dave Halter
674e0114a5 Ignore runtime_checkable, because we don't really need it 2020-06-14 14:14:47 +02:00
Peter Law
1f082b69d2 Handle passing functions and classes through a TypeVar
This fixes #1425 and #1607 by persisting the original underlying
function or class when we process a TypeVar they are passed into.
2020-06-13 23:28:20 +01:00
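A hedged illustration of the pattern described above, using a class passed through a plain TypeVar-annotated identity decorator:

```python
from typing import Type, TypeVar

T = TypeVar("T")

def register(cls: Type[T]) -> Type[T]:
    # Illustrative identity decorator: the class goes through a TypeVar.
    return cls

@register
class Config:
    def reload(self) -> None:
        ...

# Persisting the underlying class means Config (and its instances) keep
# their members during inference, e.g. Config().reload still completes.
```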
Dave Halter
9de5ab2037 Make it possible to complete on QuerySet methods, fixes #1587 2020-06-13 20:55:37 +02:00
Dave Halter
3415ccbb73 Add support for Django signatures, fixes parts of #1587 2020-06-13 16:18:47 +02:00
Dave Halter
b165596a6e Avoid doing a call twice for no reason 2020-06-13 14:25:52 +02:00
Dave Halter
089a4713e3 Fix a small extract_variable issue, fixes #1611 2020-06-13 01:35:58 +02:00
Dave Halter
365d725bc1 Fix a small issue that was inadvertently changed 2020-06-13 00:26:12 +02:00
Dave Halter
7586900fd9 Merge branch 'master' into django 2020-06-12 20:04:28 +02:00
Dave Halter
c4de9ae2d3 Use a customized django-stubs 2020-06-12 19:30:49 +02:00
Dave Halter
3a0a484fcb Try to get the tests for Python 3.9 passing, fixes #1608 2020-06-10 09:54:32 +02:00
Dave Halter
df7dd026d2 Make it possible to use inheritance on generics without always specifying type vars, see also discussion in #1593 2020-06-10 09:54:32 +02:00
Dave Halter
a2108de2c0 Use py__get__ for Django Model.objects
This includes the fix in https://github.com/typeddjango/django-stubs/pull/394
2020-06-09 23:26:43 +02:00
Dave Halter
6d0d75c7d9 @publish_method should provide arguments 2020-06-09 22:37:50 +02:00
Dave Halter
d4f0424ddc Move py__getitem__ from Class to ClassMixin 2020-06-08 00:58:38 +02:00
Dave Halter
cd6113c2c3 Move with_generics and define_generics to ClassMixin 2020-06-08 00:11:45 +02:00
Dave Halter
c9a21adc5f Make sure py__get__ is applied properly for Django metaclasses 2020-06-07 15:01:12 +02:00
Dave Halter
9adcf3d233 Make sure meta class filters can distinguish between classes and instances 2020-06-07 14:54:26 +02:00
Dave Halter
34cc8e9ad7 Properly handle __get__ in properties/partials 2020-06-07 14:18:45 +02:00
yuan
cf923ec6de Update MANIFEST.in 2020-06-07 12:01:56 +02:00
Dave Halter
105c097fea Merge branch 'django-custom-object-manager' of https://github.com/PeterJCLaw/jedi into django 2020-06-06 01:24:24 +02:00
Dave Halter
574b790296 Make it possible to use inheritance on generics without always specifying type vars, see also discussion in #1593 2020-06-06 01:23:14 +02:00
Dave Halter
3870253b56 Make sure that scopes can only be exact values, see #1590 2020-06-05 23:04:39 +02:00
Dave Halter
21a380f7cb Merge pull request #1590 from muffinmad/references-scope
Get references in the current module only
2020-06-05 19:21:34 +02:00
muffinmad
404661f361 Replace Script by timedelta in the test 2020-06-05 17:44:59 +03:00
muffinmad
1e58f9a15c Test both named params are found 2020-06-05 15:28:22 +03:00
Dave Halter
24236be3ce Fix a small issue with doctest completions, fixes #1585 2020-06-05 13:35:36 +02:00
muffinmad
8705149619 Use pytest.mark.parametrize 2020-06-03 17:20:23 +03:00
muffinmad
782dedd439 Get references in the current module only 2020-06-03 16:35:28 +03:00
muffinmad
f9bbccbc13 Pycodestyle configuration section moved to setup.cfg 2020-06-03 15:24:37 +03:00
Michał Górny
cecdaa98ae Exclude more Linux constants in test_import
The list of differences has grown again in Python 3.9. Instead of
increasing the allowed count let's filter out more Linux-specific
constants.  This probably makes it possible to reduce allowed
len(difference) too.
2020-06-02 23:04:50 +02:00
Dave Halter
9980f760b1 Merge pull request #1601 from yuan-xy/patch_3
add test case to fix code example in doc
2020-05-31 11:14:58 +02:00
yuan
5946a5cd8c Refactoring about checking \r\n (#1603) 2020-05-31 11:13:30 +02:00
yuan_xy
32687474db add test case to fix code example in doc 2020-05-31 11:00:15 +08:00
yuan
98a8b6c76c fix typo (#1602) 2020-05-30 12:04:15 +02:00
yuan
ca08365a81 fix typo 2020-05-28 21:29:34 +02:00
Dave Halter
8239328e42 Merge pull request #1599 from isidentical/py38-plus-setuppy
Upgrade setup.py's version parsing for 3.8+
2020-05-28 21:18:51 +02:00
Batuhan Taskaya
b9131c6070 Upgrade setup.py's version parsing for 3.8+ 2020-05-28 15:26:48 +03:00
muffinmad
1c342d36e5 Don't goto while building found_names for the current file
But goto for all non_matching_reference_maps items later
2020-05-24 22:58:04 +03:00
Dave Halter
2d672d2f28 Merge pull request #1595 from PeterJCLaw/operator-not-in
Explicitly handle `a not in b` operator comparison
2020-05-23 14:48:40 +02:00
Peter Law
c62cbd6654 Explicitly handle a not in b operator comparison
This avoids a `KeyError` from operator_to_magic_method lookup for
this case. Jedi probably could check for `__contains__` here, however
as it doesn't do so for `in` checks I'm following that lead for now.

Fixes https://github.com/davidhalter/jedi/issues/1594.
2020-05-23 12:49:53 +01:00
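A minimal sketch (not jedi's actual table or names) of the kind of operator-to-magic-method lookup the commit describes, with `in`/`not in` handled explicitly so no KeyError is raised:

```python
from typing import Optional

# Hypothetical mapping, for illustration only.
OPERATOR_TO_MAGIC_METHOD = {
    "==": "__eq__",
    "!=": "__ne__",
    "<": "__lt__",
    ">": "__gt__",
}

def magic_method_for(operator: str) -> Optional[str]:
    if operator in ("in", "not in"):
        # Membership tests would map to __contains__; per the commit above
        # they are deliberately left unresolved for now, mirroring `in`.
        return None
    return OPERATOR_TO_MAGIC_METHOD.get(operator)

print(magic_method_for("not in"))  # None instead of a KeyError
```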
Peter Law
c36904d983 Support custom managers in Django models
For the moment this support is limited to just Model.objects
replacements and does not use the custom manager for ForeignKey
related managers.
2020-05-22 12:33:03 +01:00
Peter Law
669b70b2cd Validate instance methods on Django models 2020-05-22 12:32:14 +01:00
muffinmad
7459d67fee Test local references in some other cases 2020-05-22 13:24:39 +03:00
muffinmad
741097827d Get references in the current module only 2020-05-21 19:51:13 +03:00
muffinmad
4ceca54138 Specify max-line-length for pycodestyle
According to CONTRIBUTING.md it can be 100
2020-05-21 17:31:44 +03:00
Christopher Cave-Ayland
860d5e8889 Import FileNotFoundError from jedi._compatibility 2020-05-21 11:45:52 +02:00
Dave Halter
64d131060c Merge pull request #1586 from PeterJCLaw/django-more-fields
Support more Django model fields
2020-05-19 00:39:27 +02:00
Peter Law
b7cdec427e Support OneToOneFields 2020-05-18 22:19:20 +01:00
Peter Law
df66b35444 Support UUIDFields 2020-05-18 22:11:31 +01:00
Peter Law
cd9f2f31ea Support URLFields 2020-05-18 22:10:48 +01:00
Peter Law
b54d7433c7 Support GenericIPAddressFields 2020-05-18 22:10:09 +01:00
Dave Halter
855fb5a936 Fix potential AttributeError in get_definition_start_position/get_definition_end_position, see #1584 2020-05-18 19:21:04 +02:00
Dave Halter
8fdf16b316 Fix an error of get_definition_end_pos, see #1584 2020-05-18 01:44:51 +02:00
Dave Halter
fa6194c0a9 Refactor test_definition_start_end_position to use parametrize 2020-05-18 01:41:07 +02:00
Dave Halter
2d17b81313 definition_end_position -> get_definition_end_position, same for start, see #1584 2020-05-18 01:18:22 +02:00
Dave Halter
cb1730f628 Merge pull request #1584 from pappasam/get_definition_position
Add BaseName.definition_[start,end]_position
2020-05-18 01:14:00 +02:00
Sam Roeca
d848047012 Add unit tests for definition_[start,end]_position 2020-05-17 11:48:28 -04:00
Sam Roeca
716beae455 Add BaseName.definition_[start,end]_position
Provides two public (property) methods getting the (row, column) of the
start / end of the definition range. Rows start with 1, columns start
with 0.

:rtype: Tuple[int, int]
2020-05-16 15:08:36 -04:00
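A short usage sketch of these positions (note that a later commit above renames them to get_definition_start_position / get_definition_end_position); the exact values depend on the code analyzed:

```python
import jedi

code = "def add(a, b):\n    return a + b\n"
name = jedi.Script(code).get_names()[0]

# Rows are 1-based and columns 0-based, as described above.
print(name.get_definition_start_position())  # e.g. (1, 0) for `def add...`
print(name.get_definition_end_position())    # end of the function body
```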
Dave Halter
d16355fcf2 Fix tests in Python 2 2020-05-16 17:47:33 +02:00
Dave Halter
cd3d40a3b8 Fix a small issue 2020-05-16 15:42:15 +02:00
Dave Halter
b3fc10a6e4 Magic methods fixes for reverse methods 2020-05-16 15:39:48 +02:00
Dave Halter
09dbbc6361 lists and tuples should not be added 2020-05-16 15:10:47 +02:00
Dave Halter
f5ad561c51 Use __truediv__ instead of __div__
This ignores Python 2, but that shouldn't be an issue, since we are going to drop it anyway.
2020-05-16 14:57:57 +02:00
Dave Halter
0db50b521d Fix an issue with Tuple generics 2020-05-16 14:55:59 +02:00
Dave Halter
9942a3d44c A few class renames 2020-05-16 14:35:15 +02:00
Dave Halter
47637c147c Better debugging 2020-05-16 14:31:31 +02:00
Dave Halter
2fb072532a Skip another non-important Python 2 test that fails on Windows 2020-05-16 01:25:15 +02:00
Dave Halter
70aa7fc917 Fix a namespace issue when getting references 2020-05-16 01:05:39 +02:00
Dave Halter
384b2ad014 Fix an issue about dict completions 2020-05-16 00:46:46 +02:00
Dave Halter
f2975f9a05 Fix a None issue 2020-05-16 00:27:14 +02:00
Dave Halter
41c146a6f3 Implement magic method return values, fixes #1577 2020-05-15 23:53:44 +02:00
Dave Halter
be594f1498 Remove an unused cache method 2020-05-15 23:53:44 +02:00
Dave Halter
99eba4e0eb Undefined api types should not return a random value 2020-05-15 23:53:44 +02:00
Peter Law
43806f8668 Add support for generic optional parameters (#1559)
* Add support for generic optional parameters

* Tests for passing non-optional arguments to optional parameters

* Remove now-redundant is_class_value handling

This parameter has since been removed from infer_type_vars methods,
much simplifying the code.
2020-05-15 19:56:03 +02:00
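A hedged example of the annotation pattern this adds support for: a type var flowing through Optional in a parameter:

```python
from typing import Optional, TypeVar

T = TypeVar("T")

def unwrap(value: Optional[T], default: T) -> T:
    return default if value is None else value

# With generic Optional parameters supported, T binds to str here,
# so the result offers str completions.
result = unwrap(None, "fallback")
```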
Dave Halter
d4aa583e16 Fix inline case where a name was removed without the code being used, fixes #1582 2020-05-14 23:08:37 +02:00
Dave Halter
381fbeda6a Make the diff nicer if there is no ending newline, fixes #1581 2020-05-14 00:20:20 +02:00
Dave Halter
3104443212 Merge pull request #1579 from muffinmad/pseudotreenameclass
Return 'class' as _PseudoTreeNameClass.type (fix #1578)
2020-05-13 18:59:05 +02:00
muffinmad
16e2b86bcf Fix test 2020-05-13 01:18:47 +03:00
Dave Halter
0caee73975 Merge pull request #1572 from davidhalter/classvar
Remove is_class_value from infer_type_vars
2020-05-12 23:56:03 +02:00
Dave Halter
7f25e28d89 Fix tuple issue in 3.6 2020-05-12 23:33:06 +02:00
muffinmad
ce8473ee63 Add author's name to AUTHORS.txt 2020-05-12 23:34:28 +03:00
muffinmad
7ccee7d8fc Add test _PseudoTreeNameClass.type == 'class' 2020-05-12 23:28:46 +03:00
muffinmad
7cd89cff6e Return 'class' as BaseName.type of _PseudoTreeNameClass (fix #1578) 2020-05-12 23:14:32 +03:00
Vlad Serebrennikov
e1c0d2c501 Reduce noise in signatures of compiled params (#1564)
* Remove "typing." prefix from compiled signature param

* Don't print default "None" for Optional params

* Don't remove 'typing.' prefix if symbol doesn't come from typing module

* Revert "Don't print default "None" for Optional params"

This reverts commit 8db334d9bb.

* Make sure "typing." doesn't appear in the middle

* Make sure only "typing." prefix is removed and not it's entries in the middle

* Use inspect.formatannotation() to create an annotation string

* Update AUTHORS.txt

* Add test for compiled param annotation string

* Replace Optional in test with other typing facilities

in order for test to be forward-compatible with 3.9

* Add an empty string fallback for Python 2

* Move _annotation_to_str back to original position
2020-05-10 13:33:36 +02:00
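A simplified sketch of the idea behind this PR, not jedi's actual code: stringify the annotation of a compiled parameter and strip "typing." only when it is genuinely a prefix:

```python
import typing

def annotation_to_string(annotation) -> str:
    # Illustrative only: render an annotation without a leading "typing."
    # prefix, while leaving "typing." alone when it appears mid-string.
    text = annotation.__name__ if isinstance(annotation, type) else repr(annotation)
    prefix = "typing."
    return text[len(prefix):] if text.startswith(prefix) else text

print(annotation_to_string(typing.Optional[int]))  # 'Optional[int]'
print(annotation_to_string(int))                   # 'int'
```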
Dave Halter
be7a1346ec Fix #1573 again; a tree_node can be None 2020-05-10 13:29:58 +02:00
Dave Halter
6dbc5e783e Fix argument clinic unpacking, remove dynamic bullshit 2020-05-10 13:27:20 +02:00
Max Mäusezahl
1115cbd94d This fixes two issues with the caching on Windows:
* the cache directory should really be %LOCALAPPDATA%
* ~ is not a meaningful directory on Windows. It should really be
  os.path.expanduser('~'). To be honest it is probably always safe to
  assume that os.getenv('LOCALAPPDATA') evaluates to something sensible
  on any Windows system that hasn't been tampered with.
2020-05-10 11:46:29 +02:00
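A simplified sketch of the cache-directory logic described above; the directory names are illustrative, not jedi's exact layout:

```python
import os
import sys

def default_cache_directory() -> str:
    if sys.platform == "win32":
        # Prefer %LOCALAPPDATA%; never use a literal "~", which is not a
        # meaningful directory on Windows.
        local_app_data = os.getenv("LOCALAPPDATA")
        if local_app_data:
            return os.path.join(local_app_data, "Jedi")
        return os.path.join(os.path.expanduser("~"), "AppData", "Local", "Jedi")
    return os.path.join(os.path.expanduser("~"), ".cache", "jedi")
```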
Dave Halter
bf4ec2282f Fix getattr completions on very weird cases, fixes #1573 2020-05-10 11:37:58 +02:00
Dave Halter
e6e43413ff Any -> AnyClass 2020-05-10 03:17:52 +02:00
Dave Halter
e9a0c01af8 TypedDictBase -> TypedDictClass 2020-05-10 03:17:07 +02:00
Dave Halter
d0270b5e59 DefineGenericBase -> DefineGenericBaseClass 2020-05-10 03:07:40 +02:00
Dave Halter
b57654aed3 Rename some classes to make it clearer that they are classes 2020-05-10 03:04:52 +02:00
Dave Halter
78ad06612e Remove an unused import 2020-05-10 03:00:47 +02:00
Dave Halter
434866558a Instances should not need get_generics 2020-05-10 02:59:54 +02:00
Dave Halter
42963a0e03 By having get_annotated_class_object for Tuple/Callable, some details are not necessary anymore 2020-05-10 02:52:42 +02:00
Dave Halter
c2d1da09cb Make sure that Tuple/Callable instances have the correct py__class__ 2020-05-10 01:05:55 +02:00
Dave Halter
f362932ec5 Return a more correct py__class__ for typing base objects 2020-05-09 16:28:05 +02:00
Dave Halter
3b48c76e4a Make a function private 2020-05-09 00:49:37 +02:00
Dave Halter
d56f607f35 Reinstate an if that was deleted by mistake 2020-05-09 00:13:18 +02:00
Dave Halter
39a2cd8aa2 Fix a potential issue with tuples 2020-05-08 18:07:15 +02:00
Dave Halter
14ca8e6499 Add a comment 2020-05-08 18:00:35 +02:00
Dave Halter
2a227dcc7a Remove is_class_value from infer_type_vars 2020-05-08 17:49:02 +02:00
Dave Halter
12090ce74b Fix tests 2020-05-08 15:18:23 +02:00
Dave Halter
25973554e2 Remove the common folder and move it to a common file 2020-05-08 13:23:56 +02:00
Dave Halter
138c22afe9 Remove common.value 2020-05-08 13:18:01 +02:00
Dave Halter
d19535340c Move infer_type_vars to base_value 2020-05-08 13:13:26 +02:00
Dave Halter
5fcbed721d Merge pull request #1554 from PeterJCLaw/fix-nested-tuple-argument
Fix handling of nested tuple arguments
2020-05-08 12:49:44 +02:00
Sam Roeca
812776b9ce Add .venv to _IGNORE_FOLDERS
".venv" is a popular virtual environment folder name; project.search
gets really mucked up when it isn't ignored.
2020-05-05 21:15:18 +02:00
Dave Halter
d606ea6759 Correct a test 2020-04-27 09:59:38 +02:00
Dave Halter
c314e1c36e Speed up signature fetching for MixedName, see discussion in #1422 2020-04-27 01:53:42 +02:00
Dave Halter
8c7a883abd Test that the actual signature of a function is used in Interpreter 2020-04-27 01:47:06 +02:00
Peter Law
55facaaf3d Switch back to using execute_annotation
get_annotated_class_object is (sort-of) the inverse of execute_annotation,
so adding a get_annotated_class_object to implement execute_annotation
specifically for Tuples didn't make much sense.
2020-04-26 14:39:39 +01:00
Peter Law
17ca3a620f Merge branch 'master' into fix-nested-tuple-argument 2020-04-26 13:56:14 +01:00
Dave Halter
9836a1b347 Very small refactoring 2020-04-26 12:47:44 +02:00
Peter Law
8c3fd99009 Tell sith that goto_assignments is now goto 2020-04-26 02:15:53 +02:00
Dave Halter
4d9cb083ac Merge pull request #1561 from PeterJCLaw/newtype-pyclass
Support accessing the py__class__ of a NewType
2020-04-26 02:15:17 +02:00
Peter Law
612fd23777 Support accessing the py__class__ of a NewType
The test here is a bit contrived, the actual place I found this
was in using a NewType as a type within a NamedTuple. However due
to https://github.com/davidhalter/jedi/issues/1560 that currently
also fails for other reasons. This still feels useful to fix on
its own though.
2020-04-26 00:59:07 +01:00
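The scenario mentioned above, sketched out: a NewType used as a field type inside a NamedTuple, where inference needs to reach the class behind the NewType:

```python
from typing import NamedTuple, NewType

UserId = NewType("UserId", int)

class User(NamedTuple):
    id: UserId
    name: str

user = User(UserId(1), "ada")
# Resolving the py__class__ of UserId (effectively int) is what allows
# attribute completion such as user.id.bit_length to work.
```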
Dave Halter
dca505c884 Merge pull request #1553 from PeterJCLaw/generic-tuple-return
Fix construction of nested generic tuple return types
2020-04-26 01:28:51 +02:00
Dave Halter
7fd5c8af8f Allow files for get_default_project, fixes #1552 2020-04-26 00:33:10 +02:00
Dave Halter
97fb95ec0c Don't display unnecessary help, fixes #1557 2020-04-26 00:21:01 +02:00
Dave Halter
e6d8a955d2 Pin Django in a different way so tests can work everywhere 2020-04-25 23:25:51 +02:00
Dave Halter
a3a147f028 Make sure that Django's values/values_list is tested (though not implemented) 2020-04-25 22:55:29 +02:00
Dave Halter
c761dded35 Properly implement inheritance for Django models 2020-04-25 22:55:29 +02:00
Dave Halter
92623232c3 Make sure Django User inference works 2020-04-25 22:55:29 +02:00
Dave Halter
9b58bf6199 Pin the Django test dependency 2020-04-25 22:55:29 +02:00
Dave Halter
9d5eb28523 Mention django stubs support in README 2020-04-25 22:55:29 +02:00
Dave Halter
857e0fc00e Include Django stubs license in Jedi package 2020-04-25 22:55:29 +02:00
Dave Halter
bf8b58aeeb Some more django query tests 2020-04-25 22:55:29 +02:00
Dave Halter
f6803bce2c Infer many to many fields 2020-04-25 22:55:29 +02:00
Dave Halter
6bff30fbbb Include Django stubs as a third party repo 2020-04-25 22:55:29 +02:00
Dave Halter
6d927d502e Make sure that inferring the Django User model works 2020-04-25 22:55:29 +02:00
Dave Halter
2e1284f044 Fix a recursion error issue 2020-04-25 22:55:29 +02:00
Dave Halter
11eb4f8fde Remove unused imports 2020-04-25 22:55:29 +02:00
Peter Law
c19c13e2c6 Apply tuple-only filtering to apply more broadly 2020-04-24 16:44:25 +01:00
Peter Law
891383f8dc Use get_annotated_class_object over execute_annotation 2020-04-24 16:32:00 +01:00
Peter Law
ce1ac38cde Implement get_annotated_class_object for Tuples 2020-04-24 16:25:19 +01:00
Peter Law
df951733cd Rename variable to placate mypy 2020-04-24 12:45:05 +01:00
Josh Bax
912fe68069 Fix typos in api.classes docstrings 2020-04-24 10:34:46 +02:00
Josh Bax
be82d5ff36 Remove a redundant check from Name.desc_with_module 2020-04-24 10:34:46 +02:00
Dave Halter
784f9ff081 Actually fix #1556, forgot to add this in 94d374c9ce 2020-04-23 10:10:58 +02:00
Dave Halter
0f39135ae5 Start changelog for 0.17.1 2020-04-22 23:14:58 +02:00
Dave Halter
94d374c9ce Fix a small issue with the help method, fixes #1556 2020-04-22 17:32:40 +02:00
Dave Halter
f3152a8c2b Django is not supported for Python 2 2020-04-22 09:44:43 +02:00
Dave Halter
f3eaa418bb Work with a NameWrapper, so Django goto works better 2020-04-22 09:32:39 +02:00
Dave Halter
f9176578ea Fix another django modelfield issue 2020-04-22 00:54:43 +02:00
Dave Halter
17eeb73767 Some nitpicks 2020-04-22 00:41:59 +02:00
Dave Halter
7756792bba Fix another issue with foreign keys 2020-04-22 00:33:51 +02:00
Dave Halter
ba4e3393d3 Fix ForeignKey issues with invalid values 2020-04-22 00:27:06 +02:00
Dave Halter
1a89fafce4 Some other small refactorings 2020-04-22 00:15:35 +02:00
Dave Halter
df307b8eda Refactor a few things for django 2020-04-22 00:05:35 +02:00
Dave Halter
d96887b102 Remove old third party django tests 2020-04-21 23:43:59 +02:00
Dave Halter
89ad9a500b Use debug instead of print for Django and fix indentation, see #1467 2020-04-21 23:41:54 +02:00
Dave Halter
086728365c Make Django test optional 2020-04-21 23:36:00 +02:00
Dave Halter
f9e36943d4 Merge branch 'master' of https://github.com/ANtlord/jedi 2020-04-21 23:22:40 +02:00
ANtlord
b5c1c6d414 Django plugin test of ManyToManyField is added and marked for future implementation. 2020-04-21 10:56:22 +03:00
ANtlord
df76b2462e Review corrections. 2020-04-20 10:31:03 +03:00
Peter Law
343a10d491 Drop redundant blank line 2020-04-19 14:42:57 +01:00
Peter Law
72c52f5f15 Add type match guard 2020-04-19 14:29:44 +01:00
Peter Law
cfa01d3ac5 Add handling of nested generic tuples 2020-04-19 14:10:03 +01:00
Peter Law
f8e7447d35 Add handling of nested generic callables
Previously tests for these were passing somewhat by accident,
however this commit's parent adds a case which showed that the
handling was missing.

Note that this also relies on the recent fix for nested tuples
which changed the `isinstance` check in `define_generics`.
2020-04-19 13:27:06 +01:00
Peter Law
2ac806e39f Add test which demonstrates incomplete generic Callable handling 2020-04-19 13:25:02 +01:00
Peter Law
7ebbf9da44 Make this test case obey typing rules in Python
Unfortunately I can't recall exactly what it was that this test
case was trying to validate, however on a second look it turns
out that it was working by accident and did not represent a valid
use of generic type vars in Python (which cannot be used completely
unbound as this was).
2020-04-18 22:59:20 +01:00
Peter Law
1c4a2edbdb Fix construction of nested generic tuple return types
Unfortunately this appears to show up a separate bug.
2020-04-18 19:43:47 +01:00
ANtlord
1d3082249f Debug information corrections. 2020-04-18 18:51:12 +03:00
ANtlord
09950233e7 Django is designated in test dependencies. 2020-04-18 18:36:04 +03:00
ANtlord
d48575c8c5 Simple tests of Django plugin are added. 2020-04-18 16:13:48 +03:00
ANtlord
f8a0cf76c8 Merge branch 'master' of github.com:davidhalter/jedi 2020-04-18 14:25:24 +03:00
Dave Halter
851e0d59f0 Better developer tools 2020-04-18 12:19:17 +02:00
Dave Halter
10b2de2c3f Make the linter completely private 2020-04-18 11:23:25 +02:00
Dave Halter
3718d62e24 Make sure that calling Jedi with a random argument in CLI results in errors 2020-04-18 11:23:12 +02:00
Dave Halter
a793dd7c91 Fix a small _get_annotated_class_object issue, fixes #1550 2020-04-18 00:36:32 +02:00
Dave Halter
0850b86456 Also don't complete keywords if kwargs only are allowed, see #1541 2020-04-17 23:51:40 +02:00
Dave Halter
f07dee3564 Completion: Don't suggest variables when only kwargs are legal, fixes #1541 2020-04-17 22:59:26 +02:00
xu0o0
f871f5e726 fix #1548 2020-04-17 19:24:05 +02:00
Ryan Clary
803c3cb271 * Use an explicit environment for subprocess to ensure that existing environment variables are not inherited. This ensures more reliable results, see issue #1540.
* Attempt to send SYSTEMROOT variable to Windows subprocess
2020-04-16 00:52:44 +02:00
Michał Górny
7ff76bb7d0 Sort test_project::test_search results to fix failures
Fixes #1542
2020-04-15 17:21:40 +02:00
Michał Górny
e7feeef64e Inc difference limit in TestSetupReadline::test_import for py3.8
Python 3.8 on Linux has 21 differences which exceed the current limit.
Increase it to 22.
2020-04-15 10:09:36 +02:00
Dave Halter
8aaa8e0044 Project._python_path -> Project.environment_path 2020-04-14 23:14:07 +02:00
Dave Halter
cbfbe7c08d Set the release date in Changelog 2020-04-14 22:59:17 +02:00
Dave Halter
81926a785c Some README improvements 2020-04-14 00:06:32 +02:00
Dave Halter
9ccb596f93 Extract now properly validates line/column and those two params are required 2020-04-13 23:15:42 +02:00
Dave Halter
25db8de0da Some minor CHANGELOG changes 2020-04-13 22:40:06 +02:00
Dave Halter
24dffe4226 Upgrade parso version 2020-04-13 22:33:51 +02:00
Dave Halter
c3fc129695 Fix a small issue 2020-04-12 00:54:31 +02:00
Dave Halter
02c3d651bd Some more code quality fixes 2020-04-11 02:23:23 +02:00
Dave Halter
bdd4deedc1 Some code cleanups 2020-04-11 02:11:52 +02:00
Dave Halter
9d55194b92 Don't reuse a variable 2020-04-11 01:40:41 +02:00
Dave Halter
102f83ea85 Remove unreachable code 2020-04-11 01:39:04 +02:00
Dave Halter
22902f6dba _convert_names kwargs are not needed 2020-04-11 01:37:34 +02:00
Dave Halter
5a3565785c Add pyproject.toml to the list of files to search for projects 2020-04-11 00:51:28 +02:00
Dave Halter
0f2a7215bb Use the interpreter environment if the executable is not available, fixes #1531 2020-04-02 20:59:35 +02:00
Dave Halter
61e9371849 Fix a potential AttributeError 2020-04-02 00:32:50 +02:00
Dave Halter
dde40b3a71 Add a comment to clarify the Type case 2020-04-02 00:23:38 +02:00
Dave Halter
ebb2786748 Avoid AttributeErrors for generics when a module is passed 2020-04-01 01:59:13 +02:00
Dave Halter
28f256d2a6 Merge branch 'improve-type-annotation-inference-refactors' of https://github.com/PeterJCLaw/jedi 2020-04-01 00:54:25 +02:00
Dave Halter
883f5a3824 Merge branch 'improve-type-annotation-inference' of https://github.com/PeterJCLaw/jedi 2020-04-01 00:54:13 +02:00
Dave Halter
ac33d5dea3 If branch inference should not trigger for things we don't know, fixes #1530 2020-03-31 22:46:31 +02:00
Dave Halter
604029568c Fix string completion issue, fixes #1528 2020-03-26 15:47:27 +01:00
Peter Law
eac5ac8426 Update comment after refactor moved code 2020-03-25 22:35:12 +00:00
Peter Law
7e9ad9e733 Fix typo 2020-03-25 22:32:53 +00:00
Peter Law
e2090772f3 Push tuple handling onto Tuple class
This resolves a TODO to avoid using a private method
2020-03-22 16:04:39 +00:00
Peter Law
525b88e9f1 Simplify early-exit code by having it once 2020-03-22 15:49:31 +00:00
Peter Law
3c90a84f68 Extract common get_generics() calls
These no longer need to be guarded by the conditions now that we
know these types are generic anyway.
2020-03-22 15:47:46 +00:00
Peter Law
ea33db388b Remove dict merging where it doesn't do anything
These cases are all at the end of a single-path branch that ends
up "merging" against an empty mapping which is then returned
unchanged.
2020-03-22 15:45:18 +00:00
Peter Law
f68d65ed59 Push much looping and merging of inferring type vars into ValueSet 2020-03-22 15:29:11 +00:00
Peter Law
3c7621049c Extract annotation inference onto annotation classes
This removes the _infer_type_vars util in favour of a polymorphic
implementation, removing the conditional checks on the type of
the annotation instance.

While for the moment this creates some circular imports, further
refactoring to follow should be able to remove those.
2020-03-22 15:29:11 +00:00
Peter Law
dd60a8a4c9 Extract nested function which is going to be used elsewhere 2020-03-22 15:20:58 +00:00
Peter Law
5bd6a9c164 Rename function which is going to be used elsewhere 2020-03-22 15:18:41 +00:00
Peter Law
c743e5d9f3 Push type check into helper 2020-03-22 15:14:01 +00:00
Peter Law
5ca69458d4 Add testing for mismatch cases
This should help catch any errors in our handling of invalid cases.
While some of these produce outputs which aren't correct, what
we're checking here is that we don't _error_ while producing that
output.

Also fix a case in which this showed up.
2020-03-22 15:10:43 +00:00
Dave Halter
bb9731b561 Fix wrong types for iterate, fixes #1524 2020-03-21 18:09:03 +01:00
Dave Halter
a2f4d1bbe7 Fix stub conversion for Decoratee, so docstrings work, see #117 2020-03-21 17:23:27 +01:00
Dave Halter
88c13639bc Remove unused environment param 2020-03-21 03:19:39 +01:00
Dave Halter
28c1ba6c1c Fix a Python 2 test 2020-03-21 03:13:59 +01:00
Dave Halter
a2764283ba Merge branch 'refactor' 2020-03-21 02:54:07 +01:00
Dave Halter
0ffd566957 Merge branch 'project' 2020-03-21 02:52:51 +01:00
Dave Halter
5b54ac835d Fix deprecations in tests 2020-03-21 02:42:00 +01:00
Dave Halter
5f6a25fb58 Add deprecations warnings, to deprecated functions in the main API 2020-03-21 02:30:07 +01:00
Dave Halter
d6d9286242 Get rid of deprecations in tests 2020-03-21 02:15:57 +01:00
Dave Halter
4c964ae655 Fix some test results 2020-03-21 01:52:56 +01:00
Dave Halter
8000d425ec Don't use desc_with_module in integration tests 2020-03-21 01:47:00 +01:00
Dave Halter
c7cd84b1a4 Rework the introduction of the README/docs 2020-03-21 01:25:58 +01:00
Dave Halter
6a89599fa5 Rework badges 2020-03-19 10:12:52 +01:00
Dave Halter
5f40fa9bc6 Docs: Remove links for sources/created using sphinx/copyright 2020-03-19 09:48:12 +01:00
Dave Halter
24cde8e974 Clean up acknowledgements 2020-03-19 09:43:19 +01:00
Dave Halter
dea80b20e9 REPL docs improvements 2020-03-19 02:57:51 +01:00
Dave Halter
197d64d9a8 Remove tox from docs 2020-03-19 02:53:24 +01:00
Dave Halter
a2bbbfe2d5 Rework a lot of the README 2020-03-19 02:49:29 +01:00
Dave Halter
2e9fac0b71 Rewrite the history part 2020-03-19 02:33:45 +01:00
Dave Halter
83e0e3bd8d Move history 2020-03-19 02:16:21 +01:00
Dave Halter
2f651966e7 Make jedi testing explanations better 2020-03-19 02:13:01 +01:00
Dave Halter
ffbaa4afea Improve settings documentation 2020-03-19 01:53:47 +01:00
Dave Halter
e11db6e8e4 Move acknowledgements in docs 2020-03-19 01:42:18 +01:00
Dave Halter
eea6c7f41b Move recipes to Jedi Usage 2020-03-19 01:31:49 +01:00
Dave Halter
01f53236a4 Rework the recipe parts 2020-03-19 01:26:45 +01:00
Dave Halter
c39326616c A lot of improvements for the features & limitations docs 2020-03-19 01:04:48 +01:00
Dave Halter
b1aef26464 Docs: End user usage improvements 2020-03-19 00:25:54 +01:00
Dave Halter
97117bfaf2 Display full version in docs 2020-03-19 00:16:03 +01:00
Dave Halter
f12262881d Some minor docstring improvements 2020-03-19 00:11:02 +01:00
Peter Law
95b0cdcb5e Add test for child of specialised generic 2020-03-18 22:15:32 +00:00
Peter Law
0f8e7b453e Formatting 2020-03-18 22:12:21 +00:00
Dave Halter
516b58b287 Fix a lot of sphinx warnings 2020-03-18 10:16:32 +01:00
Dave Halter
e53acb4150 Create an autosummary for Jedi's API 2020-03-18 10:03:07 +01:00
Dave Halter
7de475318a Minor refactoring 2020-03-17 10:00:38 +01:00
Dave Halter
6dda514ec6 Make sure encoding doesn't unnecessarily raise warnings 2020-03-17 10:00:30 +01:00
Dave Halter
72a3a33e33 ParamDefinition -> ParamName 2020-03-17 09:34:28 +01:00
Dave Halter
d26926a582 Definition -> Name 2020-03-17 09:33:12 +01:00
Dave Halter
0731206b9d BaseDefinition -> BaseName 2020-03-17 09:25:30 +01:00
Dave Halter
c2451ddd03 Small docstring changes 2020-03-17 09:21:48 +01:00
Dave Halter
88adf84fc2 Move acknowledgements over to the documentation 2020-03-17 09:18:34 +01:00
Dave Halter
94c97765c8 Include the CHANGELOG in docs 2020-03-17 09:16:57 +01:00
Dave Halter
1c56d15836 Added project support to the changelog 2020-03-17 09:06:37 +01:00
Dave Halter
7985ef37d4 Rewrite Interpreter docs 2020-03-17 09:04:02 +01:00
Dave Halter
8f4f6d6ac3 Document refactoring functions 2020-03-17 08:57:35 +01:00
Dave Halter
4a065642f2 Docs: Reformat API return classes 2020-03-17 08:34:51 +01:00
Dave Halter
3276db0bdc Improve many Script API docstrings 2020-03-16 10:19:39 +01:00
Dave Halter
88757f00e7 Script source argument to code 2020-03-16 09:45:05 +01:00
Dave Halter
6d79ac9fde Add deprecations for Script parameters line/column/encoding 2020-03-16 09:41:47 +01:00
Dave Halter
25af28946e Docs: API overview 2020-03-16 09:35:47 +01:00
Dave Halter
950f5c186c Restructure API documentation 2020-03-16 09:27:01 +01:00
Dave Halter
8f96cbdabf Replace the old flask theme with the sphinx_rtd_theme 2020-03-16 01:28:06 +01:00
Christopher Cave-Ayland
17b3611c53 Included statement as a possible return type for BaseDefinition.type 2020-03-16 00:36:17 +01:00
Dave Halter
9240a20d13 Remove an old note that was not valid anymore 2020-03-16 00:21:15 +01:00
Dave Halter
6220b20659 "Document" stubs for develops 2020-03-16 00:19:08 +01:00
Dave Halter
2feb0acd7d Docs: remove arrogance :) 2020-03-16 00:13:30 +01:00
Dave Halter
8efd111426 Small docs example code changes 2020-03-16 00:07:01 +01:00
Dave Halter
616e9bf275 Docs: security 2020-03-16 00:05:48 +01:00
Dave Halter
78f0f5855f Docs: History 2020-03-16 00:02:17 +01:00
Dave Halter
0f11f65682 Docs: Features 2020-03-16 00:00:43 +01:00
Dave Halter
43363936cd Installation notes for docs moved down in priority a bit 2020-03-15 23:52:52 +01:00
Dave Halter
0f25eb9c9a Way more docs work 2020-03-15 23:41:53 +01:00
Dave Halter
8ceb76b3f6 Move is_side_effect to BaseDefinition 2020-03-15 23:13:41 +01:00
Dave Halter
25e6db5e82 Some more docstring stuff 2020-03-15 23:12:38 +01:00
Dave Halter
7c7864d500 Improve docstrings for a lot of the return API classes 2020-03-15 23:02:30 +01:00
Dave Halter
a9761079e6 Remove follow_definition 2020-03-15 19:28:02 +01:00
Dave Halter
20fad922bc Better SyntaxError listings 2020-03-14 17:30:33 +01:00
Dave Halter
3cef022a15 Add a proper CHANGELOG for the current version 2020-03-14 17:22:25 +01:00
Dave Halter
52b0450953 Add a warning about fast_parser, fixes #1240 2020-03-14 16:53:08 +01:00
Dave Halter
7b725553ff Better documentation of Script 2020-03-14 16:48:07 +01:00
Dave Halter
e811651b00 Further example tinkering 2020-03-14 15:47:32 +01:00
Dave Halter
fbba7714e4 Better examples 2020-03-14 15:42:16 +01:00
Dave Halter
bdb36ab626 Document projects better 2020-03-14 15:35:41 +01:00
Dave Halter
1a466d9641 Move the Project.save function within the file 2020-03-14 15:25:40 +01:00
Dave Halter
94f99aaeb3 Docs: Document projects 2020-03-14 15:25:03 +01:00
Dave Halter
851980e2a9 Document errors better 2020-03-14 15:15:09 +01:00
Dave Halter
88c766afb0 Better docstrings for search 2020-03-14 15:00:47 +01:00
Dave Halter
13254a30df Docs: Restructure API overview 2020-03-14 14:28:06 +01:00
Dave Halter
50af2650bb Docs: features reworked 2020-03-14 13:58:30 +01:00
Dave Halter
788562715e Update the README with the latest API changes 2020-03-14 12:21:55 +01:00
Dave Halter
0888dd468f Fix partialmethod issues 2020-03-14 01:22:46 +01:00
Dave Halter
fd9a493868 Make sure partialmethod tests are only executed for Python 3 2020-03-14 00:45:43 +01:00
Dave Halter
661fdb2b26 Merge branch 'add-partialmethod' of https://github.com/ffe4/jedi 2020-03-14 00:28:06 +01:00
Dave Halter
23f267bb86 Fix small make html errors for docs 2020-03-14 00:18:29 +01:00
Dave Halter
4af138f4fb Merge branch 'docs' of https://github.com/blueyed/jedi into refactor
Almost all of the docstrings were still there.
2020-03-14 00:12:53 +01:00
Dave Halter
10bc578bfe Merge branch 'master' into refactor 2020-03-13 23:53:09 +01:00
Daniel Lemm
2406e58386 Refactor stdlib PartialObject
Merges PartialObject and PartialMethodObject. Also adds more tests.
Some parts are still WIP, see: #1522.

Fixes #1519
2020-03-13 23:47:48 +01:00
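A standard-library example of the construct these PartialObject/partialmethod commits are about; the goal is that jedi reports the reduced signatures of the bound methods:

```python
from functools import partialmethod

class Cell:
    def set_state(self, state: bool) -> None:
        self._state = state

    # partialmethod pre-binds `state`; the effective signatures are
    # set_alive(self) and set_dead(self).
    set_alive = partialmethod(set_state, True)
    set_dead = partialmethod(set_state, False)

Cell().set_alive()
```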
Dave Halter
5cd212c51c Merge branch 'expandtab' of https://github.com/Carreau/jedi
Also modify the test a bit to make sure that it passes properly if there are
folders present.
2020-03-13 23:40:48 +01:00
Daniel Lemm
fd6540a9e5 Fix PartialMethodObject (WIP)
Implemented feedback from PR #1522.
Does not pass new tests in test/completion/stdlib.py
2020-03-13 21:40:58 +01:00
Dave Halter
521e240c5f Changed semantics of ClassVar attributes in classes, fixes #1502 2020-03-13 12:54:29 +01:00
Dave Halter
b4fa42a282 Avoid duplicate definitions for goto, fixes #1514 2020-03-13 02:22:05 +01:00
Dave Halter
fb72e1b448 Merge _remove_statements and infer_expr_stmt, fixes #1504 2020-03-13 00:50:25 +01:00
Peter Law
da9d312185 Remove redundant attribute check 2020-03-12 22:06:13 +00:00
Daniel Lemm
96c969687a Add partialmethod, fixes #1519
Returns correct method signature but test/completion/stdlib.py fails
2020-03-12 18:47:17 +01:00
Dave Halter
f83844408f Some minor refactorings for string quotes 2020-03-11 19:32:26 +01:00
Dave Halter
b247423184 Indentation 2020-03-11 19:26:59 +01:00
Dave Halter
9c77113e21 Fix string completions with quote prefixes, fixes #1503 2020-03-11 19:26:42 +01:00
Dave Halter
91857c2c0a Fix issues with iter_module_names 2020-03-11 00:19:40 +01:00
Dave Halter
886dadaaff Skip more tests for Python 2/3.5 2020-03-10 20:17:39 +01:00
Dave Halter
d574162da3 Fix namedtuple docstring/signature issues, fixes #1506 2020-03-10 20:07:10 +01:00
Dave Halter
0aa1ef6639 Move an import to the top 2020-03-10 09:36:45 +01:00
Dave Halter
33c61b8708 Make a method public 2020-03-10 09:35:03 +01:00
Dave Halter
bedf3bff0e Add Project.complete_search instead of the complete param 2020-03-10 08:31:15 +01:00
Dave Halter
d838eaecd2 Implement Script.complete_search instead of the complete param and return Completion objects 2020-03-09 23:55:17 +01:00
Dave Halter
cf3d83ee4f Don't mix up caches for stubs and python files 2020-03-09 17:48:36 +01:00
Dave Halter
7247c32990 Refactor load_module_from_path to be simpler 2020-03-09 17:40:14 +01:00
Dave Halter
75ae73ee97 Load -stubs packages properly in _load_python_module 2020-03-09 17:27:51 +01:00
Dave Halter
753440682e Some further testing of code search with stubs 2020-03-08 15:12:57 +01:00
Dave Halter
53f39c88e4 Try to fix a few more stub issues in search 2020-03-08 15:02:00 +01:00
Dave Halter
d3e3021a3d Care better about stubs for code search 2020-03-08 13:16:06 +01:00
Dave Halter
e46e1269a2 Finally use the string_names attribute to identify module names instead of some fucked up path calculation. 2020-03-08 12:58:44 +01:00
Dave Halter
a5f7412296 Load stub modules if it's a stub 2020-03-08 11:51:39 +01:00
Peter Law
b198434694 Remove resolved TODO
The common logic this refers to has now been extracted (see 95cec459)
and the remaining checks are specific to tuple handling.
2020-03-07 20:29:14 +00:00
Dave Halter
58998748e3 Make it clear in search tests if a stub or a normal definition is expected 2020-03-07 20:43:57 +01:00
Dave Halter
6bddca011c Listing modules is no longer done by a subprocess 2020-03-07 20:25:58 +01:00
Dave Halter
f147cb1133 Make it possible to get stdlib modules for project search 2020-03-07 19:42:27 +01:00
Peter Law
d06efd0dd1 Push fetching of generics into nested function
This slightly simplifies both the calling code and semantics of
the nested function.
2020-03-07 18:09:20 +00:00
Peter Law
96132587b7 Clarify generic tuple inference
This hoists a loop-invariant conditional check outside the loop,
making it clearer and one branch more obviously similar to the
general type handling.
2020-03-07 17:35:29 +00:00
Peter Law
5d273f4630 Explain these branches 2020-03-07 17:35:03 +00:00
Peter Law
95cec459a8 Extract nested function for common pattern
This slightly simplifies the code, as well as providing a place
to put an explanation of what the moved block of code does.
2020-03-07 17:06:22 +00:00
Peter Law
3b4fa2aa9c Clarify variable name 2020-03-07 16:32:38 +00:00
Peter Law
54e29eede1 Add explanation of the parameters to _infer_type_vars 2020-03-07 16:31:12 +00:00
Dave Halter
c159b9debd Get namespace package searches working 2020-03-07 17:14:47 +01:00
Dave Halter
eecdf31601 Make it possible to search folders __init__ files 2020-03-07 13:57:14 +01:00
Dave Halter
7f2f025866 Move get_module_names to api.helpers 2020-03-06 14:32:52 +01:00
Dave Halter
ed3564831c Some minor test reworks 2020-03-06 14:28:48 +01:00
Dave Halter
8c1e518ab7 Make sure you can search for 'def something' 2020-03-06 14:27:29 +01:00
Dave Halter
c7a862ec19 Fix issues where references were identified as definitions 2020-03-06 14:24:57 +01:00
Dave Halter
6e3bd38600 Start merging efforts for project search and file search
First project tests are passing
2020-03-06 13:32:04 +01:00
Dave Halter
e6bdaea73e Actually implement symbol search for projects 2020-03-06 11:15:34 +01:00
Dave Halter
ebb9df07f3 Progress for recursive symbol searches 2020-03-06 10:31:48 +01:00
Dave Halter
8df917f1df Fix a getattr_static issue, fixes #1517 2020-03-06 10:07:23 +01:00
Dave Halter
30f72c48c4 Test that full_name in funcs work 2020-03-01 20:11:00 +01:00
Dave Halter
e03924895b Add tests for search 2020-03-01 19:52:49 +01:00
Dave Halter
af055ec69c Some minor refactorings of search 2020-03-01 19:39:26 +01:00
Dave Halter
9d8ad4cc04 Implement a search function, fixes #225 2020-03-01 18:47:01 +01:00
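A hedged usage sketch of the project-wide search API built up in these commits; the project path and queries are illustrative:

```python
import jedi

project = jedi.Project("/path/to/my/project")

# search() yields Name objects across the whole project; queries like
# "def something" or "class Something" are supported.
for name in project.search("class UserModel"):
    print(name.module_name, name.line, name.description)

# complete_search() returns Completion objects instead.
for completion in project.complete_search("UserMod"):
    print(completion.name)
```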
Dave Halter
a6ef8efb72 fuzzy_match and start_match are now match with fuzzy param 2020-03-01 18:03:13 +01:00
Dave Halter
ccc1262a3e Avoid one more private access 2020-03-01 17:53:39 +01:00
Dave Halter
656324f686 Disable some more tests for Python 2 2020-03-01 13:30:41 +01:00
Dave Halter
bd1ef659e8 Make InterpreterEnvironment public 2020-03-01 12:47:26 +01:00
Dave Halter
afc61c2576 is_typeddict should be part of ClassMixin 2020-03-01 12:26:40 +01:00
Dave Halter
4d5373d626 Don't continue searching for values if an annotation is found 2020-03-01 12:25:46 +01:00
Dave Halter
609737322d TypedDict checking should be at a later point 2020-03-01 02:34:38 +01:00
Dave Halter
fa63c92cf7 Simplify tests a bit 2020-03-01 01:56:49 +01:00
Dave Halter
e5fabb4c5f Fix some version issue stuff 2020-03-01 01:42:22 +01:00
Dave Halter
bb91b96286 Merge branch 'typeddict' of https://github.com/pappasam/jedi 2020-03-01 01:31:17 +01:00
Dave Halter
fd23946de3 Avoid universal newlines even more 2020-03-01 01:12:47 +01:00
Dave Halter
a2b8c44e8f Get rid of Python's universal newlines for refactoring 2020-02-29 23:34:49 +01:00
Dave Halter
0a1de619b4 Reverse order of travis tests 2020-02-28 12:48:08 +01:00
Dave Halter
31d5c92dae Reverse order of tests in appveyor 2020-02-28 12:47:18 +01:00
Dave Halter
d1873f8e1e Windows uses backslashes for paths 2020-02-28 12:42:39 +01:00
Dave Halter
58ba47841c Use inline_mod instead of some_mod for inline refactor tests 2020-02-28 01:53:35 +01:00
Dave Halter
0f2d6ac27a Undo some .travis.yml changes that were removed because of Python 3.4 drop 2020-02-28 00:22:29 +01:00
Dave Halter
76ce422590 Make refactoring diff path a relative path to the project path 2020-02-28 00:17:14 +01:00
Dave Halter
1f773d8e65 Refactoring is not allowed when the environment or the current version is lower than 3.6 2020-02-27 23:24:23 +01:00
Dave Halter
4451d2fec7 Refactoring diffs now show relative paths 2020-02-27 23:23:24 +01:00
Dave Halter
0ef8053919 Don't use a random grammar for extract 2020-02-27 22:50:30 +01:00
Dave Halter
140a45081f Python 3.5 is not supported for refactorings 2020-02-27 19:01:08 +01:00
Dave Halter
ebdaf0177d Don't continue searching for values if an annotation is found 2020-02-27 18:47:13 +01:00
Dave Halter
f2f11bc574 Remove some code for 3.3 compatibility 2020-02-27 18:31:50 +01:00
Dave Halter
5f2a402b19 Removed some more 3.4 usages 2020-02-27 18:30:46 +01:00
Dave Halter
5f226bc82e Make sure to not execute refactoring tests for Python 2 2020-02-27 02:17:05 +01:00
Dave Halter
a892887b04 Remove Python 3.4 support 2020-02-27 02:04:03 +01:00
Dave Halter
d1ac00f64f Fix run.py issue 2020-02-27 01:44:01 +01:00
Dave Halter
03e1770a24 Fix rename refactoring tests 2020-02-27 01:23:07 +01:00
Dave Halter
42adadd0cb Add an extract test for methods without params 2020-02-27 01:19:01 +01:00
Dave Halter
3708ab3514 Make extract yield error message better 2020-02-27 01:12:34 +01:00
Dave Halter
c9334d140b Make it impossible to extract if return is not at the end 2020-02-27 01:08:03 +01:00
Dave Halter
35e992c37c Make sure that return at the end works properly for extract 2020-02-27 00:54:40 +01:00
Dave Halter
a92c28840b Fix: Extract can now deal with return statements at the end 2020-02-26 09:31:33 +01:00
Dave Halter
c96994dd8d Add a method extract test 2020-02-26 01:11:04 +01:00
Dave Halter
bb6f0d5e91 Fix extract: better input filtering 2020-02-26 00:59:04 +01:00
Dave Halter
bf9a3a4ca8 Rewrite an extract test to make them more diverse 2020-02-26 00:24:27 +01:00
Dave Halter
eef47e951e One more function test 2020-02-26 00:21:46 +01:00
Dave Halter
17892556f8 Fix another comment extraction issue 2020-02-26 00:17:44 +01:00
Dave Halter
b65c1c26aa Fix a function extract indentation issue 2020-02-25 23:52:23 +01:00
Dave Halter
bc3e1ada03 One more comment test for extract with range 2020-02-25 23:30:44 +01:00
Dave Halter
1f82efa86d Fix a newline issue for refactoring functions 2020-02-25 23:27:21 +01:00
Dave Halter
94c00229f2 Make it possible to include comments for extract function 2020-02-25 23:25:50 +01:00
Dave Halter
5614ef2fed Move all the extract stuff into a different file 2020-02-25 10:33:31 +01:00
Dave Halter
8ff5ca81d2 Make a package out of refactoring 2020-02-25 10:28:27 +01:00
Dave Halter
ff60c0af87 Docstrings 2020-02-25 10:27:36 +01:00
Dave Halter
89398e5c87 Deal a lot better with prefixes in range extractions 2020-02-25 10:23:38 +01:00
Dave Halter
f8d9f498d0 Get a first extract test mostly working 2020-02-24 10:12:38 +01:00
Peter Law
30738a092b Update sith's module docstring to match the available operations 2020-02-24 01:33:46 +01:00
Dave Halter
f527138e6c Extract: Fix param order for methods 2020-02-24 00:19:34 +01:00
Dave Halter
24a4c3ceba Test closure extraction 2020-02-23 23:56:59 +01:00
Dave Halter
48e25c1b9b Extract: Make sure params are not duplicated 2020-02-23 23:22:38 +01:00
Peter Law
f1a9e681ad Ensure comprehensions and generator expressions work 2020-02-23 15:25:28 +00:00
Peter Law
f4cbf61604 Ensure variadic tuples (Tuple[T, ...]) behave like sequences 2020-02-23 14:00:39 +00:00
Peter Law
5e990d9206 Support passing through values for non-annotated tuples 2020-02-23 14:00:16 +00:00
Peter Law
80db4dcf56 Add test to ensure unions work 2020-02-23 14:00:16 +00:00
Peter Law
e557129121 Remove check which doesn't seem to be needed
I'm not sure why I added this, though removing it doesn't seem to
cause any issues. I suspect there might be some oddness if the type
being passed in doesn't match the type expected, though them having
the same number of generic parameters isn't an especially great way
to validate that.
2020-02-23 14:00:16 +00:00
Peter Law
c15e0ef9b8 Ensure specialised types inheriting from generics work 2020-02-23 14:00:15 +00:00
Peter Law
e455709a31 Add test case for nested generic callables 2020-02-23 14:00:13 +00:00
Peter Law
c03ae0315e Make nested Type[T] annotations work 2020-02-23 13:59:44 +00:00
Peter Law
bc53dabce3 Make tuple generic parameters work 2020-02-23 13:59:44 +00:00
Peter Law
969a8f1fd9 First pass at extending infer_type_vars
This mostly works for the new tests, but doesn't work for:
- tuples (though this seems to be because they lack generic information anyway)
- nested Type[T] handling (e.g: List[Type[T]])
2020-02-23 13:59:44 +00:00
Peter Law
0a7820f6de Add many test cases
While these definitely _ought_ to work on Python 2.7, the annotation
support there is very limited and as Python 2 is deprecated it
doesn't seem worth it.
2020-02-23 13:58:10 +00:00
Dave Halter
da935baa99 Some more extract improvements 2020-02-23 12:06:37 +01:00
Dave Halter
cc8483a07a Fix extract issues when self is involved 2020-02-23 11:50:05 +01:00
Dave Halter
48c4262f66 Start trying to find param names 2020-02-23 01:55:43 +01:00
Dave Halter
d069a4e482 Add a test for extraction in a class 2020-02-23 01:41:51 +01:00
Dave Halter
2061919b64 Get staticmethod working 2020-02-23 01:36:45 +01:00
Dave Halter
a7110a4e08 Get a first classmethod extraction working 2020-02-23 00:40:31 +01:00
Dave Halter
b7be5a4fe2 Extract: Correct newlines for classes and make it possible to be on a return/yield statement 2020-02-23 00:24:34 +01:00
Dave Halter
876109267a Remove is_function_execution, it's not used 2020-02-23 00:16:46 +01:00
Dave Halter
1c0f9e1f30 Extract functions properly out of functions 2020-02-22 21:24:06 +01:00
Peter Law
6efafb348e Extract the annotation name upfront
We almost always need this and this simplifies the code within
each branch. This also means we'll be able to use the name to determine
the branching.
2020-02-22 19:42:08 +00:00
Peter Law
36b4b797c1 Add trailing comma 2020-02-22 19:42:08 +00:00
Dave Halter
ce1093406a Get some first extract_function stuff working 2020-02-22 00:04:11 +01:00
Dave Halter
dcffe8e60b Some refactorings and final tests for extract variable 2020-02-21 03:15:40 +01:00
Dave Halter
0516637e8d Fix an extract case about "not" 2020-02-21 03:03:48 +01:00
Dave Halter
3bc66c2f00 Fix some error cases for extract 2020-02-21 02:22:54 +01:00
Dave Halter
742c4370b5 Fix some last extract issues 2020-02-21 01:57:12 +01:00
Dave Halter
292ad9d9ac Enable extracting of parts of nodes 2020-02-21 01:43:36 +01:00
Dave Halter
3457bd77eb Make sure that extract variable works for some ranges 2020-02-20 23:34:09 +01:00
Lior Goldberg
1874e9be81 Remove the word 'class' from annotation_string
Currently, 'foo(x: int)' results in annotation_string="<class 'int'>".
Change this to 'int'.
2020-02-20 09:35:01 +01:00
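A tiny sketch of the change described above (not jedi's actual implementation): prefer the type's name over its repr when building annotation_string:

```python
def annotation_to_str(annotation) -> str:
    # repr(int) is "<class 'int'>"; the bare name 'int' is what we want.
    if isinstance(annotation, type):
        return annotation.__name__
    return str(annotation)

print(annotation_to_str(int))  # 'int' instead of "<class 'int'>"
```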
Dave Halter
3f86d803d2 Fix another special extract case 2020-02-20 01:29:04 +01:00
Dave Halter
26bf2ceb15 Fix refactoring of leaves just before leaves 2020-02-20 00:43:02 +01:00
Dave Halter
bfa15c61f1 Keyword extraction is now working better 2020-02-19 09:25:59 +01:00
Dave Halter
61619c4db1 Test keyword extraction 2020-02-19 09:20:12 +01:00
Dave Halter
50be49544d Move indent_block to common 2020-02-19 09:15:39 +01:00
Dave Halter
b1d3c7ef52 Move indent_block to a separate utils 2020-02-18 18:50:40 +01:00
Dave Halter
7dff25f7c9 Test extracting of base classes 2020-02-17 10:06:40 +01:00
Dave Halter
ab4fe548f2 Handle params better for extract variable 2020-02-17 09:55:11 +01:00
Peter Law
c4cf0d78e1 Add a couple of docstrings
These are based on observation of the outputs of these functions.
2020-02-15 12:25:12 +01:00
Dave Halter
d1f7400829 First implementation of extract variable 2020-02-15 12:17:29 +01:00
Dave Halter
ee8cdb667d Make it possible to test refactoring outputs a bit different 2020-02-15 00:59:26 +01:00
Dave Halter
24114ba631 Remove reorder imports. For now this is not a priority 2020-02-14 23:56:11 +01:00
Dave Halter
9d171609da Fix some inline tests about different modules and atom_expr/trailer combinations 2020-02-14 18:02:37 +01:00
Dave Halter
518d2449a7 More inline tests 2020-02-14 17:26:58 +01:00
Dave Halter
a906a76ccd Don't support refactoring for Python 2 2020-02-14 17:19:21 +01:00
Dave Halter
af20905f7d Make sure the brackets are set properly 2020-02-14 17:08:42 +01:00
Dave Halter
d536a20019 Fix some whitespace refactoring when inlining 2020-02-14 16:57:25 +01:00
Dave Halter
bcefb04d54 add some more test for inline errors 2020-02-14 15:49:18 +01:00
Dave Halter
dac2655915 Make sure to test errors for inlining 2020-02-14 15:30:49 +01:00
Dave Halter
14180ad185 Make sure to have a rename test if no name is under the cursor 2020-02-14 14:24:05 +01:00
Dave Halter
dbf88f2750 Make it possible to be able to test errors for refactorings 2020-02-14 14:15:57 +01:00
Dave Halter
0a3ff6bd70 Implement inline refactorings 2020-02-14 13:53:41 +01:00
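A hedged usage sketch of the refactoring API these commits introduce; Refactoring objects expose get_diff() and apply(), and the path and positions below are illustrative:

```python
import jedi

code = "constant = 3\nresult = constant + 1\n"
script = jedi.Script(code, path="example.py")

# Point at the usage of `constant` on line 2 and inline its value.
refactoring = script.inline(line=2, column=9)
print(refactoring.get_diff())
# refactoring.apply() would write the change to disk.
```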
Sam Roeca
d6f6c29a63 TypedDict test: fix Bar inheritance checks
Note: foo is defined as a function at the module level, so I remove it
from consideration here to avoid complicating this test with other tests
in the module.
2020-02-13 10:43:41 -05:00
Peter Law
c7d1b8de9e Tell sith that 'completions' became 'complete' 2020-02-13 09:51:31 +01:00
Dave Halter
b4628abc60 Some other small test improvements 2020-02-13 09:34:33 +01:00
Dave Halter
aef675c79b Rewrite old refactoring tests a bit to reuse them 2020-02-13 09:27:57 +01:00
Dave Halter
41602124c7 Prepare remaining refactoring methods that should be implemented at some point 2020-02-13 09:27:36 +01:00
Dave Halter
5c246649e2 Test renames better and change some small things about the refactoring API 2020-02-13 00:19:34 +01:00
Dave Halter
6c9f187884 Refactor the rename tests a bit 2020-02-13 00:19:00 +01:00
Dave Halter
871575b06c Make sure that get_changed_files returns a dict 2020-02-12 09:59:39 +01:00
Dave Halter
fd4ba3f47e Make sure that renames work for keyword params 2020-02-12 01:19:47 +01:00
Dave Halter
204b072388 Add tests for undefined variables 2020-02-12 01:08:47 +01:00
Dave Halter
e7ab318107 Make sure rename diffs have the right paths 2020-02-12 01:00:13 +01:00
Dave Halter
52d72157c0 Rename a module to make refactoring tests a bit faster 2020-02-12 00:35:49 +01:00
Sam Roeca
ac47866c4c TypedDict: fix non-inheritance tests, add inheritance
Note: tests currently failing
2020-02-11 18:32:15 -05:00
Dave Halter
c47021150e Add a rename test for combination of variables and modules 2020-02-11 23:43:09 +01:00
Dave Halter
a39b2e95c1 Add another refactoring test 2020-02-11 21:13:55 +01:00
Jma353
d42d3f45f0 Add venv to .gitignore 2020-02-11 19:08:47 +01:00
Dave Halter
b4494e588f A prefixed path should not also be suffixed 2020-02-11 18:34:41 +01:00
Dave Halter
0697a39145 Make refactoring tests a bit clearer 2020-02-11 10:08:36 +01:00
Dave Halter
e43b0cec4a Get renames working for module imports 2020-02-11 01:35:07 +01:00
Dave Halter
ab4f282b03 Move rename function to refactoring 2020-02-11 00:18:49 +01:00
Dave Halter
4bc9075d0b Add another rename test for imports 2020-02-10 21:17:22 +01:00
Dave Halter
faddf412f9 Make some refactoring test variables private 2020-02-10 20:06:27 +01:00
Dave Halter
e22a44d79e Remove a lot of nonsense from refactoring tests 2020-02-10 20:04:48 +01:00
Dave Halter
4cc03d2239 Add another rename test 2020-02-10 19:51:35 +01:00
Dave Halter
1e929b0aa0 Remove the old refactoring module 2020-02-10 17:48:24 +01:00
Dave Halter
13b393a5e3 Get the first rename test passing 2020-02-10 17:42:23 +01:00
Dave Halter
6166e7961e Make sure that tests for refactoring are redirected 2020-02-09 14:05:16 +01:00
Peter Law
370e539a7e Remove additional prefix which seems incorrect 2020-02-09 11:39:41 +01:00
Peter Law
fd1f9f22e9 Update use of _source which no longer exists to _code 2020-02-09 11:39:41 +01:00
Dave Halter
bcb7cc864c Make sure to move up VSCode, because it's used a lot 2020-02-08 20:09:46 +01:00
Dave Halter
de2f753546 Revert "Make sure to mention that VSCode is using Jedi"
It was already in there.

This reverts commit 2cf06bcf48.
2020-02-08 20:06:17 +01:00
Dave Halter
2cf06bcf48 Make sure to mention that VSCode is using Jedi
It has been used for a long time
2020-02-08 20:04:47 +01:00
Sam Roeca
cf954bf006 Expand on TypedDict tests.
Adds a function that takes the TypedDict as an argument.

Note: the last two tests are failing, along with lots of other tests
throughout the system.
2020-02-07 14:40:39 -05:00
Sam Roeca
9d2083fa08 Remove argument to filter.values()
Given 87161df2, values(from_instance=False) doesn't produce completions
anymore. Therefore, we remove from_instance as an argument.
2020-02-07 13:38:52 -05:00
Sam Roeca
6a9745b42b Get basic completions working with TypedDict 2020-02-07 13:24:00 -05:00
Dave Halter
87161df2f0 Make sure that typeddict py__getitem__ works 2020-02-07 16:45:03 +01:00
Dave Halter
7ef07b576f Merge branch 'master' into typeddict 2020-02-07 04:03:27 +01:00
Dave Halter
6e63799a7d Fix a test that picked up the wrong paths 2020-02-06 22:51:40 +01:00
Dave Halter
841fe75326 Fix an issue with environment selection 2020-02-06 22:41:11 +01:00
Dave Halter
f6465c5202 Get rid of one more os.getcwd() call 2020-02-06 01:51:10 +01:00
Dave Halter
14ac0512a9 Get rid of cwd modifications in tests 2020-02-06 01:47:39 +01:00
Dave Halter
f2722952e7 Fix load_unsafe_extensions issue 2020-02-05 10:01:21 +01:00
Dave Halter
b7919bd3e6 Merge branch 'master' into project 2020-02-04 23:56:47 +01:00
Dave Halter
7a55484b79 Fix a test issue 2020-02-04 23:56:01 +01:00
Dave Halter
670d6e8639 Move is_side_effect to Definition and correct bugs 2020-02-04 20:12:24 +01:00
Dave Halter
6313934d94 Add a docstring for is_side_effect 2020-02-04 19:39:13 +01:00
Dave Halter
40fced2450 Actually use follow_builtin_imports and improve the goto docstring, fixes #1492 2020-02-04 19:34:42 +01:00
Dave Halter
692bf5cfb7 Properly identify side effects, fixes #1411 2020-02-04 10:12:13 +01:00
Dave Halter
66e28eb52e Move test_api/test_defined_names.py -> test_api/test_names.py 2020-02-04 10:03:55 +01:00
Dave Halter
3388a9659b Catch an error with illegal class instances, fixes #1491 2020-02-03 22:27:48 +01:00
Dave Halter
eb88c483fb Catch an error with illegal class instances, fixes #1491 2020-02-03 22:27:22 +01:00
Dave Halter
2c62166ff6 Get parser errors working, fixes #1488 2020-02-03 22:06:12 +01:00
Dave Halter
3101e43aa6 Merge branch 'master' into project 2020-02-03 09:26:43 +01:00
Dave Halter
a49c757b8a Make Ellipsis without list in Callable work, fixes #1475 2020-02-03 09:25:46 +01:00
Dave Halter
3ad3dc08b8 Run get_type_hint tests only for 3.6+ 2020-02-03 01:03:19 +01:00
Dave Halter
eee919174d Stubs should not become stubs again in the conversion function, fixes #1475 2020-02-03 00:58:54 +01:00
Dave Halter
e802f5aabd Make sure to print errors in __main__ completions 2020-02-02 23:28:55 +01:00
Dave Halter
e3c4b5b77e Make sure param hints are working for functions 2020-02-02 18:42:01 +01:00
Dave Halter
4c7179bc87 Generate type hints, fixes #987 2020-02-02 16:55:10 +01:00
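The type-hint generation above surfaces on the definition objects returned by the API. A minimal sketch, assuming a get_type_hint() method on the names that Script.infer() returns:

    import jedi

    script = jedi.Script("x = [1, 2, 3]\nx")
    for definition in script.infer(line=2, column=0):
        # get_type_hint() renders the inferred type as a PEP 484-style string.
        print(definition.get_type_hint())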
Dave Halter
f4b1fc479d Bump version to 0.16.1 2020-01-31 13:38:27 +01:00
Dave Halter
e1425de8a4 Make sure to be able to deal with all kinds of loaders, fixes #1487 2020-01-31 13:26:56 +01:00
Dave Halter
8ff2ea4b38 Make sure to not load unsafe modules anymore if they are not on the sys path, fixes #760 2020-01-31 13:09:28 +01:00
Dave Halter
e7a77e438d Remove python_version again, it might not be needed 2020-01-31 02:15:24 +01:00
Dave Halter
a05628443e Make sure serialization works for projects 2020-01-31 02:14:34 +01:00
Dave Halter
d09882f970 Remove django from the project API 2020-01-31 01:50:52 +01:00
Dave Halter
e5ec2a3adf Introduce two new Project params: python_path, python_version 2020-01-31 01:46:55 +01:00
Dave Halter
d02af44331 Make it possible to use get_default_project directly from Jedi 2020-01-31 00:21:46 +01:00
Dave Halter
251ff447bc Add added_sys_path to Project, fixes #1334 2020-01-31 00:08:24 +01:00
Dave Halter
4a1d9a9116 Use project instead of sys_path parameter in tests 2020-01-30 21:02:47 +01:00
Dave Halter
ceccbf3678 Make the Project API public, fixes #778 2020-01-30 19:24:16 +01:00
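With the Project API public, a project can be constructed explicitly or guessed from a path and then handed to Script. A minimal sketch, assuming the added_sys_path parameter from the commits above; the "./src" entry is only an illustration:

    import jedi

    # An explicit project: a root path plus extra sys.path entries.
    project = jedi.Project(".", added_sys_path=["./src"])
    script = jedi.Script("import os\nos.pa", project=project)
    print([c.name for c in script.complete()])

    # Or let Jedi guess the project from a path.
    default_project = jedi.get_default_project(".")
    script = jedi.Script("import os\nos.pa", project=default_project)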
Dave Halter
e930f47861 Make generators return more correct values with while loops, fixes #683 2020-01-29 10:13:46 +01:00
Dave Halter
d630ed55f3 Avoid aborting search for yields when they are still reachable, see #683 2020-01-28 09:35:58 +01:00
Dave Halter
bec87f7ff8 Jedi now understands when you use del, fixes #313 2020-01-26 20:07:56 +01:00
Dave Halter
045b8a35a2 Remove dead code 2020-01-26 19:39:15 +01:00
Dave Halter
8eb980db73 Create the basics to work with TypedDict in the future 2020-01-26 19:25:23 +01:00
Dave Halter
18f84d3af7 Remove Python 3.3 from environment tests 2020-01-26 01:30:31 +01:00
Dave Halter
2ccd015b5a Make sure to skip some tests for Python 3.5 2020-01-26 01:18:28 +01:00
Dave Halter
1a62674254 Small Changelog updates 2020-01-26 00:58:04 +01:00
Dave Halter
7645762a25 Fix a small signature issue 2020-01-26 00:42:00 +01:00
Dave Halter
2e036bffb5 Create a private helper to test completions 2020-01-26 00:28:48 +01:00
Dave Halter
feefd47ddd Fix an issue with names 2020-01-25 18:48:52 +01:00
Dave Halter
f42ab8872d compiled_object -> compiled_value 2020-01-25 18:25:19 +01:00
Dave Halter
7c3dbef9c5 Remove dead code 2020-01-25 18:16:30 +01:00
Dave Halter
8cccdde28d CompiledObject -> CompiledValue 2020-01-25 18:13:50 +01:00
Dave Halter
5cd4a52bcd CompiledValue -> ExactValue 2020-01-25 18:09:44 +01:00
Dave Halter
517fa27dc6 Revisit caching of mixed 2020-01-25 17:58:12 +01:00
Dave Halter
329329c195 Make MixedName a NameWrapper instead of using inheritance 2020-01-25 17:54:19 +01:00
Dave Halter
8bde54a072 Remove underscore_memoization caching method 2020-01-25 17:29:52 +01:00
Dave Halter
235b887b75 Refactor MixedName quite a bit 2020-01-25 16:56:01 +01:00
Dave Halter
da2a55c73f Fix issue with mixed objects, fixes #1480 2020-01-25 15:02:55 +01:00
Dave Halter
0435e0e85c Remove some dead code 2020-01-25 13:25:23 +01:00
Dave Halter
9c0efd5a67 Prepare a test for #1479 2020-01-25 01:07:20 +01:00
Dave Halter
066b8b7165 Avoid a print in tests 2020-01-24 22:11:52 +01:00
Dave Halter
7683c05de3 Fix value/context mixup in mixed, fixes #1479 2020-01-24 22:09:25 +01:00
Dave Halter
eaa49aa26b Clarify that for Python 2 we will not fix bugs anymore 2020-01-24 14:09:43 +01:00
Dave Halter
3f6a718c34 Skip a test in Python 2 2020-01-24 14:08:18 +01:00
Dave Halter
6cfcba0d97 Use is_compiled instead of isinstance checks 2020-01-24 13:12:48 +01:00
Dave Halter
4d3f314baa Create CompiledModule to have a better differentiation between compiled modules and compiled values 2020-01-24 13:01:54 +01:00
Dave Halter
e3e6727a2d Make sure that the builtin docstring works again for infer calls 2020-01-24 12:49:39 +01:00
Dave Halter
b985a380bc Fix a bug with version_info, fixes #1477 2020-01-24 11:04:50 +01:00
Dave Halter
11b61596e0 Make sure that del_stmt as a name can be handled, see #313 2020-01-23 23:58:52 +01:00
Dave Halter
290e2151df Remove use_filesystem_cache and additional_dynamic_modules; they haven't been implemented for a long time 2020-01-23 23:37:36 +01:00
Dave Halter
cc8a3f192d Removed settings.no_completion_duplicates 2020-01-23 23:16:02 +01:00
Dave Halter
0c56aa4d4b Make sure to stop gathering buildout paths at a certain point, fixes #1325 2020-01-22 23:31:27 +01:00
Dave Halter
6a75a0c590 Rewrite some whitespace 2020-01-22 23:14:07 +01:00
ANtlord
8440e1719f Useless changes are rolled back. 2020-01-22 20:57:17 +02:00
ANtlord
ddcd48edd8 Typeshed submodule checked out to d386452 2020-01-22 20:55:25 +02:00
Dave Halter
7e98c9449b Reformat the changelog a bit 2020-01-22 18:31:49 +01:00
Dave Halter
dbdd556a2b Add follow_imports to Definition.goto, fixes #1474 2020-01-22 18:29:02 +01:00
ANtlord
9bc01da9c4 Fix conflicts. 2020-01-22 11:12:09 +02:00
Dave Halter
5c68304bec Raise a proper exception instead of assert in case only_stubs and prefer_stubs are given 2020-01-22 10:00:10 +01:00
Dave Halter
59e7bacfae Make sure a certain test passes as well with tox 2020-01-22 01:29:56 +01:00
Dave Halter
318fab8682 Fix a Python 2 issue 2020-01-22 01:25:26 +01:00
Dave Halter
bff6e95e28 Rename Script.names to Script.get_names, fixes #1476 2020-01-22 01:22:46 +01:00
Dave Halter
8cc836e816 find_signatures -> get_signatures, see #1476 2020-01-22 01:10:38 +01:00
Dave Halter
58f54d8391 find_references -> get_references, see #1476 2020-01-22 01:06:37 +01:00
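The three renames above settle on the get_* naming for the remaining Script methods. A minimal sketch of the resulting calls, assuming the 0.17-era signatures:

    import jedi

    code = "def greet(name):\n    return 'hi ' + name\n\ngreet('world')\n"
    script = jedi.Script(code, path="example.py")

    print([n.name for n in script.get_names()])                   # top-level definitions
    print([s.to_string() for s in script.get_signatures(4, 6)])   # signature of greet(...)
    print([r.line for r in script.get_references(1, 4)])          # references to greet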
Dave Halter
9d7858eb3a Fix remaining tests 2020-01-22 00:36:30 +01:00
Dave Halter
6df755e8b6 Reduce limits of files to parse by quite a bit 2020-01-21 22:51:57 +01:00
ANtlord
2a86f7d82f Django-plugin related code is removed from stdlib-plugin. 2020-01-21 21:21:43 +02:00
ANtlord
7287d67e7a Functions that infer the type of Django model fields are refactored. 2020-01-21 21:12:38 +02:00
Dave Halter
44ba40958e Make sure that CompiledObject doesn't have a file_io 2020-01-21 18:29:40 +01:00
Dave Halter
d9960081f5 Use different limits for references and dynamic calls 2020-01-21 09:22:16 +01:00
Dave Halter
c12cbf2106 Explain why the references limits were chosen 2020-01-20 17:24:21 +01:00
Dave Halter
6e10313cca Start limiting opened files and parsed files for references 2020-01-20 17:13:22 +01:00
Dave Halter
28027a3fee Remove a few imports 2020-01-20 16:59:22 +01:00
Dave Halter
a246624f70 Make sure to not scan the same directory multiple times 2020-01-20 10:33:37 +01:00
Dave Halter
621bd7d1db Don't search for usages when we are working with params 2020-01-20 02:14:46 +01:00
Dave Halter
445dc2411e Ignore .gitignore in get_references and therefore make get_references usable again 2020-01-20 02:03:58 +01:00
Dave Halter
ed36efabeb Revisit reference finding, scan a lot of folders 2020-01-20 01:43:51 +01:00
Dave Halter
62a77dcd16 Added FolderIO.walk and FolderIO.get_base_name 2020-01-20 00:36:18 +01:00
ANtlord
c61ca0d27b Inferring of Django model fields is moved to a dedicated module. 2020-01-19 18:46:28 +02:00
Dave Halter
26f0fa9eb0 Move get_module_contexts_containing_name to the references module 2020-01-17 22:51:09 +01:00
Dave Halter
4cd2b9a355 Apparently this one variable is needed 2020-01-17 02:15:06 +01:00
Dave Halter
eb103d293c Small changelog fix 2020-01-17 02:03:42 +01:00
Dave Halter
4931180df1 Forgot to use sudo for installing dependencies in travis 2020-01-17 01:43:23 +01:00
Dave Halter
2937c95e9e Another few travis fixes 2020-01-17 01:30:54 +01:00
Dave Halter
f53b08516d Don't run some usage tests on Python 2 2020-01-17 01:26:40 +01:00
Dave Halter
c6ca889927 Interpreter test fix for travis config 2020-01-17 00:36:09 +01:00
ANtlord
a6dfc130c9 Foreign key is handled. 2020-01-16 15:40:45 +02:00
Dave Halter
3645ea0557 Add a few more stub usage tests 2020-01-15 00:30:31 +01:00
Dave Halter
df7080c1da Disable flow analysis for finding usages 2020-01-14 18:37:10 +01:00
Dave Halter
a098bf28af Add another stub usage test 2020-01-14 01:29:37 +01:00
Dave Halter
8bcd1f5fd9 Fix stub conversion 2020-01-14 01:08:26 +01:00
Dave Halter
e1564da23d Make sure to find both stubs and non-stubs with usages 2020-01-13 20:45:53 +01:00
Dave Halter
9c1063c35a Use the proper fixture 2020-01-12 23:58:49 +01:00
Dave Halter
c3503672d5 Implement interpreter test on travis 2020-01-12 20:51:40 +01:00
Dave Halter
c56dae4835 Get interpreter environment tests working 2020-01-12 20:47:51 +01:00
Dave Halter
591e3c4565 Make sure tests are proper packages, so that pytest doesn't do shenanigans with the sys path 2020-01-12 19:58:29 +01:00
Dave Halter
4fb595f422 Remove NestedImportModule, because it hasn't been used in years 2020-01-12 13:42:50 +01:00
Dave Halter
11a12d6ca8 Refactor execute_operation a bit 2020-01-12 13:01:08 +01:00
Dave Halter
bd2ed8dbbd Finally get rid of call_of_leaf 2020-01-12 03:06:52 +01:00
Dave Halter
a17d4d9e16 Refactor the isinstance checks a bit 2020-01-12 02:00:27 +01:00
Dave Halter
700dd9380a Make sure examples are excluded from pytest 2020-01-12 01:22:12 +01:00
Dave Halter
4f6116ac6e Move speed test to examples 2020-01-12 01:21:26 +01:00
Dave Halter
cc34c7d4f3 Move not_in_sys_path tests to examples 2020-01-12 00:55:01 +01:00
Dave Halter
796a2b4df5 Move namespace tests to examples 2020-01-12 00:51:42 +01:00
Dave Halter
f3919823fb Moved zipped imports test files 2020-01-12 00:43:36 +01:00
Dave Halter
46f8e53e71 Move sample_venvs to examples 2020-01-12 00:30:05 +01:00
Dave Halter
8dc7f2d899 Move the extension test to examples 2020-01-12 00:26:01 +01:00
Dave Halter
c79269b3ee Move another test to examples 2020-01-12 00:09:48 +01:00
Dave Halter
1e27491545 Remove unused test code 2020-01-12 00:07:27 +01:00
Dave Halter
f31c90926e Move implicit namespace package code to example dir 2020-01-11 22:25:12 +01:00
Dave Halter
8459b02a98 Move flask tests to examples folder 2020-01-11 22:01:33 +01:00
Dave Halter
ba6154c314 Move the absolute import test files 2020-01-11 21:59:21 +01:00
Dave Halter
095f1295af Avoid a bug that a compiler might have found, fixes #1469 2020-01-11 21:35:39 +01:00
Dave Halter
4f56ec5daf Make sure the latest changes work with Python 3.6/3.7 2020-01-10 15:14:22 +01:00
Dave Halter
3ba68b5bc6 Properly convert compiled values to generic classes 2020-01-10 15:09:16 +01:00
Dave Halter
cac73f2d44 Make Union/Optional work with compiled objects 2020-01-10 13:34:10 +01:00
Dave Halter
ba7776c0d9 Make sure that CompiledValue can deal with string annotations
Fixes #952
Inspired by #1461
2020-01-10 12:40:24 +01:00
Dave Halter
072d506302 Avoid a few warnings 2020-01-10 11:59:11 +01:00
Dave Halter
76a4820926 Skip a test that doesn't work in Python 2 2020-01-10 10:30:53 +01:00
Dave Halter
10c5990614 Remove a statement that didn't make sense 2020-01-07 22:20:36 +01:00
Dave Halter
a0536bd854 Remove a method that was not necessary 2020-01-07 18:42:06 +01:00
Dave Halter
800ab65701 Fix a bug where parent_context was a value 2020-01-07 11:27:36 +01:00
Dave Halter
fdb5071bec Fix some issues with converting names, see #1466 2020-01-07 10:59:15 +01:00
Dave Halter
a17b56f260 Use one single way to convert stubs to Python, see #1466 2020-01-07 10:02:31 +01:00
Dave Halter
9b9cacfbf9 Make sure to use _stub_to_python_value_set for all conversions, see #1466 2020-01-07 01:27:50 +01:00
Dave Halter
d8deceb4b1 Make sure fixture resolving works in conftest.py, see #791 2020-01-06 23:27:25 +01:00
Dave Halter
9c4cd40b7e Fix signatures when used for Generic classes, fixes #1468 2020-01-06 09:40:57 +01:00
Dave Halter
4243d01560 Make sure inheritance works for fixtures, fixes #791 2020-01-05 19:13:56 +01:00
Dave Halter
5da9f9facd Add a test to check if numpy tensorflow stuff is now cached, see #1116 2020-01-05 18:29:02 +01:00
Dave Halter
ea0972d7ac Make sure to check the module cache before loading a module (again)
This hopefully results in some performance improvements (maybe numpy?).
2020-01-05 18:28:34 +01:00
Dave Halter
bf446f2729 Add a completion cache for numpy/tensorflow, fixes #1116 2020-01-05 18:13:24 +01:00
Dave Halter
1cdeee6519 Ignore processing param names, fixes #520 2020-01-05 02:38:54 +01:00
Dave Halter
cc1664c69a Avoid using params in tests and use get_signatures().params 2020-01-05 02:09:22 +01:00
Dave Halter
a7415be0ea Make sure params have no name 2020-01-05 01:55:29 +01:00
Dave Halter
74fc29be9a Make sure that kwargs are not repeated when they are inferred 2020-01-05 01:48:10 +01:00
Dave Halter
aca2a5a409 Undo finding signatures for everything and only do it for stubs and non-statements when used in docstrings 2020-01-04 16:00:07 +01:00
Dave Halter
088fca2f8e Fix an issue with the is_big_annoying_library function, see #520 2020-01-04 13:33:06 +01:00
Dave Halter
1813105b69 Make sure decorators are also not inferred for big annoying libraries, see #520 2020-01-04 13:26:55 +01:00
Dave Halter
e30385465c Make sure the repr of compiled access isn't huge 2020-01-04 13:10:46 +01:00
Dave Halter
47d3aa73dc Disable some features for big annoying libraries like pandas, tensorflow, see #520 2020-01-04 02:39:36 +01:00
Dave Halter
441ede2c7f Fix a debug message 2020-01-04 01:32:02 +01:00
Dave Halter
dfc6ea8ce2 Fix a small issue 2020-01-04 01:19:12 +01:00
Dave Halter
673ea0c5a5 Little refactoring 2020-01-03 10:38:00 +01:00
Dave Halter
0e707d3824 Remove the old definition tests
The reason for this is that they haven't been used in years and don't really
make sense, because the way we now resolve parentheses is by executing the
result.

IMO this was a good patch at the time, but doesn't make sense anymore. Let me
know if you disagree ~dave.
2020-01-03 00:59:17 +01:00
Dave Halter
92a2e17a9e Remove get_signatures again from names 2020-01-03 00:54:13 +01:00
Dave Halter
3b6bbab556 Infer doctests and signatures uniformly, fixes #1466 2020-01-03 00:45:14 +01:00
Dave Halter
2d31e2e760 Fix a small pytest fixture bug 2020-01-03 00:03:32 +01:00
Dave Halter
bac91652ea Raise a deprecation warning on Definition.params 2020-01-02 16:11:58 +01:00
Dave Halter
67b720d939 Remove a weird assert 2020-01-02 01:58:21 +01:00
Dave Halter
ff96b052d0 Make sure coverage works again 2020-01-02 01:28:30 +01:00
Dave Halter
9824929ad1 Use Python 3.7 for calculating test coverage 2020-01-02 00:23:25 +01:00
Dave Halter
a36d609756 Remove dead code 2020-01-01 23:23:29 +01:00
Dave Halter
04a738c014 Remove unnecessary code 2020-01-01 23:11:02 +01:00
Dave Halter
0a53ce5136 Separate getting docstrings and getting signatures for names, see discussion #1466 2020-01-01 23:05:06 +01:00
Dave Halter
bb3a81c578 LazyInstanceClassName -> Use NameWrapper 2020-01-01 20:27:07 +01:00
Dave Halter
54bd0b437f Make sure that equals will only be added to keyword arguments and not just randomly 2020-01-01 19:00:17 +01:00
Dave Halter
9dc18054ee Make some test code prettier 2020-01-01 17:36:42 +01:00
Dave Halter
cab7c6fdc7 Remove some skips around attribute docstrings 2020-01-01 17:30:25 +01:00
Dave Halter
1cc8f96f26 Add some more dict completion tests with whitespace 2020-01-01 17:14:11 +01:00
Dave Halter
47e2cf95d2 Change ModuleValue param order and add defaults 2020-01-01 17:07:19 +01:00
Dave Halter
cf1f66600c Make sure to pass tests again on Python 3.4 2020-01-01 16:15:21 +01:00
Dave Halter
8770e12d16 Make sure that include_signature always works, fixes #1466 2020-01-01 16:10:19 +01:00
Dave Halter
8e2bfdc07e Add a test for #1465 2020-01-01 14:03:42 +01:00
Dave Halter
ce748e6dc7 Skip dict key completion tests for Python 3.5, because it's just annoying with all the f-string stuff 2020-01-01 13:13:10 +01:00
Dave Halter
4837822e32 Revert "Use the root implementation for get_root_context"
Was not able to pass the tests with it.

This reverts commit ba6cd1e2d4.
2020-01-01 12:18:44 +01:00
Dave Halter
3ae0bb9805 Added debug.warning to coveragerc, it's not relevant 2020-01-01 03:28:21 +01:00
Dave Halter
829ee0e6b0 Remove unused code 2020-01-01 03:27:17 +01:00
Dave Halter
ba6cd1e2d4 Use the root implementation for get_root_context 2020-01-01 03:24:09 +01:00
Dave Halter
87a0566637 Add github sponsor FUNDING.yml file 2020-01-01 03:16:03 +01:00
Dave Halter
57e18da7ae Merge branch 'qa' of https://github.com/blueyed/jedi
Made some slight adaptations
2020-01-01 03:14:49 +01:00
Dave Halter
8cdd9d3de5 Get rid of most flake8 errors 2020-01-01 02:43:57 +01:00
Dave Halter
66ad620692 Get rid of a lot of flake8 errors 2020-01-01 02:42:31 +01:00
Dave Halter
818577f423 Make sure to get completions for backticks in docstrings work, see #860 2020-01-01 01:53:55 +01:00
Dave Halter
cea7a12908 Some more clarifications around docstrings, see #860 2020-01-01 01:45:58 +01:00
Dave Halter
50c5eb5786 Get doctest completions working, fixes #860 2020-01-01 00:59:44 +01:00
Dave Halter
8914bbbcc3 Fix tests, skip more Python 2 2019-12-31 22:43:32 +01:00
Dave Halter
dfd7910dd3 Make sure test prefixed functions are checked for pytest fixtures, see #791 2019-12-31 21:31:58 +01:00
Dave Halter
1da0a7bd58 Make sure pytester is also used for fixtures, see #791 2019-12-31 21:30:56 +01:00
Dave Halter
e4cf9293c2 Clarify a sentence around virtualenv security, see #1250 2019-12-31 19:20:59 +01:00
Dave Halter
c8b3443d5f Add the CHANGELOG entries for dict completions. 2019-12-31 19:12:15 +01:00
Dave Halter
469ddc281d Merge branch 'dict', fixes #951 2019-12-31 19:05:15 +01:00
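The dict branch merged above completes literal dict keys inside subscript brackets. A minimal sketch, assuming the string keys are offered once the opening quote is typed:

    import jedi

    code = "colors = {'red': 1, 'green': 2}\ncolors['"
    # Completing at the end of the code should offer the literal keys.
    for completion in jedi.Script(code).complete():
        print(completion.name)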
Dave Halter
cf26ede702 Add some more tests to check if getitem on stuff like dict(f=3) works 2019-12-31 19:04:37 +01:00
Dave Halter
5853c67906 Write tests for dict getitem 2019-12-31 18:53:35 +01:00
Dave Halter
83ce8b1162 Make the completions possible for Interpreter objects 2019-12-31 18:34:50 +01:00
Dave Halter
b7a8929905 Add a few more tests for dict completions 2019-12-31 11:23:54 +01:00
Dave Halter
ca13c44788 Make sure to avoid duplicates in completions 2019-12-31 11:16:11 +01:00
Dave Halter
94a97ff8e8 Fix remaining issues with dict completions 2019-12-30 22:59:01 +01:00
Dave Halter
46ac4371df Make most dict completions possible 2019-12-30 14:15:32 +01:00
Dave Halter
9fa4811425 Get dict completions mostly working 2019-12-30 03:34:18 +01:00
Dave Halter
7e769b87f3 Fix some more dict tests 2019-12-30 00:29:55 +01:00
Dave Halter
c7296ade68 Merge branch 'master' into dict 2019-12-28 12:17:04 +01:00
Dave Halter
eff670679c Make sure to mention that Jedi understands Pytest fixtures 2019-12-28 00:02:40 +01:00
Dave Halter
3ec73f1da3 Fix namedtuple issues that were uncovered by the 'self' changes 2019-12-27 23:57:22 +01:00
Dave Halter
cc136a2879 Self manipulations are now more correct, fixes #1392 2019-12-27 19:00:29 +01:00
Dave Halter
73161fe72e Skip pytest tests when environments is not the same one 2019-12-27 16:54:11 +01:00
Dave Halter
35fb8a942c Make sure pytest stdlib fixtures are completable 2019-12-27 16:28:07 +01:00
Dave Halter
e86487cb96 Make sure the monkeypatch fixture completion works 2019-12-27 16:13:20 +01:00
Dave Halter
b4163a3912 Merge branch 'pytest', fixes parts of #791 2019-12-27 14:13:46 +01:00
Dave Halter
dc3d6a3975 Fix python 2 tests 2019-12-27 14:13:35 +01:00
Dave Halter
0931c5492d Fix tests 2019-12-27 13:30:53 +01:00
Dave Halter
7715655c96 Fix selection of what is a pytest fixture and what isn't 2019-12-27 13:26:31 +01:00
Dave Halter
4c22f4dbb1 Fix completion for non-pytest params 2019-12-27 13:02:16 +01:00
Dave Halter
31936776a5 Make completion of pytest fixtures possible 2019-12-27 12:29:18 +01:00
Dave Halter
8611fcf8ea Fix some tests 2019-12-27 11:59:40 +01:00
Dave Halter
ff0e3ec8fb Fix _BuiltinMappedMethod to use a ValueWrapper 2019-12-27 11:52:14 +01:00
Dave Halter
a8782d0070 Make sure param completions work the right way 2019-12-27 11:48:39 +01:00
Dave Halter
70bf3d9586 Deprecate Python 2 support 2019-12-27 11:29:39 +01:00
Dave Halter
8c737ba17e Make goto work for pytest fixtures 2019-12-27 10:51:49 +01:00
Dave Halter
5a54d94aa5 Make sure that inferring params is possible from the API 2019-12-27 10:36:13 +01:00
Dave Halter
02320f832d Check better for when something is a picture 2019-12-27 02:12:02 +01:00
Dave Halter
148fffae28 Make yield pytest fixtures work 2019-12-27 01:50:17 +01:00
Dave Halter
c45c8ec8ef Get some pytest fixtures working with some side effects 2019-12-27 01:04:01 +01:00
Dave Halter
dd89325441 Make sure py__name__ and name are defined on all values 2019-12-27 00:31:58 +01:00
Dave Halter
82ed28955d Fix tests 2019-12-25 15:02:35 +01:00
Dave Halter
f3c8bc10f5 Keyword completion after ... should not work, fixes davidhalter/jedi-vim#506 2019-12-25 14:44:25 +01:00
Dave Halter
9fb94bb621 Fix python 2 environment finalizing, fixes #1412 2019-12-25 14:32:06 +01:00
Dave Halter
3e478cc6bb Remove a function that did nothing anymore 2019-12-25 03:54:16 +01:00
Dave Halter
a4a0d482a2 Make sure modules for dynamic searches are not checked twice 2019-12-25 03:53:45 +01:00
Dave Halter
3b2dddd1d3 Make sure classmethod param completion works better for the first param 2019-12-25 03:39:37 +01:00
Dave Halter
110d89724e Make sure staticmethod params are (mostly) inferred correctly, fixes #735 2019-12-24 21:32:12 +01:00
Dave Halter
7a988d9d8b Python 2 test fixes 2019-12-24 19:52:44 +01:00
Dave Halter
6daa03e98d Add the fix for #997 to the changelog 2019-12-24 12:51:14 +01:00
Dave Halter
9578e4252b Goto on a function/attribute in a class now goes to the definition in its super class, fixes #1175 2019-12-24 12:49:23 +01:00
Dave Halter
a21f443756 Fix a few tests 2019-12-24 12:32:13 +01:00
Dave Halter
1d17033717 Add support for completion even when __getattr__ is present, fixes #997 2019-12-24 01:44:53 +01:00
Dave Halter
eca8278eef Fix an error recovery goto issue, fixes davidhalter/jedi-vim#962 2019-12-23 10:09:45 +01:00
Dave Halter
d9383f1927 Add a test to make sure some renamings always work
fixes davidhalter/jedi-vim#552
2019-12-23 00:48:01 +01:00
Dave Halter
1087b62e95 Refactor references: Matching more names that might be related
Fixes davidhalter/jedi-vim#900.
See also davidhalter/jedi-vim#552.
2019-12-23 00:41:22 +01:00
Dave Halter
f2a64e24c8 Catch an additional case for get_context where the cursor is e.g. on the function name 2019-12-22 17:35:40 +01:00
Dave Halter
fcf8506531 Add Script().get_context, fixes #253 2019-12-22 17:19:01 +01:00
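Script.get_context, added above, reports the class, function or module that encloses a position. A minimal sketch:

    import jedi

    code = "class Foo:\n    def bar(self):\n        pass\n"
    context = jedi.Script(code).get_context(line=3, column=8)
    # Inside the body of `bar`, the enclosing context is the method itself.
    print(context.full_name)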
Dave Halter
22c3beffd0 Fix some issues with Definition.parent() 2019-12-22 15:37:53 +01:00
Dave Halter
0202d4ed0a Test parents a bit better 2019-12-22 14:32:07 +01:00
Dave Halter
63a9418bd5 Refactor tests a bit 2019-12-22 02:32:31 +01:00
Dave Halter
fc785ce6ea Attribute docstrings work now, fixes #138 2019-12-22 02:05:40 +01:00
Dave Halter
4161bfc7f2 Avoid some duplication of code 2019-12-22 01:24:50 +01:00
Dave Halter
290d1c151a Remove the _Help class completely 2019-12-21 20:07:43 +01:00
Dave Halter
fcede44c2a Move the docstring checking code to the names 2019-12-21 20:06:37 +01:00
Dave Halter
536fd8c7c0 Add the Script.help function, fixes #392 2019-12-21 12:46:58 +01:00
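Script.help, added above, is a goto-like call that also covers keywords and other targets infer/goto skip, returning names whose docstring() carries the documentation. A minimal sketch:

    import jedi

    script = jedi.Script("import keyword\nkeyword.iskeyword\n")
    for name in script.help(line=2, column=10):
        print(name.name)
        print(name.docstring()[:60])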
Dave Halter
341d79681a Add big API changes to Changelog 2019-12-21 03:12:28 +01:00
Dave Halter
66a36c3b94 Merge branch 'api', fixes #1166 2019-12-20 20:05:10 +01:00
Dave Halter
fcecac20ec Add tests to fix all the deprecations 2019-12-20 20:03:12 +01:00
Dave Halter
9e818dc377 Test setting line/column multiple times 2019-12-20 20:03:00 +01:00
Dave Halter
e5496381f3 sith now also uses the new API 2019-12-20 19:45:20 +01:00
Dave Halter
5fc308f1f8 call signature -> signature 2019-12-20 19:41:57 +01:00
Dave Halter
694b05bb8c usage -> reference 2019-12-20 19:26:33 +01:00
Dave Halter
bd861e40a8 Rename references file 2019-12-20 19:25:46 +01:00
Dave Halter
e1d787821b usages -> find_references 2019-12-20 19:23:26 +01:00
Dave Halter
adff6d34a4 goto_assignment -> goto everywhere where it was left 2019-12-20 19:15:41 +01:00
Dave Halter
d7d9c9642a Don't use goto_definitions anymore, use infer 2019-12-20 19:06:24 +01:00
Dave Halter
4bbaec68e8 Make sure goto_definitions is no longer used in the main code 2019-12-20 18:47:04 +01:00
Dave Halter
dbb61357c3 Make sure that jedi.names is not referenced anymore 2019-12-20 18:04:47 +01:00
Dave Halter
f90aeceb27 Move names to Script and mark deprecations 2019-12-20 17:55:45 +01:00
Dave Halter
7f8ba17990 Get rid of all completions usages 2019-12-20 17:47:37 +01:00
Dave Halter
5bf6e7048b A few renames for readability in the api/completion.py file 2019-12-20 17:40:04 +01:00
Dave Halter
ebe9921208 Try to use the new API names everywhere 2019-12-20 17:29:42 +01:00
Dave Halter
f03c70e577 Refactor run.py to use the new API 2019-12-20 17:25:44 +01:00
Dave Halter
2cc898ba35 Get rid of completions in tests 2019-12-20 16:54:51 +01:00
Dave Halter
38460ce9d7 Use complete instead of completions in test_api/ 2019-12-20 16:16:01 +01:00
Dave Halter
2b5af19989 Fix signature issues 2019-12-20 16:14:01 +01:00
Dave Halter
bcf726054e Make sure the line numbers are validated for the new API methods 2019-12-20 16:00:49 +01:00
Dave Halter
1514695fc1 usages -> find_references, see #1166 2019-12-20 15:46:17 +01:00
Dave Halter
f32b0aebeb call_signatures -> find_signatures 2019-12-20 15:41:20 +01:00
Dave Halter
6c7b8f669f goto_definitions -> infer; goto_assignments -> goto, see #1166 2019-12-20 15:35:19 +01:00
Dave Halter
87d5334b9e completions -> complete, see #1166 2019-12-20 15:30:35 +01:00
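The run of renames above is the core of the new API surface: positions move from the Script constructor to the individual calls, and completions/goto_definitions/goto_assignments become complete/infer/goto. A minimal sketch of the new names:

    import jedi

    code = "import os\nos.path.join"
    script = jedi.Script(code)

    completions = script.complete(2, 3)   # previously Script(code, 2, 3).completions()
    definitions = script.infer(2, 10)     # previously goto_definitions()
    assignments = script.goto(2, 10)      # previously goto_assignments()

    print([c.name for c in completions][:3])
    print([d.name for d in definitions])
    print([a.name for a in assignments])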
Dave Halter
cefc4a46a3 Add latest changes to Changelog 2019-12-20 14:57:58 +01:00
Dave Halter
39605bfa08 Make sure goto_assignments is no longer used on Definition 2019-12-20 14:43:20 +01:00
Dave Halter
1f4be4bc51 Make sure that goto is used instead of goto_assignments 2019-12-20 14:31:42 +01:00
Dave Halter
afbd8cad89 Don't test Python 3.4 in tox anymore by default 2019-12-20 11:52:19 +01:00
Dave Halter
0194efcd6b Add the release date to Changelog 2019-12-20 11:26:55 +01:00
Dave Halter
96156dd5df Jedi needs at least parso 0.5.2 now 2019-12-20 11:00:37 +01:00
Dave Halter
095a9c530a Fix a rb byte literal test 2019-12-20 10:49:28 +01:00
Dave Halter
45edfbdeeb Goto definition doesn't work on strings anymore, fixes microsoft/vscode#81520 2019-12-20 10:29:54 +01:00
Dave Halter
540a57766d Make sure that sequence literals have the right generic classes, fixes #1416 2019-12-20 01:33:41 +01:00
Dave Halter
e56d4fde98 Improved Generic subclass matching 2019-12-20 01:33:41 +01:00
Dave Halter
51e2e90dce Make sure overload checks work for TypeAlias, see #1416 2019-12-20 01:33:41 +01:00
Dave Halter
902b355aea Avoid recursion in a specific example, see also #1458 2019-12-20 01:33:41 +01:00
Tim Gates
542a2a339e Fix simple typo: wheter -> whether (#1460)
Closes #1459
2019-12-17 10:05:21 +01:00
Dave Halter
41a6591d88 Completions.complete returns None for fuzzy completions #1409 2019-12-15 19:56:56 +01:00
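Fuzzy completion is opt-in via a flag on complete(); per the commit above, Completion.complete (the text that would be appended) is None in fuzzy mode, since there is no single suffix to insert. A minimal sketch:

    import jedi

    script = jedi.Script("import collections\ncollections.dfdict")
    for c in script.complete(2, 18, fuzzy=True):
        # "dfdict" can still match defaultdict when fuzzy matching is on.
        print(c.name, c.complete)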
Dave Halter
f91f655d55 Cleanup fuzzy tests a bit, see #1409 2019-12-15 19:50:43 +01:00
Dave Halter
49eb2c0a12 Add fuzzy completions to Changelog 2019-12-15 19:12:48 +01:00
Dave Halter
ec2391c74f Merge branch 'feature_827_fuzzy_search' of https://github.com/jmfrank63/jedi 2019-12-15 19:08:33 +01:00
Dave Halter
0ce414eb94 Python 2 compatibility 2019-12-15 18:52:16 +01:00
Dave Halter
38eb2c9ba3 Make sure that the definition order in stubs is ignored 2019-12-15 18:41:41 +01:00
Dave Halter
9d35adda02 Make sure that a goto on stubs even without the implementation is possible 2019-12-15 18:00:09 +01:00
Dave Halter
6e2a76feb9 Fix a goto case with nested pyi files 2019-12-15 17:37:24 +01:00
Dave Halter
35442eff81 Catch some cases where _sqlite3.Connection was misidentified as sqlite3.Connection 2019-12-15 16:15:55 +01:00
Dave Halter
8fc84a2aaa Rename goto_changes to options 2019-12-15 14:36:36 +01:00
Dave Halter
7bdedb40e3 Fix: Stubs in typeshed weren't loaded properly sometimes, fixes #1434 2019-12-14 21:33:00 +01:00
Dave Halter
3219f14c63 Files bigger than one MB (about 20kLOC) get cropped to avoid getting stuck completely
Fixes #843
2019-12-14 12:39:40 +01:00
Dave Halter
7639bc2da9 Upgrade typeshed, fixes #1084 2019-12-14 11:38:37 +01:00
Dave Halter
5bc6ce231b Add a typeshed README 2019-12-14 11:27:52 +01:00
Dave Halter
a6bf49783f Make sure param annotation completions work 2019-12-14 02:55:11 +01:00
Dave Halter
621e280451 Make sure that you can select virtualenvs more precisely, fixes #1407 2019-12-13 21:00:34 +01:00
Dave Halter
6b9add4264 Python 2 compatibility 2019-12-13 16:58:56 +01:00
Dave Halter
92c59180fd Make sure goto definitions works on with, fixes #1280 2019-12-13 16:57:18 +01:00
Dave Halter
923fcf95d9 Make sure that __getattr__ is always working with Interpreter
fixes #1378
2019-12-13 16:07:38 +01:00
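jedi.Interpreter, referenced above, completes against live objects by taking a list of namespaces (plain dicts) next to the source; the fix above makes that robust for objects defining __getattr__. A minimal sketch of ordinary Interpreter usage:

    import collections

    import jedi

    namespace = {'counter': collections.Counter('abracadabra')}
    # Interpreter takes source code plus a list of live namespaces.
    interp = jedi.Interpreter("counter.most_", [namespace])
    print([c.name for c in interp.complete()])   # should include 'most_common'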
Dave Halter
902f0754e0 qualified names can be None, so we need to handle it 2019-12-13 14:36:05 +01:00
Dave Halter
12b07a435d Cleanup some callbacks 2019-12-13 12:47:55 +01:00
Dave Halter
b9f8a7f52e Make sure that Python 2 passes a test on more systems 2019-12-13 12:28:03 +01:00
Dave Halter
769b3556d2 Make sure warnings are not shown if a property is executed, fixes #1383 2019-12-13 01:48:56 +01:00
Dave Halter
5e3e268cc6 Fix RecursionError: global statements in modules should just be ignored, fixes #1457 2019-12-13 00:21:36 +01:00
Dave Halter
e656a5f18f Make it possible to infer Callable TypeVars, fixes #1449 2019-12-12 23:46:55 +01:00
Dave Halter
536a77551b Account for sys path potentially not being all unicode in typeshed, fixes #1456
This is a bit stupid, but I don't care too much; it will get removed soon, when
Python 2 support is removed.
2019-12-11 00:30:37 +01:00
Dave Halter
a2cebc4b92 Make sure docstrings can always be inferred for builtins modules, fixes #1432 2019-12-11 00:06:58 +01:00
Dave Halter
3065609162 Forgot to add some test files 2019-12-09 19:26:24 +01:00
Dave Halter
8e33fd1931 Get the context of a class name right, fixes #1396 2019-12-09 09:56:03 +01:00
Dave Halter
ed3fdf8876 Make sure classmethod signatures don't include cls, fixes #1455 2019-12-09 08:58:59 +01:00
Dave Halter
46982ce42b Add a test to show that type var inference also works for tuples 2019-12-09 00:26:18 +01:00
Dave Halter
28ecc2709a Don't use globals anymore 2019-12-09 00:15:21 +01:00
Dave Halter
33224ae7e1 Remove a duplicate method 2019-12-09 00:11:51 +01:00
Dave Halter
d9260bf78b More docstrings 2019-12-09 00:07:07 +01:00
Dave Halter
a51dc54759 A bit better documentation 2019-12-09 00:02:44 +01:00
Dave Halter
5acbb06315 Refactor so typing uses BaseTypingValueWithGenerics
This makes it finally possible to use type vars with Callable and some other classes.
Fixes #1413
2019-12-08 23:50:57 +01:00
Dave Halter
7319f8bf2c Make some more classes private for inference.gradual.base 2019-12-08 23:15:31 +01:00
Dave Halter
d9ddaa31ae Use _create_instance_with_generics 2019-12-08 23:09:55 +01:00
Dave Halter
5874b0bd69 The generics manager is now part of DefineGenericBase 2019-12-08 23:03:15 +01:00
Dave Halter
9eef771ec5 Remove get_index_and_execute and use something else 2019-12-08 22:55:52 +01:00
Dave Halter
9e6c53151b _InstanceWrapper to _GenericInstanceWrapper 2019-12-08 22:47:07 +01:00
Dave Halter
84d10657a3 Another rename for readability 2019-12-08 22:46:35 +01:00
Dave Halter
5c4b3da45d Merge GenericClass and _AbstractAnnotatedClass 2019-12-08 22:45:30 +01:00
Dave Halter
ad92882c48 Use the generics manager for all the typing classes 2019-12-08 22:42:01 +01:00
Dave Halter
8213d183fb Start using generic managers, for #1413 2019-12-08 21:56:30 +01:00
Dave Halter
4fca7bd22d Start working on generic managers, see #1413 2019-12-08 20:35:00 +01:00
Dave Halter
c112858a1c Move iter_over_arguments to a separate file 2019-12-08 20:14:15 +01:00
Dave Halter
deaa7265dd value_of_index -> context_of_index 2019-12-08 20:09:53 +01:00
Dave Halter
72fc85f4c3 Try to prepare DefineGenericBase for a more general usage 2019-12-08 19:58:00 +01:00
Dave Halter
df697cfb03 Make AbstractAnnotatedClass private 2019-12-08 19:00:26 +01:00
Dave Halter
fd054d1add Move parts of AbstractAnnotatedClass to the new class DefineGenericBase 2019-12-08 18:58:28 +01:00
Dave Halter
95763f0bb0 Formatting 2019-12-08 18:23:55 +01:00
Dave Halter
aab0002950 Rename two classes to make some things clearer 2019-12-07 15:32:59 +01:00
Dave Halter
ddbb87fd1d Make some lines shorter 2019-12-07 15:29:56 +01:00
Dave Halter
bc99fbdfea Remove an unused InstanceArguments 2019-12-07 15:27:26 +01:00
Dave Halter
48ac0c9421 Move more stuff from gradual/typing.py to gradual/base.py 2019-12-07 15:27:14 +01:00
Dave Halter
37a9d1536c Remove TypingName, it looks like it's not used 2019-12-07 15:12:27 +01:00
Dave Halter
3dbe5c10ae Start splitting up gradual/typing.py 2019-12-07 15:11:50 +01:00
Dave Halter
ab8f0ba834 Make sure Callable TypeVars are better identified, solves a part of #1413 2019-12-07 15:02:41 +01:00
Dave Halter
4bd7c2e627 Remove a TODO that was implemented 2019-12-07 14:56:20 +01:00
Dave Halter
1f73c65dcd Pin colorama to a version that works for Python 3.4 2019-12-07 01:09:36 +01:00
Shane Steinert-Threlkeld
bd5909e7b2 Find active conda environment and set it as default (if there is one) (#1440)
* add detection of conda environments

* changed get_default_environment to look for conda

* updated comment on get_default_environment to mention CONDA_PREFIX

* added myself to authors

* simple fix for mistaken conda paths
2019-12-07 01:04:41 +01:00
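Per the commit above, get_default_environment prefers an activated conda environment (detected via CONDA_PREFIX) before falling back to the usual detection. A minimal sketch, assuming that behavior:

    import jedi

    # With CONDA_PREFIX set (an active conda env), this should point at the
    # conda interpreter; otherwise Jedi falls back to its default detection.
    env = jedi.get_default_environment()
    print(env.executable, env.version_info)

    # An environment can also be passed explicitly to Script.
    script = jedi.Script("import sys\nsys.ver", environment=env)
    print([c.name for c in script.complete()])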
Dave Halter
6f70e759a4 Remove Python 3.4 test from appveyor
3.4 is end-of-life anyway so it doesn't really matter. It's also still tested
to a certain degree on travis and with the test environments. So this change
doesn't really mean much except a lower maintenance burden.
2019-12-07 01:02:18 +01:00
Dave Halter
0474371f23 Make sure overload signatures work, see #1417 2019-12-07 00:30:37 +01:00
ANtlord
654475b7d6 Inferring multiple fields is fixed. 2019-12-06 23:58:13 +02:00
ANtlord
fbeff00761 Decimal, DurationField, DateField, DateTimeField, DecimalField Django types are resolved. 2019-12-06 23:47:19 +02:00
Dave Halter
c582545628 Shorten a line to < 100 chars 2019-12-06 16:41:47 +01:00
Dave Halter
759808e8bb Fix a tuple test 2019-12-05 20:01:27 +01:00
Dave Halter
36b800f8d3 Make sure that Tuple autocompletes properly, fixes #1389 2019-12-05 18:55:33 +01:00
Dave Halter
7e64bfa075 Avoid more Python 2 2019-12-05 17:44:47 +01:00
Dave Halter
54f4bd0bad Fix issues with interpreter completions on unittest.mock.
For 3.6+ an error was ignored that led to crashes. In 3.5 the OOM killer
eventually arrived...

Fixes #1415
2019-12-05 10:09:22 +01:00
Dave Halter
cf65ecdb96 Start writing the changelog 2019-12-05 01:13:52 +01:00
Dave Halter
700bd12122 Improve call signature detection by a lot
Fixes #1399
2019-12-04 23:55:26 +01:00
Dave Halter
4ba3dc69b3 Make sure we use the right context in case of goto with decorators, fixes #1427 2019-12-04 00:25:43 +01:00
Dave Halter
b8a1f6da55 Python 2 still sucks. 2019-12-03 23:41:33 +01:00
Dave Halter
19aa50bb7f Make sure an assert no longer causes unnecessary trouble
Fixes #1426, fixes #1414
2019-12-03 23:10:24 +01:00
Dave Halter
8aee1e6213 Make sure that decorator signature completion is working, see #1433 2019-12-03 22:20:38 +01:00
Dave Halter
f46f00bc71 Avoid wrong random call signature completion, fixes #1433 2019-12-03 22:12:33 +01:00
Dave Halter
fea80c7fc8 Make sure py__iter__ has the right signature 2019-12-03 17:18:55 +01:00
Dave Halter
87852c1295 Remove probably dead code 2019-12-03 17:16:38 +01:00
Dave Halter
3d784c748e Python 2 2019-12-03 16:53:07 +01:00
Dave Halter
e5d1091e80 Make sure execute_function_slots and get_function_slots is defined for all instances 2019-12-03 16:50:34 +01:00
Dave Halter
7254bec92c Upgrade typeshed to the latest commits
Also fixes some small Jedi issues with it (None interpretation in annotations mostly)
2019-12-02 10:14:59 +01:00
Dave Halter
74de9e7d53 Make sure that the differences are calculated a bit more reliable, fixes #1429 2019-12-02 00:22:15 +01:00
Dave Halter
f54291a30b Unfortunately committed something that should not have been committed 2019-12-02 00:11:29 +01:00
Dave Halter
4d3f6fa790 Fix Python 2 issues 2019-12-02 00:04:22 +01:00
Dave Halter
b8dfbc5d18 A CompiledInstance is not really compiled, it's an instance of a compiled class value 2019-12-01 23:50:46 +01:00
Dave Halter
f43d144e23 Remove is_package on contexts 2019-12-01 21:01:33 +01:00
Dave Halter
76e0e6a8c5 Fix some more package issues 2019-12-01 20:42:55 +01:00
Dave Halter
7b6405f76c Get rid of py__package__ from contexts 2019-12-01 19:33:24 +01:00
Dave Halter
8a26a23884 Make is_package a function and call it consistently 2019-12-01 19:31:31 +01:00
Dave Halter
6ffeea7eea Make sure code_lines works on stubs, even if they are builtins 2019-12-01 19:10:08 +01:00
Dave Halter
582df2f76d Fix the MixedContext and also use MixedModuleContext 2019-12-01 17:22:36 +01:00
Dave Halter
5c79472024 Separate CompiledModuleContext from CompiledContext, fixes #1428 2019-12-01 14:56:55 +01:00
Dave Halter
378712dbc1 Fix contextualizing of subscriptlist 2019-12-01 11:07:18 +01:00
Dave Halter
b13c4c446f Fix a globals context issue, fixes #1435 2019-12-01 01:28:03 +01:00
Dave Halter
e81c241905 Remove Python 2 implicit relative imports feature
Python 2 is almost gone, so I don't support those old features anymore.
2019-12-01 00:45:08 +01:00
Dave Halter
c77f33b73b A small rename of a value that is actually a context 2019-12-01 00:31:22 +01:00
Dave Halter
b895924311 Merge pull request #1451 from pappasam/FIX_SHOW_SYSTEM_FOLDERS
Fix: no longer shows folders recursively to root
2019-12-01 00:12:51 +01:00
Dave Halter
86071dda54 Use a different sys path for import completions and import type inference
Fix tests of the #1451 pull request
2019-12-01 00:12:19 +01:00
Sam Roeca
1ba83414a5 Change search strategy for adding parent paths:
1. skip dirs with __init__.py
2. Stop immediately when above self._path
2019-11-30 10:14:28 -05:00
Dave Halter
59c5b51c0d Add __ne__ to BaseValueSet. Might have caused issues in Python 2, see #1442 2019-11-30 10:01:27 +01:00
Sam Roeca
c2fd7b3104 Fix: upward search omits unnecessary paths
In the previous implementation, Jedi's traverse_parents function
traversed parent directories up to the system root every time. This would
inadvertently add every folder up to the system root every time. Obviously,
this is not the behavior desired for the import system.

This commit collects directories in an upward search until we:

1. Hit any directory without an __init__.py, AND
2. Are above self._path.
2019-11-29 21:12:12 -05:00
Sam Roeca
4bc4f167e9 Revert "Fix: no longer shows folders recursively to root"
This reverts commit 03b4177d3d.
2019-11-29 20:11:23 -05:00
Dave Halter
3c68d3d341 Avoid finding submodules for compiled objects, because, at least for now, it's not implemented 2019-11-29 17:18:04 +01:00
Dave Halter
8478ad7ffb Make sure that goto on a subscript colon doesn't crash 2019-11-29 17:10:07 +01:00
Dave Halter
98b592cb68 Fix getitem in compiled
This change just applies a change to CompiledObject that was done to values a long time ago
2019-11-29 16:14:17 +01:00
Dave Halter
c38e4fce70 Make sure py__get__ is defined on all values
Also define matches_signature on all signatures, there's definitely cases where that might be called
2019-11-29 15:04:04 +01:00
Dave Halter
6e5e706288 Fix file name completions when file name is too long 2019-11-29 14:11:31 +01:00
Dave Halter
0e92be66db Fix an issue around completions in comments before strings 2019-11-29 13:44:12 +01:00
Sam Roeca
03b4177d3d Fix: no longer shows folders recursively to root
In the previous implementation, Jedi's traverse_parents function
traversed parent directories up to the system root every time. This would
inadvertently add every folder up to the system root every time. Obviously,
this is not the behavior desired for the import system.

This pull request provides a new argument to the traverse_parents
function, "root", which represents the root parent for the search. This
argument defaults to None, thereby preserving the existing behavior of
the function.

I chose to duplicate some code for performance reasons. Since I'm trying
to avoid too much path manipulation magic, we do:

* a search to a valid specified root, OR
* a simple upward search until hitting the system root when there is no
valid root specified.
2019-11-28 23:04:08 -05:00
Samuel Roeca
761f0828c7 Fix missing inference for typing.Type[typing.TypeVar] (#1448)
* Add Type[TypeVar] support
* Completion tests for typing.Type[typing.TypeVar]
2019-11-27 22:10:58 +01:00
Dave Halter
facd21afc6 Remove Python 3.9 dev build from travis, it's not needed 2019-11-27 20:50:29 +01:00
Dave Halter
e1d840c89b Start to use Python 3.8 in the normal CI pipeline 2019-11-27 20:15:52 +01:00
Dave Halter
15c13c1386 Fix the pow test for Python 3.8 2019-11-27 20:12:50 +01:00
Jérome Perrin
6d632a01eb Fix inference from type comment for function parameter with dot
fix for https://github.com/davidhalter/jedi/issues/1437
2019-11-08 13:41:17 +01:00
ANtlord
4b15c8459a Merge branch 'master' of https://github.com/davidhalter/jedi 2019-10-28 08:52:56 +02:00
Endill
00b220516d Fix annotation string generated from wrong object 2019-10-26 13:58:15 +02:00
Johannes Maria Frank
364a527fd9 Added missing sorted to scandir 2019-10-22 16:49:35 +01:00
Johannes Maria Frank
2039ab9a3c Fixed pytest fixtures for test_api 2019-10-22 16:47:06 +01:00
Johannes Maria Frank
d48816603e Sorted scandir results to have completions ordered 2019-10-22 16:34:47 +01:00
Johannes Maria Frank
f61d041830 Switched back to fuzzy off as default 2019-10-22 16:06:46 +01:00
Johannes Maria Frank
f7fae4dde7 Added file fuzzy match and refactored 2019-10-22 15:50:16 +01:00
ANtlord
893b695a61 Merge branch 'master' of https://github.com/davidhalter/jedi 2019-10-21 22:27:06 +03:00
Johannes Maria Frank
2653752f9c Corrected formatting 2019-10-21 15:44:03 +01:00
Johannes Maria Frank
d73f32745d Fixed bug for python 2 2019-10-10 15:02:00 +02:00
Johannes Maria Frank
1fa678e3fe Corrected an error in the math fuzzy completion test 2019-10-10 13:23:33 +02:00
Johannes Maria Frank
a84087682d Adapted results for different Python versions 2019-10-10 12:10:19 +02:00
Johannes Maria Frank
48ffc5473a Added test for math og 2019-10-09 10:37:46 +02:00
Johannes Maria Frank
0b56bf8f08 Added completions test with fuzzy=True 2019-10-04 17:18:01 +01:00
Johannes Maria Frank
85278242c3 Switched to fuzzy boolean 2019-10-02 00:28:31 +01:00
Dave Halter
6baa3ae8e1 Start working on uniting parts of code of file path/dict completion 2019-09-27 09:36:37 +02:00
Johannes Maria Frank
0bbc8d6e9a Added match_method parameter 2019-09-26 09:12:15 +01:00
Johannes Maria Frank
8f306953da Added experimental substring and fuzzysearch 2019-09-26 08:17:30 +01:00
Dave Halter
88ebb3e140 Get a few more tests passing about dict key strings 2019-09-23 21:05:01 +02:00
Dave Halter
954fd56fcc Get some more dict completions working 2019-09-23 09:21:43 +02:00
Dave Halter
e8afb46cde Get the first dict completions passing 2019-09-23 09:18:26 +02:00
Dave Halter
a6fcf779d4 Fix a small issue created in #1398 2019-09-21 23:29:07 +02:00
Levente Polyak
527ef6fcdd fix static analysis test skips with latest pytest
Latest pytest ensures pytest.skip is being called with a str parameter.
However, test_static_analysis passed a skip parameter that
contains a tuple returned from skip_python_version, leading to a test
regression.
Unify the version skip reasons for both static analysis and integration
tests by using a shared BaseTestCase parent to avoid code duplication.
Furthermore, handle test_static_analysis skip_reason extraction
orthogonally to test_completion.
2019-09-21 21:42:05 +02:00
Philipp A
a0f95fc89f Fixed rST in changelog 2019-09-21 21:35:04 +02:00
Maxim Cournoyer
96d650cab3 test: test_completion: Dynamically resolve current directory name.
This fixes issue #1395 (see:
https://github.com/davidhalter/jedi/issues/1395).

* test/test_api/test_completion.py(current_dirname): New variable.
(test_file_path_completions): Use it.
2019-09-19 23:35:18 +02:00
ANtlord
659aaf6861 Naming corrections. 2019-09-19 08:42:39 +03:00
ANtlord
d68545d8de Merge branch 'master' of https://github.com/davidhalter/jedi 2019-09-18 09:28:30 +03:00
ANtlord
f5ae7148dd Basic Django model fields are inferred as builtin types. 2019-09-18 09:27:39 +03:00
Dave Halter
e86a2ec566 Small rename 2019-09-08 03:32:47 +02:00
Dave Halter
e179b3e526 Add a test for dict key completions 2019-09-07 02:58:21 +02:00
Dave Halter
66022edf14 Skip Python 2 tests for some array issues 2019-09-06 00:04:44 +02:00
Dave Halter
ae79919eb4 Skip some param resolving tests in Python 2/3.4 2019-09-05 18:27:37 +02:00
Dave Halter
fbe58306c3 Add a few tests for a previous assertion failure 2019-09-05 10:57:04 +02:00
Dave Halter
9c19f72af3 Make sure a compiled instance is is_compiled 2019-09-05 10:13:03 +02:00
Dave Halter
a9f1d3d9bb Reenable a test 2019-09-05 10:09:33 +02:00
Dave Halter
1db3e9a65d Disable a test in Python2 2019-09-05 10:03:50 +02:00
Dave Halter
599eded3d1 Remove a few unused imports 2019-09-05 00:54:13 +02:00
Dave Halter
4e68287bba Move eval_node to one place 2019-09-05 00:52:14 +02:00
Dave Halter
008e9860a8 Avoid creating the same object twice 2019-09-05 00:37:51 +02:00
Dave Halter
8cd5932fed Move inference_state.goto to the name and _follow_error_node_imports_if_possible away from inference_state 2019-09-05 00:34:13 +02:00
Dave Halter
67c007338a Make some dynamic array variables private 2019-09-05 00:18:01 +02:00
Dave Halter
aea2ddcbd8 ContextualizedName -> TreeNameDefinition 2019-09-05 00:15:38 +02:00
Dave Halter
4d332c32c0 Use create_name instead of duplicated logic 2019-09-05 00:04:24 +02:00
Dave Halter
02046d5333 Replace obj with value 2019-09-04 11:12:30 +02:00
Dave Halter
2faa8ade8b Remove get_object, it's not needed anymore 2019-09-04 11:04:09 +02:00
Dave Halter
f9292ca8fa Implement properties properly 2019-09-04 11:00:43 +02:00
Dave Halter
40b01bfd2c Make arguments private for instance 2019-09-04 09:34:22 +02:00
Dave Halter
46e9b9e7cf Refactor dynamic params a bit 2019-09-04 09:31:01 +02:00
Dave Halter
96848dd627 Revert "Refactor some dynamic function arguments things"
This reverts commit e7d9a59da2.
2019-09-04 09:28:31 +02:00
Dave Halter
e7d9a59da2 Refactor some dynamic function arguments things 2019-09-04 09:27:06 +02:00
Dave Halter
dd400f115a Move some annotation inferring code to proper functions 2019-09-04 01:29:41 +02:00
Dave Halter
34f131e9b3 Remove an unneeded list cast 2019-09-04 01:22:16 +02:00
Dave Halter
47d6ae3da1 SimpleParamName -> AnonymousParamName 2019-09-04 01:20:44 +02:00
Dave Halter
79f9d78c83 Make create_instance_context a lot more understandable (and shorter) 2019-09-04 01:06:25 +02:00
Dave Halter
06d2119f51 Make sure a self variable is only defined in a function not outside 2019-09-04 00:53:46 +02:00
Dave Halter
b27f47683c get_first_non_keyword_argument_values is not really used anymore 2019-09-04 00:08:49 +02:00
Dave Halter
0be9ab0caf A simplification 2019-09-04 00:03:03 +02:00
Dave Halter
c8564a68df Fix recursion issues with dynamic param lookups; defaults work again 2019-09-03 23:59:31 +02:00
Dave Halter
75262d294f Refactor search_param_names interface 2019-09-03 22:17:30 +02:00
Dave Halter
ac4dd06d11 Use get_executed_param_names if get_executed_param_names_and_issues is not necessary 2019-09-03 22:11:00 +02:00
Dave Halter
d4f3963cd0 Don't use get_executed_param_names_and_issues as an attribute on arguments 2019-09-03 22:07:34 +02:00
Dave Halter
a3659e2750 Remove AnonymousArguments 2019-09-03 21:59:50 +02:00
Dave Halter
3a74d65404 Refactor AnonymousInstance/TreeInstance, so that the anonymous instance doesn't have to use arguments 2019-09-03 21:56:48 +02:00
Dave Halter
acda3527cb Separate tree/compiled instances better 2019-09-03 21:36:13 +02:00
Dave Halter
03f6d0edf8 Get rid of create_init_executions 2019-09-03 17:50:03 +02:00
Dave Halter
c79faa6b10 Implement super() properly 2019-09-03 14:53:40 +02:00
Dave Halter
4b10644100 Start using AnonymousMethodExecutionContext instead of the normal function execution context with arguments 2019-09-03 14:44:01 +02:00
Dave Halter
274f8dbb02 Prepare instance for AnonymousMethodExecutionContext 2019-09-03 14:19:56 +02:00
Dave Halter
efa51a1d70 Use the function execution filters with proper inheritance 2019-09-03 13:55:09 +02:00
Dave Halter
0a420339e8 Deal with inheritance properly when dealing with function executions 2019-09-03 13:29:25 +02:00
Dave Halter
fe5523268e Separate FunctionExecution and AnonymousFunctionExecution 2019-09-03 13:11:50 +02:00
Dave Halter
b16c987a72 Fix static analysis for params 2019-09-03 12:45:35 +02:00
Dave Halter
35efdd84d2 Add get_param_names to the function execution, which is needed to do some filtering 2019-09-03 09:22:31 +02:00
Dave Halter
1495a0ec4c Move the normal anonymous arguments case over to names 2019-09-03 01:28:54 +02:00
Dave Halter
33586deef1 Prefer annotations in SimpleParamName 2019-09-03 01:03:10 +02:00
Dave Halter
7bdd71f9a7 Add some dynamic inference checks for annotations 2019-09-03 00:56:42 +02:00
Dave Halter
a67861a320 Avoid using arguments.get_executed_param_names_and_issues 2019-09-03 00:47:10 +02:00
Dave Halter
fe8a605d4a Remove get_executed_param_names_and_issues from FunctionExecution 2019-09-03 00:30:22 +02:00
Dave Halter
7ad7d22fb0 Use function/arguments instead of execution 2019-09-02 21:50:56 +02:00
Dave Halter
bdb01c7546 Make FunctionExecutionContext.arguments private 2019-09-02 21:24:21 +02:00
Dave Halter
73003a995b _ArrayInstance -> _DynamicArrayAdditions 2019-09-02 19:49:21 +02:00
Dave Halter
06890203dd var_args -> arguments 2019-09-02 19:48:17 +02:00
Dave Halter
e97bb1d2e5 Fix the final issues about parameter arguments 2019-09-02 19:27:39 +02:00
Dave Halter
4fd1149be2 Fix inferring of dynamic params 2019-09-02 10:05:12 +02:00
Dave Halter
51475a5b39 Remove an unnecessary piece of code from goto 2019-09-02 09:52:58 +02:00
Dave Halter
a0cadd9375 Use Context.create_name instead of weird playing with params everywhere 2019-09-02 09:38:54 +02:00
Dave Halter
b4dc95553f Use SimpleParamName everywhere it's needed 2019-09-02 09:29:43 +02:00
Dave Halter
edb17b8e7c Refactor params and what execution contexts need 2019-09-01 14:14:42 +02:00
Dave Halter
59f26ad6ab Fix a TODO 2019-08-30 01:18:13 +02:00
Dave Halter
286d2c9b1a Make the order of overloaded functions correct 2019-08-30 01:11:11 +02:00
Dave Halter
04bc9eb62c Get py__simple_getitem__ working on dicts that have a dict as a param, see #1385 2019-08-29 09:27:43 +02:00
Dave Halter
9c950321df Move some code from SequenceLiteralValue to DictLiteralValue 2019-08-28 23:47:32 +02:00
Dave Halter
4572503c9f Fix usages in context of the new parso parameter include_setitem=True 2019-08-28 22:56:16 +02:00
Dave Halter
7d28f4ce5b execution_allowed should be called with nodes 2019-08-28 18:24:26 +02:00
Dave Halter
2a27ec37ae Move a repr function 2019-08-28 10:12:00 +02:00
Dave Halter
066b189bfa Fix cases where dicts are passed to dicts and generics were not properly applied 2019-08-27 20:41:46 +02:00
Dave Halter
18ecb5a746 Small rename 2019-08-26 23:34:01 +02:00
Dave Halter
305bfd3a3c Change a test so it works with generics 2019-08-26 21:53:41 +02:00
Dave Halter
8311328a8e Get py__simple_getitem__ modifications working for list/dict instances 2019-08-26 21:48:41 +02:00
Dave Halter
24b392b915 Random objects should not be affected by list/dict modifications 2019-08-26 19:28:30 +02:00
Dave Halter
356c25a399 Add a way for dict setitem to be understood
Needs the latest parso commits
2019-08-26 19:27:33 +02:00
Matthias Bussonnier
5329f95096 Attempt at a test of completion of filepath after ~.
I'm not quite sure how this will behave on Windows, and we can't really
create a tempdir (as we don't want to mess with paths in home).

One possibility would be to mock/monkeypatch scandir, listdir and
os.path.expanduser or set $HOME in env; but I'm quite unsure we want to
go that route.
2019-08-25 19:55:33 +02:00
Dave Halter
eb5586d7e0 Move the dynamic module to dynamic_params 2019-08-25 17:12:04 +02:00
Dave Halter
b7febc1960 Move the dynamic arrays code 2019-08-25 17:08:42 +02:00
Dave Halter
d31ca7e9f0 Add a comment about how _ArrayInstance is used 2019-08-25 17:00:57 +02:00
Dave Halter
0f13e02fc2 check_array_additions -> _check_array_additions 2019-08-25 16:46:08 +02:00
Dave Halter
2a86d810cd Remove methods that are not used 2019-08-25 14:36:42 +02:00
Dave Halter
473dbb0f69 Create separate classes for FakeSequence 2019-08-25 14:31:15 +02:00
Dave Halter
51912db46a Remove _FakeArray, because it's no longer needed 2019-08-25 13:58:35 +02:00
Dave Halter
41dc514546 Enable a sys path test that is working now 2019-08-25 13:20:37 +02:00
Dave Halter
e3d2bce7ff Reenable some tests 2019-08-25 02:37:52 +02:00
Dave Halter
9b21c02819 Add a method implementation, that doesn't seem to be used, but it might one day be. 2019-08-24 14:50:59 +02:00
Dave Halter
c94bce315a Merge branch 'refactoring' 2019-08-24 14:38:45 +02:00
Dave Halter
8beea77bc8 Merge branch 'master' of github.com:davidhalter/jedi 2019-08-24 14:34:09 +02:00
Dave Halter
9290b7291b get_param -> get_executed_param_name 2019-08-24 14:33:19 +02:00
Dave Halter
4969b52ddf Reuse a function 2019-08-24 14:18:08 +02:00
Dave Halter
06a6cea02d DynamicExecutedParams -> DynamicExecutedParamName 2019-08-24 14:14:45 +02:00
Dave Halter
9469533b9f Make InstanceExecutedParam a ParamName 2019-08-24 14:08:11 +02:00
Dave Halter
98d0fc632e Some more renames 2019-08-24 14:02:04 +02:00
Dave Halter
622db8d2d7 Actually start using names for executed param names 2019-08-24 13:52:50 +02:00
Dave Halter
0619d58cd3 search_params -> search_param_names 2019-08-24 13:47:19 +02:00
Dave Halter
b1d2f2462b get_executed_params_and_issues -> get_executed_param_names_and_issues 2019-08-24 13:45:47 +02:00
Dave Halter
bccc85f453 Remove a strange comment 2019-08-24 13:41:41 +02:00
Dave Halter
4db6793719 Remove an isinstance check that is no longer needed 2019-08-24 13:40:03 +02:00
Dave Halter
ec6fa0c97c Differentiate between a public name and an internal string_name 2019-08-24 13:35:15 +02:00
Dave Halter
8b1f35a8b1 Use get_kind in ExecutedParam 2019-08-24 13:20:53 +02:00
Dave Halter
e7020bea3d Use infer_annotation only from param name 2019-08-24 13:09:00 +02:00
Dave Halter
bb3eb23864 Move docstring param recognizing 2019-08-24 12:32:50 +02:00
Dave Halter
88cf198552 Avoid function executions if they are not necessary
This also means that annotations are preferred to docstring types
2019-08-24 12:23:33 +02:00
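An illustration of the preference stated in the commit message above (the expected completion behaviour is assumed here): when a parameter carries both an annotation and a docstring type, the annotation should win.

    def f(x: str):
        """
        :param int x: documented as int here, but annotated as str above,
                      so completions on x should be str completions.
        """
        return x.upper()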
Dave Halter
e0f26dd7a1 get_function_execution -> as_context 2019-08-24 11:16:20 +02:00
Dave Halter
d913d7d701 Don't use filter_name for global completions 2019-08-24 11:02:45 +02:00
Dave Halter
dd6befdc52 Cosmetics 2019-08-24 03:22:26 +02:00
Dave Halter
c1d8454f0c Finally get rid of NameFinder 2019-08-24 03:21:00 +02:00
Dave Halter
c4b0b45a1d Move the isinstance checks out of finder 2019-08-24 03:09:40 +02:00
Dave Halter
eba088b049 Move some static analysis details out of finder 2019-08-24 02:51:11 +02:00
Dave Halter
ba67d384c1 Remove predefined_names from value, it's not needed anymore 2019-08-24 02:41:10 +02:00
Dave Halter
ba9c318d22 Move predefine_names to context 2019-08-24 02:39:51 +02:00
Dave Halter
ce3ec4eecb Rename value -> context for some more places 2019-08-24 02:36:29 +02:00
Dave Halter
e148d5120f Move some finder stuff around 2019-08-24 02:28:58 +02:00
Dave Halter
3828532065 Move a debugging statement out of finder 2019-08-24 02:13:52 +02:00
Dave Halter
6d361e03ac Avoid import recursions in other ways 2019-08-24 02:06:57 +02:00
Dave Halter
250ac77f4a Remove a check that is not needed 2019-08-24 01:11:31 +02:00
Dave Halter
ddb2ccb657 Move error handling for py__getattribute__ 2019-08-24 00:59:48 +02:00
Dave Halter
bd24ee2ab3 Move a paragraph 2019-08-24 00:46:16 +02:00
Dave Halter
b13a9f7d5b Trying to move towards unifying goto and py__getattribute__ 2019-08-24 00:18:48 +02:00
Dave Halter
fcec30dff6 Use py__getattribute__alternatives instead of overwriting py__getattribute__ 2019-08-23 23:04:17 +02:00
Dave Halter
0992dc7ae9 Move __getattr__ and __getattribute__ logic to instance
Now getattr warnings might be wrong
2019-08-23 21:59:01 +02:00
Dave Halter
60a73f6bac Move get_global_filters to the context module 2019-08-23 21:19:17 +02:00
Dave Halter
a9d8f389a9 Avoid using get_global_filters if it's not needed 2019-08-23 20:56:00 +02:00
Matthias Bussonnier
9a3f41e63b Complete path after ~.
Note this is mostly for discussion: if I understood one of your messages on
Twitter correctly, this was not possible without fuzzy completion.

I tried with just this patch and that works great.

Note that unlike IPython, which right now does:

    ~/<tab> -> /Full/Path/to/user/home

But with this patch, it just completes things correctly without expanding
the tilde. And I think not expanding the tilde is actually better.

Anyway, I'm opening this to better understand why you were waiting for
fuzzy completion.
2019-08-23 17:57:12 +02:00
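A rough sketch of the idea (hypothetical helper, not jedi's implementation): expanduser() is only used to look up directory contents, while the completions that get inserted keep the "~/" prefix the user typed.

    import os

    def complete_tilde_path(typed="~/"):
        directory, partial = os.path.split(typed)
        expanded = os.path.expanduser(directory)
        return [
            os.path.join(directory, name)  # keep the un-expanded "~" in the result
            for name in os.listdir(expanded)
            if name.startswith(partial)
        ]

    print(complete_tilde_path("~/"))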
Dave Halter
3fcecb3d6d Move the filter search to a different place 2019-08-23 16:29:13 +02:00
Dave Halter
ead0964282 _get_origin_scope is no longer really used 2019-08-23 16:21:14 +02:00
Dave Halter
0cbd1e6cff Avoid passing of contexts in multiple ways for self name filters 2019-08-23 15:45:26 +02:00
Dave Halter
b38da47981 Prefer readability 2019-08-23 15:33:15 +02:00
Dave Halter
c393a406ee Refactoring of the contexts to properly use inheritance 2019-08-23 15:28:55 +02:00
Dave Halter
7573e2033a Fix comprehension parent issues, fixes #1215 2019-08-23 14:34:16 +02:00
Dave Halter
ecc574025c Merge branch 'ngates/comprehension-parent' of https://github.com/gatesn/jedi into refactoring 2019-08-23 13:52:25 +02:00
Dave Halter
51ac055a38 Another _value removal 2019-08-23 13:42:07 +02:00
Dave Halter
c9e4cdaba1 Get rid of another private access 2019-08-23 13:34:04 +02:00
Dave Halter
aceef78a21 Get rid of a private access 2019-08-23 13:19:53 +02:00
Dave Halter
86f4f7be45 Remove a private access 2019-08-23 11:55:49 +02:00
Dave Halter
041fd992b3 create_value can deal with modules now 2019-08-23 00:45:59 +02:00
Dave Halter
05ce1c8237 Remove a test that tested a removed function 2019-08-23 00:26:15 +02:00
Dave Halter
3e684519e6 Very small refactoring 2019-08-23 00:24:30 +02:00
Dave Halter
9f3a2f93c4 Remove get_statement_of_position. It's not used anymore 2019-08-23 00:13:18 +02:00
Dave Halter
193ba47f50 Simplify get_user_context 2019-08-23 00:10:56 +02:00
Dave Halter
05fe29a156 Get rid of the node_is_value parameter 2019-08-23 00:02:39 +02:00
Dave Halter
bd754718e1 Fix a string escape 2019-08-22 23:32:52 +02:00
Dave Halter
df014dc527 Create create_value to eventually use on contexts for some things 2019-08-22 23:21:21 +02:00
Dave Halter
6d5e9f4b0f Remove node_is_object, not used anymore 2019-08-22 23:13:42 +02:00
Dave Halter
faf6752ff8 Move create_context to a context 2019-08-22 22:47:26 +02:00
Dave Halter
ee6331747f Use a class_context instead of a class_value for MethodValue 2019-08-22 22:13:02 +02:00
Dave Halter
eee6810576 Small cleanup of code 2019-08-22 17:20:07 +02:00
Dave Halter
f87f8c028b Fix context issues when working with instances 2019-08-22 17:11:54 +02:00
Dave Halter
b97237f264 Rename the filter context argument to parent_context 2019-08-22 10:09:07 +02:00
Dave Halter
4e260cdadb Remove infer_element from Value 2019-08-22 00:36:15 +02:00
Dave Halter
337c03e5be Separate infer_import and goto_import a bit better 2019-08-22 00:23:24 +02:00
Dave Halter
bf4d42798b Make separate methods for goto_import and infer_import 2019-08-21 23:58:26 +02:00
Dave Halter
2fb04db0ab Fix the weird py__path__ behavior 2019-08-21 23:08:42 +02:00
Dave Halter
592f3771fc Make Importer.module_context Importer._module_context 2019-08-21 09:56:36 +02:00
Dave Halter
925dd38c18 Remove a private access 2019-08-21 09:54:32 +02:00
Dave Halter
6142d18206 More rename 2019-08-21 09:53:48 +02:00
Dave Halter
9d34df2fed Make Slice a proper LazyValueWrapper 2019-08-21 09:51:47 +02:00
Dave Halter
02c96b37db Some more value -> context renames 2019-08-21 09:31:23 +02:00
Dave Halter
55c08e06ab Remove a hack that is no longer necessary 2019-08-21 09:25:15 +02:00
Dave Halter
84f6d95fde Fix a python 2 dynamic issue 2019-08-21 09:16:48 +02:00
Dave Halter
4cbe2898c0 Fix usage tests
With those tests fixed, everything should pass again
2019-08-21 01:01:09 +02:00
Dave Halter
8a2b7f18cd Get all tests working except usage tests 2019-08-21 00:50:34 +02:00
Dave Halter
85f8f2a764 Fix os path resolving issues 2019-08-21 00:22:34 +02:00
Dave Halter
14fc5ed289 Fix more issues with where contexts are used and where values are used 2019-08-20 09:59:10 +02:00
Dave Halter
39b294e085 Fix some interpreter issues 2019-08-20 09:09:19 +02:00
Dave Halter
217b632213 Write a CompForContext that is still not in good shape but working 2019-08-19 21:17:11 +02:00
Dave Halter
caee8e9952 Fix final gradual typing related issues 2019-08-19 19:43:45 +02:00
Dave Halter
b19ba12566 Fix some more context issues 2019-08-19 19:33:12 +02:00
Dave Halter
f54617867d Fix dynamic param checking 2019-08-18 18:19:12 +02:00
Dave Halter
6fb49eaadf as_context caching 2019-08-18 17:52:15 +02:00
Dave Halter
8e60689bcf valueualized_node -> contextualized_node 2019-08-18 00:58:33 +02:00
Dave Halter
4415de010d ValueualizedName -> ContextualizedName
Basically a change back to an older version
2019-08-18 00:57:29 +02:00
Dave Halter
f61246bf13 Fix quite a few more tests. Only about a fifth failing now 2019-08-18 00:47:21 +02:00
Dave Halter
0c419a5094 Fix class tests 2019-08-17 23:52:52 +02:00
Dave Halter
895e774962 Module fixes 2019-08-17 17:56:57 +02:00
Dave Halter
a9b1de7060 execution_value -> execution_context 2019-08-17 17:13:29 +02:00
Dave Halter
680388a7e8 More fixes 2019-08-17 17:01:21 +02:00
Dave Halter
2629ff55f3 Fix some array tests 2019-08-17 15:42:13 +02:00
Dave Halter
c6d2aa6da2 Some small improvements 2019-08-16 16:44:03 +02:00
Dave Halter
165639c1dd Start implementing the bulk of the context/value separation 2019-08-16 16:12:12 +02:00
Dave Halter
d19233a338 Start working on replacing value partially with context 2019-08-16 13:00:05 +02:00
Dave Halter
03920502c4 infer_state -> inference_state 2019-08-16 11:44:30 +02:00
Dave Halter
fffb39227e InferState -> InferenceState 2019-08-16 11:43:21 +02:00
Dave Halter
9ee6285414 Remove infer_state from filters 2019-08-16 09:41:23 +02:00
Dave Halter
600272366f parent_value -> parent_context 2019-08-15 09:36:46 +02:00
Dave Halter
2e90e3b2b1 Avoid position passing for value filters 2019-08-15 09:31:12 +02:00
Dave Halter
21a18c698e Differentiate in finder between get_value_filters and get_global_filters 2019-08-15 09:29:08 +02:00
Dave Halter
9986d8c9aa Context -> Value 2019-08-15 01:26:11 +02:00
Dave Halter
49f996867d NO_CONTEXTS -> NO_VALUES 2019-08-15 01:24:28 +02:00
Dave Halter
ad4f546aca context -> value 2019-08-15 01:23:06 +02:00
Dave Halter
9e23f4d67b Move base_context -> base_value 2019-08-15 00:41:02 +02:00
Dave Halter
a5dff65142 Evaluator -> InferState 2019-08-15 00:37:51 +02:00
Dave Halter
8157d119a7 eval_ -> infer_ 2019-08-15 00:20:01 +02:00
Dave Halter
199799a966 Rename some functions -> evaluate_ to infer_ 2019-08-15 00:15:38 +02:00
Dave Halter
3b4f292464 Move the evaluate package to inference 2019-08-15 00:14:26 +02:00
Dave Halter
e4d1e5455f test_evaluate -> test_inference 2019-08-14 23:56:44 +02:00
Dave Halter
a23bbbfbb9 Remove some docstrings that are outdated 2019-08-14 23:51:49 +02:00
Dave Halter
7ce77b724d Merge pull request #1382 from Carreau/scandir
Use scandir on py3.5+ for less disk access on filename completion
2019-08-13 22:08:15 +02:00
Matthias Bussonnier
f06e7f55c0 fix version check 2019-08-13 09:48:38 -07:00
Matthias Bussonnier
f47211c129 Use scandir on py3.5+ for less disk access on filename completion
On Python 3.5+, we can make use of scandir, which not only lists the
contents of the directory as an iterator but also caches some information
(for example, `is_dir()`); this avoids extra stat calls to the underlying
filesystem and can be – according to PEP 471 – 2x to 20x faster, especially
on NFS filesystems where stat calls are expensive.

From a quick look, this is the only place where scandir would make sense,
as most other places only require the name.

Fixes #1381
2019-08-12 17:56:29 -07:00
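A minimal sketch of the difference (illustrative, not jedi's actual code): each os.scandir() entry carries cached directory information, so marking directories for completion does not need a separate stat() call per name.

    import os

    def filename_completions(directory="."):
        completions = []
        for entry in os.scandir(directory):   # one pass, yields DirEntry objects
            name = entry.name
            if entry.is_dir():                # usually answered from cached info
                name += os.path.sep
            completions.append(name)
        return completions

    print(filename_completions())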
Dave Halter
9cc3b18d52 evaluation -> type inference and a few similar changes 2019-08-13 01:29:50 +02:00
Dave Halter
4619552589 Evaluation -> type inference 2019-08-13 01:29:50 +02:00
Dave Halter
467839a9ea execute_evaluated -> execute_with_values 2019-08-13 01:29:50 +02:00
Dave Halter
084995c378 Bump version 2019-08-13 01:29:01 +02:00
Dave Halter
005f69390c Write the CHANGELOG for 0.15.1 2019-08-13 00:18:45 +02:00
Matthias Bussonnier
ecca190462 Remove forgotten debug/print in filename completion. (#1380)
This is in the latest 0.15, and when forwarding path completions to
jedi, it prints a lot of stuff on the screen.
2019-08-12 12:37:21 +02:00
Dave Halter
5d0d09bb7d staticmethod and a few other cases might not have properly returned their signatures 2019-08-12 09:37:59 +02:00
Dave Halter
972cae4859 Remove reference to a file that doesn't exist anymore 2019-08-12 00:24:35 +02:00
Dave Halter
77bc2d548a Bump version to make it clear that it's a different one than the current one 2019-08-12 00:22:48 +02:00
Dave Halter
35e5cf2c2a A small Changelog improvement 2019-08-11 20:49:49 +02:00
Dave Halter
c6f0ecd223 Cleanup Changelog for the next release 2019-08-11 20:37:50 +02:00
Dave Halter
f727e4e661 Make it possible to access functions that were inherited, see #1347 2019-08-11 20:34:21 +02:00
Dave Halter
1ad4003740 Messed up a Windows test 2019-08-11 20:12:33 +02:00
Dave Halter
1108ad9994 Again a small windows issue fixed. 2019-08-11 20:01:12 +02:00
Dave Halter
f7f9b1e5ec Need to escape the path backslash for windows slashes 2019-08-11 19:56:57 +02:00
Dave Halter
c3d40949b1 Make it possible to access properties again
This time we catch all exceptions and try to avoid issues for the user.

This only happens when working with an Interpreter. I don't feel this is
necessary otherwise.

See #1299
2019-08-11 16:24:19 +02:00
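A hedged sketch of the idea (hypothetical helper, not jedi's code): evaluating a property on a live object can run arbitrary user code, so every exception is swallowed instead of reaching the user.

    def safe_getattr(obj, name, default=None):
        try:
            return getattr(obj, name)         # may execute a property
        except Exception:                     # anything the property raises is ignored
            return default

    class Weird:
        @property
        def broken(self):
            raise RuntimeError("boom")

    print(safe_getattr(Weird(), "broken", "<inaccessible>"))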
Dave Halter
a7accf4171 A small compatibility fix 2019-08-11 01:54:26 +02:00
Dave Halter
ab80646b86 Fix an issue with type vars that might have been a problem for other things as well 2019-08-11 01:28:09 +02:00
Dave Halter
3d0ac09fc9 Don't add quotes after paths if they are already there 2019-08-10 18:37:10 +02:00
Dave Halter
0a84678a60 A small speed optimization that helps a lot with sys.version_info >= (3, 0) patterns in typeshed 2019-08-10 15:31:36 +02:00
Dave Halter
4a5c992b1a Remove an unnecessary isinstance usage 2019-08-10 14:41:47 +02:00
Dave Halter
04b7c99753 Make CompiledValue lazy
This definitely reduces debug output and it might be slightly faster, because some values are never asked for
2019-08-10 14:36:40 +02:00
Dave Halter
499408657b A python 2 fix 2019-08-08 17:07:54 +02:00
Dave Halter
4ec3fb6e12 Fix an error that occurred because of some refactorings 2019-08-08 11:03:27 +02:00
Dave Halter
463cbb1595 Fix one more os.path.join issue 2019-08-08 09:31:13 +02:00
Dave Halter
03608151e8 Fix more issues with os.path path completion 2019-08-08 01:48:25 +02:00
Dave Halter
822394663c Make join detection much easier 2019-08-08 01:04:08 +02:00
Dave Halter
52517f78b1 Fix some remaining issues with file path completions 2019-08-07 23:00:27 +02:00
Dave Halter
a191b7b458 A few more tests for path completions (join) 2019-08-07 21:11:48 +02:00
Dave Halter
e68273c0ff Fix quote completions for os.path.join path completions 2019-08-07 20:55:12 +02:00
Dave Halter
aeff5faa3d Fix first param argument of os.path.join file completions 2019-08-07 20:39:47 +02:00
Dave Halter
0fd3757a51 Fix arglist/trailer issues 2019-08-07 10:16:05 +02:00
Dave Halter
1b064c1078 in os.path.join completions, directories should not end in a slash 2019-08-07 01:37:58 +02:00
Dave Halter
5726c29385 Make some file path completions in os.path.join work 2019-08-07 01:34:46 +02:00
Dave Halter
7c1c4981fb Fix os.path.join static value gathering 2019-08-06 22:48:28 +02:00
Dave Halter
81488bcd20 os.path.sep should always have a clear value 2019-08-06 19:57:16 +02:00
Dave Halter
99008eef43 Fix string name completion for stuff like dirname and abspath 2019-08-06 19:38:16 +02:00
Dave Halter
3a9dc0ca2e Fix bytes issue with file path adding 2019-08-06 01:08:57 +02:00
Dave Halter
98a550e352 Python 2 compatibility 2019-08-06 00:42:02 +02:00
Dave Halter
4b8505b78d Make __file__ return the correct value 2019-08-06 00:30:31 +02:00
Dave Halter
b7c2bacbd2 Fix string additions when used in certain ways 2019-08-05 10:11:36 +02:00
Dave Halter
8108122347 Make string additions work for file path completion
With this, most simple cases of file path completions should be working now, fixes #493
2019-08-05 01:43:50 +02:00
Dave Halter
45dada9552 Fix interpreter project path 2019-08-05 00:43:37 +02:00
Dave Halter
38e0cbc1d2 Fix the REPL completer for file path completions 2019-08-04 23:08:25 +02:00
Dave Halter
e008a515e3 Fix a few more file name completion cases 2019-08-04 22:43:23 +02:00
Dave Halter
fd1e6afd07 A first iteration for file path completions 2019-08-04 13:50:23 +02:00
Dave Halter
9dd088f3db Fix a test failure 2019-08-03 14:58:57 +02:00
Dave Halter
8e1417e3ce Add Definition.execute, fixes #1076 2019-08-03 02:01:30 +02:00
Dave Halter
97526aa320 Add tests to show that #516 is not working, yet 2019-08-02 22:31:26 +02:00
Dave Halter
16e0351897 List possible Definition.type in its docstring, fixes #1069. 2019-08-02 21:16:58 +02:00
Dave Halter
c0c7c949fd Start writing the Changelog for 0.15.0 2019-08-02 17:17:25 +02:00
Dave Halter
b8bc4060dd 3.8-dev should not be allowed to fail 2019-08-02 16:15:16 +02:00
Dave Halter
c737e3ee40 Skip more Python 2 tests 2019-08-02 15:54:10 +02:00
Dave Halter
4c3d4508e9 Skipping of tests was done the wrong way again 2019-08-02 15:50:06 +02:00
Dave Halter
70bcc9405f Skip the right tests 2019-08-02 15:25:20 +02:00
Dave Halter
6a82f60901 Parameter.kind is not available in Python 3.5 2019-08-02 13:49:01 +02:00
Dave Halter
814998253a Fix Python 2 test issues 2019-08-02 13:44:04 +02:00
Dave Halter
a22c6da89f Add a few docstrings to make some things clearer 2019-08-02 13:16:18 +02:00
Dave Halter
876a6a5c22 Add ParamDefinition.kind, fixes #1361 2019-08-02 13:11:41 +02:00
Dave Halter
642e8f2aa6 Make it possible to format a param to a string, fixes #1074 2019-08-02 12:17:58 +02:00
Dave Halter
a64ef2759c Add another test for signature annotations 2019-08-02 12:17:58 +02:00
Dave Halter
d58bbce24f Add Signature.to_string() with proper tests, fixes #779, fixes #780 2019-08-02 12:17:13 +02:00
Dave Halter
ca6a7215e2 Test infer_default 2019-08-02 10:41:04 +02:00
Dave Halter
93b7548f1a Use a helper to create definitions 2019-08-02 10:30:23 +02:00
Dave Halter
24db05841b Add a execute_annotation option to infer_annotation 2019-08-02 10:24:15 +02:00
Dave Halter
375d1d57fb Test infer_annotation 2019-08-02 10:00:17 +02:00
Dave Halter
c2e50e1d0d Make it possible for users to infer annotations/defaults
Fixes #1039
2019-08-01 18:27:37 +02:00
Dave Halter
7988c1d11b A first iteration of adding signatures to the API, fixes #1139 2019-08-01 17:48:10 +02:00
Dave Halter
8ab2a5320e Fix a caching issue 2019-08-01 02:10:46 +02:00
Dave Halter
b5a62825ce Forgot the right resolve_stars parameters in one place 2019-07-31 23:05:24 +02:00
Dave Halter
ec70815318 Cache getting resolved param names 2019-07-31 22:54:29 +02:00
Dave Halter
a739c17a6f Turn around resolve_stars, it shouldn't by default be resolved 2019-07-31 18:51:31 +02:00
Dave Halter
ab5f4b6774 Remove a class that is not needed anymore 2019-07-31 18:44:57 +02:00
Dave Halter
a5a544cb09 Revert "Use __str__ instead of to_string"
This reverts commit 1151700114.
2019-07-31 18:39:17 +02:00
Dave Halter
7d2374ed81 Fix the last remaining issues with function signature 2019-07-31 18:29:41 +02:00
Dave Halter
97b642a3e1 overloaded_functions should be private 2019-07-31 00:11:08 +02:00
Dave Halter
1151700114 Use __str__ instead of to_string 2019-07-31 00:07:38 +02:00
Dave Halter
75f654b944 Better repr for CallSignature 2019-07-30 23:55:58 +02:00
Dave Halter
bb852c3e85 Fix some minor signature issues 2019-07-30 23:48:54 +02:00
Dave Halter
1fbb69b35a Remove the unused function signature_matches 2019-07-30 10:01:50 +02:00
Dave Halter
0352c3250a Fix signatures for __init__ calls when used with supers, fixes #1163 2019-07-30 01:44:53 +02:00
Dave Halter
268f828963 Fix some issues for args resolving in method calls 2019-07-30 01:28:51 +02:00
Dave Halter
21508a8c79 Remove a bit of code that is probably unused 2019-07-30 00:38:42 +02:00
Dave Halter
f9de26f72c Move get_signatures from Function to FunctionMixin 2019-07-29 20:17:03 +02:00
Dave Halter
22580f771c Merge the signature changes
Fixes include
- Better @wraps(func) understanding
- *args, **kwargs in call signatures are now resolved as well as possible

Fixes #503, fixes #1058
Also look at #906, #634, #1163
2019-07-29 00:31:08 +02:00
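An illustration of the @wraps case mentioned above (assumed behaviour): with functools.wraps, signature tooling should be able to show add(a, b) for the decorated function instead of the wrapper's generic (*args, **kwargs).

    import functools
    import inspect

    def log_calls(func):
        @functools.wraps(func)                 # copies __wrapped__, __doc__, etc.
        def wrapper(*args, **kwargs):
            return func(*args, **kwargs)
        return wrapper

    @log_calls
    def add(a: int, b: int) -> int:
        return a + b

    print(inspect.signature(add))              # (a: int, b: int) -> int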
Dave Halter
9b338f69a6 Add a comment about wraps 2019-07-29 00:28:12 +02:00
Dave Halter
fa0424cfd6 Fix signatures for wraps, see #1058 2019-07-29 00:13:05 +02:00
Dave Halter
f6808a96e0 Skip pre python 3.5 2019-07-28 20:40:32 +02:00
Dave Halter
02bd7e5bc7 Some small args adaptions 2019-07-28 20:22:28 +02:00
Dave Halter
e8e3e8c111 Deal better with non-functions 2019-07-28 19:52:48 +02:00
Dave Halter
c8588191f9 Some more small fixes 2019-07-28 18:09:08 +02:00
Dave Halter
97e7f608df Don't return multiple used names for signatures 2019-07-28 17:51:40 +02:00
Dave Halter
fae2c8c060 Move args resolving to a different file 2019-07-28 17:41:28 +02:00
Dave Halter
b4f2d82867 A new approach of getting arguments 2019-07-28 17:31:17 +02:00
Dave Halter
6a480780f8 Some more tests 2019-07-26 14:51:30 +02:00
Dave Halter
41dc5382fa Make nesting of *args/**kwargs possible to understand. 2019-07-26 14:42:20 +02:00
Dave Halter
ba160e72ab Some more signature progress 2019-07-26 14:29:33 +02:00
Dave Halter
0703a69369 Some progress in signature understanding 2019-07-26 12:11:45 +02:00
Dave Halter
c490d37c2d Start getting signature inferring working 2019-07-26 02:54:50 +02:00
Dave Halter
84219236a7 Remove an import 2019-07-25 14:15:52 +02:00
Dave Halter
57fd995727 Small refactoring 2019-07-25 14:15:26 +02:00
Dave Halter
a803d687e2 Skipped Python 2 Interpreter tests the wrong way 2019-07-24 13:44:26 +02:00
Dave Halter
c7927fb141 Remove a paragraph in docs that was arguing that stubs and generics (and other things) were not properly supported, fixes #1012 2019-07-24 13:41:33 +02:00
Dave Halter
05d9602032 Fix partial signatures for MixedObject
Now a MixedObject returns the signatures of its CompiledObject all the time, fixes #1371
2019-07-24 12:58:20 +02:00
Dave Halter
e76120da06 Fix partial signatures, fixes #1371 2019-07-24 02:28:49 +02:00
Dave Halter
25bbecc269 Make sure with a test that the staticmethod signature is also correct 2019-07-24 01:15:48 +02:00
Dave Halter
08bb9cfae7 Fix classmethod signature, fixes #498 2019-07-24 01:06:49 +02:00
Dave Halter
703b747a31 Deal with annotation on *args and **kwargs correctly, fixes #980 2019-07-23 23:56:30 +02:00
Dave Halter
ff149b74e0 Use LazyContextWrapper more 2019-07-23 13:59:08 +02:00
Dave Halter
3d08eb92d5 Very small refactoring 2019-07-23 13:08:57 +02:00
Johannes-Maria Frank
02d16ac55c Fix for failing assertion on native modules Issue #1354 (#1370) 2019-07-23 13:02:08 +02:00
Dave Halter
18eb7622ba Skip numpydoc tests for Python 2 2019-07-22 00:49:40 +02:00
Dave Halter
13dd173664 Remove code that didn't mean anything 2019-07-22 00:39:19 +02:00
Dave Halter
73c078ec7a Fix docstrings for wrapped functions, fixes #906 2019-07-21 12:19:22 +02:00
Dave Halter
cdf50e2a69 Fix an issue about dict ordering in Python before 3.6. 2019-07-19 12:54:22 +02:00
Dave Halter
2b0b29f921 Make it clearer when get_param is used. 2019-07-19 11:57:55 +02:00
Dave Halter
0dc60fb535 A small dataclass refactoring 2019-07-19 11:44:11 +02:00
Dave Halter
5722a3458e Evaluate annotations for dataclasses when infer is called on param 2019-07-19 11:42:08 +02:00
Dave Halter
93c52f615a Get inheritance of dataclass right 2019-07-19 11:35:13 +02:00
Dave Halter
050d686a27 A first working iteration of dataclass signatures, fixes #1213 2019-07-19 02:01:36 +02:00
Dave Halter
7156ddf607 Remove an unused function 2019-07-19 01:32:27 +02:00
Dave Halter
1cccc832b6 Dataclass progress 2019-07-19 01:27:37 +02:00
Dave Halter
fd4eca5e03 Add enum changes to changelog 2019-07-18 12:19:21 +02:00
Dave Halter
1d9b9cff47 Fix a recursion error about getting metaclasses 2019-07-18 12:02:27 +02:00
Dave Halter
f4fe113c0f One test about recursion issues only applied to Python 2 2019-07-18 12:00:47 +02:00
Dave Halter
c7fc715535 Use class filters in instances differently so metaclass plugins work, fixes #1090 2019-07-18 11:20:54 +02:00
Dave Halter
eeea88046e First step in working with metaclasses in plugins, see #1090. 2019-07-18 11:20:28 +02:00
Dave Halter
dea887d27d Refactor the plugin registry 2019-07-16 12:48:54 +02:00
Dave Halter
8329e2e969 Remove classes from plugins and use decorators instead 2019-07-16 10:23:19 +02:00
Dave Halter
60415033b4 Prepare the v0.14.1 release 2019-07-13 16:00:27 +02:00
Dave Halter
a06d760f45 Use fixture names everywhere 2019-07-10 23:26:59 -07:00
Dave Halter
b7687fcfb7 Cleanup a test file 2019-07-10 23:23:18 -07:00
Dave Halter
0ec86d5034 Use parametrize instead of TestCase 2019-07-10 23:22:10 -07:00
Dave Halter
cef23f44cd Remove a TestCase class usage 2019-07-10 19:32:19 -07:00
Dave Halter
e889a4923e Use pytest.mark.parametrize for something instead of a class 2019-07-10 19:04:12 -07:00
Dave Halter
114aba462c Use the names fixture even more 2019-07-10 19:00:24 -07:00
Dave Halter
26c7cec7b5 Use the names fixture more 2019-07-10 18:39:33 -07:00
Dave Halter
3e3a33ab79 A small rename 2019-07-10 18:38:24 -07:00
Dave Halter
7f386e0e68 Refactor names tests 2019-07-10 18:34:40 -07:00
Dave Halter
82d970d2b8 A small refactoring 2019-07-10 18:24:21 -07:00
Dave Halter
3ed9e836cc Make sure __wrapped__ works properly when using an Interpreter, fixes #1353 2019-07-10 16:12:57 -07:00
Dave Halter
f984e8d6ef Small refactoring 2019-07-10 15:38:41 -07:00
Dave Halter
670cf4d394 Make API param names appear without leading double underscores, fixes #1357 again 2019-07-10 12:10:12 -07:00
Dave Halter
e85fba844c Fix some call signature tests 2019-07-09 00:46:53 -07:00
Dave Halter
ee5557ddf6 Make expected index work in Python 3 2019-07-09 00:37:33 -07:00
Dave Halter
42f72b219b Test both closing brackets and non-closing brackets for CallSignature.index 2019-07-09 00:16:53 -07:00
Dave Halter
374721b789 Fix a case with errors 2019-07-09 00:04:53 -07:00
Dave Halter
01cec186ae Move some code around 2019-07-08 22:52:04 -07:00
Dave Halter
3fb89f9f9b Fix some kwargs cases 2019-07-08 22:38:22 -07:00
Dave Halter
a0b4e76c1a Fix some *args issues 2019-07-08 17:03:45 -07:00
Dave Halter
97bf83aa03 Deal better with some error nodes 2019-07-08 14:26:11 -07:00
Dave Halter
ca7658cab7 Delete unused code 2019-07-08 13:29:46 -07:00
Dave Halter
dd78f4cfbf Fix some error node handling for call signatures 2019-07-08 13:22:07 -07:00
Dave Halter
08019075c3 Fix CallSignature index for a looot of cases, fixes #1364,#1363 2019-07-08 12:40:58 -07:00
Dave Halter
943617a94f Use recursion rather than other stuff 2019-07-05 23:51:24 -07:00
Dave Halter
d579c0ad57 Even more refactorings 2019-07-05 15:24:39 -07:00
Dave Halter
76c6104415 small name refactoring 2019-07-05 14:35:48 -07:00
Dave Halter
ef9d803ce3 Refactor some call details 2019-07-05 14:30:59 -07:00
Dave Halter
a26cb42d07 Disable a test for Python 2 2019-07-04 09:31:22 -07:00
Dave Halter
6b9b2836ba Fix pow() signature, fixes #1357
This commit changes how params starting with __ are viewed as positional only params
2019-07-04 00:29:57 -07:00
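An illustration of the convention the commit refers to (typeshed-style stubs; the presentation is assumed): parameters whose names start with two underscores are meant to be positional-only, so a signature helper should present them without the leading underscores and without offering them as keyword arguments.

    def pow(__x: float, __y: float) -> float:
        # In a stub this body would just be "..."; the double-underscore
        # prefix marks __x and __y as positional-only by convention.
        return __x ** __y

    print(pow(2.0, 10.0))   # positional call; keyword use of __x/__y is discouraged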
Dave Halter
abdb8de89d Merge branch 'master' of github.com:davidhalter/jedi 2019-07-03 23:49:18 -07:00
Dave Halter
ac492ef598 Fix signature to_string 2019-07-03 23:44:58 -07:00
Dave Halter
947bfe7b78 Fix an issue with keyword params, fixes #1356 2019-07-03 22:35:46 -07:00
Dave Halter
be6c90d135 Simplify some test code for param defaults, see #1356 2019-07-03 19:43:32 -07:00
Dave Halter
8cb059deda Merge branch 'function_signature_in_interpreter' of https://github.com/linupi/jedi 2019-07-03 19:11:36 -07:00
Arnaud Limbourg
0f4da5c1cf Update link to YouCompleteMe
The source repository changed.
2019-07-03 14:20:31 -07:00
Dave Halter
de138e9114 Improve a bit of dataclasses support, so at least the attributes can be seen
see #1213
2019-07-03 09:21:57 -07:00
Linus Pithan
15bb9b29a2 adding test_kwarg_defaults to point out missing default value of kwargs in certain cases 2019-07-02 11:52:57 +02:00
Dave Halter
4c132d94b9 Make sure in tests that pep 0526 variables are also able to be used when using self, see #933 2019-07-01 23:34:28 -07:00
mwchase
925fc89447 Get typing.NewType working (#1344)
Squashed from the following commits:

* Update pep0484.py

(I don't think I want to know why the cursor jumped to the beginning of the line with every keystroke in GitHub's online editor. Change was entered backwards.)

* Added test for inline use of NewType. Currently assuming that wrapped instances should get the underlying type.

* Altered tests per https://github.com/davidhalter/jedi/issues/1015#issuecomment-356131566

* Add NewTypeFunction to typing evaluation module

* Update AUTHORS.txt

* Add a new test, and a speculative justification

For now, address only the second comment

* Copy code from third comment on the PR

From inspection, I *believe* I understand what this code is doing, and as such, I believe this should cause the new test I added in response to the second comment to fail, because that test is based on faulty assumptions.

* Explicitly discard the key from the tuple

* Update pep0484_typing.py

* Test for the wrapped type, not the wrapper "type"

* Change the return value from calling a NewType
2019-07-01 22:42:59 -07:00
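Illustrative of the behaviour the squashed commits above settle on (wrapped instances get the underlying type): completion on user_id below should offer int attributes rather than attributes of a separate UserId wrapper.

    from typing import NewType

    UserId = NewType('UserId', int)
    user_id = UserId(42)
    print(user_id.bit_length())   # behaves like a plain int at runtime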
Dave Halter
cb95dbc707 Cannot use pytest 5 yet 2019-07-01 22:30:59 -07:00
Dave Halter
1e3b6a201d Fix filters for classes and functions 2019-07-01 22:24:29 -07:00
Dave Halter
3829ef4785 Fix some small things to get more tests passing 2019-07-01 21:52:03 -07:00
Dave Halter
b382f06be0 A better repr for Definition 2019-07-01 19:40:06 -07:00
Dave Halter
94faceb57c Merge branch 'master' of github.com:davidhalter/jedi 2019-06-30 22:14:02 -07:00
Dave Halter
a9ff58683e Fix ClassVar filter for instances 2019-06-26 22:56:30 +02:00
Dave Halter
fafd6b2ac6 Keyword completions are no longer possible directly after a number, fixes #1085 2019-06-26 15:04:46 +02:00
Nelson, Karl E
344a03e6b2 Fix for EmptyCompiledName 2019-06-24 23:51:19 +02:00
Dave Halter
265abe1d08 Fix super call goto for multiple inheritance, fixes #1311 2019-06-24 09:53:56 +02:00
Dave Halter
ebdae87821 goto should always goto definitions, fixes #1304 2019-06-24 01:25:26 +02:00
Dave Halter
56ec79d62a Fix star imports checks, fixes #1235 2019-06-22 16:45:56 +02:00
Dave Halter
c413b486fb Actually import IsADirectoryError 2019-06-22 15:43:11 +02:00
Dave Halter
cb0a0d228a Add 3.8 to supported versions 2019-06-22 14:45:22 +02:00
Dave Halter
3ae4a154f9 Fix project search if a directory is called manage.py, fixes #1314 2019-06-22 14:04:32 +02:00
Dave Halter
aa2dc6be09 Return annotations for compiled objects now help to infer
However, this only works if it's a type; if it's a string, it doesn't work yet

Fixes #1347
2019-06-22 00:15:20 +02:00
Nathaniel J. Smith
a62ba86d7b Update parso requirement
Fixes #1348
2019-06-21 10:12:34 +02:00
Dave Halter
454447d422 Fix an invalid escape sequence 2019-06-20 21:43:52 +02:00
Dave Halter
02d10a3aff Some small CHANGELOG changes 2019-06-20 21:27:06 +02:00
Dave Halter
4479b866ff CompiledContext should not have a file 2019-06-20 20:30:23 +02:00
Dave Halter
907fdaa153 Fix some minor errors 2019-06-20 09:53:40 +02:00
Dave Halter
b85c0db72e Fix a typo 2019-06-19 18:32:09 +02:00
Dave Halter
8852745cf3 Add stub files to the list of features 2019-06-19 18:30:02 +02:00
Dave Halter
d1501527a2 Add a script for profiling pytest 2019-06-19 18:28:45 +02:00
Dave Halter
ccd7939a92 Remove the linter, since it's no longer developed 2019-06-19 18:27:34 +02:00
Dave Halter
db716d96e5 Use the same intro text in README 2019-06-19 18:25:59 +02:00
Dave Halter
5f81353182 Make the jedi documentation more concise 2019-06-19 18:25:11 +02:00
Dave Halter
b71a851081 Write a better introduction text 2019-06-19 10:12:23 +02:00
Dave Halter
474dcb857a Some small docs improvements 2019-06-19 09:59:21 +02:00
Dave Halter
5ad0e3d72e Ignore some tests for Python 3.4, because it's end of life soon and the typing library doesn't exist for it 2019-06-19 01:37:16 +02:00
Dave Halter
2cf1797465 Caching for get_parent_scope 2019-06-18 10:04:10 +02:00
Dave Halter
f2f54f2864 Create a better cache to reduce the number of get_definition/is_definition calls in parso 2019-06-18 09:29:39 +02:00
Dave Halter
38232fe133 Fix issues with Python 3.7 tests 2019-06-15 22:26:34 +02:00
Dave Halter
4405c4f190 Skip stub tests for Python 2 2019-06-15 21:59:54 +02:00
Dave Halter
c3a0fec2d9 Fix tests for stubs 2019-06-15 21:47:03 +02:00
Dave Halter
8e3caaca7f Improve the stub test a bit 2019-06-15 02:20:15 +02:00
Dave Halter
860f627f48 Merge branch 'master' of github.com:davidhalter/jedi 2019-06-15 02:14:29 +02:00
Dave Halter
3ddbee1666 Fix issues for socket 2019-06-15 02:07:30 +02:00
Dave Halter
fc20faf8f8 Remove _apply_qualified_name_changes, because it's really not needed 2019-06-15 01:58:54 +02:00
Dave Halter
0749e5091a Apparently a change we made does not seem to be needed 2019-06-15 01:57:59 +02:00
Dave Halter
e61949da66 Fix some collections.deque issues 2019-06-15 01:56:49 +02:00
Dave Halter
fdad24cc0a Fix some test errors 2019-06-15 01:42:50 +02:00
Dave Halter
3ed30409ea Some progress in trying to make the deque work 2019-06-14 09:36:23 +02:00
Dave Halter
d55d494e0a Merge pull request #1342 from JCavallo/ignore_unknown_super_calls
Ignore super calls when super class cannot be inferred
2019-06-14 00:28:08 +02:00
Trevor Sullivan
e7423696af Added git URL to git clone command 2019-06-14 00:27:04 +02:00
Trevor Sullivan
c6c49d1476 Update manual installation directions 2019-06-14 00:27:04 +02:00
Dave Halter
4564275eba By default a name has no qualified names 2019-06-13 09:45:59 +02:00
Dave Halter
9b610c9760 Make sure there are proper tests for goto_assignments with prefer_stubs and only_stubs 2019-06-13 09:41:23 +02:00
Jean Cavallo
ce97b0a5e7 Make sure py__bases__ always returns something 2019-06-13 09:37:51 +02:00
Dave Halter
5a26d4cf8f Prefer stubs to Python names when starting to infer 2019-06-13 09:26:50 +02:00
Dave Halter
a0adff9d36 Added Changelog for goto_* 2019-06-12 19:04:58 +02:00
Dave Halter
ad2fbf71ba Move stub tests 2019-06-12 14:00:56 +02:00
Dave Halter
097b073d20 Undo the tensorflow speedups, because they seem to cause more harm than good, see #1116 2019-06-12 10:00:45 +02:00
Jean Cavallo
a3afdc0ece Ignore super calls when super class cannot be inferred 2019-06-12 09:51:08 +02:00
Dave Halter
ed092e6da7 Better error message, when typeshed is missing, see #1341 2019-06-12 00:08:54 +02:00
Dave Halter
78973a9f35 Move execute_evaluated to HelperContextMixin 2019-06-11 17:46:30 +02:00
Dave Halter
f672d3329a Make sure that execute is always called with arguments 2019-06-11 09:37:24 +02:00
Dave Halter
be269f3e1c Remove a print 2019-06-10 22:21:41 +02:00
Dave Halter
c1047bef4f Ignore warnings for numpydocs 2019-06-10 21:41:15 +02:00
Dave Halter
1b0677ec55 Fix some test imports 2019-06-10 19:48:46 +02:00
Dave Halter
5ef0563abe Don't use stub_to_python_context_set anymore 2019-06-10 19:39:26 +02:00
Dave Halter
56d8945d17 Use convert_context function for docs lookup 2019-06-10 19:17:50 +02:00
Dave Halter
7f853a324a Fix a small copy paste fail 2019-06-10 19:05:03 +02:00
Dave Halter
7f3e55df02 Fix conversion for contexts 2019-06-10 18:56:37 +02:00
Dave Halter
144aa97c00 Fix imports for some tests 2019-06-10 17:41:29 +02:00
Dave Halter
9871fe2adf Be even more strict with numpy docstring parsing; it should just be ignored if it fails in any way 2019-06-10 17:40:39 +02:00
Dave Halter
95f3aed82c Eliminate more actual appearances 2019-06-10 16:16:34 +02:00
Dave Halter
8ba3e5d463 Change some names from actual -> python 2019-06-10 16:02:05 +02:00
Dave Halter
c8937ccdbf Add only_stubs and prefer_stubs as parameters to goto_assignments/goto_definitions 2019-06-10 15:59:12 +02:00
Dave Halter
49f652a2ad Better comment 2019-06-10 03:27:33 +02:00
Dave Halter
12dbdbf258 qualified names for imports with relative paths have not been solved, yet 2019-06-10 03:20:54 +02:00
Dave Halter
abba305f64 Better debugging 2019-06-10 03:19:32 +02:00
Dave Halter
d4cccd452d Fix qualified_names for some cases 2019-06-10 03:17:50 +02:00
Dave Halter
a555def6ca Use a different function signature instead of a separate goto_stubs function 2019-06-10 02:27:22 +02:00
Dave Halter
827a79861d Add tests for positional only params 2019-06-09 22:56:20 +02:00
Dave Halter
42b6e20729 Changes for 3.8: sync_comp_for instead of comp_for
Please also look at the changes for parso in its b5d50392a4058919c0018666cdfc8c3eaaea9cb5 commit
2019-06-09 18:05:34 +02:00
Dave Halter
f3364a458c Better repr for DictFilter 2019-06-09 15:00:18 +02:00
Dave Halter
48b1b9a1aa Make sure that 3.8 is a supported Python environment going forward (and remove 3.3) 2019-06-08 02:11:45 +02:00
Dave Halter
787276366e Use the same environment in 3.8 for travis 2019-06-08 02:05:43 +02:00
Dave Halter
6e758acd16 Add Python 3.7 to travis testing 2019-06-08 01:54:08 +02:00
Dave Halter
eef02e5c56 Fix generator issues for typing 2019-06-08 01:50:38 +02:00
Dave Halter
26951f5c18 Fixed a few failing tests, that were failing, because of the qualified_names changes 2019-06-08 01:05:40 +02:00
Dave Halter
bb42850d63 Improve the Changelog a bit 2019-06-08 00:29:47 +02:00
Dave Halter
0ff1a88cc4 Use get_qualified_names for full_name 2019-06-08 00:18:31 +02:00
Dave Halter
f80828cb07 Fix issues with simple_getitem and mixed objects 2019-06-07 03:00:01 +02:00
Dave Halter
65d5c6eb2b Disable some more tests in Python 2 2019-06-07 02:45:48 +02:00
Dave Halter
94dfe7bf69 Use even more stubs to get more complex completions for e.g. strings working 2019-06-07 02:37:51 +02:00
Dave Halter
97f342fc4c Fix qualified names for CompiledObject 2019-06-07 01:33:37 +02:00
Dave Halter
a43a6cbc06 Add interpreter tests for collections.Counter 2019-06-06 23:44:55 +02:00
Dave Halter
8c495a1142 More tests for deque 2019-06-06 20:46:19 +02:00
Dave Halter
5d3028bd1f Fix completions for collections.deque 2019-06-06 20:34:50 +02:00
Dave Halter
07f9f241c6 py__call__ is now always available 2019-06-06 10:04:48 +02:00
Dave Halter
659c043584 Get rid of py__getattribute__ overriding that wasn't needed 2019-06-06 00:49:36 +02:00
Dave Halter
b98bf07767 Avoid failing if additional dynamic modules is defined with files that don't exist 2019-06-06 00:43:24 +02:00
Dave Halter
84eb91beaa Add a few tests about simple completions for interpreters 2019-06-06 00:17:37 +02:00
Dave Halter
de03b96232 Fix a small issue about accesses 2019-06-05 23:29:45 +02:00
Dave Halter
0d11a94dad Use latest grammar for parsing docstrings 2019-06-05 23:03:15 +02:00
Dave Halter
da4e6f275e Fix stub lookups for MixedObject 2019-06-05 19:46:40 +02:00
Dave Halter
b24e782b7d Cleaned up create_context for methods
Some improvements made a lot of things about function/method contexts clearer,
and the code is now easier to follow as a result.
2019-06-05 10:11:51 +02:00
Dave Halter
1139761525 Fix some of the mixed test failures 2019-06-05 00:28:48 +02:00
Dave Halter
0a56211df8 Setting correct parents for CompiledObject filters 2019-06-04 23:31:42 +02:00
Dave Halter
586354b571 Remove the unused function get_node 2019-06-03 20:33:03 +02:00
Dave Halter
bade4e661f Some changes to get stubs working better for mixed objects 2019-06-03 20:28:04 +02:00
Dave Halter
c8d658e452 A first very incomplete implementation of named expression support 2019-06-03 00:11:49 +02:00
Dave Halter
30526c564e Correct some regex SyntaxWarnings 2019-06-03 00:06:23 +02:00
Dave Halter
8ec6f54f86 Fix an issue about boolean params resolving 2019-06-02 18:31:52 +02:00
Dave Halter
1213b51c66 Import completions after a semicolon work now 2019-06-02 17:54:00 +02:00
Dave Halter
c6173efe61 Remove sith test from travis. Never looked at it. Just run it locally 2019-06-01 18:21:40 +02:00
Dave Halter
5ba8fd1267 Fix none test for Python 2 2019-06-01 17:58:01 +02:00
Dave Halter
4aa91efc2e Get python3.4 on travis working 2019-06-01 12:18:21 +02:00
Dave Halter
448f08b74e Merge branch 'travis' of https://github.com/blueyed/jedi into travis 2019-06-01 12:00:36 +02:00
Dave Halter
b4e41ef953 Don't use logger, use debug, which is used everywhere 2019-05-31 23:45:22 +02:00
Dave Halter
fcf214b548 Start using file io when opening random modules 2019-05-31 23:42:19 +02:00
Dave Halter
b9e8bff5e2 Start using FileIO in modules 2019-05-31 22:10:49 +02:00
Dave Halter
9c40c75136 Add file_io for Jedi for listdir 2019-05-31 21:25:48 +02:00
Dave Halter
77bd393a92 Simplified module repr 2019-05-31 21:11:12 +02:00
Dave Halter
55d40e22b3 Apparently numpydoc can fail with numpydoc.docscrape.ParseError as well, just ignore all exceptions 2019-05-31 17:54:21 +02:00
Dave Halter
190793d82f Fix an AttributeError 2019-05-31 17:44:03 +02:00
Dave Halter
d6c89ced99 goto should work on globals 2019-05-31 17:41:34 +02:00
Dave Halter
d9332aec8c Fix tuple unpacking for special case 2019-05-31 17:07:51 +02:00
Dave Halter
cdc9520c9d Fix an issue with None docstrings 2019-05-31 15:31:46 +02:00
Dave Halter
6cdde65052 Fix an issue with namedtuples 2019-05-31 15:21:03 +02:00
Dave Halter
6d62e55b5e Fix a small issue regarding typing filters, fixes #1339 2019-05-31 14:19:48 +02:00
Dave Halter
ed93bbfb68 Cleanup the mess of comprehensions at least a bit 2019-05-31 14:04:37 +02:00
Dave Halter
39eefdbc00 Remove a TODO that was already done 2019-05-31 13:38:42 +02:00
Dave Halter
1e9e684575 GeneratorBase -> GeneratorMixin 2019-05-31 13:37:01 +02:00
Dave Halter
3fb5b4992b Fix: Function calls with generators should always work, even if syntactically invalid 2019-05-31 13:35:23 +02:00
Dave Halter
4d647238b3 Fix sith.py line number generation 2019-05-31 11:18:49 +02:00
Dave Halter
f83c38f5c1 Fix a very random issue with executed contexts 2019-05-31 11:05:34 +02:00
Dave Halter
f7076da700 Get rid of follow_definition and replace it with infer 2019-05-31 00:35:18 +02:00
Dave Halter
9a713bc36f Fix create_context for param default arguments/annotations 2019-05-31 00:21:35 +02:00
Dave Halter
c6dcfcdf6d Remove code that is not used anymore 2019-05-30 01:29:56 +02:00
Dave Halter
df038d8f05 Modules are obviously not executable, but should not lead to traceback when executed 2019-05-30 00:17:38 +02:00
Dave Halter
0e5b17be85 Tests and fixes for keyword completions 2019-05-29 01:26:38 +02:00
Dave Halter
4b3262622b Fix generator issues that were caused by the small refactoring 2019-05-28 23:27:25 +02:00
Dave Halter
3ef99863ee Even more indent increases for debugging 2019-05-28 18:58:58 +02:00
Dave Halter
255d4fc04f Better debugging with the increase_indent_cm 2019-05-28 18:50:46 +02:00
Dave Halter
742f385f23 Add a context manager for increasing indents 2019-05-28 10:53:05 +02:00
Dave Halter
0cc7ea9bc9 Fix crazier subscript operations 2019-05-28 10:20:06 +02:00
Dave Halter
b39928188f Rewrite BuiltinOverwrite with ContextWrappers 2019-05-28 09:48:54 +02:00
Dave Halter
946869ab23 Fix tests 2019-05-28 01:59:32 +02:00
Dave Halter
5fa8338886 Enable a test that is kind of xfailing 2019-05-28 01:55:22 +02:00
Dave Halter
ec7b6b8d80 Fix stub function inferrals 2019-05-28 01:51:37 +02:00
Dave Halter
6f41530a03 Very small refactoring 2019-05-27 23:57:23 +02:00
Dave Halter
1002acf907 Rename AnnotatedClass to GenericClass 2019-05-27 21:21:42 +02:00
Dave Halter
d2355ea53b Remove dead code 2019-05-27 21:12:08 +02:00
Dave Halter
bee9bd7621 given_types -> generics 2019-05-27 21:08:19 +02:00
Dave Halter
5a6d8ba010 Implement typing.cast 2019-05-27 20:59:04 +02:00
Dave Halter
8d24e35fa9 Fix signatures for builtin methods 2019-05-27 20:33:58 +02:00
Dave Halter
fc4d1151c7 Remove even more code that is probably not needed 2019-05-27 19:14:56 +02:00
Dave Halter
c9e3e6902b Removed dead code 2019-05-27 19:06:57 +02:00
Dave Halter
2a3ecbac60 Remove Coroutine classes again, they may not be needed after all 2019-05-27 09:47:32 +02:00
Dave Halter
8e27c60120 Fix async function inferring with decorators, fixes #1335 2019-05-27 09:47:05 +02:00
Dave Halter
11f3eece6d Preparations for some async changes 2019-05-27 09:41:50 +02:00
Daniel Hahler
901182bcfc include py38-dev 2019-05-24 16:07:53 +02:00
Daniel Hahler
6a67d2dad2 pyenv-whence 2019-05-24 16:03:56 +02:00
Daniel Hahler
1411fc11ee pyenv-system 2019-05-24 16:03:55 +02:00
Daniel Hahler
8e2e73fd81 fixup! Activate pyenv version [skip appveyor] 2019-05-24 16:03:25 +02:00
Daniel Hahler
4292129652 Activate pyenv version 2019-05-24 16:03:25 +02:00
Daniel Hahler
877705ca42 ci: Travis: dist=xenial 2019-05-24 16:03:23 +02:00
Dave Halter
7bd3669220 A small test change 2019-05-24 10:36:14 +02:00
Dave Halter
9aa8f6bcf2 Better signature calculation 2019-05-23 01:36:51 +02:00
Dave Halter
b2b08ab432 Better annotation signature string for classes 2019-05-22 20:34:35 +02:00
Dave Halter
3bec1a6938 Better signature generation 2019-05-22 20:31:30 +02:00
Dave Halter
9bb88b43ca Fix stub_to_actual_context_set for bound methods 2019-05-22 10:35:04 +02:00
Dave Halter
a2931d7a48 Introduce is_bound_method 2019-05-22 10:19:47 +02:00
Dave Halter
d241c31e3c Try to make qualified_names access clearer 2019-05-22 10:10:37 +02:00
Dave Halter
b1e6901d61 Some more signature tests 2019-05-22 00:51:52 +02:00
Dave Halter
f46d676130 Fix signature tests 2019-05-22 00:44:20 +02:00
Dave Halter
9463c112df Cleanup of finalizer did not work properly 2019-05-22 00:26:27 +02:00
Dave Halter
d44e7086d7 For now use parso master for tox testing 2019-05-22 00:17:42 +02:00
Dave Halter
c05629b3de Adapt small changes in parso's FileIO 2019-05-22 00:03:01 +02:00
Dave Halter
c64ee8a07c Make it clear what a param needs to implement 2019-05-21 18:21:40 +02:00
Dave Halter
857f6a79ae Merge branch 'master' of github.com:davidhalter/jedi 2019-05-21 13:39:27 +02:00
micbou
744662d096 Fix resource warnings 2019-05-21 13:35:12 +02:00
micbou
81e7dcf31e Enable all warnings when running tests 2019-05-21 13:35:12 +02:00
micbou
eca845fa81 Restrict Sphinx version in tests 2019-05-21 12:07:17 +02:00
micbou
3df63cff12 Fix docstring tests 2019-05-21 12:07:17 +02:00
micbou
16b64f59b7 Fix transform_path_to_dotted tests on Windows
Compiled modules end with the .pyd extension on Windows.
2019-05-21 12:07:17 +02:00
micbou
6f9f5102d0 Fix correct_zip_package_behavior tests on Windows 2019-05-21 12:07:17 +02:00
Dave Halter
b17e7d5746 A work in progress improvement for compiled signatures 2019-05-21 09:37:17 +02:00
Dave Halter
95cd8427f4 Fix a NotImplementedError when loading random modules 2019-05-20 09:54:41 +02:00
Dave Halter
03de39092a Reindent some code 2019-05-20 09:34:12 +02:00
Dave Halter
aa924cd09b Small change for a comment 2019-05-20 09:30:13 +02:00
Dave Halter
beacb58eb1 Remove a NotImplementedError and a bit of code where we don't seem to pass anymore 2019-05-20 09:20:49 +02:00
Dave Halter
70527d7329 Merge branch 'repr' of https://github.com/blueyed/jedi
Fixed a small merge conflict by hand.
2019-05-20 00:31:32 +02:00
Dave Halter
f01b2fb4d9 Merge pull request #1160 from blueyed/pytest
py.test -> pytest renamings. Originally "Revisit pytest config"
2019-05-20 00:23:20 +02:00
Dave Halter
655344c09c Merge branch 'master' into pytest 2019-05-20 00:21:57 +02:00
Dave Halter
d2d1bb4def Raise a speed limit a bit to avoid false positives 2019-05-19 18:22:47 +02:00
Dave Halter
b5016d6f43 Another try with MANIFEST.in 2019-05-19 18:13:18 +02:00
Dave Halter
7583d297ad Deal with SyntaxErrors coming from numpydoc when used with Python 2 2019-05-19 18:12:01 +02:00
Dave Halter
146ddd5669 Fix a few unicode accesses for Python 2 2019-05-19 17:52:35 +02:00
Dave Halter
ffd720c323 Rewrite reversed a bit 2019-05-19 17:51:30 +02:00
Dave Halter
8cad21819c Add only stubs/README/LICENSE when packaging typeshed 2019-05-19 17:14:49 +02:00
Dave Halter
016e66846b After upgrading tox, packaging works again 2019-05-19 17:11:29 +02:00
Dave Halter
6cf6903d32 It should be possible to pass posargs to pytest for tox 2019-05-19 17:01:25 +02:00
Dave Halter
ea490b9a2b Remove remap_type_vars, which was never used 2019-05-19 16:15:52 +02:00
Dave Halter
7ec76bc0b5 Remove get_matching_functions, it was unused code 2019-05-19 16:06:22 +02:00
Dave Halter
4b2518ca9a Remove special objects, they are no longer needed 2019-05-19 14:28:39 +02:00
Dave Halter
1b668966ce Better completions for MethodType 2019-05-19 14:27:09 +02:00
Dave Halter
c4f0c7940f Remove MODULE_CLASS in favor of a typeshed solution 2019-05-19 14:22:03 +02:00
Dave Halter
f9eedfbf64 Remove FUNCTION_CLASS, in favor of a typeshed solution 2019-05-19 14:19:30 +02:00
Dave Halter
05a3d7a3bc Remove _create_class_filter, it was unused 2019-05-19 14:06:21 +02:00
Dave Halter
cbd16e6d6b Bump latest grammar from 3.6 to 3.7 2019-05-19 14:03:29 +02:00
Dave Halter
7d41fb970e Fixed a typo 2019-05-19 14:02:29 +02:00
Dave Halter
3251d8ffe6 Bump Jedi version to 0.13.0 2019-05-19 14:02:15 +02:00
Dave Halter
6eb92f55df Apparently we need to whitelist pytest for tox to avoid a warning 2019-05-19 13:59:49 +02:00
Dave Halter
c654301f22 Add thirdpart/typeshed to MANIFEST.in 2019-05-19 13:59:25 +02:00
Dave Halter
55feb95d41 Fix an issue with the latest typeshed upgrade in tests 2019-05-19 13:27:38 +02:00
Dave Halter
9e29e35e16 Upgrade typeshed 2019-05-19 13:27:25 +02:00
Dave Halter
8db3bb3dc1 Upgrade typeshed to latest master and fix reversed execution 2019-05-18 23:35:28 +02:00
Dave Halter
7f5225cb70 Fix a setup.py assertion 2019-05-18 22:34:19 +02:00
Dave Halter
dc2f4e06c8 Fix a few casts for Python 2/3 interoperability 2019-05-18 20:51:42 +02:00
Dave Halter
61ccbb0d3e Make sure to use a python 3 parser for stub files 2019-05-18 18:25:32 +02:00
Dave Halter
4176af337f A few Python 2 fixes 2019-05-18 01:09:09 +02:00
Dave Halter
cc68942ec1 Make sure that the deployment process checks out git submodules (e.g. typeshed) 2019-05-18 00:20:56 +02:00
Dave Halter
52ae6e7f0b Remove a print statement 2019-05-18 00:19:06 +02:00
Dave Halter
ba59ab40ab Make sure in setup.py that the typeshed submodule is loaded 2019-05-18 00:14:53 +02:00
Dave Halter
0fb5fd271a Better scanning for module names, now includes namespace packages and stubs 2019-05-18 00:11:08 +02:00
Dave Halter
8e3f85c475 Revert "One more small test change"
This reverts commit a6693616a0.
2019-05-17 23:49:26 +02:00
Dave Halter
b1bd630a37 Make it possible to use error for debugging 2019-05-17 23:39:26 +02:00
Dave Halter
4b829c358b Fix an import names completion issue 2019-05-17 23:34:17 +02:00
Dave Halter
02ab71ff26 Tests for stub import completions 2019-05-17 16:53:34 +02:00
Dave Halter
ac962ea6db Refactor stub completions a bit 2019-05-17 16:21:13 +02:00
Dave Halter
7de5fee3ad Minor change, because of typeshed changes 2019-05-17 16:09:23 +02:00
Dave Halter
e70c49fea2 Use completions from both stubs and actual modules 2019-05-17 16:04:16 +02:00
Dave Halter
c640aa9213 goto_assignments should work even if something is only defined in a stub 2019-05-17 14:58:55 +02:00
Dave Halter
9d5f57d798 Make sure inferring works even if a stub doesn't have all variables defined 2019-05-17 14:45:22 +02:00
Dave Halter
063eef3eaf Call goto_definitions for goto_assignments if we're on a name 2019-05-17 12:37:02 +02:00
Dave Halter
b5d1e00930 Deal better with instance conversions for stubs 2019-05-17 12:27:53 +02:00
Dave Halter
f53c977069 Fix an issue with stub conversion 2019-05-16 00:52:14 +02:00
Dave Halter
051db30dfb Proper loading for third-party stub packages 2019-05-16 00:45:09 +02:00
Dave Halter
4f64dd30f9 Make sure Python is still loadable in stub only folders 2019-05-15 22:23:23 +02:00
Dave Halter
904c4d04bb Make sure Python is still loadable in mixed stub/python folders 2019-05-15 22:20:57 +02:00
Dave Halter
f49d48fbd2 Add a few more tests for nested stub folders 2019-05-15 22:18:22 +02:00
Dave Halter
e4170d65b7 Make namespace folders work with stubs 2019-05-15 21:55:54 +02:00
Dave Halter
b7eeb60e9c Move stub caching around 2019-05-15 21:10:35 +02:00
Dave Halter
7fc7e631f8 Move a part of stub lookups 2019-05-15 21:06:36 +02:00
Dave Halter
0e95aaeaad A first try to load foo-stub directories 2019-05-15 08:19:46 +02:00
Dave Halter
dcbc60e1f0 Add a docstring to mention PEP 561 2019-05-14 21:12:34 +02:00
Dave Halter
03f29c51cf Improve stub loading from random places 2019-05-14 21:09:20 +02:00
Dave Halter
5ff3e4d1d1 Implement stub tests and a first iteration of loading them from some random place 2019-05-13 10:13:59 +02:00
Dave Halter
8b1d4a7824 Fix call signatures, use stubs if possible 2019-05-11 12:44:20 +02:00
Dave Halter
079783e3a1 Move trying to resolve stubs to a different place 2019-05-10 22:33:49 +02:00
Dave Halter
409bf907d9 Fix os path imports 2019-05-10 10:08:14 +02:00
Dave Halter
4a2ada56e5 Remove two asserts that were pointless 2019-05-10 01:31:12 +02:00
Dave Halter
de7b638e6c Remove StubClass, it should really not be needed anymore 2019-05-10 01:29:06 +02:00
Dave Halter
a6a71c59f4 Move some contents of gradual.stub_contexts to gradual.conversion 2019-05-10 01:24:58 +02:00
Dave Halter
e57ff54caa Some minor moving 2019-05-10 01:19:59 +02:00
Dave Halter
1430ac2675 Remove more unused code that was used for goto a long time ago 2019-05-10 01:12:03 +02:00
Dave Halter
eb07c0b4cf Remove a bit of code that was used to write goto code and is not used anymore 2019-05-10 01:07:53 +02:00
Dave Halter
be6760e427 Introduce get_qualified_names for names, it's easier to implement goto like this 2019-05-10 01:07:21 +02:00
Dave Halter
f8f858216f Make goto_assignments in BaseDefinition simpler 2019-05-08 22:00:13 +02:00
Dave Halter
037a069ddd Made TreeArguments methods a bit more understandable 2019-05-08 09:30:39 +02:00
Dave Halter
dc15470e0b ImportName should resolve properly to the module that it was designed to resolve for 2019-05-07 09:43:55 +02:00
Dave Halter
895eae1d54 Move all Name classes to a separate file 2019-05-07 00:30:16 +02:00
Dave Halter
ad48ec4cfd With typeshed OsPathName is no longer needed 2019-05-07 00:09:19 +02:00
Dave Halter
a6693616a0 One more small test change 2019-05-06 23:59:39 +02:00
Dave Halter
ea6462daf4 Forgot to add evaluate/names.py earlier 2019-05-06 19:50:26 +02:00
Dave Halter
67d7f8d867 Remove the load_stubs function, it's not needed anymore 2019-05-06 19:50:03 +02:00
Dave Halter
ee86b58ab9 Remove a usage of load_stubs, because we are already using stubs 2019-05-06 19:48:15 +02:00
Dave Halter
5099ef15b4 Move ImportName and add os path name to the submodule dict 2019-05-06 09:35:21 +02:00
Dave Halter
c675e85d69 Use sub_module_dict for completing modules, not its own function 2019-05-06 09:19:33 +02:00
Dave Halter
afced5014c Cleanup stub imports / caching 2019-05-05 22:52:48 +02:00
Dave Halter
cabdb7f032 sub_modules_dict improvement 2019-05-05 21:49:55 +02:00
Dave Halter
8fcf885de3 Small refactoring 2019-05-05 21:35:06 +02:00
Dave Halter
2d6c037f39 Some forgotten renames in tests 2019-05-05 21:05:38 +02:00
Dave Halter
d9919efb4c is_compiled fix 2019-05-05 21:03:37 +02:00
Dave Halter
1302d8abef Remove _add_non_stubs_in_filter 2019-05-05 21:00:07 +02:00
Dave Halter
c6586ed811 Remove _get_base_filters 2019-05-05 20:58:34 +02:00
Dave Halter
eb0977b700 helpers.is_compiled -> context.is_compiled 2019-05-05 20:55:18 +02:00
Dave Halter
b7c866f5e4 stub_only -> stub 2019-05-05 20:47:48 +02:00
Dave Halter
7c385f72a1 StubOnly -> Stub, for all different classes 2019-05-05 20:46:45 +02:00
Dave Halter
9af8638589 Small test fix 2019-05-05 20:30:11 +02:00
Dave Halter
16ec84efe4 Some test compiled fixes 2019-05-05 20:12:36 +02:00
Dave Halter
c0c1aff577 Remove get_call_signature_for_any 2019-05-05 19:51:54 +02:00
Dave Halter
45a5eee18a Better control over docstring generation 2019-05-05 19:50:52 +02:00
Dave Halter
d0b0fb3cb3 Docstrings for classes should use the class name and not __init__ 2019-05-05 19:38:01 +02:00
Dave Halter
f71d6883d9 Fixed signatures for keywords 2019-05-05 19:25:00 +02:00
Dave Halter
43849d2b8e Remove stub compiled classes 2019-05-05 19:20:12 +02:00
Dave Halter
2d8d4d5c99 Small test fixes for parser utils 2019-05-05 19:17:38 +02:00
Dave Halter
2cb1bd162f Better signature support for docstrings 2019-05-05 19:09:21 +02:00
Dave Halter
f996df087e Better docstring help 2019-05-05 17:21:23 +02:00
Dave Halter
c647bfa490 Fix a test 2019-05-05 17:09:15 +02:00
Dave Halter
a925301caf Remove the rest of the stub contexts 2019-05-05 16:12:55 +02:00
Dave Halter
202b1784a1 Remove with_stub_context_if_possible 2019-05-05 16:04:24 +02:00
Dave Halter
87fd56859d Remove stubify 2019-05-05 16:02:18 +02:00
Dave Halter
73aca23615 Remove get_stub_contexts 2019-05-05 16:00:45 +02:00
Dave Halter
44b9b8787a Some Bugfixes 2019-05-05 15:59:37 +02:00
Dave Halter
171874d288 Fix all gradual tests 2019-05-05 15:33:56 +02:00
Dave Halter
329270e444 Add is_compiled and a few other things to be more compatible with the new way of handling stubs 2019-05-05 13:23:29 +02:00
Dave Halter
4d3a698a12 Refactor things so goto is working in both directions 2019-05-05 01:16:52 +02:00
Dave Halter
df9c9d8dff Fix a flask issue 2019-05-01 10:47:20 +02:00
Dave Halter
0e42df2da7 Refactor Jedi so we use stub modules as much as possible 2019-05-01 00:52:02 +02:00
Dave Halter
3afcfccba8 Get the tests passing again 2019-04-14 19:02:43 +02:00
Dave Halter
2f562040ac Fix a few remaining issues about the current branch 2019-04-14 18:44:58 +02:00
Dave Halter
6ced926db0 Try to get some more stub to definitions working and vice versa 2019-04-14 17:37:48 +02:00
Dave Halter
ad0000886d Use MethodContext in create_context 2019-04-14 00:17:14 +02:00
Dave Halter
3c74b9bf10 Remove some code that is not necessary anymore, because of an improvement in get_parent_scope 2019-04-13 01:57:50 +02:00
Dave Halter
05eb06d91b Merge remote-tracking branch 'origin/master' into typeshed 2019-04-13 01:52:15 +02:00
Dave Halter
3602c95341 Refactor parent_scope a bit 2019-04-13 01:52:03 +02:00
Dave Halter
b2f6758a9c Merge pull request #1313 from CXuesong/master
get_module_names should include top-level async functions when all_scopes=False.
2019-04-13 00:43:58 +02:00
Andreas Mittelberger
e843c6108d fix add_bracket_after_function had no effect (#1297)
* fix add_bracket_after_function had no effect

* added test for fix

* using monkeypatch to set add_bracket_after_function.
2019-04-13 00:41:02 +02:00
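For reference, a rough sketch of such a test, assuming the 0.13-era API (`jedi.Script(...).completions()`, `Completion.complete`) and pytest's `monkeypatch` fixture; the completed function name in the snippet is made up:

    import jedi


    def test_add_bracket_after_function(monkeypatch):
        # Enable the setting only for this test; monkeypatch restores it afterwards.
        monkeypatch.setattr(jedi.settings, 'add_bracket_after_function', True)
        source = 'def foo_function():\n    pass\nfoo_f'
        completion, = jedi.Script(source).completions()
        # With the setting active, the completed text is expected to end with '('.
        assert completion.complete.endswith('(')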
forest93
2724ac9e07 Make a separate test case. 2019-04-12 23:31:06 +08:00
Dave Halter
201cf880f9 Remove an if that was unnecessary 2019-04-12 12:59:21 +02:00
Dave Halter
0bf4bf36f0 Small change 2019-04-12 12:54:38 +02:00
Dave Halter
3bef9a67b8 Refactor a bit of create_context 2019-04-12 12:34:07 +02:00
Dave Halter
3ba3d72d6b Fix a small issue 2019-04-12 12:09:03 +02:00
Dave Halter
44639ee50e Better py__getattribute__ for ContextSet 2019-04-11 22:59:54 +02:00
Dave Halter
0f037d0e6c Goto for stubs is now working better 2019-04-11 22:06:23 +02:00
forest93
1e12e1e318 Make get_module_names return top-level async functions when all_scopes=False. 2019-04-11 23:38:55 +08:00
Dave Halter
bb050eebed Move creating stub modules 2019-04-11 08:35:16 +02:00
Dave Halter
9f26c27b6d Start adding tests for goto_assignments on stubs 2019-04-10 20:41:05 +02:00
mlangkabel
c801e24afc Fix get_system_environment missing environments when the same Python version has multiple installs
Environment.__init__ may throw an InvalidPythonEnvironment if the call to _get_subprocess() fails. In that case, other registered Python environments of the same Python version may still work and should also be considered.
2019-04-09 23:01:37 +02:00
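In other words, a broken interpreter should be skipped rather than aborting the lookup. A hedged sketch of that pattern using jedi's public API (`create_environment` and `InvalidPythonEnvironment` exist; the candidate list stands in for whatever executables were discovered for the wanted version):

    import jedi
    from jedi.api.environment import InvalidPythonEnvironment


    def pick_working_environment(candidate_executables):
        # Try each registered interpreter of the same version; if creating the
        # environment fails (e.g. its subprocess cannot be started), move on.
        for executable in candidate_executables:
            try:
                return jedi.create_environment(executable, safe=False)
            except InvalidPythonEnvironment:
                continue
        raise InvalidPythonEnvironment('no working interpreter found')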
Dave Halter
31442ecb3b Merge branch 'master' into typeshed 2019-04-09 22:58:30 +02:00
Dave Halter
24a06d2bf9 Merge branch 'names-all-scopes-false-returns-class-fields' of https://github.com/immerrr/jedi 2019-04-09 22:58:03 +02:00
Dave Halter
e61e210b41 Remove some weird changes about importing again 2019-04-09 21:48:57 +02:00
Dave Halter
255d0d9fb5 Fix builtin import issues 2019-04-09 21:15:33 +02:00
Dave Halter
8c9ac923c6 Fix import names from sys path generation 2019-04-08 19:35:58 +02:00
Dave Halter
85fc799d62 Reintroduce a piece of sys_path code with a test
This piece was thought to be unnecessary; it turns out it is needed.
2019-04-08 10:05:12 +02:00
Dave Halter
3d5b13c25e Test function rename 2019-04-08 01:56:24 +02:00
Dave Halter
cccbf50a0e Fix an issue with transform_path_to_dotted 2019-04-08 01:56:05 +02:00
Dave Halter
e50f65527d Somehow removed a test when merging 2019-04-08 01:34:12 +02:00
Dave Halter
a356859e7e Got something small wrong with compatibility 2019-04-07 22:03:26 +02:00
Dave Halter
96d607d411 Cross Python version fixes for unicode/bytes things 2019-04-07 21:51:25 +02:00
Dave Halter
d6232e238a Merge branch 'master' into ts2 2019-04-05 15:44:25 +02:00
Nikhil Mitra
8d0c4d3cec Resolve path in get_cached_default_environment() in api/environment.py to
prevent unnecessary cache busting when using pipenv.
2019-04-05 15:21:46 +02:00
Dave Halter
e95f4c7aa5 Fix module loading in Python 2 2019-04-05 13:39:27 +02:00
Dave Halter
d222d78c7b Zip imports don't have to work in Python2 2019-04-05 12:31:24 +02:00
Dave Halter
aaae4b343e Errors in import module are now better reported 2019-04-05 12:21:05 +02:00
Dave Halter
7ccc0d9d7b Another Python2 fix 2019-04-05 12:20:46 +02:00
Dave Halter
02b01a8bc3 Fix an import error for Python 2 2019-04-04 13:20:41 +02:00
Dave Halter
c0f5c5f24c print_to_stderr can be replaced with a proper future import 2019-04-03 09:37:40 +02:00
Dave Halter
c997d568f3 Remove unused code 2019-04-03 09:30:22 +02:00
Dave Halter
87bcaadf40 Fix a 3.7 issue 2019-04-03 01:16:52 +02:00
Dave Halter
f4a6856e54 Fix some tests 2019-04-03 01:04:18 +02:00
Dave Halter
fa17681cf6 Goto definitions goto stubs now have a proper implementation 2019-04-03 00:28:15 +02:00
Dave Halter
7c56052d58 Make infer public on classes 2019-04-01 09:25:00 +02:00
Dave Halter
2fc53045c7 Goto stubs if on definition 2019-03-31 01:19:35 +01:00
Dave Halter
2f1ce2bbf9 Some test fixes 2019-03-28 19:23:55 +01:00
Dave Halter
aa37f6f738 Fixes for _follow_error_node_imports_if_possible 2019-03-28 10:12:23 +01:00
Dave Halter
2ad652a071 Fix a few more goto_definition error_node imports 2019-03-28 09:34:57 +01:00
Dave Halter
ab8d7e8659 Running a test should not fail if nothing is selected 2019-03-28 08:59:59 +01:00
Dave Halter
7cd79c440c Try to read bytes if possible, not unicode 2019-03-27 22:19:57 +01:00
Dave Halter
a4b5950495 Make it possible to use the parse functions without file_io 2019-03-27 01:05:45 +01:00
Dave Halter
04095f7682 Uncomment previous zip tests that needed bugfixing first 2019-03-27 00:56:15 +01:00
Dave Halter
1c105b5c68 Follow error node imports properly in goto assignments as well 2019-03-27 00:53:35 +01:00
Dave Halter
f4c17e578c Make it possible to use goto_definition on "broken" imports 2019-03-27 00:39:51 +01:00
Dave Halter
993567ca56 Remove submodule dict issues from namespace packages 2019-03-26 18:42:47 +01:00
Dave Halter
e01d901399 Test zip imports that have nested modules 2019-03-26 09:33:54 +01:00
Dave Halter
a437c2cb02 Fix test_imports tests, now zip imports work again 2019-03-26 09:16:38 +01:00
Dave Halter
b6612a83c3 WIP import improvement, getting rid of bad old code 2019-03-21 23:22:19 +01:00
Dave Halter
151935dc67 Avoid property, because there's a __getattr__ on that class 2019-03-21 18:49:56 +01:00
Dave Halter
ad69daf1a3 Update the imports in zip file to be correct 2019-03-20 22:21:47 +01:00
Dave Halter
234f3d93cd Rewrite py__package__ to return a list 2019-03-18 10:01:18 +01:00
micbou
77a7792afc Fix transform_path_to_dotted tests on Windows
Convert paths to normalized absolute ones in transform_path_to_dotted
tests.
2019-03-16 17:34:00 +01:00
Dave Halter
e2fea0a5de Fix some tests because of stub_context changes 2019-03-16 01:09:30 +01:00
Dave Halter
fce37fa0e3 Remove a few prints 2019-03-16 00:23:52 +01:00
Dave Halter
7ab3586e52 Merge branch 'master' into typeshed 2019-03-14 09:26:25 +01:00
Dave Halter
92a8a84ff2 Fix sys.path completions, #1298 2019-03-13 21:11:20 +01:00
Dave Halter
156e5f6beb Add two typeshed tests 2019-03-13 10:04:19 +01:00
Dave Halter
8e9a91abf8 Implement is_stub and goto_stubs for the API 2019-03-11 19:13:24 +01:00
Dave Halter
32d2397e64 Move test_stub to test_gradual 2019-03-10 12:02:51 +01:00
Dave Halter
087a58965b Add a typeshed test 2019-03-10 12:01:00 +01:00
Dave Halter
b7a164afa8 Merge branch 'master' into typeshed 2019-03-08 18:59:33 +01:00
Dave Halter
b659b20d27 Fix an issue between different subprocess versions 2019-03-08 18:20:00 +01:00
Dave Halter
d77e43b57d Enforce unicode because of Python 2 2019-03-08 16:41:08 +01:00
Dave Halter
bfd8ce475a Merge master into typeshed 2019-03-08 16:36:06 +01:00
Dave Halter
967d35e4be Correct a docstring 2019-03-08 16:35:15 +01:00
Dave Halter
0cad79ad18 Merge branch 'master' into typeshed 2019-03-08 16:25:45 +01:00
Dave Halter
cd8c9436c5 Merge branch 'master' of github.com:davidhalter/jedi 2019-03-08 16:23:46 +01:00
Dave Halter
f93134d4f8 Two simple test fixes 2019-03-08 16:23:37 +01:00
Dave Halter
5743f54d69 One more relative import fix 2019-03-08 16:01:56 +01:00
Dave Halter
1914d10836 Fix relative imports outside of the proper paths 2019-03-08 14:25:54 +01:00
Dave Halter
6b579d53ec Some more refactoring for relative imports 2019-03-08 10:54:28 +01:00
Stanislav Grozev
6031971028 Use expanded paths when looking for virtualenv root
This fixes virtualenv resolution under macOS and Pipenv.
2019-03-08 01:22:21 +01:00
Dave Halter
c1d65ff144 Start reworking the relative imports 2019-03-07 00:27:51 +01:00
Dave Halter
7374819ade Add a repr to ImplicitNamespaceContext 2019-03-06 08:36:50 +01:00
Dave Halter
9d19b060a9 Add a better comment for imports 2019-03-04 09:34:17 +01:00
Dave Halter
23d61e5e97 Restructure relative importing a bit and improve tests 2019-03-04 09:24:38 +01:00
micbou
46742328b6 Improve test_import_completion_docstring robustness 2019-03-02 09:58:01 +01:00
Dave Halter
467c2e5def Merge branch 'master' into typeshed
There were quite a few conflicts, because there were two rewrites of the
path-to-dotted function.
2019-03-01 10:13:16 +01:00
Dave Halter
ffd9a6b484 Make it possible to complete in non-Python files 2019-02-28 20:04:17 +01:00
Dave Halter
8aca357de6 Write a test for #1209
Relative imports were failing in nested Python packages. With the fix to
transforming paths to dotted paths this should already be a lot better;
still, here's a regression test.
2019-02-28 09:51:47 +01:00
Dave Halter
1a32663f85 The calculation of dotted paths from normal paths was completely wrong 2019-02-28 09:42:56 +01:00
tamago324
4fecca032d Fix typo 2019-02-27 20:40:59 +01:00
Dave Halter
2a9e678877 Merge branch 'master' into typeshed 2019-02-27 13:13:17 +01:00
Dave Halter
17136e03d2 Fix get_parent_scope 2019-02-27 13:08:21 +01:00
Dave Halter
94f2677752 Fix names selection and params, fixes #1283 2019-02-26 00:20:33 +01:00
Dave Halter
eac69aef2b Infer names in the correct way, fixes #1286 2019-02-25 21:48:57 +01:00
Dave Halter
2dd2d06bca Add a todo 2019-02-25 00:27:27 +01:00
Dave Halter
5a2e3ee8e3 Filter self names in a more correct way, fixes #1275 2019-02-25 00:26:34 +01:00
Dave Halter
8ac7d1fdb6 Use the internal parse function to avoid UnicodeDecodeError in mixed, fixes #1277 2019-02-24 19:56:17 +01:00
Dave Halter
0bf8a69024 v13.3 release notes 2019-02-24 18:45:07 +01:00
Dave Halter
9bb8f335c9 A small improvement for environments
see comment in 8d313e014f
2019-02-22 01:04:01 +01:00
Dave Halter
8d313e014f Check for specific Python versions first on unix, see davidhalter/jedi-vim#870 2019-02-22 00:34:03 +01:00
Dave Halter
a79d386eba Cleanup SameEnvironment and use the same logic for creation in virtualenvs 2019-02-22 00:24:55 +01:00
Dave Halter
48b137a7f5 Revert "Remove an unused function"
This reverts commit efd8861d62.
2019-02-21 17:54:01 +01:00
Dave Halter
b4a4dacebd Fix embedded Python with Jedi (see comments in source code), fixes davidhalter/jedi-vim#870 2019-02-21 10:19:28 +01:00
Dave Halter
efd8861d62 Remove an unused function 2019-02-21 10:16:17 +01:00
Dave Halter
2f86f549f5 Improve an error message, see #1279 2019-02-16 04:32:03 +01:00
Marc Zimmermann
cc0c4cc308 fixing permission denied errors with project.json 2019-02-16 04:28:26 +01:00
Dave Halter
e3d5ee8332 Don't use a while loop in py__iter__ 2019-01-25 20:09:27 +01:00
Dave Halter
3c201cc36c Fix power operation for Python 2 2018-12-25 00:59:00 +01:00
Dave Halter
f6983d6126 Add an empty init file for Python 2 2018-12-25 00:53:26 +01:00
Dave Halter
1c80705276 Fix power operation, fixes #1268 2018-12-25 00:51:22 +01:00
Dave Halter
d3f205f634 Split up the typeshed file 2018-12-25 00:21:44 +01:00
Dave Halter
b542b17d93 Remove old todo list for annotations 2018-12-24 21:44:54 +01:00
Dave Halter
59c7623769 Move annotation pep0484 file (about anontations) to gradual folder 2018-12-24 17:48:21 +01:00
Dave Halter
e2ab4c060f Move all the gradual typing stuff into one folder 2018-12-24 17:40:47 +01:00
Dave Halter
025b8bba76 Fix a unicode path issue 2018-12-23 16:29:25 +01:00
Dave Halter
5e7ff808d4 Fix f-string evaluation, fixes #1248 2018-12-23 15:32:37 +01:00
Dave Halter
86fbf3fef6 Fixed a string deprecation warning, fixes #1261 2018-12-22 22:49:23 +01:00
Dave Halter
24174632d4 Fix some bugs of the last few commits 2018-12-22 22:08:54 +01:00
Dave Halter
1065768c77 Use ContextualizedNode instead of Node in get_calling_nodes
This improves working with these nodes by a lot.
2018-12-22 14:55:37 +01:00
Dave Halter
ca784916bb Fix get_modules_containing_name 2018-12-22 14:33:24 +01:00
Dave Halter
fcda3f7bc5 Properly handle no __getitem__ on CompiledObject 2018-12-20 00:34:15 +01:00
Dave Halter
fcda62862c Fix calculate_dotted_path_from_sys_path. It was broken beyond stupid. 2018-12-18 09:30:49 +01:00
Dave Halter
881ffadb5c Python 3.7 was not disabled in the right way for travis 2018-12-16 19:24:10 +01:00
Dave Halter
7b20ad7749 Make a doctest simpler that only led to issues in the past 2018-12-16 19:22:17 +01:00
Dave Halter
ddef626e66 Disable Python 3.7 on travis again for now 2018-12-16 19:17:46 +01:00
Dave Halter
50399935c9 Revert "Get rid of the fancy magic of preinstalling Python versions"
This reverts commit b561d1fc17.
2018-12-16 19:15:53 +01:00
Dave Halter
57587f71ab Make it possible that tests work also on Windows 2018-12-16 19:09:08 +01:00
Dave Halter
b561d1fc17 Get rid of the fancy magic of preinstalling Python versions 2018-12-16 18:58:04 +01:00
Dave Halter
ed90a69e2c Clone appveyor submodules recursively 2018-12-16 18:57:26 +01:00
Dave Halter
3703c43d62 Testing the nightly should use a more modern Python version 2018-12-16 18:27:47 +01:00
Dave Halter
30c2e64d9e py__name__ does not need to be defined 2018-12-16 18:24:10 +01:00
Dave Halter
af12789762 Try to fix the appveyor config 2018-12-16 18:20:00 +01:00
Dave Halter
9bf2b9f6e4 Add Python 3.7 to appveyor 2018-12-16 18:18:00 +01:00
Dave Halter
50edd82268 Add 3.7 to to travis config 2018-12-16 18:16:50 +01:00
Dave Halter
babf074448 Sometimes os_path_join is really too slow :( 2018-12-16 17:58:44 +01:00
Dave Halter
9d3043ee39 Cloning typeshed should be possible without git write access 2018-12-16 17:40:41 +01:00
Dave Halter
33b73d7fbc Typing does not need to be installed for Jedi to work, vendored typeshed is enough 2018-12-16 17:26:56 +01:00
Dave Halter
af51c9cc33 Fix Python 3 with Python 2 environment issues 2018-12-16 17:13:02 +01:00
Dave Halter
f55da1e1d6 Fix issues with Python 2.7 running a 3.6 env 2018-12-16 15:53:42 +01:00
Dave Halter
ba0d71bef1 Simplify tox.ini 2018-12-16 15:53:21 +01:00
Dave Halter
add33f5f80 Fix grammar cache problems, because multiple grammars were potentially loaded 2018-12-16 13:14:05 +01:00
Dave Halter
79189f243a Upgrade typeshed version 2018-12-16 00:13:54 +01:00
Dave Halter
81b42c8633 Fix a test for Python 2 2018-12-15 22:27:45 +01:00
Dave Halter
541a8d3a3e Fix some doctests that were slightly changed because of stubs 2018-12-15 22:20:05 +01:00
Dave Halter
3cbba71e7e Merge branch 'master' into typeshed 2018-12-15 22:19:02 +01:00
Dave Halter
9617d4527d setup.py was not executable in Python3.7 2018-12-15 22:18:42 +01:00
Dave Halter
dc77c12e83 Fix pytest issues with this branch 2018-12-15 20:48:58 +01:00
Dave Halter
3ec78ba6c9 Merge branch 'master' into typeshed 2018-12-15 20:38:03 +01:00
Dave Halter
86ae11eb43 Add a new release 0.13.2 2018-12-15 20:09:36 +01:00
Dave Halter
078595f8d7 Merge pull request #1262 from hoefling/pytest-marks
Use `pytest.param` when marking single parameters
2018-12-15 19:14:56 +01:00
Dave Halter
a21eaf9dba Merge remote-tracking branch 'origin/master' into typeshed 2018-12-15 19:05:10 +01:00
Bet4
76417cc3c1 Fix environment cache regression (#1238)
The only remaining issue with this PR is that it does compare with executable instead of _start_executable (they don't need to be the same).
2018-12-15 18:37:28 +01:00
Dave Halter
249564d6ea Merge remote-tracking branch 'origin/master' into typeshed 2018-12-15 18:20:51 +01:00
Dave Halter
90a28c7b1e Don't make complicated subprocess calls for version info comparisons 2018-12-15 17:10:40 +01:00
Dave Halter
46da1df5ae Add an assert that makes it impossible to nest classes of the same type 2018-12-14 09:37:30 +01:00
Dave Halter
fda6409600 Cache _apply_decorators 2018-12-14 09:36:13 +01:00
Dave Halter
d1be92ac80 Cache used names definition finding per evaluator 2018-12-14 09:20:42 +01:00
Dave Halter
b6cb1fb72d Rewrite the typeshed algorithm of matching actual and stub classes 2018-12-13 09:32:57 +01:00
Dave Halter
26b49f8d01 Make the profile_output script usable for Python 2 as well 2018-12-11 00:11:49 +01:00
Dave Halter
c87398a8c2 Remove unused code 2018-12-10 21:34:47 +01:00
Dave Halter
3940fd8eff Restructure eval_annotation so that it's more understandable 2018-12-09 20:48:18 +01:00
Dave Halter
aa4846bff6 If the stub module is not a package but the actual module is, it should not fail the import 2018-12-09 13:39:40 +01:00
Dave Halter
3ec194093d Fix _sre issues 2018-12-09 12:54:39 +01:00
Dave Halter
f7442032b2 Fix version differences for re.sub 2018-12-09 12:50:01 +01:00
Dave Halter
2c5e2609f3 Overloaded functions now return values even if nothing matches 2018-12-09 12:43:55 +01:00
Dave Halter
ae1f5fa511 Fix namedtuples and reactivate tests for Python 2 2018-12-09 12:41:58 +01:00
Dave Halter
0c37256050 Change some tests in Python2 2018-12-08 23:55:08 +01:00
oleg.hoefling
70800a6dc2 bumped pytest dependency to 3.1.0 2018-12-07 18:22:29 +01:00
oleg.hoefling
4711b85b50 used pytest.param to comply with pytest>=4 2018-12-07 17:49:39 +01:00
Dave Halter
decb5046ea Some Python 2.7 fixes 2018-12-07 08:58:17 +01:00
Dave Halter
b2824a3547 Remove a test, because it's different in Python 2/3 and covered by typeshed 2018-12-06 19:07:06 +01:00
Dave Halter
74c965b55c Fix a return type for py__iter__() 2018-12-06 18:54:51 +01:00
Dave Halter
83ba02d0fb Fix remaining issues for Python 3.4 2018-12-06 18:19:30 +01:00
Dave Halter
63bd762f91 Fix a colorama debug highlighting issue 2018-12-06 01:12:48 +01:00
Dave Halter
cc9641f8c1 Fixed an issue about compiled bound methods 2018-12-06 01:03:17 +01:00
Dave Halter
c446bcf885 Fix Python 3.5 issues 2018-12-06 00:59:56 +01:00
Dave Halter
d9e711ab11 Fix remaining Python 3.7 issues to get the tests to pass 2018-12-06 00:35:09 +01:00
Dave Halter
3260867918 Move the stdlib namedtuple template of 3.6 to Jedi. 2018-12-06 00:34:52 +01:00
Dave Halter
d90011c002 Cleanup a few issues with the latest module refactoring 2018-12-05 22:55:56 +01:00
Dave Halter
2406c8374f StubModuleContext is now a wrapped context 2018-12-05 21:33:23 +01:00
Dave Halter
3d4f241129 Cache Script._get_module 2018-12-05 18:18:26 +01:00
Dave Halter
9766abf1c5 Fix a small caching issue 2018-12-05 18:17:33 +01:00
Dave Halter
feefde400e Fix mro for typing classes 2018-12-05 00:16:06 +01:00
Dave Halter
15ae767a79 Fix mro detail 2018-12-05 00:07:21 +01:00
Dave Halter
b293e8e9e1 Reintroduce CompiledStubName, because we actually need it for positions 2018-12-04 19:25:01 +01:00
Dave Halter
bb0bf41cab Use ClassMixin the right way in typing 2018-12-04 00:36:53 +01:00
Dave Halter
b2c0597a7d Fix names for typing classes 2018-12-03 00:56:19 +01:00
Dave Halter
3c3ad7b240 Add a generator cache for py__mro__ 2018-12-03 00:51:45 +01:00
Dave Halter
a7c21eff4b Move py__mro__ to ClassMixin 2018-12-01 15:24:21 +01:00
Dave Halter
6b86ad9083 Move py__mro__ calls to direct calls, because it's defined on ClassMixin 2018-12-01 15:17:22 +01:00
Dave Halter
2b268435c4 Make some profile output better 2018-12-01 13:35:29 +01:00
Dave Halter
07d48df314 Make it possible to have higher precision with pstats displayed 2018-12-01 11:45:09 +01:00
Dave Halter
a07b062752 Merge StubName and CompiledNameWithStub 2018-11-30 23:36:30 +01:00
Dave Halter
dd1e53b498 Small refactoring 2018-11-28 22:48:33 +01:00
Dave Halter
2eb5e9b42d Improve the profiling script 2018-11-28 22:48:13 +01:00
Dave Halter
5e6e4356fc Start using gather_annotation_classes 2018-11-27 01:17:12 +01:00
Dave Halter
5bb88ca703 Make it possible to gather annotation classes for Union and Optional 2018-11-27 01:14:15 +01:00
micbou
368bf7e58a Improve docstring formatting 2018-11-26 00:26:34 +01:00
Dave Halter
eb27c64c71 Make os.path import issues clearer 2018-11-25 19:25:21 +01:00
Dave Halter
644e292fa7 Get rid of is_super_class and do some different things in analysis 2018-11-24 14:09:14 +01:00
Daniel Hahler
28ecbd6b6a Add qa env
Ignores tests with flake8 completely for now.
2018-11-23 22:12:08 +01:00
Dave Halter
021d1bc568 py__iter__ now takes a contextualized_node argument and raises the analysis errors itself 2018-11-23 18:22:38 +01:00
Dave Halter
12a0357f6b Remove class_context from BoundMethod, it's not really needed anymore 2018-11-23 00:11:39 +01:00
Dave Halter
55982d699b Use AnnotatedSubClass for Async classes like everywhere else as stubs 2018-11-23 00:03:32 +01:00
Dave Halter
1948f23fb3 Fix some issues around stub methods 2018-11-21 23:47:40 +01:00
Dave Halter
cb3cd3022d get_signatures should automatically use the stubs if possible 2018-11-19 09:58:35 +01:00
Dave Halter
d2c0b13a02 Fix some small small issues around the latest commits 2018-11-18 23:53:56 +01:00
Dave Halter
cf6cae728a Some issues with inheritance 2018-11-18 22:29:52 +01:00
Dave Halter
8b039287c8 Try to use a CompiledStubClass to wrap functions inside classes 2018-11-18 17:43:46 +01:00
Dave Halter
75203c55f8 Make some things clearer around CompiledStubs 2018-11-16 09:49:46 +01:00
Dave Halter
aeeb4880b1 Use the right context (stub) to check if we should use a CompiledStubClass or not 2018-11-14 22:59:49 +01:00
Dave Halter
d5d7679120 Fix a few of the issues with compiled classes and typeshed and docs 2018-11-14 19:19:56 +01:00
Dave Halter
986c69abea Simplify some more call signature things 2018-11-11 22:44:32 +01:00
Dave Halter
a73c7092bb Change signature a little bit 2018-11-11 22:36:05 +01:00
Dave Halter
3ecae30b5c Delete old get_param_names code in API. 2018-11-11 19:45:00 +01:00
Dave Halter
6dc53c3887 Add at least partial support for signatures for builtins 2018-11-11 19:32:29 +01:00
Dave Halter
4fbede7445 Rework some call signature issues 2018-11-11 17:01:12 +01:00
Dave Halter
c29cde6784 Refactor the call signatures to avoid getting multiple call signatures for some overloaded objects 2018-11-07 23:58:25 +01:00
Dave Halter
f610af36c6 Don't use get_function_slot_names in classes anymore 2018-11-07 09:49:59 +01:00
Dave Halter
d8090cfa0a Start implementing get_signatures 2018-11-07 01:20:39 +01:00
Dave Halter
b847bb1c72 Some minor test changes to get typeshed almost fully working 2018-11-06 09:00:07 +01:00
Dave Halter
4491175db4 Fix an issue with namedtuples when using strings as params 2018-11-06 08:59:30 +01:00
Dave Halter
d0fa228282 Change a test temporarily 2018-11-05 23:56:51 +01:00
Dave Halter
faacfb9578 One test needs to change a bit 2018-11-05 00:28:51 +01:00
Dave Halter
26329de5a5 Underscored objects in stubs are not public and should never be listed 2018-11-03 14:36:46 +01:00
Dave Halter
1eb8658922 Fix issues with itemgetter 2018-11-03 13:57:15 +01:00
Dave Halter
8fa3f093a1 Prefer stub contexts in bound methods 2018-11-02 16:32:38 +01:00
Dave Halter
fbc327b960 Refactor py__get__ support pretty heavily 2018-11-01 19:09:07 +01:00
Dave Halter
52aa5b6764 The builtins/typing modules are not causing recursions; they use annotations to give results. 2018-10-31 09:58:20 +01:00
Dave Halter
4a5cb389b7 Revert "Remove a function that is no longer needed"
This reverts commit 3581ce7059.
2018-10-30 23:35:02 +01:00
Dave Halter
f2d67f4a5d Make version_info understandable so it can be used in for typeshed 2018-10-30 23:31:57 +01:00
Dave Halter
3581ce7059 Remove a function that is no longer needed 2018-10-30 22:01:09 +01:00
Dave Halter
0a67b387c6 Fix most issues with dynamic arrays 2018-10-29 21:05:12 +01:00
Dave Halter
a352fc8595 Fix an issue with recursion for arrays 2018-10-26 00:26:23 +02:00
Dave Halter
a93dff2673 Fix star_expr unpacking issues. For now star_expr is not supported 2018-10-26 00:17:28 +02:00
Dave Halter
7856d27724 Clarify something about contexts 2018-10-24 00:45:06 +02:00
Dave Halter
da3ffd8bd0 Typo 2018-10-24 00:41:17 +02:00
Dave Halter
742179ee38 Add __class__, because of how it's represented as a property 2018-10-24 00:39:11 +02:00
Dave Halter
d5d9e51f66 Move py__call__to FunctionMixin 2018-10-24 00:33:07 +02:00
Dave Halter
19096f83db Hide a warning in some cases 2018-10-24 00:11:07 +02:00
Dave Halter
2f3fb54ebb Add another test for __itemsize__ 2018-10-23 23:33:43 +02:00
Dave Halter
e12f9d5a1c Fix a small oversight about type 2018-10-23 23:31:55 +02:00
Dave Halter
a45d86c2a4 The sqlite3 test was not correct and depends if there is a RowFactory present 2018-10-23 09:46:09 +02:00
Dave Halter
be58b627b2 Upgrade typeshed 2018-10-21 00:35:28 +02:00
Dave Halter
b008a525cb Fix some more things to get async working 2018-10-21 00:35:07 +02:00
Dave Halter
228440c03f Better wrapping of BoundMethod 2018-10-18 19:18:20 +02:00
immerrr
3f5ac0cf56 test_param_docstring: use all_scopes=True to ensure param is extracted 2018-10-18 14:47:20 +03:00
immerrr
1e8674b51c get_module_names: fix "all_scopes=False" handling
Previously, names defined within the scope of first-level classes or functions
were returned.
2018-10-18 14:47:20 +03:00
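To illustrate the distinction with the 2018-era public API (`jedi.names`, since replaced by `Script.get_names`):

    import jedi

    source = '''
    def top_level():
        inner = 1

    class Foo:
        attribute = 2
    '''

    # all_scopes=False: only module-level names ('top_level', 'Foo') are expected,
    # not 'inner' or 'attribute'.
    print([d.name for d in jedi.names(source, all_scopes=False)])

    # all_scopes=True additionally walks into classes and functions.
    print([d.name for d in jedi.names(source, all_scopes=True)])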
immerrr
a8401f6923 Add failing test for jedi.api.names(..., all_scopes=False) 2018-10-18 13:54:33 +03:00
Jelte Fennema
3bdb941daa Add an exact_key_items method to DictComprehension fixes #1233 2018-10-14 17:08:44 +02:00
Dave Halter
dddd302980 Fix issues with listing type vars 2018-10-10 02:05:23 +02:00
Dave Halter
5d44e1991f Create better class filters 2018-10-10 01:45:10 +02:00
Dave Halter
55f0966a9a StubClassContext is now also a ContextWrapper 2018-10-09 22:53:33 +02:00
Dave Halter
7daa26ce81 Move some functions in the base context to make ContextWrapper more usable 2018-10-09 10:00:17 +02:00
Dave Halter
8dca2b81e4 Start using ContextWrapper for annotated classes 2018-10-09 09:58:19 +02:00
Dave Halter
b14b3d1012 Better debugging 2018-10-06 16:42:02 +02:00
Dave Halter
43c04a71a8 The generics of a class of anonymous instances should never be inferred 2018-10-05 19:06:41 +02:00
Dave Halter
9313fb9021 Avoid an issue with dict comprehensions 2018-10-05 19:03:32 +02:00
Dave Halter
380f0ac404 Fix itemgetter for tuples 2018-10-05 10:51:52 +02:00
Dave Halter
1b8c87215d Fix an _sqlite3 issue temporarily 2018-10-05 10:51:39 +02:00
Dave Halter
65340e6e24 Some more work on the filter merging 2018-10-05 01:57:34 +02:00
Dave Halter
f96a14e7f4 Start rewriting the StubFilter 2018-10-03 23:01:56 +02:00
Dave Halter
ad83f5419a Merge branch 'master' into typeshed 2018-10-02 19:07:59 +02:00
Dave Halter
bd1010bbd2 Create a new 0.13.1 release 2018-10-02 19:07:35 +02:00
Dave Halter
23b3327b1d Fixed completions of global vars and tensorflow slowness, fixes #1228, #1116 2018-10-02 15:28:51 +02:00
Dave Halter
075577d50c The changelog date was wrong 2018-10-02 15:25:31 +02:00
Dave Halter
96b57f46cb Release notes for 0.13.0 2018-10-02 01:14:28 +02:00
Dave Halter
c24eb4bd67 Fix tensorflow issues with a few hacks (temporary), fixes #1195 2018-10-02 00:52:11 +02:00
Dave Halter
862f611829 If the VIRTUAL_ENV variable changes, need to reload the default environment, fixes #1201, #1200 2018-09-30 19:07:48 +02:00
Dave Halter
f9cbc65f2d Return SameEnvironment as a default, fixes #1226, #1196 2018-09-30 14:07:37 +02:00
Dave Halter
e1f9624bd4 Document that using the REPL autocompletion is only available on Linux/Mac, fixes #1184 2018-09-30 13:36:05 +02:00
Dave Halter
6a2a2a9fa1 Fix an issue with f-strings, fixes #1224 2018-09-30 13:26:54 +02:00
Dave Halter
4545d91929 Ignore some errors that are happening when the Python process ends and its subprocesses are cleaned up 2018-09-30 13:26:26 +02:00
Dave Halter
ba5abf4700 Change some tests slightly 2018-09-30 00:35:45 +02:00
Dave Halter
78f0cc9e8a Better indentation when running run.py 2018-09-29 01:19:36 +02:00
Dave Halter
d6bdb206c8 Remove the old typing module support in favor of the new one 2018-09-29 01:09:09 +02:00
Dave Halter
6539031d5a Remove CompiledStubClassContext, it's not used currently 2018-09-29 00:59:13 +02:00
Dave Halter
f35c233289 Fix some small issues with resulting types 2018-09-28 18:22:57 +02:00
Dave Halter
fbd72179a1 Define generics from a different function 2018-09-28 18:16:24 +02:00
Dave Halter
af5d9d804e A better way to define generics 2018-09-28 09:25:12 +02:00
Dave Halter
8e8271cf54 Refactor dict/set/list/tuple literal generic inferring 2018-09-27 00:01:35 +02:00
Dave Halter
b5b0214c3c Fix forward reference resolving 2018-09-26 09:18:04 +02:00
Dave Halter
4bb7a595e8 Fix some issues with signature matching 2018-09-25 23:05:23 +02:00
Dave Halter
7d3eba1d8d py__bool__ should be called on CompiledObject in CompiledValue 2018-09-25 08:58:01 +02:00
Dave Halter
f3b2d49880 Fix annotation variables 2018-09-25 00:33:44 +02:00
Dave Halter
bdff4e21a8 Fix classmethod issues 2018-09-25 00:19:55 +02:00
Dave Halter
f1b45bed96 Fix some property issues 2018-09-24 22:22:50 +02:00
Dave Halter
fe41c29b29 Implement iter, it's probably necessary 2018-09-24 21:10:54 +02:00
Dave Halter
a06ca5d035 Fix generator return issues 2018-09-24 20:59:43 +02:00
Dave Halter
75a02a13d9 Use ContextSet closer to the way Python's set works 2018-09-24 20:30:57 +02:00
Dave Halter
8fad33b125 Fix some async issues 2018-09-24 09:45:10 +02:00
Dave Halter
bbc6e830e2 Make it possible to use ContextSet with an iterable parameter 2018-09-24 09:43:35 +02:00
Dave Halter
ef9d0421fa Merge remote-tracking branch 'origin/master' into typeshed 2018-09-24 00:16:13 +02:00
Dave Halter
cc493866cd Try to introduce is_instance and is_function 2018-09-24 00:15:16 +02:00
Dave Halter
2ec4d1e426 The BUILTINS special object is no longer used 2018-09-23 23:24:24 +02:00
Dave Halter
de311b2f2d Replace the Generator class for now 2018-09-23 23:22:33 +02:00
Dave Halter
c2b78b175c Use async generator/async functions from typeshed 2018-09-23 22:57:08 +02:00
Claude
a2b984ce24 also remove crashes with pep 448 unpacking of lists and sets 2018-09-23 21:00:11 +02:00
Claude
6bc79b4933 Fixed crash (and now recognises correctly) {**d, "b": "b"}["b"] 2018-09-23 21:00:11 +02:00
Claude
b9127147e4 Recognize {**d} as a dict instead of set 2018-09-23 21:00:11 +02:00
Dave Halter
ff6516d1d7 Replace AsyncGenerator 2018-09-23 15:41:23 +02:00
Dave Halter
f435f23570 Small changes so some type var inferring works better
However this change is a bit controversial, because it involves some strange class matching that we might need to revisit
2018-09-23 00:41:32 +02:00
Dave Halter
994e7d1910 Fix an issue with type vars 2018-09-22 21:00:42 +02:00
Daniel Hahler
afb2755c27 Add extras_require=testing 2018-09-22 10:03:12 +02:00
Dave Halter
389d4e3d9c Fix inferring dict.values() 2018-09-21 01:09:13 +02:00
Dave Halter
43ffcb0802 Also return the issues when returning the executed params 2018-09-21 00:20:24 +02:00
Dave Halter
5fda4a2f8b Start putting the signature matching onto the ExecutedParam class 2018-09-20 21:14:07 +02:00
Dave Halter
9807a7f038 Infer dict.get() in a fancy way 2018-09-19 01:50:35 +02:00
Dave Halter
57fa5f5bd9 Fix some signature matching for methods 2018-09-18 23:48:26 +02:00
Dave Halter
1b11162132 Quite a few changes to prepare arrays 2018-09-18 00:17:51 +02:00
Dave Halter
75ab83da63 Make it possible to have a string_name attribute on instance params 2018-09-17 17:44:23 +02:00
Dave Halter
cc3b08fd1b More fixes, because of CompiledObject modifications 2018-09-17 02:40:34 +02:00
Dave Halter
eb9a852443 Remove fakes, RIP 2018-09-17 02:25:01 +02:00
Dave Halter
93d50e0f0c Get more things working 2018-09-17 02:16:16 +02:00
Dave Halter
62df944c47 Fix a few issues with the newly defined CompiledValue 2018-09-17 02:10:27 +02:00
Dave Halter
d07d1a78d3 Use CompiledValue for simple values 2018-09-17 01:05:36 +02:00
Dave Halter
1107967f76 Fix some small issues 2018-09-16 14:31:55 +02:00
Daniel Hahler
56bd795100 _get_virtual_env_from_var: use safe=False
Without this creating an env from VIRTUAL_ENV will always silently fail
if it is not the same/current environment.
2018-09-16 11:37:22 +02:00
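Put differently, an environment coming from VIRTUAL_ENV is treated as trusted. A minimal sketch of the described behaviour (the helper name is paraphrased; `create_environment(..., safe=False)` is jedi's public API):

    import os

    import jedi
    from jedi.api.environment import InvalidPythonEnvironment


    def get_virtual_env_from_var():
        # The variable points at a user-chosen virtualenv, so safe=False is used;
        # with the default safe=True a non-current interpreter gets rejected and
        # the lookup silently fails.
        var = os.environ.get('VIRTUAL_ENV')
        if not var:
            return None
        try:
            return jedi.create_environment(var, safe=False)
        except InvalidPythonEnvironment:
            return None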
Daniel Hahler
cdb760487b tests: venv_path: use session scope 2018-09-16 11:24:27 +02:00
Daniel Hahler
fc9a55b042 jedi/api/environment.py: minor flake8 fix 2018-09-16 11:22:02 +02:00
Dave Halter
5d9f29743c Get iter() working and a lot of other typeshed reverse engineering of type vars 2018-09-16 02:19:29 +02:00
Daniel Hahler
1cf5b194ca jedi.api.environment._SUPPORTED_PYTHONS: add 3.7
The grammar is available in parso already, and it works in general.
2018-09-15 16:58:07 +02:00
Dave Halter
6807e3b6d5 Use py__name__ instead of var_name for type vars 2018-09-15 11:43:23 +02:00
Dave Halter
1244eb9998 Better debug statements 2018-09-13 22:47:12 +02:00
Dave Halter
9ece2844f4 Better is_same_class function 2018-09-13 22:41:30 +02:00
Dave Halter
a646d930c8 Use some solid caching for typing 2018-09-12 22:58:35 +02:00
Dave Halter
6f8385143f Use a frozenset in context sets and make it comparable/hashable 2018-09-12 21:44:34 +02:00
Dave Halter
1a29552bff open returns str and bytes now with typeshed 2018-09-10 00:56:50 +02:00
Dave Halter
190a531daa Fix the reversed object 2018-09-10 00:30:24 +02:00
Nicholas Gates
a68e35c895 Comprehension parent 2018-09-09 22:49:06 +01:00
Dave Halter
9722860417 Don't use ValueError, it could be thrown somewhere else 2018-09-09 17:04:03 +02:00
Dave Halter
7fff203360 Fix the next builtin 2018-09-09 16:20:23 +02:00
Dave Halter
bd3bd2e53b Fix type completions on classes 2018-09-09 15:51:42 +02:00
Dave Halter
6abd96a398 Try to introduce a few new classes to better deal with compiled objects 2018-09-08 17:48:00 +02:00
Dave Halter
eac8cfe63d Fix mro 2018-09-08 17:04:07 +02:00
Dave Halter
928e80c9e9 Fix search_global for builtins 2018-09-08 16:58:18 +02:00
Dave Halter
4a69ab3bf8 Cleanup StubParserTreeFilter.values 2018-09-08 14:13:14 +02:00
Dave Halter
91a18ec63c Try to re-implement reversed 2018-09-07 23:00:32 +02:00
Dave Halter
9e7879d43f Move py__mro__ to a separate function 2018-09-07 00:46:54 +02:00
Dave Halter
99c08fd205 Flows should be respected even in stubs 2018-09-07 00:25:08 +02:00
Dave Halter
82af902cc8 Actually use the previously written builtins_next function 2018-09-06 19:24:48 +02:00
Dave Halter
d0c1df5f2a TreeContextWrapper -> ContextWrapper 2018-09-06 19:13:59 +02:00
Dave Halter
a5e6f26267 get_filters should always have the default search_global=False 2018-09-06 01:06:09 +02:00
Dave Halter
4730c71b16 Evaluate constraints instead of Any 2018-09-06 00:59:42 +02:00
Dave Halter
9cbf20aa48 Start replacing the builtin module 2018-09-06 00:30:08 +02:00
Dave Halter
68bd61708e pkg_resources doesn't come packaged with the CPython stdlib 2018-09-05 19:25:27 +02:00
Dave Halter
fa16c9e59d Fix some name inference with stubs 2018-09-05 10:29:37 +02:00
Dave Halter
39162de2a8 Some more minor adaptions 2018-09-05 01:49:19 +02:00
Dave Halter
4a3fc91c1e Implement StubParserTreeFilter.values 2018-09-05 01:36:12 +02:00
Dave Halter
ab872b9a34 Fix some tests 2018-09-05 00:10:25 +02:00
Dave Halter
e086c433ff Fix compiled docstrings for stubs 2018-09-04 10:08:09 +02:00
Dave Halter
5d24bc7625 Refactor the compiled name stub wrappers a bit 2018-09-04 09:44:29 +02:00
Dave Halter
74db580671 Get compiled name working a bit better with stubs 2018-09-04 01:51:02 +02:00
Dave Halter
6036ea60d1 Fix interpreter issues with modules 2018-09-04 01:02:00 +02:00
Dave Halter
f432a0b7c4 Fix namedtuple and property issues 2018-09-04 00:27:40 +02:00
Dave Halter
38176ae7e6 Implement itemgetter partially 2018-09-04 00:01:55 +02:00
Dave Halter
35ce54630e Make it possible to use *args in argument clinic 2018-09-03 19:12:36 +02:00
Dave Halter
39f1dfc85e WIP of namedtuple/itemgetter/property 2018-09-03 09:50:51 +02:00
Dave Halter
0edc63ca8b Fix an issue in the tests that typeshed avoids 2018-09-03 01:41:55 +02:00
Dave Halter
3351b06603 Implement random.choice 2018-09-03 01:35:30 +02:00
Dave Halter
5302032b63 The sub typeshed definitions are wrong at the moment 2018-09-03 01:04:41 +02:00
Dave Halter
6bf21c4157 Better typevar class comparisons 2018-09-03 00:58:10 +02:00
Dave Halter
a28b179a45 Fix partial 2018-09-02 19:12:13 +02:00
Dave Halter
7d6141abb7 Fix some small things to make a lot more tests pass 2018-09-02 14:03:43 +02:00
Dave Halter
e3203ebaa5 Try to change the module cache 2018-09-02 13:06:36 +02:00
Dave Halter
ecda9cc746 Move py__getattribute__ to typeshed imports 2018-09-01 17:17:39 +02:00
Dave Halter
ab4e415aec Actually make nested stubs usable 2018-09-01 12:36:05 +02:00
Dave Halter
369dca79ef For now arrays just return an integer if the index is something random 2018-09-01 12:35:30 +02:00
Dave Halter
8dc2aee4b4 Fix py__mro__ for typing classes 2018-08-31 09:50:04 +02:00
Dave Halter
78ac2c1f1f Fix another stub test 2018-08-31 01:32:26 +02:00
Dave Halter
2dfe2de0fe Fix some stub tests 2018-08-31 01:26:20 +02:00
Dave Halter
aef4aa6859 Fix the slice object 2018-08-31 01:09:21 +02:00
Dave Halter
2ec503d6eb Change some TypeVar base classes 2018-08-30 10:15:43 +02:00
Dave Halter
f5f9fc1955 Refactor TypeVar a bit so it's more resistant 2018-08-30 09:58:18 +02:00
Dave Halter
10383de959 Remove todo about overload, it was already done 2018-08-30 01:57:44 +02:00
Dave Halter
c0c6ce2987 Fix ClassVars and add tests 2018-08-30 01:52:05 +02:00
Dave Halter
7fc311bb3e Add tests for classes that have generics not defined 2018-08-30 01:46:48 +02:00
Dave Halter
5979b93a7a Tests for Type[] 2018-08-30 01:38:14 +02:00
Dave Halter
ac6b7ff14e Fix type var completions so that there's at least no error 2018-08-30 01:23:28 +02:00
Dave Halter
80ab4d8ff5 Add tests for typing.TYPE_CHECKING 2018-08-30 01:14:48 +02:00
Dave Halter
bf6974dabb Fix an issue with a type var lookups 2018-08-30 01:10:51 +02:00
Dave Halter
28a55386b6 Add some more tests about mappings 2018-08-30 00:59:10 +02:00
Dave Halter
1fce0b45f4 Fix subscriptlist unpacking in Generics 2018-08-30 00:52:22 +02:00
Dave Halter
18e6a784e8 Clean up some type alias things 2018-08-29 23:26:39 +02:00
Dave Halter
511ba5231a Get an own class for type aliases 2018-08-29 22:46:28 +02:00
Dave Halter
0edfe86d8b Fix Tuple support 2018-08-29 10:18:58 +02:00
Dave Halter
762d56204f Fix some filter issues 2018-08-29 09:46:10 +02:00
Dave Halter
a884b6c782 Fix forward references for some things 2018-08-29 01:12:19 +02:00
Dave Halter
1a5710f140 Do a bit better class matching; it's not good yet, but we'll get there. 2018-08-28 23:28:58 +02:00
Dave Halter
af9f019d37 Type aliases seem to be working, now. 2018-08-28 17:40:12 +02:00
Dave Halter
cbf6c617de Get MutableSequence working 2018-08-28 01:31:12 +02:00
Dave Halter
921ab6e391 Fix two bugs that were raising exceptions 2018-08-27 23:37:20 +02:00
Dave Halter
e74d4fe9b7 Get a first typing test with Sequence[int] working
This means basically that annotations are working at least in some way and Generic classes as well.
2018-08-27 23:24:46 +02:00
Dave Halter
7c8051feab Fix default parameters name resolution 2018-08-27 23:10:23 +02:00
Dave Halter
7b896ae5d0 Differentiate between functions and methods
This makes some analysis a lot easier when it comes to default arguments for example
2018-08-27 20:39:51 +02:00
Dave Halter
b3ffc092cd Obviously cannot return from a generator with an empty list 2018-08-27 20:16:57 +02:00
Dave Halter
bd5af5f148 More preparations for annotated classes 2018-08-27 20:13:35 +02:00
Dave Halter
4a7bded98d Fix the selection of overloaded functions. Now it's at least partially working 2018-08-26 23:04:54 +02:00
Dave Halter
5261cdf4a1 Now overloaded functions exist, but the matching doesn't work, yet 2018-08-26 19:39:55 +02:00
Dave Halter
05d07c23ab abstractmethod should just pass params 2018-08-26 13:23:49 +02:00
Dave Halter
10bc446255 Get Any working a bit better 2018-08-26 13:16:25 +02:00
Dave Halter
ac7ce7c481 Start implementing overload function 2018-08-26 03:37:26 +02:00
Dave Halter
4daa73d487 Merge with master 2018-08-26 03:16:57 +02:00
Dave Halter
84b07a8809 Removing a test from doctests, because it shouldn't be one 2018-08-26 03:09:46 +02:00
Dave Halter
6c555e62aa Refactor argument clinic usage 2018-08-26 03:02:58 +02:00
Dave Halter
3cfbedcb69 Refactor some more typing related things 2018-08-25 23:10:04 +02:00
Dave Halter
18b6febe86 Instances should use py__getitem__ instead of py__simple_getitem__ 2018-08-25 22:55:08 +02:00
Dave Halter
465264e07d Start getting inheritance working with e.g. typing.Iterable 2018-08-25 22:01:36 +02:00
Dave Halter
3526def0a0 Make a lot of progress with typeshed/typing 2018-08-25 02:35:31 +02:00
Dave Halter
05cf6af546 Implement a lot more for typing 2018-08-24 01:13:54 +02:00
Dave Halter
9fe9bed1c9 Fix the first issues with the new typing module implementation 2018-08-21 01:28:55 +02:00
Dave Halter
6ddc242746 Ignore some errors that are happening when the Python process ends and its subprocesses are cleaned up 2018-08-21 01:28:13 +02:00
Dave Halter
5081b06016 Add a first try of implementing the typing module 2018-08-20 19:51:36 +02:00
Dave Halter
fe78fa9850 Move to using py__getitem__ and py__simple_getitem__
This change is necessary to handle more complex cases with py__getitem__
2018-08-13 18:42:09 +02:00
Dave Halter
11b2ac9923 getattr needs unicode 2018-08-13 09:53:26 +02:00
Dave Halter
73682b95f5 Move get_item to a separate function 2018-08-10 19:50:21 +02:00
Dave Halter
705f561bdb Sometimes when terminating, the subprocess module is already gone and equals None. 2018-08-10 19:32:54 +02:00
Dave Halter
84b89f4689 Rename py__getitem__ to py__simple_getitem__ 2018-08-10 19:31:19 +02:00
Dave Halter
bc5ca4d8ae Fix flask issues with unicode in Python2 2018-08-10 00:37:36 +02:00
Dave Halter
53ca7c19cd Some changes in the PEP 0484 understanding (more future compatible) 2018-08-09 23:32:04 +02:00
Dave Halter
b3a07941bb Fix issues with the current branch 2018-08-09 23:25:29 +02:00
Dave Halter
62842c8ac1 For now don't use the TypeshedPlugin until we fix all other issues with Jedi 2018-08-09 18:48:08 +02:00
Dave Halter
d30af70351 Write a test for variables 2018-08-09 18:22:25 +02:00
Dave Halter
52746faabf Some better sys tests for compiled objects 2018-08-09 17:28:09 +02:00
Dave Halter
f7f32fe206 Better checking for sys 2018-08-09 17:16:53 +02:00
Dave Halter
aa8e2c7173 Get some sys completions working 2018-08-09 10:52:33 +02:00
Dave Halter
facbf61133 Working with CompiledObject in stubs is now possible 2018-08-08 18:57:05 +02:00
Dave Halter
1ade520ac0 Fix stub name resolution 2018-08-08 13:02:32 +02:00
Dave Halter
5466f930be Rename some stub classes 2018-08-07 03:36:18 +02:00
Dave Halter
505c424cf4 Merge branch 'master' into typeshed 2018-08-07 02:48:41 +02:00
Dave Halter
d6306a06a4 With the recent changes one performance optimization got lost 2018-08-07 02:47:25 +02:00
Dave Halter
62a941f233 Actually use the stub files 2018-08-06 23:14:58 +02:00
Dave Halter
97c9aca245 Merge branch 'master' into typeshed 2018-08-06 12:49:51 +02:00
Dave Halter
8fc2add242 FunctionExecutionContext should use the parent if possible 2018-08-06 12:49:31 +02:00
Dave Halter
4a593f9693 Use anonymous instance arguments in a different way 2018-08-06 11:19:29 +02:00
Dave Halter
38a22a4ae8 Move some anonymous instance function execution stuff 2018-08-05 23:37:46 +02:00
Dave Halter
10ecb77673 Get rid of InstanceFunctionExecution, because it's really not needed 2018-08-05 23:26:15 +02:00
Dave Halter
357c86ad9c Use the InstanceArguments for super as well 2018-08-05 14:58:35 +02:00
Dave Halter
8cae517821 Use InstanceArguments directly and not via InstanceFunctionExecution 2018-08-05 14:34:44 +02:00
Dave Halter
0101fdd9da Remove old garbage code 2018-08-05 14:19:18 +02:00
Dave Halter
e17d7f5d42 Don't use arguments that are not needed 2018-08-05 14:17:46 +02:00
Dave Halter
7d16a35693 Also move the remaining get_params to get_executed_params
Remove the class's get_params entirely, because it is apparently not needed and contained a funny return.
2018-08-05 13:58:06 +02:00
Dave Halter
1456a156a6 get_params -> get_executed_params where possible 2018-08-05 13:53:57 +02:00
Dave Halter
3d55b2d826 Subprocess error reporting improvements 2018-08-05 12:50:17 +02:00
Dave Halter
49eae5b6f8 Rename an execute function that is private 2018-08-05 01:32:13 +02:00
Dave Halter
7a48fdc5f6 Move execute_evaluated to a helper function 2018-08-05 01:28:03 +02:00
Dave Halter
faba29a42b Trying to use prefer type annotations if they are available 2018-08-05 00:36:11 +02:00
Dave Halter
403cf02c65 Fix the last issue to pass stub tests 2018-08-04 23:50:11 +02:00
Dave Halter
59d43683dc Merge branch 'master' into typeshed 2018-08-04 23:42:17 +02:00
Dave Halter
1547177128 Fix a recursion issue about compiled objects 2018-08-04 23:20:51 +02:00
Dave Halter
bd43608f98 Use a CompiledInstanceNameFilter that wraps the class name as well 2018-08-04 13:10:14 +02:00
Dave Halter
72f2a9e4a5 Prefer Python 3 import over 2 2018-08-04 12:07:41 +02:00
Dave Halter
b91203820c Now it's actually possible to specify a pytest environment for the same Python version 2018-08-04 02:00:13 +02:00
Dave Halter
71572e63cd Note that Python 3.3 support was dropped in Changelog 2018-08-04 00:49:45 +02:00
Hugo
7c9f24a18e Drop support for EOL Python 3.3 (#1019) 2018-08-04 00:40:00 +02:00
Dave Halter
9ca7b30e38 Rewrite the pyc test 2018-08-03 23:59:55 +02:00
Dave Halter
fd8f254ce1 Fix an issue with stderr debugging of subprocesses 2018-08-03 23:51:58 +02:00
Dave Halter
1c76359291 stderr of the child processes should be printed in debug output
This fixes #1169. It might have a bit of a different intention, but at least it's now possible to see the output of the subprocess instead of it disappearing into a black hole.
2018-08-03 13:35:21 +02:00
Dave Halter
ccb460b433 Use close_fds for posix. 2018-08-03 13:08:07 +02:00
Dave Halter
30d14ea016 Remove some redundant code 2018-08-03 12:33:35 +02:00
Dave Halter
bbb1502e06 Use names of classes to infer names of instances 2018-08-03 12:23:54 +02:00
Dave Halter
f34a9281b9 Don't have execute and execute_evaluated on name 2018-08-03 11:34:33 +02:00
Dave Halter
95a1a69771 Fix an issue where __ prefixed variables were not hidden when accessed from a class
Everything worked well when looking at it from an instance perspective.
2018-08-03 11:05:49 +02:00
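The completion behaviour being fixed mirrors Python's own name mangling, where double-underscore attributes are only reachable from outside via their mangled name; a standalone illustration:

    class Widget:
        __secret = 42  # stored as _Widget__secret because of name mangling

        def reveal(self):
            # Inside the class body, __secret resolves to _Widget__secret.
            return self.__secret


    w = Widget()
    print(w.reveal())                    # 42
    print(w._Widget__secret)             # 42, via the mangled name
    print(hasattr(Widget, '__secret'))   # False: not visible from the outside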
Dave Halter
50b58a314e Fix a test condition 2018-08-03 01:34:08 +02:00
Dave Halter
a3b5247de9 Merge branch 'master' into typeshed 2018-08-03 00:26:09 +02:00
Dave Halter
1a4be5c91c Bound methods are now working correctly in all Python versions. Therefore a test was wrong. 2018-08-03 00:25:25 +02:00
Dave Halter
40d3abe2b2 Remove a print in tests 2018-08-03 00:25:25 +02:00
Dave Halter
f25310e0b9 BoundMethods now have access to the function that they are using 2018-08-03 00:25:25 +02:00
Dave Halter
e576457a43 Remove another usage of is_class where it's not needed 2018-08-03 00:25:25 +02:00
Dave Halter
a1314ac3c1 FunctionContext should be created from a unified interface 2018-08-03 00:25:25 +02:00
Dave Halter
481e6bcff0 Don't create a FunctionExecutionContext if it's not used. 2018-08-03 00:25:25 +02:00
Dave Halter
9ff5050d01 Use TreeContext in a good way 2018-08-03 00:25:25 +02:00
Justin Moen
9a4a96b453 Fix broken link in documentation 2018-08-02 10:43:15 +02:00
Dave Halter
5143c71589 Change the typeshed test for methods a bit (not yet working, though) 2018-08-02 01:11:12 +02:00
Dave Halter
31bf8e48bb Fix some stub tests 2018-08-02 00:59:12 +02:00
Dave Halter
61de28f741 Get a first typeshed example fully working as intended 2018-08-02 00:15:54 +02:00
Dave Halter
c8caa8f4ac Use a class stub class 2018-08-01 10:47:46 +02:00
Dave Halter
c196075cb8 Actually use the stub function 2018-08-01 01:42:09 +02:00
Dave Halter
dfbd1f8772 Mix stub name with non-stub names in a better way 2018-07-31 23:25:13 +02:00
Dave Halter
b5670fdc5f Some progress in working with typeshed 2018-07-31 11:33:38 +02:00
Dave Halter
cdb96bff47 Avoid recursion issues for the typing module 2018-07-29 00:10:54 +02:00
Dave Halter
35361f4edc Add debug warnings when a user runs into a recursion error 2018-07-29 00:03:43 +02:00
Dave Halter
9bba91628a Annotations can contain forward references even if they are not strings anymore
Since Python 3.7 this behavior can be enabled with `from __future__ import annotations`.
2018-07-28 16:35:24 +02:00
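A small example of the postponed-evaluation behaviour (PEP 563) referenced above; with the future import every annotation is kept as a string, so it may forward-reference a name defined later in the module:

    from __future__ import annotations


    def make_node(parent: Node) -> Node:  # Node is defined below - still fine
        return Node(parent)


    class Node:
        def __init__(self, parent=None):
            self.parent = parent


    # Annotations stay unevaluated strings under PEP 563.
    print(make_node.__annotations__)  # {'parent': 'Node', 'return': 'Node'}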
Dave Halter
b073b05aa0 Fix a bug in the typeshed implementation 2018-07-28 14:50:02 +02:00
Dave Halter
e6f28b06b5 A bit better typeshed support 2018-07-28 14:39:55 +02:00
Dave Halter
4e75a35468 Fix stub searching for nested modules 2018-07-27 10:14:37 +02:00
Dave Halter
e827559340 Get some first stubs working 2018-07-25 23:48:53 +02:00
Dave Halter
6bcac44050 Add another stub file test 2018-07-25 11:44:48 +02:00
Dave Halter
ee43fd7579 Start testing the typeshed directory search 2018-07-25 11:37:03 +02:00
Dave Halter
b809768934 Start implementing some typeshed details 2018-07-25 11:00:51 +02:00
Dave Halter
1739ae44f0 Refactor some of the import logic so it's possible to load typeshed modules 2018-07-24 01:19:09 +02:00
Dave Halter
f72f3f3797 Better flake8 configuration 2018-07-24 00:50:31 +02:00
Dave Halter
18f26a0c04 Change a module is None check to raise an Exception 2018-07-23 23:57:27 +02:00
Dave Halter
873558a392 Move the os.path hack 2018-07-23 23:04:14 +02:00
Dave Halter
c88afb71c9 Import names are now always strings 2018-07-23 22:40:24 +02:00
Dave Halter
27ab4ba339 Add the flask plugin and move the import hacks there 2018-07-23 04:04:21 +02:00
Dave Halter
8a9202135b Move import logic around a bit 2018-07-23 03:54:10 +02:00
Dave Halter
7711167052 Start enabling the Typeshed plugin, even though it doesn't do anything, yet. 2018-07-23 02:40:18 +02:00
Dave Halter
e7635b40d5 Remove some unused code 2018-07-22 18:02:53 +02:00
Dave Halter
f5cbb5de49 Some refactoring in the stdlib plugin 2018-07-22 03:49:36 +02:00
Dave Halter
2cd1ae73ed Move stdlib content to the stdlib plugin 2018-07-22 03:45:02 +02:00
Dave Halter
061489ec9a Move the stdlib executions into a plugin 2018-07-22 03:38:12 +02:00
Dave Halter
df55f62ad8 Add a plugin infrastructure 2018-07-21 15:03:05 +02:00
Dave Halter
e7a019e628 The implicit namespace package test from 4b276bae87 can only be used for Python 3.4+ 2018-07-21 11:51:41 +02:00
Dave Halter
7d2b7bb3c1 Add typeshed as a submodule 2018-07-21 09:50:25 +02:00
Dave Halter
4b276bae87 The import resolution for namespace packages was wrong
With this change we can now include all parents of the script, which will make
relative imports always work.

Now the whole meta_path is scanned and not just importlib's PathFinder.

Fixes #1183.
2018-07-21 00:16:10 +02:00
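The meta_path scan mentioned above roughly means asking every registered finder for a module spec instead of only importlib's PathFinder; a hedged, generic sketch:

    import sys


    def find_spec_everywhere(name, search_paths=None):
        # Previously only importlib.machinery.PathFinder was consulted. Walking
        # sys.meta_path also covers builtin/frozen modules, zip imports,
        # namespace packages and custom import hooks.
        for finder in sys.meta_path:
            find_spec = getattr(finder, 'find_spec', None)
            if find_spec is None:  # very old finders only expose find_module
                continue
            spec = find_spec(name, search_paths)
            if spec is not None:
                return spec
        return None


    print(find_spec_everywhere('json'))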
Dave Halter
ad5170a37a Add a way to use the interpreter environment for tests 2018-07-20 19:16:02 +02:00
Dave Halter
d292333dab MergedExecutedParams -> DynamicExecutedParams 2018-07-18 10:02:57 +02:00
Dave Halter
a408fb3211 Fix a recursion error, fixes #1173 2018-07-18 10:01:41 +02:00
Dave Halter
3cabc4b969 Remove two recursion tests again that will belong into a commit at a point where it is not failing anymore 2018-07-17 18:34:42 +02:00
Dave Halter
fb360506fb Don't merge params if it's just one param 2018-07-17 09:53:26 +02:00
Dave Halter
fe1799d125 Add a repr for AnonymousArguments 2018-07-17 09:48:27 +02:00
Dave Halter
733919e34c Fix a doctest 2018-07-17 00:47:42 +02:00
Daniel Hahler
10b61c41f4 Some minor flake8 fixes 2018-07-16 23:41:42 +02:00
Daniel Hahler
08b0b668a6 Script.__repr__: include environment 2018-07-16 13:26:10 +02:00
Daniel Hahler
72a8ceed76 Add params to CallSignature.__repr__
Looks like this for `jedi.Script` then:

> <CallSignature: Script index=0 params=[source=None, line=None, column=None, path=None, encoding='utf-8', sys_path=None, environment=None]>

`_params_str` could be made public, and then could be used in jedi-vim,
which currently has this:

    params = [p.description.replace('\n', '').replace('param ', '', 1)
              for p in signature.params]

08792d3fd7/pythonx/jedi_vim.py (L492-L493)
2018-07-16 13:23:38 +02:00
Daniel Hahler
c4e2892100 Improve __repr__ for BaseDefinition and AbstractNameDefinition 2018-07-15 23:22:10 +02:00
Dave Halter
1e796fc08d Environments are now always created on request
The issue was that if something changed about the environment (e.g. a version
switch or a sys.path change), re-creating the environment was possible, but it
did not pick up the change. Environments now have a __del__ method that deletes
the subprocess whenever an Environment is garbage collected.
2018-07-15 17:49:17 +02:00
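A rough sketch of the lifecycle described above; the class layout and attribute names are illustrative only, not jedi's actual internals:

    import subprocess
    import sys


    class Environment:
        def __init__(self, executable=sys.executable):
            self.executable = executable
            # Long-running helper process that answers inference queries.
            self._subprocess = subprocess.Popen(
                [executable, '-c', 'import sys; sys.stdin.read()'],
                stdin=subprocess.PIPE,
            )

        def __del__(self):
            # When the Environment is garbage collected its helper process is
            # torn down, so a freshly requested environment reflects interpreter
            # or sys.path changes instead of reusing stale state.
            self._subprocess.kill()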
Daniel Hahler
2fc91ceb64 Improve Environment
It only takes `executable` and gets all the information from the
subprocess directly.

Fixes https://github.com/davidhalter/jedi/issues/1107.
2018-07-15 17:49:17 +02:00
Dave Halter
f6bc166ea7 Add max line length 100 to the config for flake8 2018-07-13 09:25:51 +02:00
Daniel Hahler
08fa7941ce tests: use monkeypatch.setenv 2018-07-12 22:04:25 +02:00
Dave Halter
3a62d54403 Don't test Python 3.3 on appveyor anymore; it's getting really hard to get all the right dependencies for it because 3.3 is deprecated everywhere. 2018-07-12 21:21:40 +02:00
Dave Halter
748946349f Mention that it's ok to have a line length of 100 characters in our files. 2018-07-12 21:18:54 +02:00
Dave Halter
71cea7200b Don't use invalid escape sequences in regex, see https://github.com/davidhalter/jedi-vim/issues/843 2018-07-12 21:13:26 +02:00
Daniel Hahler
87d7c59c6e subprocess: listen: exit normally with EOFError
This is an expected case, since the parent closed normally, and
therefore the subprocess should exit with 0.
2018-07-11 12:56:55 +02:00
Daniel Hahler
f3c1f4c548 Script: improve ValueError for column
Ref: https://github.com/davidhalter/jedi/issues/1168
2018-07-11 12:55:18 +02:00
Dave Halter
d06e55aab5 The sys path might be lazy or not in a venv 2018-07-10 10:07:18 +02:00
Dave Halter
cef769ecd8 The encoding parameter should be used again (includes test), fixes #1167 2018-07-09 18:25:28 +02:00
Dave Halter
aa4dcc1631 Remove source_encoding from documentation (see #1167) 2018-07-09 18:12:27 +02:00
Dave Halter
a59e5a016f Actually use the fast_parser setting again 2018-07-05 21:31:03 +02:00
Dave Halter
37a40d53a8 Use an import name list as long as possible 2018-07-05 18:11:58 +02:00
Dave Halter
d8c0d8e5d2 Different _load_module API 2018-07-05 10:15:49 +02:00
Dave Halter
508ed7e5b8 Directly load modules if possible; with this it's no longer necessary to use dotted_from_fs_path. Also fixes #1013. 2018-07-05 10:03:05 +02:00
Dave Halter
a12d62e9c9 Don't mutate sys.path. This was a pretty nasty bug; fixes #1148 2018-07-04 08:40:05 +02:00
Dave Halter
2500112f6c Don't follow builtin imports anymore by default when follow_imports is on (goto) 2018-07-04 00:01:03 +02:00
Dave Halter
6cdc1bcd8a Add a changelog entry for the include_builtins change 2018-07-03 23:00:46 +02:00
Dave Halter
80831d79c2 additional_module_paths in usages never actually worked 2018-07-03 22:54:47 +02:00
Dave Halter
d857668292 Add include_builtins to usages, fixes #1131. 2018-07-03 22:53:19 +02:00
Dave Halter
f4aad8bbfe Finally make it possible to use auto_import_modules for packages
This means that you can now write 'from gi.repository import Gtk' and Gtk completions work.

It also means that other libraries could be used like that for speed or other reasons.

Fixes #531
2018-07-03 00:58:43 +02:00
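For reference, `auto_import_modules` is a public jedi setting; a minimal usage sketch (completions for Gtk assume PyGObject is actually installed in the environment):

    import jedi

    # Modules listed here are really imported instead of statically analysed,
    # which is what makes dynamic packages such as gi.repository completable.
    jedi.settings.auto_import_modules = ['gi']

    script = jedi.Script('from gi.repository import Gtk\nGtk.')
    print([c.name for c in script.completions()][:10])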
Dave Halter
5b7984c4d4 Test auto_import_modules in a very basic way 2018-07-02 09:57:18 +02:00
Dave Halter
2b1cbe4d42 Fix a bug about fstring completion 2018-07-02 01:26:17 +02:00
Daniel Hahler
61bc15b1aa docs: fix some incorrect reference and improve wording 2018-07-01 21:49:18 +02:00
Daniel Hahler
5bad06d4b6 docs: enable searchbox 2018-07-01 21:49:18 +02:00
Dave Halter
8ffdf6746f Comprehensions are also possible arguments. Fixes 1146 2018-07-01 03:33:24 +02:00
Daniel Hahler
b9f8daf848 tox.ini: upgrade pytest (<3.7) 2018-06-30 23:03:36 +02:00
Daniel Hahler
a34ee5bb92 Revisit pytest config
- add testpaths setting
- tox: remove testpaths from posargs default
- s/py.test/pytest/
2018-06-30 23:02:39 +02:00
Dave Halter
a79a1fbef5 Merge branch 'parso' 2018-06-30 14:27:30 +02:00
Dave Halter
58141f1e1e Don't use requirements for now, and use the git version instead in tox 2018-06-30 14:14:52 +02:00
Dave Halter
e0e2be3027 Add a better comment about why people need to upgrade parso 2018-06-29 18:17:29 +02:00
Dave Halter
1e7662c3e1 Prepare release of 0.12.1 2018-06-29 18:10:41 +02:00
Dave Halter
68974aee58 Don't use internal parso APIs if possible 2018-06-29 10:04:03 +02:00
Dave Halter
c208d37ac4 Remove code that is no longer used, because parso was refactored. 2018-06-29 09:56:56 +02:00
Dave Halter
38474061cf Make jedi work with the next parso release 2018-06-29 09:54:57 +02:00
micbou
95f835a014 Force unicode when listing module names
pkgutil.iter_modules may return the module name as str instead of unicode on
Python 2.
2018-06-24 22:41:14 +02:00
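The Python 2 quirk described above can be smoothed over by decoding explicitly; a small sketch:

    import pkgutil


    def list_module_names(paths=None):
        names = []
        for module_info in pkgutil.iter_modules(paths):
            name = module_info[1]  # (module_finder, name, ispkg)
            # On Python 2 the name may be a byte string; decode it so callers
            # always receive text.
            if isinstance(name, bytes):
                name = name.decode('utf-8', 'replace')
            names.append(name)
        return names


    print(list_module_names()[:10])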
micbou
282c6a2ba1 Use highest possible pickle protocol 2018-06-23 14:45:34 +02:00
Daniel Hahler
ea71dedaa1 Include stderr with "subprocess has crashed" exception (#1124)
* Include stderr with "subprocess has crashed" exception

This does not add it to the other similar exception raised from `kill`,
since this should be something like "was killed already" anyway.

* fixup! Include stderr with "subprocess has crashed" exception
2018-06-23 11:37:43 +02:00
micbou
106b11f1af Set stdout and stdin to binary mode on Python 2 and Windows 2018-06-22 00:08:53 +02:00
micbou
f9e90e863b Use system default buffering on Python 2 2018-06-21 19:50:51 +02:00
micbou
197aa22f29 Use cPickle on Python 2 if available
Attempt to load the C version of pickle on Python 2 as it is way faster.
2018-06-21 19:39:08 +02:00
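The usual compatibility pattern for that, combined with the nearby "highest possible pickle protocol" change:

    try:
        import cPickle as pickle  # Python 2: the C implementation is much faster
    except ImportError:
        import pickle  # Python 3: the C accelerator is built in

    payload = pickle.dumps({'answer': 42}, protocol=pickle.HIGHEST_PROTOCOL)
    print(pickle.loads(payload))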
Tarcisio Eduardo Moreira Crocomo
e96ebbe88f Add tests for DefaultDict support. 2018-06-17 11:28:12 +02:00
Tarcisio Eduardo Moreira Crocomo
55941e506b Add support for DefaultDict on jedi_typing.py. 2018-06-17 11:28:12 +02:00
Carl George
ff4a77391a Parse correct AST attribute for version
Earlier development versions of Python 3.7 added the docstring field to
AST nodes.  This was later reverted in Python 3.7.0b5.

https://bugs.python.org/issue29463
https://github.com/python/cpython/pull/7121
2018-06-16 14:43:17 +02:00
micbou
70c2fce9c2 Replace distutils.spawn.find_executable with shutil.which
The distutils.spawn.find_executable function is not available on stock system
Python 3 in recent Debian-based distributions. Since shutil.which is a better
alternative but not available on Python 2.7, we include a copy of that function
and use it in place of find_executable.
2018-06-07 21:07:22 +02:00
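For illustration only, shutil.which covers the same use case as distutils.spawn.find_executable using just the standard library:

    import shutil

    # Returns the full path of the executable found on PATH, or None.
    print(shutil.which('python3'))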
Dave Halter
5dab97a303 Add an error message, see also #1139. 2018-06-07 21:01:41 +02:00
Dave Halter
e2cd228aad Dict comprehension items call should now work, fixes #1129 2018-06-07 21:00:23 +02:00
micbou
c1014e00ca Fix flow analysis test
There is no seekable method for file objects on Python 2. Use flush instead.
2018-06-07 01:01:18 +02:00
Dave Halter
62a3f99594 Fix a wrong branch check, fixes #1128 2018-06-01 08:59:16 +02:00
Dave Halter
6ebe3f87a3 Drop 3.3 tests from travis
They are only causing problems now that Python 3.3 is deprecated. See e.g. https://travis-ci.org/davidhalter/jedi/jobs/381881020.
See also this proposed solution: https://github.com/davidhalter/jedi/pull/1125.
2018-05-23 11:24:39 +02:00
Dave Halter
50812b5836 A simple yield should not cause an error, fixes #1117 2018-05-23 11:12:19 +02:00
Daniel Hahler
d10eff5625 Travis: report coverage also to codecov.io 2018-05-21 23:40:42 +02:00
Daniel Hahler
6748faa071 Fix _get_numpy_doc_string_cls: use cache
I've noticed that Jedi tries to import numpydoc a lot when using
jedi-vim's goto method in jedi_vim.py itself (via printing in Neovim's
VimPathFinder.find_spec).

This patch uses the cache before trying the import again and again.
2018-05-06 10:54:49 +02:00
Maxim Novikov
fc14aad8f2 Fix namespace autocompletion error 2018-05-03 09:12:17 +02:00
Daniel Hahler
3c909a9849 Travis: remove TOXENV=cov from allowed failures 2018-05-02 20:04:46 +02:00
Daniel Hahler
b94b45cfa1 Environment._get_version: add msgs with exceptions 2018-05-02 00:09:40 +02:00
Dave Halter
a95274d66f None/False/True are atom non-terminals in the syntax tree, fixes #1103 2018-05-01 23:43:49 +02:00
Dave Halter
8d48e7453a When searching submodules, use all of __path__, fixes #1105 2018-05-01 23:17:42 +02:00
Dave Halter
91499565a9 Specially crafted docstrings sometimes lead to errors, fixes #1103 2018-04-25 21:04:05 +02:00
Dave Halter
ba96c21f83 Follow up from the last async issue, fixes more related things about #1092. 2018-04-24 01:02:31 +02:00
Dave Halter
8494164b22 Fix an async funcdef issue, fixes 1092. 2018-04-24 00:41:18 +02:00
Dave Halter
4075c384e6 In some very rare cases it was possible to get an interpreter crash because of this bug. Fixes #1087 2018-04-23 21:26:51 +02:00
Dave Halter
0bcd1701f0 Start using our own monkeypatch function for some things 2018-04-23 21:26:51 +02:00
Dima Gerasimov
ceb5509170 Include function return type annotation in docstring if it is present 2018-04-23 21:20:21 +02:00
Dave Halter
88243d2408 Don't catch IndexError where we don't have to 2018-04-20 01:46:32 +02:00
micbou
5f37d08761 Extend create_environment to accept an executable path
Assume environments specified by the user are safe.
2018-04-19 21:36:44 +02:00
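A hedged sketch of what this change enables (assuming the public jedi.create_environment helper; the interpreter path below is only an example):

    import jedi

    # Point Jedi directly at an interpreter executable instead of a virtualenv directory.
    env = jedi.create_environment('/usr/bin/python3')
    print(env.version_info)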
Daniel Hahler
aa6857d22d check_fs: handle FileNotFoundError
Ref: https://github.com/davidhalter/jedi-vim/pull/801
2018-04-17 23:40:25 +02:00
Dave Halter
bd7c65d963 Finally fix all the get_system_environment issues 2018-04-15 16:43:53 +02:00
Dave Halter
fe0ad8f1da Fix a test 2018-04-15 16:23:29 +02:00
Dave Halter
a21d77e8ad There's really no bin/activate needed for an environment to work 2018-04-15 16:15:20 +02:00
Dave Halter
ed2a0a8218 Document get_sys_path and change the signature of get_system_environment a bit 2018-04-15 16:12:07 +02:00
Dave Halter
22b0c0f1fe Rework the time cache. 2018-04-15 15:51:16 +02:00
Dave Halter
a972d49e88 Cache default environment 2018-04-15 15:28:05 +02:00
Dave Halter
bcd05f560e Require parso 0.2.0 at least 2018-04-15 14:06:21 +02:00
Dave Halter
c7c95e7e6d Set a release date 2018-04-15 13:54:27 +02:00
Dave Halter
9dece93c13 Don't install the latest pip version anymore in appveyor
It caused problems. Somehow appveyor (or pip?) changed something. By removing
the update mechanism it all works again. I don't really see why we need to
update anyway, so I guess I'm fine with how it is now.

Passing:
https://ci.appveyor.com/project/davidhalter/jedi/build/32

Not Passing anymore:
https://ci.appveyor.com/project/davidhalter/jedi/build/36
2018-04-15 13:40:26 +02:00
Dave Halter
d2f9e83b25 Fix some references 2018-04-15 12:55:33 +02:00
Dave Halter
28004a9ed9 Mention Virtualenv support in readme and features 2018-04-15 12:18:17 +02:00
Dave Halter
92cd9a30e2 Title case for Mänu :) 2018-04-15 12:12:32 +02:00
Dave Halter
a711c29b59 Better overview over functions in the documentation 2018-04-15 12:11:06 +02:00
Dave Halter
b531b6f4fd A small docs correction 2018-04-15 11:56:24 +02:00
Dave Halter
97b5dd9312 Remove the old static analysis stuff. It was never really used 2018-04-15 11:53:07 +02:00
Dave Halter
7b15c70551 Fix a lot of old docs code that doesn't exist anymore 2018-04-15 11:52:45 +02:00
Dave Halter
698d50b65b Remove the old parser documentation (that's now part of parso) 2018-04-15 11:42:34 +02:00
Dave Halter
940a8c7c9c Don't call it the plugin API anymore, that's confusing 2018-04-15 11:35:58 +02:00
Dave Halter
9465eb7881 Reorder some functions 2018-04-15 11:30:35 +02:00
Dave Halter
bb979a040d Add a lot of environment documentation to Sphinx 2018-04-15 11:25:46 +02:00
Dave Halter
336087fcf8 find_python_environments -> find_system_environments 2018-04-14 15:46:16 +02:00
Dave Halter
45fb770033 A small refactoring 2018-04-14 15:38:32 +02:00
Dave Halter
9f07e7e352 Remove from_executable, we're not really using it yet. 2018-04-14 15:13:02 +02:00
Dave Halter
43ab9563e2 For the second time in a row: it's called creationflags, not creation_flags 2018-04-14 11:06:24 +02:00
Dave Halter
db21942c61 Refactor something small 2018-04-14 01:48:52 +02:00
Dave Halter
737154d657 Remove an unnecessary else 2018-04-14 01:47:17 +02:00
Dave Halter
81771264e0 CREATE_NO_WINDOW was introduced in Python 3.7 and didn't exist before 2018-04-13 22:05:08 +02:00
Dave Halter
fac773a60d The SameEnvironment should not load by default if it's a portable environment
find_python_environments should only find Python versions that are actually installed on the system. If people copy virtualenvs around etc., it will find nothing instead.
2018-04-13 21:53:06 +02:00
Dave Halter
8af4fc5728 Do binary comparisons to get virtualenvs working and not just venvs 2018-04-13 21:45:07 +02:00
Dave Halter
ed80ed9437 Use the correct parameter name for creation flags 2018-04-13 19:04:53 +02:00
Dave Halter
83d635cbac Add a way to generalize Popen 2018-04-13 10:17:30 +02:00
Dave Halter
81623c6b5d Check the windows environments in a better way 2018-04-12 14:26:17 +02:00
Dave Halter
27419be56d Fix some issues with the latest changes 2018-04-12 14:24:18 +02:00
Dave Halter
b8e879bc53 DefaultEnvironment -> SameEnvironment 2018-04-12 09:00:19 +02:00
Dave Halter
f4317dadc4 Better docs for Environment 2018-04-12 08:59:18 +02:00
Dave Halter
bf0169480d Some docstrings 2018-04-12 08:58:06 +02:00
Dave Halter
5bb3b8c122 Make the Environment clearly non-public 2018-04-12 08:56:07 +02:00
Dave Halter
9ac7182fea Make some names public 2018-04-12 08:52:24 +02:00
Dave Halter
93a28c4230 Make sure Windows environments are safe 2018-04-12 08:50:31 +02:00
Dave Halter
323a85db7c Fix the module_name issue again 2018-04-10 21:27:47 +02:00
Dave Halter
1c91cfa9d6 Write a test for #1079 to avoid a regression in the future. 2018-04-10 19:23:20 +02:00
Dave Halter
9b17be9ecf Cleanup some of the module cache stuff 2018-04-10 19:16:18 +02:00
micbou
cf5f06f378 Do not cache unimportable compiled module (#1079)
From the issue:

The issue can be reproduced by getting the description of the QtBluetooth module from PyQt5 on Windows:

    import jedi
    completions = jedi.Script('import PyQt5.QtBlueTooth').completions()
    completions[0].description

It's hard to write a test for this, so we don't write one.
2018-04-10 19:10:05 +02:00
Dave Halter
81aa70b168 Merge branch 'master' of github.com:davidhalter/jedi 2018-04-10 09:19:55 +02:00
micbou
286dd92e35 Fix permissions of Python 3.6 on Travis 2018-04-10 09:19:12 +02:00
micbou
903bdf5fef Fix virtual environment tests 2018-04-10 09:19:12 +02:00
Dave Halter
764b67d232 Multiple inheritance completion in Python 2 did not work
Fixes #1071.
2018-04-10 08:58:30 +02:00
Dave Halter
777d9defc5 Give the run.py script an environment parameter 2018-04-10 08:42:58 +02:00
Dave Halter
b74ba7cd01 Fix an import 2018-04-09 01:47:31 +02:00
Dave Halter
519f54321e Merge the environment changes for Windows 2018-04-09 01:43:57 +02:00
Dave Halter
f4c14864a5 Better tests for venvs 2018-04-09 01:28:43 +02:00
Dave Halter
81d8c49119 Write a test for venvs 2018-04-08 23:04:57 +02:00
Dave Halter
0c19219143 Obviously Python 3 syntax cannot be used in Python 2 2018-04-08 21:38:03 +02:00
micbou
b3b6b798ff Find Python environments on Windows using the registry 2018-04-08 19:04:11 +02:00
Dave Halter
aa9f7fd304 Update the changelog about f-strings 2018-04-08 01:38:09 +02:00
Dave Halter
7fca4c332d Use the latest parso version from master. 2018-04-07 16:06:29 +02:00
Dave Halter
806ae13b71 Better goto definition for fstrings 2018-04-07 12:40:52 +02:00
Dave Halter
ec1c6e1e4d Fix an issue around the new grammar 2018-04-05 09:52:08 +02:00
Dave Halter
567c8b8097 Fix some fstring issues for now 2018-04-05 01:11:04 +02:00
Dave Halter
af956d70a3 Make a few modifications to always use the latest environment available. 2018-04-04 09:53:23 +02:00
Dave Halter
6b75519145 Better tests for fstrings 2018-03-31 18:38:09 +02:00
Dave Halter
43df60ff7d With the changes in parso, f-strings are now completable
Parso now uses one syntax tree for f-strings and the classic syntax tree.
2018-03-31 17:51:27 +02:00
Dave Halter
27655db8a9 With the changes in parso, f-strings are now completable 2018-03-31 17:07:47 +02:00
Dave Halter
538996d8d3 Fix lambda dynamic param searches, fixes #1070 2018-03-25 23:54:43 +02:00
Dave Halter
f5ba6de38c Cleanup the namespace lookups so that it also works for Python 3.7 2018-03-25 23:25:23 +02:00
Dave Halter
a6b47141cc Add a note about the fixed Windows tests in the changelog 2018-03-24 23:27:14 +01:00
Dave Halter
49235f8910 Add micbou to AUTHORS 2018-03-24 23:25:49 +01:00
Dave Halter
73d0506fb0 Add a badge for AppVeyor. Running tests for Windows 2018-03-24 23:16:08 +01:00
micbou
0fd8e728f5 Add comment explaining why test_versions is disabled on Windows 2018-03-24 22:52:41 +01:00
micbou
bf57fa16fc Add JEDI_TEST_ENVIRONMENT_EXECUTABLE for AppVeyor 2018-03-24 22:52:41 +01:00
micbou
e8b301ebf9 Add AppVeyor configuration 2018-03-24 22:52:41 +01:00
micbou
65a8ec6abc Improve venv_and_pths test
Python is not necessarily installed in /usr/bin. Execute Python to find the
real prefix.
2018-03-24 20:52:51 +01:00
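A minimal sketch of that approach, asking the interpreter for its prefix instead of assuming /usr/bin (illustrative only):

    import subprocess
    import sys

    # Run the interpreter and let it report where it actually lives.
    real_prefix = subprocess.check_output(
        [sys.executable, '-c', 'import sys; print(sys.prefix)']
    ).decode().strip()
    print(real_prefix)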
micbou
c6635ccc55 Properly raise broken pipe exception 2018-03-24 12:02:06 +01:00
Dave Halter
04708819fb Remove SourceLair from products, because it's a paid product 2018-03-23 01:47:05 +01:00
Dave Halter
53e011909d Add a note to the readme. 2018-03-23 01:32:46 +01:00
Dave Halter
b5bc25fc0b Fix another windows issue 2018-03-23 01:21:07 +01:00
Dave Halter
106573f20d Merge branch 'master' of github.com:davidhalter/jedi 2018-03-23 00:57:40 +01:00
Dave Halter
c8bb41662e Merge the windows fixes 2018-03-23 00:55:23 +01:00
micbou
51b44032bd Fix paths from assignment test on Windows 2018-03-23 00:35:57 +01:00
micbou
2283b67836 Specify executable extension to detect virtual environment on Windows 2018-03-22 23:17:23 +01:00
Dave Halter
4e5cbe8832 Some code cleanup 2018-03-20 01:40:16 +01:00
Dave Halter
e6a3a8882c Fix another error that surfaced in pandas 2018-03-20 01:04:00 +01:00
Dave Halter
a61742728b Fix an issue with docstrings that contain errors 2018-03-20 00:56:53 +01:00
Dave Halter
305fd66e1c Upgrade the wx widgets paths 2018-03-19 00:05:04 +01:00
Dave Halter
5c06d9871a Somehow forgot about subscriptlist. Just ignore those for now.
Fixes #1010.
2018-03-18 17:24:45 +01:00
Dave Halter
6042706922 Fix the first issue in #1010
Somehow it was still possible to recurse with lists.
2018-03-18 17:09:44 +01:00
Dave Halter
1672613d04 colorama should always color, even if it's not a shell
I need this for some_script.py | less -R
2018-03-18 01:05:59 +01:00
Dave Halter
11b7e95ecc os.path.join completion speed test is sometimes slow, so give it a bit more time 2018-03-17 21:41:26 +01:00
Dave Halter
60da6034c0 Fix some code_lines issues 2018-03-17 19:41:26 +01:00
Dave Halter
094affaf84 Remove stdout/stderr from subprocesses (redirected to /dev/null)
This means that the subprocess should no longer crash because of people
writing to stdout in C modules, and stderr should be empty.

Fixes #793.
2018-03-17 14:14:00 +01:00
Dave Halter
5f0b34a520 Add the module_path again 2018-03-16 10:30:11 +01:00
Dave Halter
cc9c9fc781 Clean up the namedtuple test for #1060 2018-03-16 10:28:51 +01:00
Dave Halter
90a226f898 All modules now have a code_lines attribute, see #1062 2018-03-16 10:20:26 +01:00
Dave Halter
24e1f7e6f0 The release date for 0.12.0 should not be set, yet. See #1061. 2018-03-15 15:16:27 +01:00
Dave Halter
1eeb7cb6aa And now remove a pep0484 function that is no longer needed 2018-03-14 21:51:06 +01:00
Dave Halter
053618edd0 Some more code to a function 2018-03-14 21:49:17 +01:00
Dave Halter
ce0aa224f1 More rewriting of the pep0484 logic 2018-03-14 21:34:01 +01:00
Dave Halter
ae6d01abf5 Start moving some of the pep0484 comment code around 2018-03-14 21:27:29 +01:00
Dave Halter
e6469f46c7 Cleanup some instance stuff 2018-03-14 21:04:55 +01:00
Dave Halter
e5546a8ae6 Better docs for function annotations 2018-03-14 19:19:38 +01:00
Dave Halter
f5cf4c1954 Fix an error in param comments 2018-03-14 09:53:25 +01:00
Dave Halter
13ba74515d Catch parser errors instead of error recovery when splitting param comments 2018-03-14 09:49:59 +01:00
Dave Halter
afda309cb9 Merge branch 'function_comment' of https://github.com/wilfred/jedi into mypy-comments 2018-03-14 00:55:06 +01:00
Dave Halter
144a1def6c Fix a few version issues in tests 2018-03-13 22:59:07 +01:00
Dave Halter
5d36114be4 Use inspect.Parameter.kind for better differentiation between param types
Refs #292
2018-03-13 22:47:08 +01:00
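A small standard-library illustration of inspect.Parameter.kind (not Jedi-specific code):

    import inspect

    def f(a, *args, b=1, **kwargs):
        pass

    for name, param in inspect.signature(f).parameters.items():
        # Prints POSITIONAL_OR_KEYWORD, VAR_POSITIONAL, KEYWORD_ONLY and VAR_KEYWORD.
        print(name, param.kind)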
Dave Halter
f9ec989835 Fix REPL completion param name completion
There were two issues:
1. The filter for parameters was wrong
2. In general the equal sign would not be added in some circumstances
2018-03-13 21:36:04 +01:00
Dave Halter
0dda740c5d Add keyword argument test for #292 2018-03-13 19:09:33 +01:00
Lee Danilek
b9903ede1b Support mypy annotations using comment syntax
This allows us to use mypy annotations for completion in Python 2.

Closes #946
2018-03-13 17:55:28 +00:00
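For context, PEP 484 comment-style annotations look like this (a minimal example of the syntax this commit adds support for):

    def concat(a, b):
        # type: (str, str) -> str
        return a + b

    # With the type comment above, completing on concat('a', 'b'). can offer str methods.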
Dave Halter
d0b8f9e5a2 Fix an interpreter test in Python 2 2018-03-12 20:49:27 +01:00
Dave Halter
378a5846db Clean up zombie subprocesses, fixes #1048 2018-03-12 20:06:02 +01:00
Dave Halter
5c1d979522 Fix an issue around __dir__ in the interpreter
Fixes #1027.
2018-03-12 01:46:12 +01:00
Dave Halter
e0c682977c Fix doctest for replstartup 2018-03-11 22:19:35 +01:00
Dave Halter
54a8db503d Fix shell completion issues and documentation
This issue was raised in #990. The completer was never used in Python3.4+,
because it was overwritten by Python's completer. Oddly enough it has always
worked in Python2.7/3.3.

The documentation was also slightly modified. os.path.join was always a
complex beast.
2018-03-09 22:39:00 +01:00
Dave Halter
c4be83759c Merge branch 'master' of github.com:davidhalter/jedi 2018-03-08 10:09:08 +01:00
Dave Halter
51e0d5d12f Fix issues with default parameters in functions and classes
Default parameters were resolved at the wrong starting position. Fixes #1044
2018-03-08 09:59:09 +01:00
Dave Halter
14ac6b11b9 Correct mistakes of lambda names 2018-03-08 09:52:35 +01:00
Dave Halter
23e7c5bd2a eval_element -> eval_node 2018-03-07 20:11:19 +01:00
Oliver Newman
8586d2a995 Fix VS Code Python extension link 2018-03-07 10:11:16 +01:00
Dave Halter
a85f2d1049 Use the correct class for params when used in names. Fixes #1006 2018-03-07 09:59:31 +01:00
Dave Halter
72be3e5247 Get rid of a regex warning, where escaping was not properly used in a normal string 2018-03-05 10:56:27 +01:00
Dave Halter
9e9c62a5ab Get rid of the imp library import in Python3 to avoid warnings, fixes #1001 2018-03-05 10:55:21 +01:00
Dave Halter
d063dadcf7 Don't need the tests from #122 2018-03-05 01:01:43 +01:00
Dave Halter
0144de1290 Refactor the namespace package tests 2018-03-05 00:55:35 +01:00
Elvis Pranskevichus
3fb95e3a58 Add a failing test for nested PEP420 namespace packages 2018-03-05 00:18:30 +01:00
Dave Halter
074d0d6d07 Include __init__.py files in search for the project directory, fixes #773 2018-03-04 21:36:59 +01:00
Dave Halter
2885938e74 Add pytest cache to gitignore 2018-03-04 18:29:00 +01:00
Dave Halter
95d36473fc Improve some documentation/a failing doctest 2018-03-04 18:29:00 +01:00
Dave Halter
d4af314b65 Fix the recursion error with globals
This generalizes the fix to actually fix a lot of potential recursion issues
with if_stmt.
2018-03-04 18:29:00 +01:00
Dave Halter
a3a39c0757 Always pop nodes in recursion detector 2018-03-04 18:29:00 +01:00
Dave Halter
c9a64bd1d3 Globals should be looked up with the same priority as other defined nodes. 2018-03-04 18:29:00 +01:00
ggilmore
3c9aa9ef25 fix set.append syntax error 2018-03-03 10:37:56 +01:00
Dave Halter
89c616a475 Add a few bits to the changelog 2018-03-02 08:59:28 +01:00
Dave Halter
4dc10e0d4b Autocompletion in comments should at least not fail
Fixes #968
2018-03-01 08:57:32 +01:00
Dave Halter
cbcc95c671 Fix the last async issue 2018-02-28 23:47:59 +01:00
Dave Halter
2abcd0b6a6 Fix a few numpydocs tests 2018-02-28 23:44:50 +01:00
Dave Halter
3820111d1e Fix some more await things 2018-02-28 23:30:20 +01:00
Dave Halter
dfa383c744 Fix a yield from test 2018-02-28 23:01:07 +01:00
Dave Halter
a41a4562d2 AbstractIterableMixin -> IterableMixin 2018-02-28 22:51:27 +01:00
Dave Halter
0d0213ee4c Support generator returns when used with yield from. 2018-02-28 22:35:58 +01:00
Dave Halter
80ee3b8fcf Show in a test that something doesn't work properly around async analysis 2018-02-27 18:19:46 +01:00
Dave Halter
6e24c120cf A few documentation improvements 2018-02-27 18:06:47 +01:00
Dave Halter
eeacdc33a1 Try to make the whole Builtin overwriting more abstract 2018-02-26 23:09:18 +01:00
Dave Halter
8e26017a05 Fix a small remaining issue in Python 2 2018-02-21 01:38:30 +01:00
Dave Halter
4d980d8bd0 Reorder tests to make the async stuff pass on all python versions 2018-02-21 01:28:37 +01:00
Dave Halter
2d4636da5b Fix for all python versions 2018-02-21 01:23:50 +01:00
Dave Halter
c1d06f4638 Getting more edge cases to work in 3.6 for async 2018-02-21 01:11:59 +01:00
Dave Halter
de5d7961e8 Fix an issue with async for 2018-02-21 00:41:59 +01:00
Dave Halter
bc0210af70 Use the await method properly and just use it instead of some crazy things 2018-02-21 00:27:15 +01:00
Dave Halter
bf01b9d47c Refactor the way builtins can be overwritten by jedi's own contexts 2018-02-21 00:09:41 +01:00
T.Rzepka
e869e700c7 Documented the misbehavior of Windows pipes in combination with Python. 2018-02-20 21:41:49 +01:00
Dave Halter
5c8300e62a Move all the asynchronous contexts to a separate module 2018-02-19 09:43:50 +01:00
Dave Halter
f1c2aef963 Fix the merge issues. Now async stuff should at least partially work 2018-02-19 01:35:37 +01:00
Dave Halter
8f4b68ae39 Merge the async branch 2018-02-18 13:45:08 +01:00
T.Rzepka
29be40ae3f Add author's name to AUTHORS.txt 2018-02-17 13:52:59 +01:00
T.Rzepka
99130e7664 Fix for Python 2 and 3 on Windows, see #1037. 2018-02-17 13:49:10 +01:00
T.Rzepka
afee465518 Merge remote-tracking branch 'origin/master' 2018-02-17 12:14:24 +01:00
T.Rzepka
446de51402 Revert "Fix for Python 2 on Windows, see #1037."
This reverts commit b38d31b99d.
2018-02-17 12:09:35 +01:00
Dave Halter
98761f6994 Get rid of an unused import 2018-02-16 21:16:43 +01:00
Dave Halter
88f521ad82 Add the name always to the script module 2018-02-16 21:15:53 +01:00
Dave Halter
24adebb69d Add the traverse_parents function to a utility directory 2018-02-16 21:07:36 +01:00
Dave Halter
81a30d61d6 Fix Python 2 old-school relative imports 2018-02-16 20:53:31 +01:00
Dave Halter
5453566352 Use the project path as a prefix, because it often takes higher priority than other paths 2018-02-16 20:37:03 +01:00
Dave Halter
482b5e63db Move the buildout_project stuff to a separate examples folder 2018-02-16 15:01:40 +01:00
Dave Halter
424b6ae907 Rename of buildout stuff 2018-02-16 14:56:49 +01:00
Dave Halter
ab212cb8aa Small rename 2018-02-16 14:53:45 +01:00
Dave Halter
c23005f988 Use generators instead of complicated return of lists 2018-02-16 14:50:07 +01:00
Dave Halter
039e7ba07b Some more sys path corrections.
The sys path should be defined more or less at the beginning and should not differ from module to module
2018-02-16 14:39:01 +01:00
Dave Halter
6a11b7d89e Generalize the use of smart import paths
Now a lot more parts of the current script's path are used as a sys path.
2018-02-16 12:40:31 +01:00
Dave Halter
863fbb3702 Better handling of smart sys path 2018-02-16 11:57:58 +01:00
Dave Halter
30cfdee325 Some simplifications 2018-02-16 10:21:43 +01:00
Dave Halter
fa9364307f Add comments to implicit namespaces and fix some minor things.
See #1005.
2018-02-15 20:25:07 +01:00
Dave Halter
9177c120f4 Merge the implicit namespace improvement (pkgutil.iter_modules modification)
There are still a few issues that need to be addressed.
2018-02-15 20:08:58 +01:00
Dave Halter
76df356628 Relative imports should be working again, even in more unusual situations. Fixes #973
There are more fixes needed. Some things are just very unclean and might lead to further bugs.
2018-02-15 14:10:01 +01:00
Dave Halter
276f2d0b52 parent_module is not needed for loading modules 2018-02-14 20:42:53 +01:00
Dave Halter
2a56323c16 Try to avoid CachedMetaClass for modules 2018-02-13 20:47:43 +01:00
Dave Halter
36699b77b2 Don't check the parser cache; that's parso's responsibility 2018-02-13 19:19:00 +01:00
Dave Halter
a52b6edd01 Better module loading 2018-02-12 21:17:21 +01:00
Dave Halter
a33cbc8ae3 Try to put all module loading in one place including namespace packages 2018-02-12 20:49:45 +01:00
Dave Halter
9fec494e84 Unify load_module access 2018-02-12 20:39:42 +01:00
Dave Halter
514eaf89c3 Prepare a test to eventually solve a relative import problem 2018-02-12 20:33:48 +01:00
T.Rzepka
b38d31b99d Fix for Python 2 on Windows, see #1037. 2018-02-11 22:37:57 +01:00
Dave Halter
26774c79fb Add a module cache that has a bit more capabilities 2018-02-10 21:21:25 +01:00
Dave Halter
92c76537d6 print_to_stderr needs to be used with one argument
See #1010.
2018-02-05 19:19:05 +01:00
Dave Halter
ac597815d7 Print errors that happen when importing certain objects
See also #1010.
2018-02-04 23:50:28 +01:00
Dave Halter
1ca4d21359 Use unicode literals, to avoid potential issues 2018-02-04 00:55:45 +01:00
Dave Halter
a123d0ff3d Merge branch 'master' of github.com:davidhalter/jedi 2018-02-03 23:28:57 +01:00
Anton Zub
18819292e6 Add author's name to AUTHORS.txt 2018-02-03 11:55:53 +01:00
Anton Zub
c2bb795151 Fix typo in docstring for imports.py 2018-02-03 11:55:53 +01:00
Dave Halter
fe0e41e9d6 Fix some more dict.get/dict.values stuff 2018-02-02 18:24:18 +01:00
Dave Halter
8028138e8c Implement dict.values for FakeDict to avoid a recursion error. Fixes #1014. 2018-02-02 09:34:40 +01:00
Dave Halter
e50609c48b Add better error reporting 2018-02-01 09:58:28 +01:00
Dave Halter
a7e864638a Use a better string 2018-02-01 01:21:59 +01:00
Dave Halter
2c945488b3 Add better debugging for an assert, see also #1010 2018-02-01 01:20:17 +01:00
Dave Halter
24b4e725b5 Make some things clearer about lazy contexts 2018-01-31 23:52:56 +01:00
Dave Halter
ebe8123b4c Finding the autocompletion stack is a bit more complicated than I initially thought
Fixes #968.
2018-01-31 08:45:01 +01:00
Dave Halter
522e7123ed Move the ahead of time tests to the pep0526 file 2018-01-31 00:18:17 +01:00
Dave Halter
3ae0560f1c Fix an issue where a default value was wrongly used 2018-01-31 00:11:30 +01:00
Dave Halter
2b9429be38 Update the ahead of time tests 2018-01-30 23:09:42 +01:00
Dave Halter
6b535c0503 Fix the last remaining issues with ahead of time annotations, see #982 2018-01-30 01:19:55 +01:00
Dave Halter
24561759f6 Fix a bug related to a wrong parametrization at one point 2018-01-30 01:17:09 +01:00
Dave Halter
d2c0de3eb0 Merge branch 'master' of https://github.com/johannesmik/jedi 2018-01-30 01:02:07 +01:00
Dave Halter
91d3c1f6d3 Force unicode on django paths 2018-01-30 00:40:50 +01:00
Dave Halter
60f89522a7 Forgot to add the examples folder 2018-01-30 00:08:17 +01:00
Dave Halter
c9fa335145 Fix a goto_assignments issue with a better internal API
Fixes #996.
2018-01-29 08:58:59 +01:00
Dave Halter
82dc83e150 Merge remote-tracking branch 'origin/master' into virtualenv 2018-01-29 00:56:55 +01:00
Dave Halter
febe65f737 Disable predefined name analysis (if stmts) for all non-analysis tasks
It's really buggy and caused quite a few issues
2018-01-29 00:56:29 +01:00
Dave Halter
8149eabdf9 Remove something that obviously never happened 2018-01-28 20:56:04 +01:00
Dave Halter
1304b4f9e8 Reorder some open flags for Python 2 2018-01-26 01:31:47 +01:00
Dave Halter
fc458a3c2a inspect.signature throws weird errors sometimes, just make it a bit simpler
Fixes #1031
2018-01-26 01:30:10 +01:00
Dave Halter
d44385c25e Fix the implicit namespace test 2018-01-26 01:16:08 +01:00
Dave Halter
68f15c90ac Undo most of the namespace changes and use module again
It is a module like every other module, because if you import an empty
folder foobar it will be available as an object:
<module 'foobar' (namespace)>.

See #1033.
2018-01-25 20:51:55 +01:00
Dave Halter
04fba28d35 Differentiate between namespace and module as a type
Also fixed a bug related to implicit namespace contexts, fixes #1033.
2018-01-25 20:35:54 +01:00
Dave Halter
33c9d21e35 Use Scripts for virtualenvs instead of bin for windows
Thanks @blueyed for the hint.
2018-01-25 19:55:10 +01:00
Daniel Hahler
6bab112bb7 test/completion/imports.py: fix typo in comment 2018-01-25 07:57:43 +01:00
Dave Halter
68f840de60 Refactor django path support 2018-01-24 19:13:05 +01:00
Dave Halter
e4559bef51 Fix project path finding 2018-01-23 20:30:27 +01:00
Dave Halter
e6f934de11 Add a repr for Project
Also remove setstate from it, since we intend to serialize it with json.
2018-01-23 19:21:50 +01:00
Dave Halter
4653c30fa4 Use the PathFinder, because the FileFinder doesn't work without suffixes
This feels more like importlib was intended to be used anyway.
2018-01-21 23:52:44 +01:00
Dave Halter
7fcbf7b5f0 Create the importer stuff Python2.7 and 3.3 2018-01-21 15:46:40 +01:00
Dave Halter
baacb5ec0d Trying to use the import machinery to import jedi/parso in python3.4+
The problem was that adding stuff to sys.path is simply very risky, because it already caused import issues (because enum was installed in 2.7). It was bound to cause other issues
2018-01-21 15:25:59 +01:00
Dave Halter
fef594373a Better reporting of internal errors 2018-01-20 22:56:51 +01:00
Dave Halter
41b24ab46b Better error handling for subprocesses
I don't really understand why this wasn't an issue before, but it looks like we have to
catch both IOError and socket.error in Python 2.
2018-01-20 22:56:26 +01:00
Dave Halter
ddafe41bb6 Another merge with master 2018-01-20 22:01:57 +01:00
Dave Halter
98a3da674c Ahhh another bug... A bit stupid of me not to run the tests 2018-01-20 22:00:41 +01:00
Dave Halter
fc315108f0 Get rid of a cwd change to tmpdir, because with the subprocess it doesn't behave the same depending on which tests you run first 2018-01-20 21:56:56 +01:00
Dave Halter
d3a5025635 Hopefully the last merge with master 2018-01-20 21:48:55 +01:00
Dave Halter
256f001480 Another small issue in the tests 2018-01-20 21:47:31 +01:00
Dave Halter
94ce54e776 Merge with master again
Some bugs were still present in master
2018-01-20 21:45:55 +01:00
Dave Halter
20d64cf2b3 Fix issues with a recent refactoring 2018-01-20 21:21:58 +01:00
Dave Halter
27a3be3b42 Merge a commit that adds the build folder to the ignored paths 2018-01-20 20:38:56 +01:00
Dave Halter
9c0b344962 Small mistake when opening a file 2018-01-20 20:30:44 +01:00
Dave Halter
1476551257 Add better error reporting for potential issues 2018-01-20 19:33:47 +01:00
Dave Halter
d986c44b94 Merge with master
The deprecation of Python 2.6 and the introduction of environments made it quite difficult to merge.
2018-01-20 19:32:59 +01:00
Dave Halter
877383b110 Add a test to avoid encoding issues. Fixes #1003 2018-01-20 18:28:29 +01:00
Dave Halter
16b463a646 Refactor to avoid having unicode decode errors by default 2018-01-19 19:23:11 +01:00
Dave Halter
19b3580ba7 Get rid of some potential issues when using pandas interactively
The issue was that the python_object passed in was not hashable. Since it's not
used anyway and it doesn't make sense there, just ignore it.

Fixes #916, #875
2018-01-18 19:54:20 +01:00
Dave Halter
c1394a82b5 Better error reporting, see #944 2018-01-18 19:12:32 +01:00
Dave Halter
609f59ce41 Fix issues with random tuples in TreeArgument.
Thanks @micbou for noticing it.
b92c7d3351
2018-01-18 09:54:19 +01:00
Dave Halter
2b577fcd5c Clarity 2018-01-17 19:24:08 +01:00
Dave Halter
d61aa50399 Remove the get_default_project caching 2018-01-17 19:23:30 +01:00
Dave Halter
263989c0ab Add a comment about why the project is None in the subprocess 2018-01-17 19:12:58 +01:00
Dave Halter
4e4f75c882 evaluate.project doesn't exist anymore. Eliminated code that used it 2018-01-17 19:11:20 +01:00
Dave Halter
bf0b6741aa At the moment, don't allow projects as an input to script 2018-01-17 09:57:58 +01:00
Dave Halter
9b4abeac4e Remove the old project 2018-01-17 09:55:53 +01:00
Dave Halter
9b5e3447d9 Make the new project API fully work in tests 2018-01-17 09:54:11 +01:00
Dave Halter
fe813292cf Try to migrate to the new project API 2018-01-16 23:56:35 +01:00
Dave Halter
9b9587a9dd Refactor to make configurations of sys paths easier 2018-01-16 19:20:55 +01:00
Dave Halter
ddaf175b11 Use the evaluate.project sys path stuff for api.project 2018-01-16 10:03:28 +01:00
Dave Halter
c6240d5453 Cache the default project 2018-01-16 00:20:33 +01:00
Dave Halter
2a0e8f91d3 A possible introduction for projects 2018-01-15 23:57:08 +01:00
Dave Halter
b92c7d3351 Some cleaning up of code 2018-01-13 18:59:03 +01:00
micbou
3a0ac37ee8 Fix error when using generators with variable-length arguments 2018-01-13 18:56:34 +01:00
Dave Halter
999fb35914 Check for safe and unsafe environments when searching for them 2018-01-11 08:59:39 +01:00
Dave Halter
d815470e54 Remove the copyright notice from docs 2018-01-09 23:29:39 +01:00
Dave Halter
598ea1b89b Add Python 2.6 removal to changelog. Refs #1018 2018-01-07 14:39:08 +01:00
Dave Halter
cc460a7126 Merge branch 'master' into rm-2.6 2018-01-07 14:32:47 +01:00
Dave Halter
4e52acbf26 Using setup.py build should not include part of tests
It looks like we have to exclude not only the test package but also 'test.*'. Thanks to @david-geiger for noticing this. Fixes #1024.
2018-01-07 14:13:40 +01:00
Hugo
73c71d6475 This test will be removed in the virtualenv branch 2018-01-07 10:40:36 +02:00
Hugo
7e449af4bd Revert changes to test/completion and test/static_analysis except for 2.6 comment removal 2018-01-07 10:40:36 +02:00
Hugo
3e8cd9f128 Use set literals 2018-01-07 10:40:36 +02:00
Hugo
3644c72efe Add version badges, use SVG badges, fix typos 2018-01-07 10:40:36 +02:00
Hugo
abe0f27e6a Add python_requires to help pip 2018-01-07 10:40:06 +02:00
Hugo
f56035182c Remove trailing semicolons 2018-01-07 10:40:06 +02:00
Hugo
cc623218e5 Replace function call with set literal 2018-01-07 10:40:06 +02:00
Hugo
5755fcb900 Replace comparison with None with equality operator 2018-01-07 10:40:06 +02:00
Hugo
8cf708d0d4 Remove redundant parentheses 2018-01-07 10:40:06 +02:00
Hugo
a7ac647498 Remove redundant character escape 2018-01-07 10:40:06 +02:00
Hugo
7821203d8e Use automatic formatters 2018-01-07 10:40:05 +02:00
Hugo
7c31ea9042 Drop support for EOL Python 2.6 2018-01-07 10:40:05 +02:00
Hugo
0334918d73 Ignore IDE metadata 2018-01-07 10:40:05 +02:00
Dave Halter
d00b6ddd10 Sith still used NotFoundError which doesn't exist anymore in jedi 2018-01-06 14:14:16 +01:00
Dave Halter
9e1cce6111 Ignore pypy in travis for now
There are too many issues in there and I won't look at them.
2018-01-06 14:13:15 +01:00
Dave Halter
7c78882967 A path to ignore in coveragerc was wrong 2018-01-06 14:12:26 +01:00
Dave Halter
9fdf265a75 Allowing the cov tests did not properly work. Trying again. 2018-01-06 13:54:29 +01:00
Dave Halter
a3c7aaa65e Somehow previously removed the allowed failures of TOXENV=cov 2018-01-06 13:52:31 +01:00
Dave Halter
5844ad0900 Try to put env variables on one line 2018-01-06 13:49:06 +01:00
Dave Halter
e3d399cb08 Coverage was unfortunately excluded 2018-01-06 13:39:03 +01:00
Dave Halter
f36f5ec234 Merge with master 2018-01-06 12:31:29 +01:00
Dave Halter
a8124b625c Add a comment to refactoring that it's not in active development 2018-01-06 12:29:03 +01:00
Dave Halter
bc57b08863 Change coveragerc a bit
Remove some exclude lines, because they don't matter and don't appear in our code base.
The files that are excluded either cannot be measured (because they are part of a subprocess) or are statically analyzed.

In addition refactoring.py hasn't been in use for a long time.
2018-01-06 12:27:48 +01:00
Dave Halter
14ac874e1a Use Python3.4 for coverage. 2018-01-06 12:14:32 +01:00
Dave Halter
db47686159 Correct the issue about has_zlib
It was never actually the case that travis has Python versions without zlib. I didn't realize that modifying the sys path made it impossible to import the zlib library.
2018-01-06 03:34:37 +01:00
Dave Halter
e42796ca10 Move the zip tests to the environment 2018-01-06 02:26:30 +01:00
Dave Halter
99eed91206 Only execute the zipimport tests fully if zlib is available for the environment Python. 2018-01-06 02:11:33 +01:00
MohamedAlFahim
ad5ac8c492 Made 'l' a string + added warning
One of the helper methods is missing, so be extra careful.
2018-01-05 22:49:47 +01:00
MohamedAlFahim
03961bf051 Fixed refactoring.py docstring mistake
Updated parameters in docstring
2018-01-05 22:42:29 +01:00
Hugo
4199ac1a6f http -> https 2018-01-05 11:39:42 +01:00
Dave Halter
db1a4415b3 Some tests that involved jedi were actually a bit wrong and only worked in certain environments. 2018-01-05 00:48:40 +01:00
Dave Halter
4d896892a3 Skip some 3.3 tests for travis
Python 3.3 on travis doesn't have zip support compiled.
Just ignore tests since 3.3 is End-of-Life anyway.
2018-01-04 01:46:07 +01:00
Dave Halter
3d39ffd16c Skipping was done wrong 2018-01-03 19:45:46 +01:00
Maxim Novikov
ff65cf8ebe Use compatible syntax 2018-01-02 19:14:12 +01:00
Maxim Novikov
7f21fdfbc7 Fallback 2018-01-02 19:10:15 +01:00
Maxim Novikov
a2031d89b1 Fix tests 2018-01-02 18:24:38 +01:00
Maxim Novikov
78cbad0d08 Fix implicit namespace autocompletion. Resolves: #959 2018-01-02 18:17:48 +01:00
Dave Halter
d2cf2e69c9 Try a bit more if modifying the PATH is now possible. 2018-01-02 16:53:48 +01:00
Dave Halter
5e8d7a3c87 A comparison was wrong 2018-01-02 16:34:58 +01:00
Dave Halter
b2b8607bd6 A new version of the travis install script 2018-01-02 16:33:05 +01:00
Dave Halter
9c5ce5a8d2 Try to use the virtual env that was defined in the VIRTUAL_ENV variable, if possible. 2018-01-02 01:28:02 +01:00
Dave Halter
bcb3f02a01 If a subprocess gets killed by an OOM killer or whatever it should respawn and raise an InternalError 2018-01-02 00:56:22 +01:00
Dave Halter
7ff6871548 Merge Subprocess and CompiledSubprocess 2018-01-02 00:33:30 +01:00
Dave Halter
927aa2bd91 Try to recover from errors that are happening in subprocesses 2018-01-02 00:24:15 +01:00
Dave Halter
d93b613fd9 Move the default environment around 2018-01-01 20:37:50 +01:00
Dave Halter
966bd53b40 More travis trying 2017-12-31 14:10:08 +01:00
Dave Halter
39f82bc5aa Better debugging for travis 2017-12-31 02:49:18 +01:00
Dave Halter
2fbdf0dc09 Forgot to add the executable bit to the travis installer. 2017-12-30 23:27:11 +01:00
Dave Halter
39a456be41 Experiment with travis and installing packages differently 2017-12-30 23:08:32 +01:00
Dave Halter
9b1d3ff207 The tags should be annotated if possible 2017-12-30 14:05:14 +01:00
Dave Halter
b5e0df0e8c Remove 2.6 from travis 2017-12-30 05:21:56 +01:00
Dave Halter
9c9b52422d Correct the travis file 2017-12-30 05:14:26 +01:00
Dave Halter
b901ab9b0d Some refactoring to finally get tests working with py27 and 3 environments 2017-12-30 05:01:50 +01:00
Dave Halter
b716fb7dc6 Use the parser to check for certain namedtuple features
This fixes tests that are used with python 2 but a different environment
2017-12-30 04:41:19 +01:00
Dave Halter
4514373de6 Use unicode strings in test to pass some tests in Python 2 2017-12-30 04:36:59 +01:00
Dave Halter
a14f665b5a Use Script everywhere where cwd_at is used, otherwise Python 2.7 is annoying 2017-12-30 03:55:23 +01:00
Dave Halter
0ed9e1c249 The given sys_path gets converted to unicode now in py2 2017-12-30 03:40:01 +01:00
Dave Halter
f17afc6519 Try to avoid the pth tests not working because of the created virtualenv in tox 2017-12-30 03:15:28 +01:00
Dave Halter
e2629b680f Test if virtualenvs and pth files work 2017-12-30 00:02:14 +01:00
Dave Halter
7de04fb28d Move the module name searching to the subprocess 2017-12-29 21:10:00 +01:00
Dave Halter
68381e09c9 Move the last test out of test_regressions and delete the file
This also deletes a test that has probably become useless, because the issue it tested was caused by code that doesn't exist anymore
2017-12-29 20:38:30 +01:00
Dave Halter
01ffd2f981 Move most of the regression tests into other test files 2017-12-29 20:26:53 +01:00
Dave Halter
918153d55a Cleanup test_regression tests 2017-12-29 20:13:04 +01:00
Dave Halter
ff4f7d5471 Move test_integration_keyword to test_api/test_keyword 2017-12-29 20:05:37 +01:00
Dave Halter
c7266d65c1 Cleanup the docstring tests 2017-12-29 19:47:28 +01:00
Dave Halter
bf73fcbed4 More test_evaluate Script fixtures 2017-12-29 19:36:05 +01:00
Dave Halter
5fc755b0cf stdlib fixture conversions 2017-12-29 19:13:15 +01:00
Dave Halter
ac21fc376e More Script fixture conversions in test_evaluate 2017-12-29 19:08:09 +01:00
Dave Halter
2493e6ea16 Migrate parso integration to script fixture 2017-12-29 18:47:13 +01:00
Dave Halter
181fe38c17 Use Script in more places 2017-12-29 18:43:10 +01:00
Dave Halter
da211aa63d Use the Script fixture more generally 2017-12-29 18:40:17 +01:00
Dave Halter
38cacba385 Differentiate between different Python versions in a specific test 2017-12-29 16:09:48 +01:00
Dave Halter
5efd67758e Start replacing Script calls with a fixture
This is important to migrate all tests to specific fixtures.
2017-12-29 15:51:16 +01:00
Dave Halter
05804f1768 Monkeypatch the Unpickler in Python3.3
Needed to get Python3.3 working. See the comment in the commit.
2017-12-29 15:37:37 +01:00
Dave Halter
408293085c Try to pass the environment variable for JEDI_TEST_ENVIRONMENT to pytest over tox 2017-12-29 13:49:24 +01:00
Dave Halter
ed57f6172f Correct the two last unicode issues 2017-12-29 12:59:06 +01:00
Dave Halter
2ba46759fc Some repr went crazy 2017-12-29 03:58:02 +01:00
Dave Halter
95bf858669 Make it more clear for debugging where dynamic search ended 2017-12-29 03:54:12 +01:00
Dave Halter
d7de3f3fec Fix pep0484 comments 2017-12-29 03:29:29 +01:00
Dave Halter
a1051bd5f2 Better display of descriptors 2017-12-29 03:29:08 +01:00
Dave Halter
35158f693d Remove some of the last py27 errors that were caused in combination with 3.6 2017-12-29 02:45:11 +01:00
Dave Halter
ec9b8e8c02 Forgot to cast a map to a list 2017-12-29 02:39:35 +01:00
Dave Halter
52298510ed Fixing more py27 stuff 2017-12-29 02:02:34 +01:00
Dave Halter
b4f301e082 More unicode literals 2017-12-29 01:42:22 +01:00
Dave Halter
59c44fe499 Use force_unicode for all sys paths 2017-12-29 01:28:23 +01:00
Dave Halter
6e69326aa9 Add a print_to_stderr function in compatibility 2017-12-29 01:26:15 +01:00
Dave Halter
05b2906dcc Some more small improvements for Python 2 2017-12-28 23:58:19 +01:00
Dave Halter
4b72a89379 There were a few bugs in the previous commit 2017-12-28 23:25:09 +01:00
Dave Halter
ba81aa16a2 Use unicode in way more cases 2017-12-28 23:19:17 +01:00
Dave Halter
5755d5a4ee Use unicode always for getting special objects 2017-12-28 22:41:20 +01:00
Dave Halter
9906c4f9fc Skip the correct tests 2017-12-28 22:03:02 +01:00
Dave Halter
37d282e67b Always use the parser of the environment 2017-12-28 21:19:26 +01:00
Dave Halter
7a7c93a2e5 Try to test on travis with different jedi test environment variables 2017-12-28 02:46:53 +01:00
Dave Halter
c946d421d6 Try adding more automated tests to travis 2017-12-28 02:28:44 +01:00
Dave Halter
2d3b15b485 Fix potential issues with py2 analysis 2017-12-28 02:19:42 +01:00
Dave Halter
5b8ed7f615 Check for bytes and unicode in dicts for Python 2 2017-12-28 02:15:27 +01:00
Dave Halter
d1d4986667 Eliminate is_py3 usages 2017-12-28 01:55:39 +01:00
Dave Halter
6b6795c40c Don't use python_version directly on evaluator anymore 2017-12-28 01:44:59 +01:00
Dave Halter
31f1913b07 Use unicode always in getattr 2017-12-28 01:42:58 +01:00
Dave Halter
7accd4fae3 Fix an issue with the new behavior of special methods 2017-12-28 01:38:16 +01:00
Dave Halter
a7dea9e821 Fix some more py36 to py27 issues 2017-12-28 01:33:51 +01:00
Dave Halter
a8d3c46e9d Refactor some things regarding Python 2 support 2017-12-27 02:09:58 +01:00
Dave Halter
7e063ff7af Also don't cast to a string for other names 2017-12-26 15:44:00 +01:00
Dave Halter
8a82a5237d Casting to str is not necessary 2017-12-26 15:32:25 +01:00
Dave Halter
e925661aff Skip tests according to the current environment 2017-12-26 15:07:57 +01:00
Dave Halter
a7168db1ea Remove unused keyword code 2017-12-26 14:13:56 +01:00
Dave Halter
c43009d5dc Do more comparisons in the subprocess 2017-12-26 13:38:47 +01:00
Dave Halter
ab42e856fb Use unicode in compiled access 2017-12-26 03:24:26 +01:00
Dave Halter
6d70bd7d5c Remove unused code 2017-12-26 03:18:16 +01:00
Dave Halter
c3483344fe Refactor allowed_getattr_callback a bit to not raise random errors. 2017-12-24 12:55:32 +01:00
Dave Halter
993b0973c5 The default of one function was not actually used 2017-12-24 12:12:27 +01:00
Dave Halter
f494bb5848 The string_name of a Name should always be unicode 2017-12-24 04:05:28 +01:00
Dave Halter
4a366ab728 Refactor a bit and force unicode in some places and use an appropriate function name for it 2017-12-24 04:05:02 +01:00
Dave Halter
96a4fd7bd6 Fix a test fail because of the unicode changes 2017-12-24 03:53:27 +01:00
Dave Halter
fdd405f552 The environment selection had a bug 2017-12-24 03:47:35 +01:00
Dave Halter
085a9e0e33 More unicode conversions 2017-12-24 03:46:33 +01:00
Dave Halter
ee099a4ff7 Don't use getattr, use the abstractions 2017-12-24 03:39:28 +01:00
Dave Halter
40f1354f67 More unicode conversions 2017-12-24 03:35:15 +01:00
Dave Halter
a117f9f2e7 Avoid execution of Jedi in test setup
This makes testing Jedi potentially faster.
2017-12-24 03:25:43 +01:00
Dave Halter
5a06ea2699 Start using a lot more unicode literals for Python 2 2017-12-24 03:11:28 +01:00
Dave Halter
1f4e0dd22e Make it possible to explicitly state the version in pytest for different envs 2017-12-24 03:01:47 +01:00
Dave Halter
a38acdbe08 Use unicode sys paths always 2017-12-24 02:42:14 +01:00
Dave Halter
7bfca5bcd7 Don't cast bytes to strings when unpickling 2017-12-23 21:18:04 +01:00
Dave Halter
c3520bea65 By default enable cross Python version tests in tox 2017-12-23 19:59:37 +01:00
Dave Halter
7ad37fb976 Skip more tests if it's necessary. 2017-12-23 19:56:47 +01:00
Dave Halter
87666d72a1 Move the import logic to the subprocess 2017-12-23 17:59:56 +01:00
Dave Halter
473be114f3 Move even more import stuff to a separate function 2017-12-23 17:10:57 +01:00
Dave Halter
e2f8d53ee4 Move some import parts around to refactor it 2017-12-23 16:16:17 +01:00
Dave Halter
4ab7f7a0b0 Make ImplicitNamespaceContext a bit cleaner 2017-12-21 23:43:47 +01:00
Dave Halter
723d6515ac Change two tests that were written in a strange way 2017-12-20 10:36:39 +01:00
Dave Halter
a96f2c43df Add a way to skip typing tests in non default environments 2017-12-20 10:07:16 +01:00
Dave Halter
890dd2213d Use better error messages for import errors 2017-12-19 23:51:05 +01:00
Dave Halter
456ae20aac Start using the new virtualenv code
There used to be a lot of code to kind of understand virtualenvs. This can all be removed now, because this is done in a subprocess with the correct interpreter
2017-12-19 21:05:04 +01:00
Daniel Hahler
adace8d7cb sys_path_with_modifications: append local file
This fixes "goto" preferring a local module instead of a global one.

Fixes https://github.com/davidhalter/jedi/issues/995.
2017-12-19 20:51:20 +01:00
Dave Halter
96a67f9a4c Start using the correct parser for each environment 2017-12-19 19:19:35 +01:00
Dave Halter
a9ebd92c20 Add a way to specify environments in tox 2017-12-19 19:02:57 +01:00
Dave Halter
6780eba157 Fix sys_path propagation for builtins load_module 2017-12-18 20:16:58 +01:00
Dave Halter
aa40ef3140 A small refactoring 2017-12-18 20:03:23 +01:00
Dave Halter
5f2b49d039 Merge branch 'master' into virtualenv 2017-12-18 01:41:29 +01:00
Dave Halter
46b62b7bed evaluate/docstrings.py
Make some docstring stuff easier
2017-12-18 01:40:21 +01:00
Dave Halter
f5c7e3bb06 Don't import numpydoc in the beginning
There were issues in combination with importing it with subprocesses
2017-12-18 01:34:19 +01:00
Dave Halter
8b3ee75654 Ignore the build directory for pytest 2017-12-17 21:35:39 +01:00
Dave Halter
fe3e8a0867 Refactor environments a bit 2017-12-17 18:47:28 +01:00
Dave Halter
1c62db04ba Make it possible to get the right version parser for a certain environment 2017-12-16 00:30:47 +01:00
Dave Halter
d0732e58cc api.virtualenv -> api.environment 2017-12-15 18:20:35 +01:00
Dave Halter
0d7f93c019 DefaultEnvironment -> get_default_environment 2017-12-15 18:13:21 +01:00
Dave Halter
3cd5fa3c20 Better support for searching python environments 2017-12-15 12:19:52 +01:00
Dave Halter
f37089e54b Bump the version number 2017-12-15 10:47:14 +01:00
Dave Halter
69237c4aa6 Add a changelog for jedi 0.11.1 2017-12-15 10:45:27 +01:00
Dave Halter
02f238ce08 Add the executable bit to deploy-master.sh 2017-12-14 22:51:02 +01:00
Dave Halter
e526cb1ae3 Don't run Python 2.6 in tox by default
Python 2.6 seems to be harder and harder to run in tox if setuptools is not properly configured for it.
It's still possible to run it and it still runs on travis.
2017-12-14 22:50:13 +01:00
Daniel Hahler
e621e8590c Improve IntegrationTestCase.__repr__
Having the path (together with the line only) makes it easy to go to the
actual test.
2017-12-14 22:44:24 +01:00
Dave Halter
62915686af Don't use pytest 3.3+ because it removed support for Python 3.3 2017-12-14 22:29:13 +01:00
Dave Halter
c3efde3bfa Add an optimization around compiled dir() 2017-12-14 22:28:22 +01:00
Dave Halter
950cab2849 Fix a potential issue in evaluate/stdlib 2017-12-14 22:24:37 +01:00
Dave Halter
9d094b68f3 Cache the subprocess results 2017-12-14 22:23:59 +01:00
Dave Halter
94e2e92888 Remove unit test class from speed tests 2017-12-13 19:22:45 +01:00
Dave Halter
e03afc60ef Make get_repr static in access. 2017-12-13 19:16:29 +01:00
Dave Halter
0acb7dcb18 There was a bug in creating modules in a subprocess 2017-12-12 18:08:49 +01:00
Dave Halter
8003d30b06 Fix the Python 2.7 tests 2017-12-11 21:39:30 +01:00
Dave Halter
b196c6849b Don't try to pickle ellipsis 2017-12-11 20:55:34 +01:00
Dave Halter
fa2712a128 Ignore __main__ modules 2017-12-11 09:23:13 +01:00
Dave Halter
3a7bc92863 Use builtins_module instead of BUILTINS 2017-12-10 18:52:51 +01:00
Dave Halter
afb73876ac Don't use the pickler modification anymore. That doesn't work in other python versions and was in general a bit hard to do 2017-12-10 18:39:03 +01:00
Dave Halter
aa7319dba5 Remove the last test failures. 2017-12-09 17:38:45 +01:00
Dave Halter
649225333f Get the subprocess mostly working 2017-12-08 09:44:12 +01:00
Dave Halter
a210be8198 Don't use the create function anymore in compiled
Now the whole creation of builtin objects is abstract and was moved to subprocesses etc.
2017-12-06 15:26:29 +01:00
Dave Halter
13f8f37547 Use even more subprocess accesses 2017-12-06 15:16:27 +01:00
Dave Halter
42fb93dc01 Use the subprocess access to create accesses 2017-12-06 15:06:48 +01:00
Dave Halter
f09ca9fc20 Use access handles everywhere 2017-12-06 14:46:27 +01:00
Dave Halter
7db6d11c49 Create a way of accessing access objects through a subprocess 2017-12-06 14:18:10 +01:00
Dave Halter
34bd19ee8d Use a class instead of a dict in get_special_objects 2017-12-05 08:44:36 +01:00
Dave Halter
79071790da Move get_special_object 2017-12-05 00:32:39 +01:00
Dave Halter
542644ad19 Move load_module a bit around 2017-12-04 19:18:30 +01:00
Dave Halter
617b11c92b Move another usage of create to builtin_from_name 2017-12-04 08:57:43 +01:00
Dave Halter
3f25ba436c Use sys.modules instead of __import__
The module should already have been imported at this point. Plus, if the __module__ is wrong, it won't just randomly import something.
2017-12-04 00:21:12 +01:00
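A purely illustrative sketch of the idea: look the module up in sys.modules instead of importing it again by name (module_of is a hypothetical helper, not part of the codebase):

    import sys

    def module_of(obj):
        # The module must already be imported; if __module__ is wrong or missing
        # we get None instead of accidentally importing something else.
        return sys.modules.get(getattr(obj, '__module__', None))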
Dave Halter
85abc55e89 Remove unused code 2017-12-03 19:39:31 +01:00
Dave Halter
15d9e64281 Start creating access objects in a different way 2017-12-03 19:37:03 +01:00
Dave Halter
3c78aad8b1 Use create_simple_object for a lot of use cases 2017-12-02 01:59:48 +01:00
Dave Halter
2aa2005502 Move some of the compiled.create calls to compiled.builtin_from_name 2017-12-01 09:54:29 +01:00
Dave Halter
543f4f7ff2 Move some stuff from compiled to context 2017-11-29 01:03:01 +01:00
Dave Halter
4f04f7f09c Remove stuff from CompiledObject that didn't belong there and wasn't used 2017-11-29 00:42:16 +01:00
Dave Halter
37c3f0904d create_from_access -> _create_from_access 2017-11-29 00:25:30 +01:00
Dave Halter
ba0768bab6 Refactor a bit more and remove the parent_context parameter from create_from_access 2017-11-29 00:24:28 +01:00
Dave Halter
187a523e05 Isolate fake stuff a bit more 2017-11-29 00:18:43 +01:00
Dave Halter
10e9dac758 Simplify an if 2017-11-28 21:39:00 +01:00
Dave Halter
6ec3e50a16 Rewrite bases 2017-11-28 21:20:55 +01:00
Dave Halter
cce9a1cf6a Use create only for non access objects 2017-11-28 21:15:55 +01:00
Dave Halter
c1f31e0328 Some simplification of _create_from_access 2017-11-28 20:35:49 +01:00
Dave Halter
74495d518f Remove the old now unused fake code 2017-11-28 18:39:05 +01:00
Dave Halter
47114178e9 Fake context Python code is no longer the base for a lot of things. It just gets executed. 2017-11-28 18:26:12 +01:00
Dave Halter
a2b08eabc6 Rename SelfNameFilter to SelfAttributeFilter 2017-11-28 18:06:00 +01:00
Dave Halter
85bda448b1 Simplify one if statement 2017-11-28 08:43:56 +01:00
Dave Halter
e69509b1d9 Refactor LazyInstanceName -> SelfName 2017-11-27 21:08:39 +01:00
Dave Halter
b31d928704 Fix all tests except fake docstring stuff 2017-11-26 22:49:07 +01:00
Dave Halter
02fb73655c Fix a slice test with a better helper function 2017-11-26 22:18:51 +01:00
Dave Halter
accf20226d Fix a few more tests 2017-11-26 22:07:13 +01:00
Dave Halter
85ce57a863 Creating objects now works a bit better but is a huge mess. 2017-11-26 18:26:02 +01:00
Dave Halter
e71f0062dd Get a lot of tests passing 2017-11-26 17:48:00 +01:00
Dave Halter
c266fb301b Make params work with access 2017-11-26 01:48:43 +01:00
Dave Halter
7263d8565b Add an access abstraction (only array tests work, yet)
The access abstraction will be the new way of accessing builtin objects. This way it will be easier to move that to another process
2017-11-25 19:47:49 +01:00
Dave Halter
52bc1be84e The check for whether we should add type completions is now a bit more obvious 2017-11-24 08:55:16 +01:00
Dave Halter
1a7fc512bc Eliminate CompiledObject.type 2017-11-23 21:50:18 +01:00
Dave Halter
4dc2ad281d Make some faked things private 2017-11-22 19:22:18 +01:00
Dave Halter
37533c5d51 Cleanup some compiled stuff. 2017-11-22 19:04:02 +01:00
Dave Halter
96a0003cb5 Progress in executing builtin stuff in submodules. 2017-11-20 21:02:40 +01:00
Dave Halter
87452639ad Exceptions now also work over the subprocess. 2017-11-17 01:54:05 +01:00
Dave Halter
4a7d715a57 Finally got compiled_objects and the access to them working 2017-11-17 01:42:27 +01:00
Dave Halter
73576b2a8b Progress when working with evaluators 2017-11-17 01:21:38 +01:00
Dave Halter
4136dcaf08 Make the subprocesses work and return the right sys paths for the different versions 2017-11-15 08:58:13 +01:00
Dave Halter
96149d2e6a Make it possible to connect to a subprocess to get the sys path 2017-11-14 18:25:37 +01:00
Dave Halter
46b81dfa6d Subprocess progress
Also add an environment variable to Script
2017-11-13 00:40:32 +01:00
Dave Halter
3a4dc94ee6 Use types instead of special objects (see also #988) 2017-11-12 13:12:04 +01:00
Dave Halter
969d029499 Some subprocess progress 2017-11-12 11:46:35 +01:00
Dave Halter
6ee361864c Merge branch 'master' of github.com:davidhalter/jedi 2017-11-06 19:32:31 +01:00
Thomas A Caswell
22c97b0917 FIX: install on python 3.7 (#971)
* FIX: install on python 3.7

https://github.com/python/cpython/pull/46 /
https://bugs.python.org/issue29463 moved the module docstring into the
AST node and hence out of the tree, which means the 2nd entry in the
tree is now the import rather than the `__version__` string.

Adds nightly on travis.

* BLD: update python tags in setup.py

* CI: switch to 3.7-dev

* CI: allow failure on 3.7 dev
2017-11-06 19:29:06 +01:00
Dave Halter
6c355a0ac2 Update version 2017-11-05 15:07:06 +01:00
Dave Halter
e783611030 Merge branch 'master' of github.com:davidhalter/jedi 2017-11-05 15:05:22 +01:00
Dave Halter
fc0397732e Update the parso dependency 2017-11-05 15:05:09 +01:00
Dave Halter
421ea222d1 virtualenv progress 2017-11-05 15:02:40 +01:00
Samuel Bishop
9cbcf00aa2 Fix dead link.
Hopefully this is close enough to the original video.
2017-11-03 17:52:22 +01:00
Dave Halter
baafea4a90 Remove unused code 2017-11-01 19:14:54 +01:00
Robin Roth
5b184fbd0c Use python3.6 for tox/sith 2017-11-01 14:18:29 +01:00
Robin Roth
dc43eba07b Support async/await syntax 2017-11-01 13:44:38 +01:00
Johannes Mikulasch
d9dc4ac840 Merge branch 'master' of https://github.com/johannesmik/jedi 2017-10-31 14:02:35 +01:00
Johannes Mikulasch
a1b60a978d add testcases for pep0484 ahead of time annotations 2017-10-31 13:57:10 +01:00
Johannes Mikulasch
6feac2a0ec add ahead of time annotations PEP 526 2017-10-31 12:58:56 +01:00
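PEP 526 "ahead of time" annotations are plain variable annotations like these (Python 3.6+ syntax that the test cases cover):

    from typing import List

    names: List[str] = []
    # Jedi can use the annotation to infer that names[0] is a str.
    count: int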
langsamer
1428b67c4d Replace TODO with an explanation of the parameter 'cut_own_trailer'
(it cannot be dropped!) (#978)
2017-10-29 02:00:28 +02:00
Robin Roth
f13b4e800a Install docopt for dev setup 2017-10-28 14:36:16 +02:00
Robin Roth
88cf592c95 Make goto work with await
Created together with @langsamer and @davidhalter
2017-10-28 14:10:05 +02:00
Dave Halter
752b7d8d49 One more usages test. 2017-10-15 21:11:49 +02:00
Dave Halter
2b138b3150 Usages fix for more complex situations 2017-10-09 21:09:04 +02:00
Dave Halter
06004ad2f5 Some minor refactorings. 2017-10-09 20:32:28 +02:00
Dave Halter
8658ac5c28 Using additional_dynamic_modules sometimes led to weird behavior of using modules twice. 2017-10-09 20:28:39 +02:00
Dave Halter
bedff46735 Simplify usages. It should also work way better, now. 2017-10-08 20:13:24 +02:00
Dave Halter
4ddf7bf56d Remove the disabling of dynamic flow information
We should be able to handle this in completions as well. Don't hide issues here.
2017-10-07 10:52:43 +02:00
Dave Halter
8dba08eeb2 Small sys path refactoring. 2017-10-06 09:01:15 +02:00
Dave Halter
21531abd1e Fix a small test error 2017-10-05 20:43:31 +02:00
Dave Halter
7019ca643e Remove a possible security issue
sys path modifications are not executed anymore; they are determined via static analysis now.
2017-10-05 19:57:50 +02:00
Dave Halter
aa8a6d2482 Move a function around 2017-10-05 18:49:12 +02:00
Dave Halter
28dea46bed Better English 2017-10-05 18:35:17 +02:00
Dave Halter
2b30c6fee4 Remove documentation about caveats that are not really 100% true anymore. 2017-10-05 18:33:02 +02:00
Dave Halter
51d2ffb078 Use sys path mostly from project and move some sys path stuff around. 2017-10-05 10:06:28 +02:00
Dave Halter
383f749026 Move the initial sys path generation into a new project class. 2017-10-02 20:19:55 +02:00
Dave Halter
0762c9218c Move arguments to a separate module. 2017-10-01 13:29:28 +02:00
Dave Halter
b6bb251c96 Common instance objects are now directly accessible 2017-09-30 18:19:25 +02:00
Dave Halter
604ca65a9b Directly importing FunctionContext. 2017-09-30 18:11:15 +02:00
Dave Halter
39b24ff2df Move lazy contexts to a separate module not in contexts 2017-09-30 18:02:02 +02:00
Dave Halter
16011a91af Move iterable to context/iterable. 2017-09-30 17:41:21 +02:00
Dave Halter
06b2857974 Import simplification. 2017-09-30 17:26:20 +02:00
Dave Halter
f733e07045 AbstractSequence -> AbstractIterable. 2017-09-30 17:23:15 +02:00
Dave Halter
3bfff846ed Move the special method filter from iterable to filters. 2017-09-30 17:15:23 +02:00
Dave Halter
2c81bd919e ClassContext is now importable from context. 2017-09-30 16:57:28 +02:00
Dave Halter
3c75f27376 Move the base Context stuff to another module to keep context free for imports. 2017-09-30 16:46:07 +02:00
Dave Halter
3c2221ec2d Don't use a star import. 2017-09-29 15:47:36 +02:00
Dave Halter
c8cae2140f Move the lazy contexts to a separate module. 2017-09-29 15:44:47 +02:00
Dave Halter
8c601a1c65 Also move the class to the context package. 2017-09-29 15:39:20 +02:00
Dave Halter
5f613ece28 Move the namespace to a separate module. 2017-09-29 15:31:26 +02:00
Dave Halter
32917d5565 Remove the function context to a separate module. 2017-09-29 15:28:17 +02:00
Dave Halter
8a9e1cd914 Move an import of a function. 2017-09-29 15:17:19 +02:00
Dave Halter
95930d293c Move instance module to the context package. 2017-09-29 15:14:56 +02:00
Dave Halter
8f177eea07 Move the ModuleContext to a separate module. 2017-09-29 13:24:48 +02:00
Dave Halter
41cfbe2382 Move context to base.py 2017-09-29 13:06:03 +02:00
Dave Halter
20a462597d Move context.py to a separate package. 2017-09-28 21:10:19 +02:00
Dave Halter
b70cef735a Find packages differently in setup.py 2017-09-28 21:03:56 +02:00
Dave Halter
d656ccd833 Move a BaseContext to jedi.common.context. 2017-09-28 17:06:58 +02:00
Dave Halter
d99d4deebf Merge branch 'values' 2017-09-28 16:19:38 +02:00
Dave Halter
3734d52c8b Move all the remaining imports out of the syntax tree functions 2017-09-28 14:44:58 +02:00
Dave Halter
18bab194c0 Move a few imports out of functions. 2017-09-28 14:38:11 +02:00
Dave Halter
e62d89bb03 Move the is_string etc functions to the helpers module. 2017-09-28 14:28:07 +02:00
Dave Halter
6b76e37673 Make some functions private in evaluate/iterable. 2017-09-28 14:19:11 +02:00
Dave Halter
612ad2f491 Move eval_subscript_list to the syntax_tree module. 2017-09-28 14:17:37 +02:00
Dave Halter
65ef6a3166 Move py__getitem__ to the context module. 2017-09-28 14:10:32 +02:00
Dave Halter
30df79e234 Rename py__iter__types to iterate_contexts. 2017-09-28 13:19:33 +02:00
Dave Halter
8c0845cf0c Move iterate logic to the context. 2017-09-28 13:13:09 +02:00
Dave Halter
47c249957d Make BuiltinMethod a Context object. 2017-09-28 12:04:44 +02:00
Dave Halter
b08300813e Fix an issue surrounding namedtuples where I didn't see the tests failing. 2017-09-28 10:39:54 +02:00
Dave Halter
1c9060ebc5 Remove evaluator as param from apply_decorators. 2017-09-28 09:18:12 +02:00
Dave Halter
d9d3aeb5bc Move more functions to the syntax tree module. 2017-09-28 09:16:43 +02:00
Dave Halter
0782a80cef Move all the search to py__getattribute__ and remove find_types. 2017-09-27 19:22:50 +02:00
Dave Halter
9073f0debc Use the typical ordering of arguments for ClassContext. 2017-09-27 19:16:05 +02:00
Dave Halter
a7a66024d4 Make a lot more functions private. 2017-09-27 19:13:19 +02:00
Dave Halter
ed43a68c03 Remove the precedence module in favor of the syntax tree module. 2017-09-27 19:09:30 +02:00
Dave Halter
d0939f0449 Move eval_or_test away from precedence module. 2017-09-27 18:51:53 +02:00
Dave Halter
08a48672bc A minor rename. 2017-09-27 18:15:12 +02:00
Dave Halter
d584b698b7 Move eval_element and eval_stmt to the syntax tree module. 2017-09-27 18:14:04 +02:00
Dave Halter
b997b538a7 Move eval_atom to the syntax tree module. 2017-09-27 16:27:37 +02:00
Dave Halter
5415a6164f Starting to try to move some functions away from Evaluator.
This time eval_trailer.
2017-09-27 16:21:02 +02:00
Dave Halter
313e1b3875 Use a different way of executing functions. 2017-09-27 16:07:24 +02:00
Dave Halter
025951089a Some conversions of eval_element -> eval_node. 2017-09-27 15:17:11 +02:00
Dave Halter
b1ed0c7d22 Add py__class__ to ContextSet. 2017-09-27 14:09:09 +02:00
Dave Halter
b74c8cb033 To be able to customize ContextSet, move a subclass to evaluate.context 2017-09-27 09:20:58 +02:00
Dave Halter
faa2d01593 The memoize decorator doesn't need to magically cache generators as lists.
This makes no sense at all. Explicit is better than implicit.
2017-09-26 18:36:10 +02:00
Dave Halter
a0a438fe6f Forgot an iterator in context sets. 2017-09-26 18:32:42 +02:00
Dave Halter
e4090910f6 Remove the ParamListener, it was not used anymore. 2017-09-26 18:24:42 +02:00
Dave Halter
00f2f9a90c Fix the final issues with the ContextSet refactoring. 2017-09-26 18:17:19 +02:00
Dave Halter
ee52cc7501 Fix most dynamic array issues. 2017-09-26 17:26:33 +02:00
Dave Halter
592f2dac95 A lot more fixes for tests. 2017-09-26 16:29:07 +02:00
Dave Halter
174eff5875 Replace a lot more of empty sets and unite calls. 2017-09-25 23:08:59 +02:00
Dave Halter
921d1008f2 First tests are now passing. 2017-09-25 11:10:09 +02:00
Dave Halter
5328d1e700 Add a ContextSet.
This is not bug free yet, but it's going to be a good abstraction for a lot of small things.
2017-09-25 11:04:09 +02:00
Dave Halter
dd924a287d Deployment script forgot to push the tags to github. 2017-09-21 00:05:52 +02:00
Dave Halter
a433ee7a7e Move common to evaluate.utils. 2017-09-20 20:33:01 +02:00
415 changed files with 33790 additions and 14428 deletions


@@ -1,19 +1,13 @@
[run]
omit =
jedi/_compatibility.py
jedi/evaluate/site.py
jedi/inference/compiled/subprocess/__main__.py
jedi/__main__.py
# For now this is not being used.
[report]
# Regexes for lines to exclude from consideration
exclude_lines =
# Don't complain about missing debug-only code:
def __repr__
if self\.debug
# Don't complain if tests don't hit defensive assertion code:
raise AssertionError
raise NotImplementedError
# Don't complain if non-runnable code isn't run:
if 0:
if __name__ == .__main__.:
debug.warning

.github/FUNDING.yml vendored Normal file

@@ -0,0 +1 @@
github: [davidhalter]

.gitignore vendored

@@ -5,9 +5,12 @@
.tox
.coveralls.yml
.coverage
.idea
/build/
/docs/_build/
/dist/
jedi.egg-info/
record.json
/.cache/
/.pytest_cache
/venv/

.gitmodules vendored Normal file

@@ -0,0 +1,6 @@
[submodule "jedi/third_party/typeshed"]
path = jedi/third_party/typeshed
url = https://github.com/davidhalter/typeshed.git
[submodule "jedi/third_party/django-stubs"]
path = jedi/third_party/django-stubs
url = https://github.com/davidhalter/django-stubs


@@ -1,30 +1,73 @@
dist: xenial
language: python
sudo: false
python:
- 2.6
- 2.7
- 3.3
- 3.4
- 3.5
- 3.9-dev
- 3.8
- 3.7
- 3.6
- pypy
- 3.5
- 2.7
env:
- JEDI_TEST_ENVIRONMENT=38
- JEDI_TEST_ENVIRONMENT=37
- JEDI_TEST_ENVIRONMENT=36
- JEDI_TEST_ENVIRONMENT=35
- JEDI_TEST_ENVIRONMENT=27
- JEDI_TEST_ENVIRONMENT=interpreter
matrix:
allow_failures:
- python: pypy
- env: TOXENV=cov
- env: TOXENV=sith
include:
- python: 3.5
env: TOXENV=cov
- python: 3.5
env: TOXENV=sith
- python: 3.7
env:
- TOXENV=cov-py37
- JEDI_TEST_ENVIRONMENT=37
# For now ignore pypy, there are so many issues that we don't really need
# to run it.
#- python: pypy
# The 3.9 dev build does not seem to be available end 2019.
#- python: 3.9-dev
# env:
# - JEDI_TEST_ENVIRONMENT=39
install:
- pip install --quiet tox-travis
- sudo apt-get -y install python3-venv
script:
- |
# Setup/install Python for $JEDI_TEST_ENVIRONMENT.
set -ex
test_env_version=${JEDI_TEST_ENVIRONMENT:0:1}.${JEDI_TEST_ENVIRONMENT:1:1}
if [ "$TRAVIS_PYTHON_VERSION" != "$test_env_version" ] && [ "$JEDI_TEST_ENVIRONMENT" != "interpreter" ]; then
python_bin=python$test_env_version
python_path="$(which $python_bin || true)"
if [ -z "$python_path" ]; then
# Only required for JEDI_TEST_ENVIRONMENT=38, because it's not always
# available.
download_name=python-$test_env_version
wget https://s3.amazonaws.com/travis-python-archives/binaries/ubuntu/16.04/x86_64/$download_name.tar.bz2
sudo tar xjf $download_name.tar.bz2 --directory / opt/python
ln -s "/opt/python/${test_env_version}/bin/python" /home/travis/bin/$python_bin
elif [ "${python_path#/opt/pyenv/shims}" != "$python_path" ]; then
# Activate pyenv version (required with JEDI_TEST_ENVIRONMENT=36).
pyenv_bin="$(pyenv whence --path "$python_bin" | head -n1)"
ln -s "$pyenv_bin" /home/travis/bin/$python_bin
fi
$python_bin --version
python_ver=$($python_bin -c 'import sys; print("%d%d" % sys.version_info[0:2])')
if [ "$JEDI_TEST_ENVIRONMENT" != "$python_ver" ]; then
echo "Unexpected Python version for $JEDI_TEST_ENVIRONMENT: $python_ver"
set +ex
exit 2
fi
fi
set +ex
- tox
after_script:
- if [ $TOXENV == "cov" ]; then
pip install --quiet coveralls;
coveralls;
- |
if [ $TOXENV == "cov-py37" ]; then
pip install --quiet codecov coveralls
coverage xml
coverage report -m
coveralls
bash <(curl -s https://codecov.io/bash) -X gcov -X coveragepy -X search -X fix -X xcode -f coverage.xml
fi


@@ -1,49 +1,66 @@
Main Authors
============
Main Authors
------------
David Halter (@davidhalter) <davidhalter88@gmail.com>
Takafumi Arakaki (@tkf) <aka.tkf@gmail.com>
- David Halter (@davidhalter) <davidhalter88@gmail.com>
- Takafumi Arakaki (@tkf) <aka.tkf@gmail.com>
Code Contributors
=================
-----------------
Danilo Bargen (@dbrgn) <mail@dbrgn.ch>
Laurens Van Houtven (@lvh) <_@lvh.cc>
Aldo Stracquadanio (@Astrac) <aldo.strac@gmail.com>
Jean-Louis Fuchs (@ganwell) <ganwell@fangorn.ch>
tek (@tek)
Yasha Borevich (@jjay) <j.borevich@gmail.com>
Aaron Griffin <aaronmgriffin@gmail.com>
andviro (@andviro)
Mike Gilbert (@floppym) <floppym@gentoo.org>
Aaron Meurer (@asmeurer) <asmeurer@gmail.com>
Lubos Trilety <ltrilety@redhat.com>
Akinori Hattori (@hattya) <hattya@gmail.com>
srusskih (@srusskih)
Steven Silvester (@blink1073)
Colin Duquesnoy (@ColinDuquesnoy) <colin.duquesnoy@gmail.com>
Jorgen Schaefer (@jorgenschaefer) <contact@jorgenschaefer.de>
Fredrik Bergroth (@fbergroth)
Mathias Fußenegger (@mfussenegger)
Syohei Yoshida (@syohex) <syohex@gmail.com>
ppalucky (@ppalucky)
immerrr (@immerrr) immerrr@gmail.com
Albertas Agejevas (@alga)
Savor d'Isavano (@KenetJervet) <newelevenken@163.com>
Phillip Berndt (@phillipberndt) <phillip.berndt@gmail.com>
Ian Lee (@IanLee1521) <IanLee1521@gmail.com>
Farkhad Khatamov (@hatamov) <comsgn@gmail.com>
Kevin Kelley (@kelleyk) <kelleyk@kelleyk.net>
Sid Shanker (@squidarth) <sid.p.shanker@gmail.com>
Reinoud Elhorst (@reinhrst)
Guido van Rossum (@gvanrossum) <guido@python.org>
Dmytro Sadovnychyi (@sadovnychyi) <jedi@dmit.ro>
Cristi Burcă (@scribu)
bstaint (@bstaint)
Mathias Rav (@Mortal) <rav@cs.au.dk>
Daniel Fiterman (@dfit99) <fitermandaniel2@gmail.com>
Simon Ruggier (@sruggier)
Élie Gouzien (@ElieGouzien)
- Danilo Bargen (@dbrgn) <mail@dbrgn.ch>
- Laurens Van Houtven (@lvh) <_@lvh.cc>
- Aldo Stracquadanio (@Astrac) <aldo.strac@gmail.com>
- Jean-Louis Fuchs (@ganwell) <ganwell@fangorn.ch>
- tek (@tek)
- Yasha Borevich (@jjay) <j.borevich@gmail.com>
- Aaron Griffin <aaronmgriffin@gmail.com>
- andviro (@andviro)
- Mike Gilbert (@floppym) <floppym@gentoo.org>
- Aaron Meurer (@asmeurer) <asmeurer@gmail.com>
- Lubos Trilety <ltrilety@redhat.com>
- Akinori Hattori (@hattya) <hattya@gmail.com>
- srusskih (@srusskih)
- Steven Silvester (@blink1073)
- Colin Duquesnoy (@ColinDuquesnoy) <colin.duquesnoy@gmail.com>
- Jorgen Schaefer (@jorgenschaefer) <contact@jorgenschaefer.de>
- Fredrik Bergroth (@fbergroth)
- Mathias Fußenegger (@mfussenegger)
- Syohei Yoshida (@syohex) <syohex@gmail.com>
- ppalucky (@ppalucky)
- immerrr (@immerrr) immerrr@gmail.com
- Albertas Agejevas (@alga)
- Savor d'Isavano (@KenetJervet) <newelevenken@163.com>
- Phillip Berndt (@phillipberndt) <phillip.berndt@gmail.com>
- Ian Lee (@IanLee1521) <IanLee1521@gmail.com>
- Farkhad Khatamov (@hatamov) <comsgn@gmail.com>
- Kevin Kelley (@kelleyk) <kelleyk@kelleyk.net>
- Sid Shanker (@squidarth) <sid.p.shanker@gmail.com>
- Reinoud Elhorst (@reinhrst)
- Guido van Rossum (@gvanrossum) <guido@python.org>
- Dmytro Sadovnychyi (@sadovnychyi) <jedi@dmit.ro>
- Cristi Burcă (@scribu)
- bstaint (@bstaint)
- Mathias Rav (@Mortal) <rav@cs.au.dk>
- Daniel Fiterman (@dfit99) <fitermandaniel2@gmail.com>
- Simon Ruggier (@sruggier)
- Élie Gouzien (@ElieGouzien)
- Robin Roth (@robinro)
- Malte Plath (@langsamer)
- Anton Zub (@zabulazza)
- Maksim Novikov (@m-novikov) <mnovikov.work@gmail.com>
- Tobias Rzepka (@TobiasRzepka)
- micbou (@micbou)
- Dima Gerasimov (@karlicoss) <karlicoss@gmail.com>
- Max Woerner Chase (@mwchase) <max.chase@gmail.com>
- Johannes Maria Frank (@jmfrank63) <jmfrank63@gmail.com>
- Shane Steinert-Threlkeld (@shanest) <ssshanest@gmail.com>
- Tim Gates (@timgates42) <tim.gates@iress.com>
- Lior Goldberg (@goldberglior)
- Ryan Clary (@mrclary)
- Max Mäusezahl (@mmaeusezahl) <maxmaeusezahl@googlemail.com>
- Vladislav Serebrennikov (@endilll)
- Andrii Kolomoiets (@muffinmad)
And a few more "anonymous" contributors.
Note: (@user) means a github user name.


@@ -3,6 +3,197 @@
Changelog
---------
Unreleased
++++++++++
0.17.2 (2020-07-17)
+++++++++++++++++++
- Added an option to pass environment variables to ``Environment``
- ``Project(...).path`` exists now
- Support for Python 3.9
- A few bugfixes
This will be the last release that supports Python 2 and Python 3.5.
``0.18.0`` will be Python 3.6+.
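A rough sketch of how the two additions above might be used. The exact keyword
for passing environment variables is not spelled out in this entry, so treat
``env_vars`` below as an assumption; the paths are placeholders::

    import jedi

    # ``Project(...).path`` is new in 0.17.2.
    project = jedi.Project('/path/to/my/project')
    print(project.path)

    # Assumed keyword: pass extra environment variables to the environment's
    # inference subprocess.
    env = jedi.create_environment('/usr/bin/python3',
                                  env_vars={'PYTHONPATH': '/extra/libs'})
    print(env.get_sys_path()[:3])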
0.17.1 (2020-06-20)
+++++++++++++++++++
- Django ``Model`` meta class support
- Django Manager support (completion on Managers/QuerySets)
- Added Django Stubs to Jedi, thanks to all contributors of the
`Django Stubs <https://github.com/typeddjango/django-stubs>`_ project
- Added ``SyntaxError.get_message``
- Python 3.9 support
- Bugfixes (mostly towards Generics)
0.17.0 (2020-04-14)
+++++++++++++++++++
- Added ``Project`` support. This allows a user to specify which folders Jedi
should work with.
- Added support for Refactoring. The following refactorings have been
implemented: ``Script.rename``, ``Script.inline``,
``Script.extract_variable`` and ``Script.extract_function``.
- Added ``Script.get_syntax_errors`` to display syntax errors in the current
script.
- Added code search capabilities both for individual files and projects. The
new functions are ``Project.search``, ``Project.complete_search``,
``Script.search`` and ``Script.complete_search``.
- Added ``Script.help`` to make it easier to display a help window to people.
Now returns pydoc information as well for Python keywords/operators. This
means that on the class keyword it will now return the docstring of Python's
builtin function ``help('class')``.
- The API documentation is now way more readable and complete. Check it out
under https://jedi.readthedocs.io. A lot of it has been rewritten.
- Removed Python 3.4 support
- Many bugfixes
This is likely going to be the last minor version that supports Python 2 and
Python 3.5. Bugfixes will be provided in 0.17.1+. The next minor/major version
will probably be Jedi 1.0.0.
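As a small sketch of the new ``Script.help`` behaviour described above (the
source snippet is made up)::

    import jedi

    source = 'class Foo:\n    pass\n'
    script = jedi.Script(source)

    # Help on the ``class`` keyword (line 1, column 0) should now include the
    # same pydoc text that ``help('class')`` prints.
    for name in script.help(1, 0):
        print(name.docstring()[:200])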
0.16.0 (2020-01-26)
+++++++++++++++++++
- **Added** ``Script.get_context`` to get information where you currently are.
- Completions/type inference of **Pytest fixtures**.
- Tensorflow, Numpy and Pandas completions should now be about **4-10x faster**
after the first time they are used.
- Dict key completions are working now. e.g. ``d = {1000: 3}; d[10`` will
expand to ``1000``.
- Completion for "proxies" works now. These are classes that have a
``__getattr__(self, name)`` method that does a ``return getattr(x, name)``.
after loading them initially.
- Goto on a function/attribute in a class now goes to the definition in its
super class.
- Big **Script API Changes**:
- The line and column parameters of ``jedi.Script`` are now deprecated
- ``completions`` deprecated, use ``complete`` instead
- ``goto_assignments`` deprecated, use ``goto`` instead
- ``goto_definitions`` deprecated, use ``infer`` instead
- ``call_signatures`` deprecated, use ``get_signatures`` instead
- ``usages`` deprecated, use ``get_references`` instead
- ``jedi.names`` deprecated, use ``jedi.Script(...).get_names()``
- ``BaseName.goto_assignments`` renamed to ``BaseName.goto``
- Add follow_imports to ``Name.goto``. Now its signature matches
``Script.goto``.
- **Python 2 support deprecated**. For this release it is best effort. Python 2
has reached the end of its life and now it's just about a smooth transition.
Bugs for Python 2 will not be fixed anymore and a third of the tests are
already skipped.
- Removed ``settings.no_completion_duplicates``. It wasn't tested and nobody
was probably using it anyway.
- Removed ``settings.use_filesystem_cache`` and
``settings.additional_dynamic_modules``, they have no usage anymore. Pretty
much nobody was probably using them.
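A minimal sketch of the dict-key completion and ``Script.get_context`` entries
above (the expected results are only hinted at in the comments)::

    import jedi

    # ``d[10`` should offer the literal key ``1000`` among the completions.
    completions = jedi.Script('d = {1000: 3}; d[10').complete()
    print([c.name for c in completions])

    # ``get_context`` reports where the given position is, here inside ``f``.
    context = jedi.Script('def f():\n    pass\n').get_context(2, 4)
    print(context.name)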
0.15.2 (2019-12-20)
+++++++++++++++++++
- Signatures are now detected a lot better
- Add fuzzy completions with ``Script(...).completions(fuzzy=True)``
- Files bigger than one MB (about 20kLOC) get cropped to avoid getting
stuck completely.
- Many small Bugfixes
- A big refactoring around contexts/values
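For the fuzzy completions mentioned above, a small sketch using the current
method name (``Script.complete``; this entry still refers to the older
``completions`` call)::

    import jedi

    code = 'import collections\ncollections.dfdict'
    # With fuzzy=True, 'dfdict' can still match 'defaultdict'.
    print([c.name for c in jedi.Script(code).complete(fuzzy=True)])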
0.15.1 (2019-08-13)
+++++++++++++++++++
- Small bugfix and removal of a print statement
0.15.0 (2019-08-11)
+++++++++++++++++++
- Added file path completions, there's a **new** ``Completion.type`` now:
``path``. Example: ``'/ho`` -> ``'/home/``
- ``*args``/``**kwargs`` resolving. If possible Jedi replaces the parameters
with the actual alternatives.
- Better support for enums/dataclasses
- When using Interpreter, properties are now executed, since a lot of people
have complained about this. Discussion in #1299, #1347.
New APIs:
- ``Name.get_signatures() -> List[Signature]``. Signatures are similar to
``CallSignature``. ``Name.params`` is therefore deprecated.
- ``Signature.to_string()`` to format signatures.
- ``Signature.params -> List[ParamName]``, ParamName has the
following additional attributes ``infer_default()``, ``infer_annotation()``,
``to_string()``, and ``kind``.
- ``Name.execute() -> List[Name]``, makes it possible to infer
return values of functions.
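A short sketch of the new signature/execution APIs listed above (the values in
the comments are only the expected results)::

    import jedi

    source = 'def add(x=1, y=2):\n    return x + y\n'
    name = jedi.Script(source).get_names()[0]      # the function ``add``

    for signature in name.get_signatures():
        print(signature.to_string())               # e.g. add(x=1, y=2)
        print([p.name for p in signature.params])  # ['x', 'y']

    print([n.name for n in name.execute()])        # e.g. ['int']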
0.14.1 (2019-07-13)
+++++++++++++++++++
- CallSignature.index should now be working a lot better
- A couple of smaller bugfixes
0.14.0 (2019-06-20)
+++++++++++++++++++
- Added ``goto_*(prefer_stubs=True)`` as well as ``goto_*(only_stubs=True)``
- Stubs are used now for type inference
- Typeshed is used for better type inference
- Reworked Name.full_name, should have more correct return values
0.13.3 (2019-02-24)
+++++++++++++++++++
- Fixed an issue with embedded Python, see https://github.com/davidhalter/jedi-vim/issues/870
0.13.2 (2018-12-15)
+++++++++++++++++++
- Fixed a bug that led to Jedi spawning a lot of subprocesses.
0.13.1 (2018-10-02)
+++++++++++++++++++
- Bugfixes, because tensorflow completions were still slow.
0.13.0 (2018-10-02)
+++++++++++++++++++
- A small release. Some bug fixes.
- Remove Python 3.3 support. Python 3.3 support has been dropped by the Python
foundation.
- Default environments are now using the same Python version as the Python
process. In 0.12.x, we used to load the latest Python version on the system.
- Added ``include_builtins`` as a parameter to usages.
- ``goto_assignments`` has a new ``follow_builtin_imports`` parameter that
changes the previous behavior slightly.
0.12.1 (2018-06-30)
+++++++++++++++++++
- This release forces you to upgrade parso. If you don't, nothing will work
anymore. Otherwise changes should be limited to bug fixes. Unfortunately Jedi
still uses a few internals of parso that make it hard to keep compatibility
over multiple releases. Parso >=0.3.0 is going to be needed.
0.12.0 (2018-04-15)
+++++++++++++++++++
- Virtualenv/Environment support
- F-String Completion/Goto Support
- Cannot crash with segfaults anymore
- Cleaned up import logic
- Understand async/await and autocomplete it (including async generators)
- Better namespace completions
- Passing tests for Windows (including CI for Windows)
- Remove Python 2.6 support
0.11.1 (2017-12-14)
+++++++++++++++++++
- Parso update - the caching layer was broken
- Better usages - a lot of internal code was ripped out and improved.
0.11.0 (2017-09-20)
+++++++++++++++++++
@@ -28,7 +219,7 @@ Changelog
- Actual semantic completions for the complete Python syntax.
- Basic type inference for ``yield from`` PEP 380.
- PEP 484 support (most of the important features of it). Thanks Claude! (@reinhrst)
- Added ``get_line_code`` to ``Definition`` and ``Completion`` objects.
- Added ``get_line_code`` to ``Name`` and ``Completion`` objects.
- Completely rewritten the type inference engine.
- A new and better parser for (fast) parsing diffs of Python code.
@@ -36,12 +227,12 @@ Changelog
++++++++++++++++++
- The import logic has been rewritten to look more like Python's. There is now
an ``Evaluator.modules`` import cache, which resembles ``sys.modules``.
an ``InferState.modules`` import cache, which resembles ``sys.modules``.
- Integrated the parser of 2to3. This will make refactoring possible. It will
also be possible to check for error messages (like compiling an AST would give)
in the future.
- With the new parser, the evaluation also completely changed. It's now simpler
and more readable.
- With the new parser, the type inference also completely changed. It's now
simpler and more readable.
- Completely rewritten REPL completion.
- Added ``jedi.names``, a command to do static analysis. Thanks to that
sourcegraph guys for sponsoring this!


@@ -5,4 +5,4 @@ Pull Requests are great.
3. Add your name to AUTHORS.txt
4. Push to your fork and submit a pull request.
**Try to use the PEP8 style guide.**
**Try to use the PEP8 style guide** (and it's ok to have a line length of 100 characters).


@@ -8,8 +8,10 @@ include conftest.py
include pytest.ini
include tox.ini
include requirements.txt
include jedi/evaluate/compiled/fake/*.pym
include jedi/parser/python/grammar*.txt
recursive-include jedi/third_party *.pyi
include jedi/third_party/typeshed/LICENSE
include jedi/third_party/django-stubs/LICENSE.txt
include jedi/third_party/typeshed/README
recursive-include test *
recursive-include docs *
recursive-exclude * *.pyc


@@ -1,103 +1,107 @@
###################################################################
Jedi - an awesome autocompletion/static analysis library for Python
###################################################################
####################################################################################
Jedi - an awesome autocompletion, static analysis and refactoring library for Python
####################################################################################
.. image:: https://secure.travis-ci.org/davidhalter/jedi.png?branch=master
:target: http://travis-ci.org/davidhalter/jedi
:alt: Travis-CI build status
.. image:: http://isitmaintained.com/badge/open/davidhalter/jedi.svg
:target: https://github.com/davidhalter/jedi/issues
:alt: The percentage of open issues and pull requests
.. image:: https://coveralls.io/repos/davidhalter/jedi/badge.png?branch=master
.. image:: http://isitmaintained.com/badge/resolution/davidhalter/jedi.svg
:target: https://github.com/davidhalter/jedi/issues
:alt: The resolution time is the median time an issue or pull request stays open.
.. image:: https://travis-ci.org/davidhalter/jedi.svg?branch=master
:target: https://travis-ci.org/davidhalter/jedi
:alt: Linux Tests
.. image:: https://ci.appveyor.com/api/projects/status/mgva3bbawyma1new/branch/master?svg=true
:target: https://ci.appveyor.com/project/davidhalter/jedi/branch/master
:alt: Windows Tests
.. image:: https://coveralls.io/repos/davidhalter/jedi/badge.svg?branch=master
:target: https://coveralls.io/r/davidhalter/jedi
:alt: Coverage Status
:alt: Coverage status
.. image:: https://pepy.tech/badge/jedi
:target: https://pepy.tech/project/jedi
:alt: PyPI Downloads
*If you have specific questions, please add an issue or ask on* `stackoverflow
<https://stackoverflow.com/questions/tagged/python-jedi>`_ *with the label* ``python-jedi``.
Jedi is a static analysis tool for Python that is typically used in
IDEs/editors plugins. Jedi has a focus on autocompletion and goto
functionality. Other features include refactoring, code search and finding
references.
Jedi is a static analysis tool for Python that can be used in IDEs/editors. Its
historic focus is autocompletion, but does static analysis for now as well.
Jedi is fast and is very well tested. It understands Python on a deeper level
than all other static analysis frameworks for Python.
Jedi has support for two different goto functions. It's possible to search for
related names and to list all names in a Python file and infer them. Jedi
understands docstrings and you can use Jedi autocompletion in your REPL as
well.
Jedi uses a very simple API to connect with IDE's. There's a reference
implementation as a `VIM-Plugin <https://github.com/davidhalter/jedi-vim>`_,
which uses Jedi's autocompletion. We encourage you to use Jedi in your IDEs.
It's really easy.
Jedi has a simple API to work with. There is a reference implementation as a
`VIM-Plugin <https://github.com/davidhalter/jedi-vim>`_. Autocompletion in your
REPL is also possible, IPython uses it natively and for the CPython REPL you
can install it. Jedi is well tested and bugs should be rare.
Jedi can currently be used with the following editors/projects:
- Vim (jedi-vim_, YouCompleteMe_, deoplete-jedi_, completor.vim_)
- `Visual Studio Code`_ (via `Python Extension <https://marketplace.visualstudio.com/items?itemName=ms-python.python>`_)
- Emacs (Jedi.el_, company-mode_, elpy_, anaconda-mode_, ycmd_)
- Sublime Text (SublimeJEDI_ [ST2 + ST3], anaconda_ [only ST3])
- TextMate_ (Not sure if it's actually working)
- Kate_ version 4.13+ supports it natively, you have to enable it, though. [`proof
- Kate_ version 4.13+ supports it natively, you have to enable it, though. [`see
<https://projects.kde.org/projects/kde/applications/kate/repository/show?rev=KDE%2F4.13>`_]
- Atom_ (autocomplete-python-jedi_)
- SourceLair_
- `GNOME Builder`_ (with support for GObject Introspection)
- `Visual Studio Code`_ (via `Python Extension <https://marketplace.visualstudio.com/items?itemName=donjayamanne.python>`_)
- Gedit (gedi_)
- wdb_ - Web Debugger
- `Eric IDE`_ (Available as a plugin)
- `Ipython 6.0.0+ <http://ipython.readthedocs.io/en/stable/whatsnew/version6.html>`_
- `IPython 6.0.0+ <https://ipython.readthedocs.io/en/stable/whatsnew/version6.html>`_
and many more!
Here are some pictures taken from jedi-vim_:
.. image:: https://github.com/davidhalter/jedi/raw/master/docs/_screenshots/screenshot_complete.png
Completion for almost anything (Ctrl+Space).
Completion for almost anything:
.. image:: https://github.com/davidhalter/jedi/raw/master/docs/_screenshots/screenshot_function.png
Display of function/class bodies, docstrings.
Documentation:
.. image:: https://github.com/davidhalter/jedi/raw/master/docs/_screenshots/screenshot_pydoc.png
Pydoc support (Shift+k).
There is also support for goto and renaming.
Get the latest version from `github <https://github.com/davidhalter/jedi>`_
(master branch should always be kind of stable/working).
Docs are available at `https://jedi.readthedocs.org/en/latest/
<https://jedi.readthedocs.org/en/latest/>`_. Pull requests with documentation
enhancements and/or fixes are awesome and most welcome. Jedi uses `semantic
versioning <http://semver.org/>`_.
<https://jedi.readthedocs.org/en/latest/>`_. Pull requests with enhancements
and/or fixes are awesome and most welcome. Jedi uses `semantic versioning
<https://semver.org/>`_.
If you want to stay up-to-date (News / RFCs), please subscribe to this `github
thread <https://github.com/davidhalter/jedi/issues/1063>`_.:
Issues & Questions
==================
You can file issues and questions in the `issue tracker
<https://github.com/davidhalter/jedi/>`_. Alternatively you can also ask on
`Stack Overflow <https://stackoverflow.com/questions/tagged/python-jedi>`_ with
the label ``python-jedi``.
Installation
============
pip install jedi
`Check out the docs <https://jedi.readthedocs.org/en/latest/docs/installation.html>`_.
Note: This just installs the Jedi library, not the editor plugins. For
information about how to make it work with your editor, refer to the
corresponding documentation.
Features and Limitations
========================
You don't want to use ``pip``? Please refer to the `manual
<https://jedi.readthedocs.org/en/latest/docs/installation.html>`_.
Jedi's features are listed here:
`Features <https://jedi.readthedocs.org/en/latest/docs/features.html>`_.
Feature Support and Caveats
===========================
Jedi really understands your Python code. For a comprehensive list what Jedi
understands, see: `Features
<https://jedi.readthedocs.org/en/latest/docs/features.html>`_. A list of
caveats can be found on the same page.
You can run Jedi on cPython 2.6, 2.7, 3.3, 3.4 or 3.5 but it should also
understand/parse code older than those versions.
You can run Jedi on CPython 2.7 or 3.5+ but it should also
understand code that is older than those versions. Additionally you should be
able to use `Virtualenvs <https://jedi.readthedocs.org/en/latest/docs/api.html#environments>`_
very well.
Tips on how to use Jedi efficiently can be found `here
<https://jedi.readthedocs.org/en/latest/docs/features.html#recipes>`_.
@@ -105,51 +109,62 @@ Tips on how to use Jedi efficiently can be found `here
API
---
You can find the documentation for the `API here <https://jedi.readthedocs.org/en/latest/docs/plugin-api.html>`_.
You can find a comprehensive documentation for the
`API here <https://jedi.readthedocs.org/en/latest/docs/api.html>`_.
Autocompletion / Goto / Documentation
-------------------------------------
Autocompletion / Goto / Pydoc
-----------------------------
There are the following commands:
Please check the API for a good explanation. There are the following commands:
- ``jedi.Script.goto_assignments``
- ``jedi.Script.completions``
- ``jedi.Script.usages``
The returned objects are very powerful and really all you might need.
- ``jedi.Script.goto``
- ``jedi.Script.infer``
- ``jedi.Script.help``
- ``jedi.Script.complete``
- ``jedi.Script.get_references``
- ``jedi.Script.get_signatures``
- ``jedi.Script.get_context``
The returned objects are very powerful and are really all you might need.
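Below is a minimal sketch of these calls; the file name and source are made
up, and the results in the comments are only what one would typically expect::

    import jedi

    source = 'import json\njson.lo'
    script = jedi.Script(source, path='example.py')

    print([c.name for c in script.complete(2, 7)])   # e.g. ['load', 'loads']
    print(script.infer(1, 8))                        # the json module
    print(script.goto(2, 1))                         # where json comes from
    print(script.get_references(1, 8))               # every use of json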
Autocompletion in your REPL (IPython, etc.)
-------------------------------------------
Starting with Ipython `6.0.0` Jedi is a dependency of IPython. Autocompletion
in IPython is therefore possible without additional configuration.
Jedi is a dependency of IPython. Autocompletion in IPython with Jedi is
therefore possible without additional configuration.
It's possible to have Jedi autocompletion in REPL modes - `example video <https://vimeo.com/122332037>`_.
This means that in Python you can enable tab completion in a `REPL
Here is an `example video <https://vimeo.com/122332037>`_ showing how REPL
completion can look.
For the ``python`` shell you can enable tab completion in a `REPL
<https://jedi.readthedocs.org/en/latest/docs/usage.html#tab-completion-in-the-python-shell>`_.
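For the plain ``python`` shell, the linked page boils down to something like
the following startup snippet (the fallback branch is optional)::

    # Put this in the file referenced by $PYTHONSTARTUP.
    try:
        from jedi.utils import setup_readline
    except ImportError:
        # Fall back to the standard library completer if Jedi is missing.
        import readline
        import rlcompleter
        readline.parse_and_bind('tab: complete')
    else:
        # Tab completion in the interactive shell is now handled by Jedi.
        setup_readline()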
Static Analysis
---------------
Static Analysis / Linter
------------------------
To do all forms of static analysis, please try to use ``jedi.names``. It will
return a list of names that you can use to infer types and so on.
Linting is another thing that is going to be part of Jedi. For now you can try
an alpha version ``python -m jedi linter``. The API might change though and
it's still buggy. It's Jedi's goal to be smarter than classic linter and
understand ``AttributeError`` and other code issues.
For a lot of forms of static analysis, you can try to use
``jedi.Script(...).get_names``. It will return a list of names that you can
then filter and work with. There is also a way to list the syntax errors in a
file: ``jedi.Script.get_syntax_errors``.
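A small sketch of both calls on a made-up file (the broken ``def`` line is
intentional, to produce a syntax error)::

    import jedi

    source = 'import os\ndef foo(:\n    pass\n'
    script = jedi.Script(source, path='example.py')

    # Names to filter and work with.
    for name in script.get_names(all_scopes=True):
        print(name.type, name.name, name.line)

    # Syntax errors in the same file.
    for error in script.get_syntax_errors():
        print(error.line, error.column, error.get_message())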
Refactoring
-----------
Jedi's parser would support refactoring, but there's no API to use it right
now. If you're interested in helping out here, let me know. With the latest
parser changes, it should be very easy to actually make it work.
Jedi supports the following refactorings:
- ``jedi.Script.inline``
- ``jedi.Script.rename``
- ``jedi.Script.extract_function``
- ``jedi.Script.extract_variable``
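A rough sketch of one of these refactorings; ``rename`` returns a refactoring
object whose diff can be inspected before anything is written to disk::

    import jedi

    source = 'def f():\n    return 3\n\nf()\n'
    script = jedi.Script(source, path='example.py')

    # Rename ``f`` (line 1, column 4) to ``g`` and show the change as a diff.
    refactoring = script.rename(1, 4, new_name='g')
    print(refactoring.get_diff())
    # refactoring.apply() would write the changes to example.py.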
Code Search
-----------
There is support for module search with ``jedi.Script.search``, and project
search for ``jedi.Project.search``. The way to search is either by providing a
name like ``foo`` or by using dotted syntax like ``foo.bar``. Additionally you
can provide the API type like ``class foo.bar.Bar``. There are also the
functions ``jedi.Script.complete_search`` and ``jedi.Project.complete_search``.
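A minimal sketch of both search entry points; the project path and the
searched names are placeholders::

    import jedi

    project = jedi.Project('/path/to/a/project')
    # Plain names, dotted names and an API type prefix all work, as described
    # above.
    for name in project.search('class foo.bar.Bar'):
        print(name.module_path, name.line, name.description)

    # Completion-style search inside a single file:
    script = jedi.Script('def foobar():\n    pass\n', path='example.py')
    print([c.name for c in script.complete_search('foob')])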
Development
===========
@@ -157,43 +172,30 @@ Development
There's a pretty good and extensive `development documentation
<https://jedi.readthedocs.org/en/latest/docs/development.html>`_.
Testing
=======
The test suite depends on ``tox`` and ``pytest``::
The test suite uses ``pytest``::
pip install tox pytest
pip install pytest
To run the tests for all supported Python versions::
If you want to test only a specific Python version (e.g. Python 3.8), it is as
easy as::
tox
If you want to test only a specific Python version (e.g. Python 2.7), it's as
easy as ::
tox -e py27
Tests are also run automatically on `Travis CI
<https://travis-ci.org/davidhalter/jedi/>`_.
python3.8 -m pytest
For more detailed information visit the `testing documentation
<https://jedi.readthedocs.org/en/latest/docs/testing.html>`_
<https://jedi.readthedocs.org/en/latest/docs/testing.html>`_.
Acknowledgements
================
- Takafumi Arakaki (@tkf) for creating a solid test environment and a lot of
other things.
- Danilo Bargen (@dbrgn) for general housekeeping and being a good friend :).
- Guido van Rossum (@gvanrossum) for creating the parser generator pgen2
(originally used in lib2to3).
Thanks a lot to all the
`contributors <https://jedi.readthedocs.org/en/latest/docs/acknowledgements.html>`_!
.. _jedi-vim: https://github.com/davidhalter/jedi-vim
.. _youcompleteme: http://valloric.github.io/YouCompleteMe/
.. _youcompleteme: https://github.com/ycm-core/YouCompleteMe
.. _deoplete-jedi: https://github.com/zchee/deoplete-jedi
.. _completor.vim: https://github.com/maralla/completor.vim
.. _Jedi.el: https://github.com/tkf/emacs-jedi
@@ -205,11 +207,10 @@ Acknowledgements
.. _anaconda: https://github.com/DamnWidget/anaconda
.. _wdb: https://github.com/Kozea/wdb
.. _TextMate: https://github.com/lawrenceakka/python-jedi.tmbundle
.. _Kate: http://kate-editor.org
.. _Kate: https://kate-editor.org
.. _Atom: https://atom.io/
.. _autocomplete-python-jedi: https://atom.io/packages/autocomplete-python-jedi
.. _SourceLair: https://www.sourcelair.com
.. _GNOME Builder: https://wiki.gnome.org/Apps/Builder
.. _Visual Studio Code: https://code.visualstudio.com/
.. _gedi: https://github.com/isamert/gedi
.. _Eric IDE: http://eric-ide.python-projects.org
.. _Eric IDE: https://eric-ide.python-projects.org

appveyor.yml Normal file

@@ -0,0 +1,59 @@
environment:
matrix:
- TOXENV: py37
PYTHON_PATH: C:\Python37
JEDI_TEST_ENVIRONMENT: 37
- TOXENV: py37
PYTHON_PATH: C:\Python37
JEDI_TEST_ENVIRONMENT: 36
- TOXENV: py37
PYTHON_PATH: C:\Python37
JEDI_TEST_ENVIRONMENT: 35
- TOXENV: py37
PYTHON_PATH: C:\Python37
JEDI_TEST_ENVIRONMENT: 27
- TOXENV: py36
PYTHON_PATH: C:\Python36
JEDI_TEST_ENVIRONMENT: 37
- TOXENV: py36
PYTHON_PATH: C:\Python36
JEDI_TEST_ENVIRONMENT: 36
- TOXENV: py36
PYTHON_PATH: C:\Python36
JEDI_TEST_ENVIRONMENT: 35
- TOXENV: py36
PYTHON_PATH: C:\Python36
JEDI_TEST_ENVIRONMENT: 27
- TOXENV: py35
PYTHON_PATH: C:\Python35
JEDI_TEST_ENVIRONMENT: 37
- TOXENV: py35
PYTHON_PATH: C:\Python35
JEDI_TEST_ENVIRONMENT: 36
- TOXENV: py35
PYTHON_PATH: C:\Python35
JEDI_TEST_ENVIRONMENT: 35
- TOXENV: py35
PYTHON_PATH: C:\Python35
JEDI_TEST_ENVIRONMENT: 27
- TOXENV: py27
PYTHON_PATH: C:\Python27
JEDI_TEST_ENVIRONMENT: 37
- TOXENV: py27
PYTHON_PATH: C:\Python27
JEDI_TEST_ENVIRONMENT: 36
- TOXENV: py27
PYTHON_PATH: C:\Python27
JEDI_TEST_ENVIRONMENT: 35
- TOXENV: py27
PYTHON_PATH: C:\Python27
JEDI_TEST_ENVIRONMENT: 27
install:
- git submodule update --init --recursive
- set PATH=%PYTHON_PATH%;%PYTHON_PATH%\Scripts;%PATH%
- pip install tox
build_script:
- tox


@@ -1,18 +1,33 @@
import tempfile
import shutil
import os
import sys
from functools import partial
import pytest
import jedi
from jedi.api.environment import get_system_environment, InterpreterEnvironment
from jedi._compatibility import py_version
from test.helpers import test_dir
collect_ignore = ["setup.py"]
collect_ignore = [
    'setup.py',
    'jedi/__main__.py',
    'jedi/inference/compiled/subprocess/__main__.py',
    'build/',
    'test/examples',
]

if sys.version_info < (3, 6):
    # Python 2 not supported syntax
    collect_ignore.append('test/test_inference/test_mixed.py')
# The following hooks (pytest_configure, pytest_unconfigure) are used
# to modify `jedi.settings.cache_directory` because `clean_jedi_cache`
# has no effect during doctests. Without these hooks, doctests uses
# user's cache (e.g., ~/.cache/jedi/). We should remove this
# workaround once the problem is fixed in py.test.
# workaround once the problem is fixed in pytest.
#
# See:
# - https://github.com/davidhalter/jedi/pull/168
@@ -29,6 +44,12 @@ def pytest_addoption(parser):
parser.addoption("--warning-is-error", action='store_true',
help="Warnings are treated as errors.")
parser.addoption("--env", action='store',
help="Execute the tests in that environment (e.g. 35 for python3.5).")
parser.addoption("--interpreter-env", "-I", action='store_true',
help="Don't use subprocesses to guarantee having safe "
"code execution. Useful for debugging.")
def pytest_configure(config):
global jedi_cache_directory_orig, jedi_cache_directory_temp
@@ -70,3 +91,108 @@ def clean_jedi_cache(request):
    def restore():
        settings.cache_directory = old
        shutil.rmtree(tmp)
@pytest.fixture(scope='session')
def environment(request):
    version = request.config.option.env
    if version is None:
        version = os.environ.get('JEDI_TEST_ENVIRONMENT', str(py_version))

    if request.config.option.interpreter_env or version == 'interpreter':
        return InterpreterEnvironment()

    return get_system_environment(version[0] + '.' + version[1:])


@pytest.fixture(scope='session')
def Script(environment):
    return partial(jedi.Script, environment=environment)


@pytest.fixture(scope='session')
def ScriptWithProject(Script):
    project = jedi.Project(test_dir)
    return partial(jedi.Script, project=project)


@pytest.fixture(scope='session')
def get_names(Script):
    return lambda code, **kwargs: Script(code).get_names(**kwargs)


@pytest.fixture(scope='session', params=['goto', 'infer'])
def goto_or_infer(request, Script):
    return lambda code, *args, **kwargs: getattr(Script(code), request.param)(*args, **kwargs)


@pytest.fixture(scope='session', params=['goto', 'help'])
def goto_or_help(request, Script):
    return lambda code, *args, **kwargs: getattr(Script(code), request.param)(*args, **kwargs)


@pytest.fixture(scope='session', params=['goto', 'help', 'infer'])
def goto_or_help_or_infer(request, Script):
    return lambda code, *args, **kwargs: getattr(Script(code), request.param)(*args, **kwargs)


@pytest.fixture(scope='session')
def has_typing(environment):
    if environment.version_info >= (3, 5, 0):
        # This if is just needed to avoid that tests ever skip way more than
        # they should for all Python versions.
        return True

    script = jedi.Script('import typing', environment=environment)
    return bool(script.infer())


@pytest.fixture(scope='session')
def has_django(environment):
    script = jedi.Script('import django', environment=environment)
    return bool(script.infer())


@pytest.fixture(scope='session')
def jedi_path():
    return os.path.dirname(__file__)


@pytest.fixture()
def skip_python2(environment):
    if environment.version_info.major == 2:
        # This if is just needed to avoid that tests ever skip way more than
        # they should for all Python versions.
        pytest.skip()


@pytest.fixture()
def skip_pre_python38(environment):
    if environment.version_info < (3, 8):
        # This if is just needed to avoid that tests ever skip way more than
        # they should for all Python versions.
        pytest.skip()


@pytest.fixture()
def skip_pre_python37(environment):
    if environment.version_info < (3, 7):
        # This if is just needed to avoid that tests ever skip way more than
        # they should for all Python versions.
        pytest.skip()


@pytest.fixture()
def skip_pre_python35(environment):
    if environment.version_info < (3, 5):
        # This if is just needed to avoid that tests ever skip way more than
        # they should for all Python versions.
        pytest.skip()


@pytest.fixture()
def skip_pre_python36(environment):
    if environment.version_info < (3, 6):
        # This if is just needed to avoid that tests ever skip way more than
        # they should for all Python versions.
        pytest.skip()

deploy-master.sh Normal file → Executable file

@@ -21,12 +21,13 @@ rm -rf $PROJECT_NAME
git clone .. $PROJECT_NAME
cd $PROJECT_NAME
git checkout $BRANCH
git submodule update --init
# Test first.
tox
# Create tag
tag=v$(python -c "import $PROJECT_NAME; print($PROJECT_NAME.__version__)")
tag=v$(python3 -c "import $PROJECT_NAME; print($PROJECT_NAME.__version__)")
master_ref=$(git show-ref -s heads/$BRANCH)
tag_ref=$(git show-ref -s $tag || true)
@@ -36,17 +37,17 @@ if [[ $tag_ref ]]; then
exit 1
fi
else
git tag $tag
git tag -a $tag
git push --tags
fi
# Package and upload to PyPI
#rm -rf dist/ - Not needed anymore, because the folder is never reused.
echo `pwd`
python setup.py sdist bdist_wheel
python3 setup.py sdist bdist_wheel
# Maybe do a pip install twine before.
twine upload dist/*
cd $BASE_DIR
# Back in the development directory fetch tags.
git fetch --tags
# The tags have been pushed to this repo. Push the tags to github, now.
git push --tags

docs/_static/custom_style.css vendored Normal file

@@ -0,0 +1,9 @@
div.version {
color: black !important;
margin-top: -1.2em !important;
margin-bottom: .6em !important;
}
div.wy-side-nav-search {
padding-top: 0 !important;
}


@@ -1,4 +1,4 @@
<h3>Github</h3>
<iframe src="http://ghbtns.com/github-btn.html?user=davidhalter&repo=jedi&type=watch&count=true&size=large"
<iframe src="https://ghbtns.com/github-btn.html?user=davidhalter&repo=jedi&type=watch&count=true&size=large"
frameborder="0" scrolling="0" width="170" height="30" allowtransparency="true"></iframe>
<br><br>


@@ -1,37 +0,0 @@
Copyright (c) 2010 by Armin Ronacher.
Some rights reserved.
Redistribution and use in source and binary forms of the theme, with or
without modification, are permitted provided that the following conditions
are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials provided
with the distribution.
* The names of the contributors may not be used to endorse or
promote products derived from this software without specific
prior written permission.
We kindly ask you to only use these themes in an unmodified manner just
for Flask and Flask-related products, not for unrelated projects. If you
like the visual style and want to use it for your own projects, please
consider making some larger changes to the themes (such as changing
font faces, sizes, colors or margins).
THIS THEME IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS THEME, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.


@@ -1,28 +0,0 @@
{%- extends "basic/layout.html" %}
{%- block extrahead %}
{{ super() }}
{% if theme_touch_icon %}
<link rel="apple-touch-icon" href="{{ pathto('_static/' ~ theme_touch_icon, 1) }}" />
{% endif %}
<link media="only screen and (max-device-width: 480px)" href="{{
pathto('_static/small_flask.css', 1) }}" type= "text/css" rel="stylesheet" />
<a href="https://github.com/davidhalter/jedi">
<img style="position: absolute; top: 0; right: 0; border: 0;" src="https://s3.amazonaws.com/github/ribbons/forkme_right_red_aa0000.png" alt="Fork me on GitHub">
</a>
{% endblock %}
{%- block relbar2 %}{% endblock %}
{% block header %}
{{ super() }}
{% if pagename == 'index' %}
<div class=indexwrapper>
{% endif %}
{% endblock %}
{%- block footer %}
<div class="footer">
&copy; Copyright {{ copyright }}.
Created using <a href="http://sphinx.pocoo.org/">Sphinx</a>.
</div>
{% if pagename == 'index' %}
</div>
{% endif %}
{%- endblock %}


@@ -1,19 +0,0 @@
<h3>Related Topics</h3>
<ul>
<li><a href="{{ pathto(master_doc) }}">Documentation overview</a><ul>
{%- for parent in parents %}
<li><a href="{{ parent.link|e }}">{{ parent.title }}</a><ul>
{%- endfor %}
{%- if prev %}
<li>Previous: <a href="{{ prev.link|e }}" title="{{ _('previous chapter')
}}">{{ prev.title }}</a></li>
{%- endif %}
{%- if next %}
<li>Next: <a href="{{ next.link|e }}" title="{{ _('next chapter')
}}">{{ next.title }}</a></li>
{%- endif %}
{%- for parent in parents %}
</ul></li>
{%- endfor %}
</ul></li>
</ul>


@@ -1,394 +0,0 @@
/*
* flasky.css_t
* ~~~~~~~~~~~~
*
* :copyright: Copyright 2010 by Armin Ronacher.
* :license: Flask Design License, see LICENSE for details.
*/
{% set page_width = '940px' %}
{% set sidebar_width = '220px' %}
@import url("basic.css");
/* -- page layout ----------------------------------------------------------- */
body {
font-family: 'Georgia', serif;
font-size: 17px;
background-color: white;
color: #000;
margin: 0;
padding: 0;
}
div.document {
width: {{ page_width }};
margin: 30px auto 0 auto;
}
div.documentwrapper {
float: left;
width: 100%;
}
div.bodywrapper {
margin: 0 0 0 {{ sidebar_width }};
}
div.sphinxsidebar {
width: {{ sidebar_width }};
}
hr {
border: 1px solid #B1B4B6;
}
div.body {
background-color: #ffffff;
color: #3E4349;
padding: 0 30px 0 30px;
}
img.floatingflask {
padding: 0 0 10px 10px;
float: right;
}
div.footer {
width: {{ page_width }};
margin: 20px auto 30px auto;
font-size: 14px;
color: #888;
text-align: right;
}
div.footer a {
color: #888;
}
div.related {
display: none;
}
div.sphinxsidebar a {
color: #444;
text-decoration: none;
border-bottom: 1px dotted #999;
}
div.sphinxsidebar a:hover {
border-bottom: 1px solid #999;
}
div.sphinxsidebar {
font-size: 14px;
line-height: 1.5;
}
div.sphinxsidebarwrapper {
padding: 18px 10px;
}
div.sphinxsidebarwrapper p.logo {
padding: 0 0 20px 0;
margin: 0;
text-align: center;
}
div.sphinxsidebar h3,
div.sphinxsidebar h4 {
font-family: 'Garamond', 'Georgia', serif;
color: #444;
font-size: 24px;
font-weight: normal;
margin: 0 0 5px 0;
padding: 0;
}
div.sphinxsidebar h4 {
font-size: 20px;
}
div.sphinxsidebar h3 a {
color: #444;
}
div.sphinxsidebar p.logo a,
div.sphinxsidebar h3 a,
div.sphinxsidebar p.logo a:hover,
div.sphinxsidebar h3 a:hover {
border: none;
}
div.sphinxsidebar p {
color: #555;
margin: 10px 0;
}
div.sphinxsidebar ul {
margin: 10px 0;
padding: 0;
color: #000;
}
div.sphinxsidebar input {
border: 1px solid #ccc;
font-family: 'Georgia', serif;
font-size: 1em;
}
/* -- body styles ----------------------------------------------------------- */
a {
color: #004B6B;
text-decoration: underline;
}
a:hover {
color: #6D4100;
text-decoration: underline;
}
div.body h1,
div.body h2,
div.body h3,
div.body h4,
div.body h5,
div.body h6 {
font-family: 'Garamond', 'Georgia', serif;
font-weight: normal;
margin: 30px 0px 10px 0px;
padding: 0;
}
{% if theme_index_logo %}
div.indexwrapper h1 {
text-indent: -999999px;
background: url({{ theme_index_logo }}) no-repeat center center;
height: {{ theme_index_logo_height }};
}
{% endif %}
div.body h1 { margin-top: 0; padding-top: 0; font-size: 240%; }
div.body h2 { font-size: 180%; }
div.body h3 { font-size: 150%; }
div.body h4 { font-size: 130%; }
div.body h5 { font-size: 100%; }
div.body h6 { font-size: 100%; }
a.headerlink {
color: #ddd;
padding: 0 4px;
text-decoration: none;
}
a.headerlink:hover {
color: #444;
}
div.body p, div.body dd, div.body li {
line-height: 1.4em;
}
div.admonition {
background: #fafafa;
margin: 20px -30px;
padding: 10px 30px;
border-top: 1px solid #ccc;
border-bottom: 1px solid #ccc;
}
div.admonition tt.xref, div.admonition a tt {
border-bottom: 1px solid #fafafa;
}
dd div.admonition {
margin-left: -60px;
padding-left: 60px;
}
div.admonition p.admonition-title {
font-family: 'Garamond', 'Georgia', serif;
font-weight: normal;
font-size: 24px;
margin: 0 0 10px 0;
padding: 0;
line-height: 1;
}
div.admonition p.last {
margin-bottom: 0;
}
div.highlight {
background-color: white;
}
dt:target, .highlight {
background: #FAF3E8;
}
div.note {
background-color: #eee;
border: 1px solid #ccc;
}
div.seealso {
background-color: #ffc;
border: 1px solid #ff6;
}
div.topic {
background-color: #eee;
}
p.admonition-title {
display: inline;
}
p.admonition-title:after {
content: ":";
}
pre, tt {
font-family: 'Consolas', 'Menlo', 'Deja Vu Sans Mono', 'Bitstream Vera Sans Mono', monospace;
font-size: 0.9em;
}
img.screenshot {
}
tt.descname, tt.descclassname {
font-size: 0.95em;
}
tt.descname {
padding-right: 0.08em;
}
img.screenshot {
-moz-box-shadow: 2px 2px 4px #eee;
-webkit-box-shadow: 2px 2px 4px #eee;
box-shadow: 2px 2px 4px #eee;
}
table.docutils {
border: 1px solid #888;
-moz-box-shadow: 2px 2px 4px #eee;
-webkit-box-shadow: 2px 2px 4px #eee;
box-shadow: 2px 2px 4px #eee;
}
table.docutils td, table.docutils th {
border: 1px solid #888;
padding: 0.25em 0.7em;
}
table.field-list, table.footnote {
border: none;
-moz-box-shadow: none;
-webkit-box-shadow: none;
box-shadow: none;
}
table.footnote {
margin: 15px 0;
width: 100%;
border: 1px solid #eee;
background: #fdfdfd;
font-size: 0.9em;
}
table.footnote + table.footnote {
margin-top: -15px;
border-top: none;
}
table.field-list th {
padding: 0 0.8em 0 0;
}
table.field-list td {
padding: 0;
}
table.footnote td.label {
width: 0px;
padding: 0.3em 0 0.3em 0.5em;
}
table.footnote td {
padding: 0.3em 0.5em;
}
dl {
margin: 0;
padding: 0;
}
dl dd {
margin-left: 30px;
}
blockquote {
margin: 0 0 0 30px;
padding: 0;
}
ul, ol {
margin: 10px 0 10px 30px;
padding: 0;
}
pre {
background: #eee;
padding: 7px 30px;
margin: 15px -30px;
line-height: 1.3em;
}
dl pre, blockquote pre, li pre {
margin-left: -60px;
padding-left: 60px;
}
dl dl pre {
margin-left: -90px;
padding-left: 90px;
}
tt {
background-color: #ecf0f3;
color: #222;
/* padding: 1px 2px; */
}
tt.xref, a tt {
background-color: #FBFBFB;
border-bottom: 1px solid white;
}
a.reference {
text-decoration: none;
border-bottom: 1px dotted #004B6B;
}
a.reference:hover {
border-bottom: 1px solid #6D4100;
}
a.footnote-reference {
text-decoration: none;
font-size: 0.7em;
vertical-align: top;
border-bottom: 1px dotted #004B6B;
}
a.footnote-reference:hover {
border-bottom: 1px solid #6D4100;
}
a:hover tt {
background: #EEE;
}


@@ -1,70 +0,0 @@
/*
* small_flask.css_t
* ~~~~~~~~~~~~~~~~~
*
* :copyright: Copyright 2010 by Armin Ronacher.
* :license: Flask Design License, see LICENSE for details.
*/
body {
margin: 0;
padding: 20px 30px;
}
div.documentwrapper {
float: none;
background: white;
}
div.sphinxsidebar {
display: block;
float: none;
width: 102.5%;
margin: 50px -30px -20px -30px;
padding: 10px 20px;
background: #333;
color: white;
}
div.sphinxsidebar h3, div.sphinxsidebar h4, div.sphinxsidebar p,
div.sphinxsidebar h3 a {
color: white;
}
div.sphinxsidebar a {
color: #aaa;
}
div.sphinxsidebar p.logo {
display: none;
}
div.document {
width: 100%;
margin: 0;
}
div.related {
display: block;
margin: 0;
padding: 10px 0 20px 0;
}
div.related ul,
div.related ul li {
margin: 0;
padding: 0;
}
div.footer {
display: none;
}
div.bodywrapper {
margin: 0;
}
div.body {
min-height: 0;
padding: 0;
}


@@ -1,9 +0,0 @@
[theme]
inherit = basic
stylesheet = flasky.css
pygments_style = flask_theme_support.FlaskyStyle
[options]
index_logo =
index_logo_height = 120px
touch_icon =


@@ -1,125 +0,0 @@
"""
Copyright (c) 2010 by Armin Ronacher.
Some rights reserved.
Redistribution and use in source and binary forms of the theme, with or
without modification, are permitted provided that the following conditions
are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials provided
with the distribution.
* The names of the contributors may not be used to endorse or
promote products derived from this software without specific
prior written permission.
We kindly ask you to only use these themes in an unmodified manner just
for Flask and Flask-related products, not for unrelated projects. If you
like the visual style and want to use it for your own projects, please
consider making some larger changes to the themes (such as changing
font faces, sizes, colors or margins).
THIS THEME IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS THEME, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
"""
# flasky extensions. flasky pygments style based on tango style
from pygments.style import Style
from pygments.token import Keyword, Name, Comment, String, Error, \
Number, Operator, Generic, Whitespace, Punctuation, Other, Literal
class FlaskyStyle(Style):
background_color = "#f8f8f8"
default_style = ""
styles = {
# No corresponding class for the following:
#Text: "", # class: ''
Whitespace: "underline #f8f8f8", # class: 'w'
Error: "#a40000 border:#ef2929", # class: 'err'
Other: "#000000", # class 'x'
Comment: "italic #8f5902", # class: 'c'
Comment.Preproc: "noitalic", # class: 'cp'
Keyword: "bold #004461", # class: 'k'
Keyword.Constant: "bold #004461", # class: 'kc'
Keyword.Declaration: "bold #004461", # class: 'kd'
Keyword.Namespace: "bold #004461", # class: 'kn'
Keyword.Pseudo: "bold #004461", # class: 'kp'
Keyword.Reserved: "bold #004461", # class: 'kr'
Keyword.Type: "bold #004461", # class: 'kt'
Operator: "#582800", # class: 'o'
Operator.Word: "bold #004461", # class: 'ow' - like keywords
Punctuation: "bold #000000", # class: 'p'
# because special names such as Name.Class, Name.Function, etc.
# are not recognized as such later in the parsing, we choose them
# to look the same as ordinary variables.
Name: "#000000", # class: 'n'
Name.Attribute: "#c4a000", # class: 'na' - to be revised
Name.Builtin: "#004461", # class: 'nb'
Name.Builtin.Pseudo: "#3465a4", # class: 'bp'
Name.Class: "#000000", # class: 'nc' - to be revised
Name.Constant: "#000000", # class: 'no' - to be revised
Name.Decorator: "#888", # class: 'nd' - to be revised
Name.Entity: "#ce5c00", # class: 'ni'
Name.Exception: "bold #cc0000", # class: 'ne'
Name.Function: "#000000", # class: 'nf'
Name.Property: "#000000", # class: 'py'
Name.Label: "#f57900", # class: 'nl'
Name.Namespace: "#000000", # class: 'nn' - to be revised
Name.Other: "#000000", # class: 'nx'
Name.Tag: "bold #004461", # class: 'nt' - like a keyword
Name.Variable: "#000000", # class: 'nv' - to be revised
Name.Variable.Class: "#000000", # class: 'vc' - to be revised
Name.Variable.Global: "#000000", # class: 'vg' - to be revised
Name.Variable.Instance: "#000000", # class: 'vi' - to be revised
Number: "#990000", # class: 'm'
Literal: "#000000", # class: 'l'
Literal.Date: "#000000", # class: 'ld'
String: "#4e9a06", # class: 's'
String.Backtick: "#4e9a06", # class: 'sb'
String.Char: "#4e9a06", # class: 'sc'
String.Doc: "italic #8f5902", # class: 'sd' - like a comment
String.Double: "#4e9a06", # class: 's2'
String.Escape: "#4e9a06", # class: 'se'
String.Heredoc: "#4e9a06", # class: 'sh'
String.Interpol: "#4e9a06", # class: 'si'
String.Other: "#4e9a06", # class: 'sx'
String.Regex: "#4e9a06", # class: 'sr'
String.Single: "#4e9a06", # class: 's1'
String.Symbol: "#4e9a06", # class: 'ss'
Generic: "#000000", # class: 'g'
Generic.Deleted: "#a40000", # class: 'gd'
Generic.Emph: "italic #000000", # class: 'ge'
Generic.Error: "#ef2929", # class: 'gr'
Generic.Heading: "bold #000080", # class: 'gh'
Generic.Inserted: "#00A000", # class: 'gi'
Generic.Output: "#888", # class: 'go'
Generic.Prompt: "#745334", # class: 'gp'
Generic.Strong: "bold #000000", # class: 'gs'
Generic.Subheading: "bold #800080", # class: 'gu'
Generic.Traceback: "bold #a40000", # class: 'gt'
}

View File

@@ -13,13 +13,11 @@
import sys
import os
import datetime
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath('..'))
sys.path.append(os.path.abspath('_themes'))
# -- General configuration -----------------------------------------------------
@@ -29,7 +27,8 @@ sys.path.append(os.path.abspath('_themes'))
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.viewcode', 'sphinx.ext.todo',
'sphinx.ext.intersphinx', 'sphinx.ext.inheritance_diagram']
'sphinx.ext.intersphinx', 'sphinx.ext.inheritance_diagram',
'sphinx_rtd_theme', 'sphinx.ext.autosummary']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
@@ -45,7 +44,7 @@ master_doc = 'index'
# General information about the project.
project = u'Jedi'
copyright = u'2012 - {today.year}, Jedi contributors'.format(today=datetime.date.today())
copyright = u'jedi contributors'
import jedi
from jedi.utils import version_info
@@ -54,8 +53,8 @@ from jedi.utils import version_info
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '.'.join(str(x) for x in version_info()[:2])
# The short X.Y.Z version.
version = '.'.join(str(x) for x in version_info()[:3])
# The full version, including alpha/beta/rc tags.
release = jedi.__version__
@@ -98,12 +97,15 @@ pygments_style = 'sphinx'
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'flask'
html_theme = 'sphinx_rtd_theme'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
html_theme_options = {
'logo_only': True,
'style_nav_header_background': 'white',
}
# Add any paths that contain custom themes here, relative to this directory.
html_theme_path = ['_themes']
@@ -117,7 +119,7 @@ html_theme_path = ['_themes']
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
html_logo = '_static/logo.png'
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
@@ -129,6 +131,8 @@ html_theme_path = ['_themes']
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
html_css_files = ['custom_style.css']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
@@ -145,7 +149,7 @@ html_sidebars = {
#'relations.html',
'ghbuttons.html',
#'sourcelink.html',
#'searchbox.html'
'searchbox.html'
]
}
@@ -163,13 +167,13 @@ html_sidebars = {
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
html_show_sourcelink = False
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
html_show_sphinx = False
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
html_show_copyright = False
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
@@ -274,7 +278,8 @@ autodoc_default_flags = []
# -- Options for intersphinx module --------------------------------------------
intersphinx_mapping = {
'http://docs.python.org/': None,
'python': ('https://docs.python.org/', None),
'parso': ('https://parso.readthedocs.io/en/latest/', None),
}

View File

@@ -0,0 +1,66 @@
.. include:: global.rst
History & Acknowledgements
==========================
Acknowledgements
----------------
- Dave Halter for creating and maintaining Jedi & Parso.
- Takafumi Arakaki (@tkf) for creating a solid test environment and a lot of
other things.
- Danilo Bargen (@dbrgn) for general housekeeping and being a good friend :).
- Guido van Rossum (@gvanrossum) for creating the parser generator pgen2
(originally used in lib2to3).
- Thanks to all the :ref:`contributors <contributors>`.
A Little Bit of History
-----------------------
Written by Dave.
The Star Wars Jedi are awesome. My Jedi software tries to imitate a little bit
of the precognition the Jedi have. There's even an awesome `scene
<https://youtu.be/yHRJLIf7wMU>`_ of Monty Python Jedis :-).
But actually the name has not much to do with Star Wars. It's part of my
second name Jedidjah.
I actually started Jedi back in 2012, because there were no good solutions
available for VIM. Most auto-completion solutions just did not work well. The
only good solution was PyCharm. But I liked my good old VIM very much. There
was also a solution called Rope that did not work at all for me. So I decided
to write my own version of a completion engine.
The first idea was to execute non-dangerous code. But I soon realized that
this would not work. So I started to build a static analysis tool.
The biggest problem I had at the time was that I did not know a thing
about parsers. I did not even know the term static analysis. It turns
out that parsers are the foundation of a good static analysis tool. I of
course did not know that and tried to write my own poor version of a parser,
which I ended up throwing away two years later.
Because of my lack of knowledge, everything after 2012 and before 2020 was
basically refactoring. I rewrote the core parts of Jedi probably like 5-10
times. The last big rewrite (that I did twice) was the inclusion of
gradual typing and stubs.
I learned during that time that it is crucial to have a good understanding of
your problem. Otherwise you just end up doing it all over again. I only wrote
features in the beginning and in the end. Everything else was bugfixing and
refactoring. However, now I am really happy with the result. It works well,
bugfixes can be quick, and it is pretty much feature complete.
--------
I will leave you with a small anecdote that happened in 2012, if I remember
correctly. After I explained to Guido van Rossum how some parts of my
auto-completion work, he said:
*"Oh, that worries me..."*
Now that it is finished, I hope he likes it :-).
.. _contributors:
.. include:: ../../AUTHORS.txt

53
docs/docs/api-classes.rst Normal file
View File

@@ -0,0 +1,53 @@
.. include:: ../global.rst
.. _api-classes:
API Return Classes
------------------
Abstract Base Class
~~~~~~~~~~~~~~~~~~~
.. autoclass:: jedi.api.classes.BaseName
:members:
:show-inheritance:
Name
~~~~
.. autoclass:: jedi.api.classes.Name
:members:
:show-inheritance:
Completion
~~~~~~~~~~
.. autoclass:: jedi.api.classes.Completion
:members:
:show-inheritance:
BaseSignature
~~~~~~~~~~~~~
.. autoclass:: jedi.api.classes.BaseSignature
:members:
:show-inheritance:
Signature
~~~~~~~~~
.. autoclass:: jedi.api.classes.Signature
:members:
:show-inheritance:
ParamName
~~~~~~~~~
.. autoclass:: jedi.api.classes.ParamName
:members:
:show-inheritance:
Refactoring
~~~~~~~~~~~
.. autoclass:: jedi.api.refactoring.Refactoring
:members:
:show-inheritance:
.. autoclass:: jedi.api.errors.SyntaxError
:members:
:show-inheritance:

174
docs/docs/api.rst Normal file
View File

@@ -0,0 +1,174 @@
.. include:: ../global.rst
API Overview
============
.. note:: This documentation is mostly for plugin developers who want to
improve their editors/IDEs with Jedi.
.. _api:
The API consists of a few different parts:
- The main starting points for complete/goto: :class:`.Script` and
:class:`.Interpreter`. If you work with Jedi you want to understand these
classes first.
- :ref:`API Result Classes <api-classes>`
- :ref:`Python Versions/Virtualenv Support <environments>` with functions like
:func:`.find_system_environments` and :func:`.find_virtualenvs`
- A way to work with different :ref:`Folders / Projects <projects>`
- Helpful functions: :func:`.preload_module` and :func:`.set_debug_function`
The methods that you are most likely going to use to work with Jedi are the
following ones:
.. currentmodule:: jedi
.. autosummary::
:nosignatures:
Script.complete
Script.goto
Script.infer
Script.help
Script.get_signatures
Script.get_references
Script.get_context
Script.get_names
Script.get_syntax_errors
Script.rename
Script.inline
Script.extract_variable
Script.extract_function
Script.search
Script.complete_search
Project.search
Project.complete_search
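To give a rough feel for how a few of these calls fit together, here is a
minimal, hedged sketch (the code string, the file name ``example.py`` and the
new name ``counter`` are made up for this example)::

    import jedi

    source = 'import collections\ncounts = collections.Counter()\ncounts.most_c'

    script = jedi.Script(source, path='example.py')

    # Completions and goto work on 1-based (line, column) positions.
    print(script.complete(3, len('counts.most_c')))  # e.g. [<Completion: most_common>]
    print(script.goto(3, 1))                         # where ``counts`` is defined

    # Refactorings return a Refactoring object; get_diff() previews the change,
    # apply() would write it to the files.
    print(script.rename(2, 1, new_name='counter').get_diff())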
Script
------
.. autoclass:: jedi.Script
:members:
Interpreter
-----------
.. autoclass:: jedi.Interpreter
:members:
.. _projects:
Projects
--------
.. automodule:: jedi.api.project
.. autofunction:: jedi.get_default_project
.. autoclass:: jedi.Project
:members:
.. _environments:
Environments
------------
.. automodule:: jedi.api.environment
.. autofunction:: jedi.find_system_environments
.. autofunction:: jedi.find_virtualenvs
.. autofunction:: jedi.get_system_environment
.. autofunction:: jedi.create_environment
.. autofunction:: jedi.get_default_environment
.. autoexception:: jedi.InvalidPythonEnvironment
.. autoclass:: jedi.api.environment.Environment
:members:
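As a hedged sketch of how these functions are typically combined
(``/path/to/venv`` is a placeholder path)::

    import jedi

    # Point Jedi at a specific virtualenv ...
    env = jedi.create_environment('/path/to/venv')
    # ... or take the default environment it would pick anyway.
    default_env = jedi.get_default_environment()

    # Completions are then resolved against that environment's packages.
    script = jedi.Script('import os; os.pa', path='example.py', environment=env)
    print(script.complete(1, len('import os; os.pa')))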
Helper Functions
----------------
.. autofunction:: jedi.preload_module
.. autofunction:: jedi.set_debug_function
Errors
------
.. autoexception:: jedi.InternalError
.. autoexception:: jedi.RefactoringError
Examples
--------
Completions
~~~~~~~~~~~
.. sourcecode:: python
>>> import jedi
>>> code = '''import json; json.l'''
>>> script = jedi.Script(code, path='example.py')
>>> script
<Script: 'example.py' <SameEnvironment: 3.5.2 in /usr>>
>>> completions = script.complete(1, 19)
>>> completions
[<Completion: load>, <Completion: loads>]
>>> completions[1]
<Completion: loads>
>>> completions[1].complete
'oads'
>>> completions[1].name
'loads'
Type Inference / Goto
~~~~~~~~~~~~~~~~~~~~~
.. sourcecode:: python
>>> import jedi
>>> code = '''\
... def my_func():
... print 'called'
...
... alias = my_func
... my_list = [1, None, alias]
... inception = my_list[2]
...
... inception()'''
>>> script = jedi.Script(code)
>>>
>>> script.goto(8, 1)
[<Name full_name='__main__.inception', description='inception = my_list[2]'>]
>>>
>>> script.infer(8, 1)
[<Name full_name='__main__.my_func', description='def my_func'>]
References
~~~~~~~~~~
.. sourcecode:: python
>>> import jedi
>>> code = '''\
... x = 3
... if 1 == 2:
... x = 4
... else:
... del x'''
>>> script = jedi.Script(code)
>>> rns = script.get_references(5, 8)
>>> rns
[<Name full_name='__main__.x', description='x = 3'>,
<Name full_name='__main__.x', description='x = 4'>,
<Name full_name='__main__.x', description='del x'>]
>>> rns[1].line
3
>>> rns[1].column
4
Deprecations
------------
The deprecation process is as follows:
1. A deprecation is announced in the next major/minor release.
2. We wait either at least a year and at least two minor releases until we
remove the deprecated functionality.

1
docs/docs/changelog.rst Normal file
View File

@@ -0,0 +1 @@
.. include:: ../../CHANGELOG.rst

View File

@@ -7,7 +7,9 @@ Jedi Development
.. note:: This documentation is for Jedi developers who want to improve Jedi
itself, but have no idea how Jedi works. If you want to use Jedi for
your IDE, look at the `plugin api <plugin-api.html>`_.
your IDE, look at the `plugin api <api.html>`_.
It is also important to note that this documentation is pretty old and some
parts of it might not apply anymore.
Introduction
@@ -20,16 +22,12 @@ couldn't get rid of complexity. I know that **simple is better than complex**,
but unfortunately it sometimes requires complex solutions to understand complex
systems.
Since most of the Jedi internals have been written by me (David Halter), this
introduction is written mostly by me as well, because no one else understands
how Jedi works to the same level. That is actually also the reason for this
part of the documentation: to enable more people to work on the Jedi core.
In five chapters I'm trying to describe the internals of |jedi|:
In six chapters I'm trying to describe the internals of |jedi|:
- :ref:`The Jedi Core <core>`
- :ref:`Core Extensions <core-extensions>`
- :ref:`Imports & Modules <imports-modules>`
- :ref:`Stubs & Annotations <stubs>`
- :ref:`Caching & Recursions <caching-recursions>`
- :ref:`Helper modules <dev-helpers>`
@@ -45,76 +43,60 @@ The Jedi Core
The core of Jedi consists of three parts:
- :ref:`Parser <parser>`
- :ref:`Python code evaluation <evaluate>`
- :ref:`Python type inference <inference>`
- :ref:`API <dev-api>`
Most people are probably interested in :ref:`code evaluation <evaluate>`,
Most people are probably interested in :ref:`type inference <inference>`,
because that's where all the magic happens. I need to introduce the :ref:`parser
<parser>` first, because :mod:`jedi.evaluate` uses it extensively.
<parser>` first, because :mod:`jedi.inference` uses it extensively.
.. _parser:
Parser (parser/__init__.py)
Parser
~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: jedi.parser
Jedi used to have its own internal parser; this is now a separate project
called `parso <http://parso.readthedocs.io>`_.
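To get a feel for the parse tree, here is a tiny, hedged example using plain
parso (nothing Jedi-specific; the function name is made up)::

    import parso

    tree = parso.parse('def add(a, b):\n    return a + b\n')
    func = next(tree.iter_funcdefs())
    print(func.name.value)   # 'add'
    print(func.get_code())   # nodes round-trip back to the original source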
Parser Tree (parser/tree.py)
++++++++++++++++++++++++++++++++++++++++++++++++
The parser creates a syntax tree that |jedi| analyses and tries to understand.
The grammar that this parser uses is very similar to the official Python
`grammar files <https://docs.python.org/3/reference/grammar.html>`_.
.. automodule:: jedi.parser.tree
.. _inference:
Class inheritance diagram:
Type inference of python code (inference/__init__.py)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. inheritance-diagram::
Module
Class
Function
Lambda
Flow
ForStmt
Import
ExprStmt
Param
Name
CompFor
:parts: 1
.. automodule:: jedi.inference
.. _evaluate:
Evaluation of python code (evaluate/__init__.py)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: jedi.evaluate
Evaluation Representation (evaluate/representation.py)
Inference Values (inference/base_value.py)
++++++++++++++++++++++++++++++++++++++++++++++++++++++
.. automodule:: jedi.evaluate.representation
.. automodule:: jedi.inference.base_value
.. inheritance-diagram::
jedi.evaluate.instance.TreeInstance
jedi.evaluate.representation.ClassContext
jedi.evaluate.representation.FunctionContext
jedi.evaluate.representation.FunctionExecutionContext
jedi.inference.value.instance.TreeInstance
jedi.inference.value.klass.ClassValue
jedi.inference.value.function.FunctionValue
jedi.inference.value.function.FunctionExecutionContext
:parts: 1
.. _name_resolution:
Name resolution (evaluate/finder.py)
++++++++++++++++++++++++++++++++++++
Name resolution (inference/finder.py)
+++++++++++++++++++++++++++++++++++++
.. automodule:: jedi.evaluate.finder
.. automodule:: jedi.inference.finder
.. _dev-api:
API (api.py and api_classes.py)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
API (api/__init__.py and api/classes.py)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The API has been designed to be as easy to use as possible. The API
documentation can be found `here <plugin-api.html>`_. The API itself contains
documentation can be found `here <api.html>`_. The API itself contains
little code that needs to be mentioned here. Generally I'm trying to be
conservative with the API. I'd rather not add new API features if they are not
necessary, because it's much harder to deprecate stuff than to add it later.
@@ -128,8 +110,7 @@ Core Extensions
Core Extensions is a summary of the following topics:
- :ref:`Iterables & Dynamic Arrays <iterables>`
- :ref:`Dynamic Parameters <dynamic>`
- :ref:`Diff Parser <diff-parser>`
- :ref:`Dynamic Parameters <dynamic_params>`
- :ref:`Docstrings <docstrings>`
- :ref:`Refactoring <refactoring>`
@@ -139,49 +120,42 @@ without some features.
.. _iterables:
Iterables & Dynamic Arrays (evaluate/iterable.py)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Iterables & Dynamic Arrays (inference/value/iterable.py)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To understand Python on a deeper level, |jedi| needs to understand some of the
dynamic features of Python like lists that are filled after creation:
.. automodule:: jedi.evaluate.iterable
.. automodule:: jedi.inference.value.iterable
.. _dynamic:
.. _dynamic_params:
Parameter completion (evaluate/dynamic.py)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Parameter completion (inference/dynamic_params.py)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: jedi.evaluate.dynamic
.. automodule:: jedi.inference.dynamic_params
.. _diff-parser:
Diff Parser (parser/diff.py)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: jedi.parser.python.diff
.. _docstrings:
Docstrings (evaluate/docstrings.py)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Docstrings (inference/docstrings.py)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: jedi.evaluate.docstrings
.. automodule:: jedi.inference.docstrings
.. _refactoring:
Refactoring (evaluate/refactoring.py)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Refactoring (api/refactoring.py)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: jedi.refactoring
.. automodule:: jedi.api.refactoring
.. _imports-modules:
Imports & Modules
-------------------
-----------------
- :ref:`Modules <modules>`
@@ -191,19 +165,25 @@ Imports & Modules
.. _builtin:
Compiled Modules (evaluate/compiled.py)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Compiled Modules (inference/compiled.py)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: jedi.evaluate.compiled
.. automodule:: jedi.inference.compiled
.. _imports:
Imports (evaluate/imports.py)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Imports (inference/imports.py)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: jedi.evaluate.imports
.. automodule:: jedi.inference.imports
.. _stubs:
Stubs & Annotations (inference/gradual)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: jedi.inference.gradual
.. _caching-recursions:
@@ -226,19 +206,14 @@ Caching (cache.py)
Recursions (recursion.py)
~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: jedi.evaluate.recursion
.. automodule:: jedi.inference.recursion
.. _dev-helpers:
Helper Modules
---------------
--------------
Most other modules are not really central to how Jedi works. They all contain
relevant code, but if you understand the modules above, you pretty much
understand Jedi.
Python 2/3 compatibility (_compatibility.py)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: jedi._compatibility

View File

@@ -1,31 +1,30 @@
.. include:: ../global.rst
Features and Caveats
====================
Features and Limitations
========================
Jedi obviously supports autocompletion. It's also possible to get it working in
(:ref:`your REPL (IPython, etc.) <repl-completion>`).
Jedi's main API calls and features are:
Static analysis is also possible by using the command ``jedi.names``.
- Autocompletion: :meth:`.Script.complete`; It's also possible to get it
working in :ref:`your REPL (IPython, etc.) <repl-completion>`
- Goto/Type Inference: :meth:`.Script.goto` and :meth:`.Script.infer`
- Static Analysis: :meth:`.Script.get_names` and :meth:`.Script.get_syntax_errors`
- Refactorings: :meth:`.Script.rename`, :meth:`.Script.inline`,
:meth:`.Script.extract_variable` and :meth:`.Script.extract_function`
- Code Search: :meth:`.Script.search` and :meth:`.Project.search`
The Jedi Linter is currently in an alpha version and can be tested by calling
``python -m jedi linter``.
Basic Features
--------------
Jedi would in theory support refactoring, but we have never publicized it,
because it's not production ready. If you're interested in helping out here,
let me know. With the latest parser changes, it should be very easy to actually
make it work.
General Features
----------------
- python 2.6+ and 3.3+ support
- ignores syntax errors and wrong indentation
- can deal with complex module / function / class structures
- virtualenv support
- can infer function arguments from sphinx, epydoc and basic numpydoc docstrings,
and PEP0484-style type hints (:ref:`type hinting <type-hinting>`)
- Python 2.7 and 3.5+ support
- Ignores syntax errors and wrong indentation
- Can deal with complex module / function / class structures
- Great ``virtualenv``/``venv`` support
- Works great with Python's :ref:`type hinting <type-hinting>`,
- Understands stub files
- Can infer function arguments for sphinx, epydoc and basic numpydoc docstrings
- Is overall a very solid piece of software that has been refined for a long
time. Bug reports are very welcome and are usually fixed within a few weeks.
Supported Python Features
@@ -40,224 +39,72 @@ Supported Python Features
- ``*args`` / ``**kwargs``
- decorators / lambdas / closures
- generators / iterators
- some descriptors: property / staticmethod / classmethod
- descriptors: property / staticmethod / classmethod / custom descriptors
- some magic methods: ``__call__``, ``__iter__``, ``__next__``, ``__get__``,
``__getitem__``, ``__init__``
- ``list.append()``, ``set.add()``, ``list.extend()``, etc.
- (nested) list comprehensions / ternary expressions
- relative imports
- ``getattr()`` / ``__getattr__`` / ``__getattribute__``
- function annotations (py3k feature, are ignored right now, but being parsed.
I don't know what to do with them.)
- class decorators (py3k feature, are being ignored too, until I find a use
case, that doesn't work with |jedi|)
- simple/usual ``sys.path`` modifications
- function annotations
- simple/typical ``sys.path`` modifications
- ``isinstance`` checks for if/while/assert
- namespace packages (includes ``pkgutil`` and ``pkg_resources`` namespaces)
- namespace packages (includes ``pkgutil``, ``pkg_resources`` and PEP420 namespaces)
- Django / Flask / Buildout support
- Understands Pytest fixtures
Unsupported Features
--------------------
Limitations
-----------
Not yet implemented:
In general Jedi's limits are quite high, but for very big projects or very
complex code, Jedi sometimes intentionally stops type inference to avoid
hanging for a long time.
- manipulations of instances outside the instance variables without using
methods
- implicit namespace packages (Python 3.3+, `PEP 420 <https://www.python.org/dev/peps/pep-0420/>`_)
Additionally there are some Python patterns Jedi does not support. This is
intentional, and the list below should be complete:
Will probably never be implemented:
- metaclasses (how could an auto-completion ever support this)
- Arbitrary metaclasses: Some metaclasses like enums and dataclasses are
reimplemented in Jedi to make them work. Most of the time stubs are good
enough to get type inference working, even when metaclasses are involved.
- ``setattr()``, ``__import__()``
- writing to some dicts: ``globals()``, ``locals()``, ``object.__dict__``
- evaluating ``if`` / ``while`` / ``del``
- Writing to some dicts: ``globals()``, ``locals()``, ``object.__dict__``
- Manipulations of instances outside the instance variables without using
methods
Caveats
-------
**Malformed Syntax**
Syntax errors and other strange stuff may lead to undefined behaviour of the
completion. |jedi| is **NOT** a Python compiler that tries to correct you. It
is a tool that wants to help you. But **YOU** have to know Python, not |jedi|.
**Legacy Python 2 Features**
This framework should work for both Python 2/3. However, some things were just
not as *pythonic* in Python 2 as things should be. To keep things simple, some
older Python 2 features have been left out:
- Classes: Always Python 3 like, therefore all classes inherit from ``object``.
- Generators: No ``next()`` method. The ``__next__()`` method is used instead.
**Slow Performance**
Performance Issues
~~~~~~~~~~~~~~~~~~
Importing ``numpy`` can be quite slow sometimes, as well as loading the
builtins the first time. If you want to speed things up, you could write import
hooks in |jedi|, which preload stuff. However, once loaded, this is not a
problem anymore. The same is true for huge modules like ``PySide``, ``wx``,
etc.
builtins the first time. If you want to speed things up, you could preload
libraries in |jedi| with :func:`.preload_module`. However, once loaded, this
should not be a problem anymore. The same is true for huge modules like
``PySide``, ``wx``, ``tensorflow``, ``pandas``, etc.
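For example, an editor plugin could warm the cache right after importing Jedi;
a hedged sketch (which modules are worth preloading depends on your project)::

    import jedi

    # Parse and cache these modules now, so the first completion that touches
    # them does not pay the full analysis cost.
    jedi.preload_module('numpy', 'pandas')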
**Security**
Jedi does not have a very good cache layer. This is probably the biggest and
only architectural `issue <https://github.com/davidhalter/jedi/issues/1059>`_ in
Jedi. Unfortunately it is not easy to change that. Dave Halter is thinking
about rewriting Jedi in Rust, but it has taken Jedi more than 8 years to reach
version 1.0, so a rewrite will probably also take years.
Security is an important issue for |jedi|. Therefore no Python code is
executed. As long as you write pure python, everything is evaluated
statically. But: If you use builtin modules (``c_builtin``) there is no other
option than to execute those modules. However: Execute isn't that critical (as
e.g. in pythoncomplete, which used to execute *every* import!), because it
means one import and no more. So basically the only dangerous thing is using
the import itself. If your ``c_builtin`` uses some strange initializations, it
might be dangerous. But if it does you're screwed anyways, because eventually
you're going to execute your code, which executes the import.
Security
--------
For :class:`.Script`
~~~~~~~~~~~~~~~~~~~~
Recipes
-------
Security is an important topic for |jedi|. By default, no code is executed
within Jedi. As long as you write pure Python, everything is inferred
statically. If you enable ``load_unsafe_extensions=True`` for your
:class:`.Project` and you use builtin modules (``c_builtin``) Jedi will execute
those modules. If you don't trust a code base, please do not enable that
option. It might lead to arbitrary code execution.
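A hedged sketch of what that looks like in practice
(``/path/to/trusted/project`` is a placeholder)::

    import jedi

    # Keep load_unsafe_extensions at its default (False) unless you fully
    # trust the code base; True lets Jedi import, i.e. execute, compiled
    # extension modules it finds there.
    project = jedi.Project('/path/to/trusted/project',
                           load_unsafe_extensions=False)
    script = jedi.Script('import json; json.lo', path='example.py',
                         project=project)
    print(script.complete(1, len('import json; json.lo')))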
Here are some tips on how to use |jedi| efficiently.
For :class:`.Interpreter`
~~~~~~~~~~~~~~~~~~~~~~~~~
.. _type-hinting:
Type Hinting
~~~~~~~~~~~~
If |jedi| cannot detect the type of a function argument correctly (due to the
dynamic nature of Python), you can help it by hinting the type using
one of the following docstring/annotation syntax styles:
**PEP-0484 style**
https://www.python.org/dev/peps/pep-0484/
function annotations (Python 3 only; Python 2 function annotations via
comments are planned but not yet implemented)
::
def myfunction(node: ProgramNode, foo: str) -> None:
"""Do something with a ``node``.
"""
node.| # complete here
assignment, for-loop and with-statement type hints (all python versions).
Note that the type hints must be on the same line as the statement
::
x = foo() # type: int
x, y = 2, 3 # type: typing.Optional[int], typing.Union[int, str] # typing module is mostly supported
for key, value in foo.items(): # type: str, Employee # note that Employee must be in scope
pass
with foo() as f: # type: int
print(f + 3)
Most of the features in PEP-0484 are supported including the typing module
(for python < 3.5 you have to do ``pip install typing`` to use these),
and forward references.
Things that are missing (and this is not an exhaustive list; some of these
are planned, others might be hard to implement and provide little worth):
- annotating functions with comments: https://www.python.org/dev/peps/pep-0484/#suggested-syntax-for-python-2-7-and-straddling-code
- understanding ``typing.cast()``
- stub files: https://www.python.org/dev/peps/pep-0484/#stub-files
- ``typing.Callable``
- ``typing.TypeVar``
- User defined generic types: https://www.python.org/dev/peps/pep-0484/#user-defined-generic-types
**Sphinx style**
http://sphinx-doc.org/domains.html#info-field-lists
::
def myfunction(node, foo):
"""Do something with a ``node``.
:type node: ProgramNode
:param str foo: foo parameter description
"""
node.| # complete here
**Epydoc**
http://epydoc.sourceforge.net/manual-fields.html
::
def myfunction(node):
"""Do something with a ``node``.
@type node: ProgramNode
"""
node.| # complete here
**Numpydoc**
https://github.com/numpy/numpy/blob/master/doc/HOWTO_DOCUMENT.rst.txt
In order to support the numpydoc format, you need to install the `numpydoc
<https://pypi.python.org/pypi/numpydoc>`__ package.
::
def foo(var1, var2, long_var_name='hi'):
r"""A one-line summary that does not use variable names or the
function name.
...
Parameters
----------
var1 : array_like
Array_like means all those objects -- lists, nested lists,
etc. -- that can be converted to an array. We can also
refer to variables like `var1`.
var2 : int
The type above can either refer to an actual Python type
(e.g. ``int``), or describe the type of the variable in more
detail, e.g. ``(N,) ndarray`` or ``array_like``.
long_variable_name : {'hi', 'ho'}, optional
Choices in brackets, default first when optional.
...
"""
var2.| # complete here
A little history
----------------
The Star Wars Jedi are awesome. My Jedi software tries to imitate a little bit
of the precognition the Jedi have. There's even an awesome `scene
<http://www.youtube.com/watch?v=5BDO3pyavOY>`_ of Monty Python Jedis :-).
But actually the name doesn't have much to do with Star Wars. It's part of my
second name.
After I explained to Guido van Rossum how some parts of my auto-completion
work, he said (we drank a beer or two):
*"Oh, that worries me..."*
When it's finished, I hope he'll like it :-)
I actually started Jedi because there were no good solutions available for VIM.
Most auto-completions just didn't work well. The only good solution was PyCharm.
But I like my good old VIM. Rope was never really intended to be an
auto-completion (and also I really hate project folders for my Python scripts).
It's more of a refactoring suite. So I decided to do my own version of a
completion, which would execute non-dangerous code. But I soon realized that
this wouldn't work. So I built an extremely recursive thing which understands
many of Python's key features.
By the way, I really tried to make it as understandable as possible. But I
think understanding it might still take quite some time, because of its
recursive nature.
If you want security for :class:`.Interpreter`, ``do not`` use it. Jedi does
execute properties and in general is not very careful to avoid code execution.
This is intentional: Most people trust the code bases they have imported,
because at that point a malicious code base would have had code execution
already.
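A hedged sketch of typical :class:`.Interpreter` usage (the namespace content
is made up)::

    import jedi

    namespace = {'numbers': [1, 2, 3]}

    # Interpreter completes against live objects, so evaluating descriptors
    # or properties on them may execute their code.
    interpreter = jedi.Interpreter('numbers.app', [namespace])
    print(interpreter.complete())   # e.g. [<Completion: append>]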

View File

@@ -3,6 +3,15 @@
Installation and Configuration
==============================
.. warning:: Most people will want to install Jedi as a submodule/vendored and
not through pip/system wide. The reason for this is that it makes sense that
the plugin that uses Jedi always has access to it. Otherwise Jedi will not
work properly when virtualenvs are activated. So please read the
documentation of your editor/IDE plugin to install Jedi.
For plugin developers, Jedi works best if it is always available. Vendoring
is a pretty good option for that.
You can either include |jedi| as a submodule in your text editor plugin (like
jedi-vim_ does by default), or you can install it systemwide.
@@ -11,8 +20,16 @@ jedi-vim_ does by default), or you can install it systemwide.
editor, refer to the corresponding documentation.
The preferred way
-----------------
The normal way
--------------
Most people use Jedi with :ref:`editor plugins <editor-plugins>`. Typically
you install Jedi by installing an editor plugin. No further steps are needed;
just take a look at the instructions for the plugin.
With pip
--------
On any system you can install |jedi| directly from the Python package index
using pip::
@@ -49,7 +66,7 @@ Debian
~~~~~~
Debian packages are available in the `unstable repository
<http://packages.debian.org/search?keywords=python%20jedi>`__.
<https://packages.debian.org/search?keywords=python%20jedi>`__.
Others
~~~~~~
@@ -57,19 +74,15 @@ Others
We are in the discussion of adding |jedi| to the Fedora repositories.
Manual installation from a downloaded package
Manual installation from GitHub
---------------------------------------------
If you prefer not to use an automated package installer, you can `download
<https://github.com/davidhalter/jedi/archive/master.zip>`__ a current copy of
|jedi| and install it manually.
To install it, navigate to the directory containing `setup.py` on your console
and type::
If you prefer not to use an automated package installer, you can clone the source from GitHub and install it manually. To install it, run these commands::
git clone --recurse-submodules https://github.com/davidhalter/jedi
cd jedi
sudo python setup.py install
Inclusion as a submodule
------------------------

View File

@@ -1,36 +0,0 @@
.. _xxx:
Parser Tree
===========
Usage
-----
.. automodule:: jedi.parser.python
:members:
:undoc-members:
Parser Tree Base Class
----------------------
All nodes and leaves have these methods/properties:
.. autoclass:: jedi.parser.tree.NodeOrLeaf
:members:
:undoc-members:
Python Parser Tree
------------------
.. automodule:: jedi.parser.python.tree
:members:
:undoc-members:
:show-inheritance:
Utility
-------
.. autofunction:: jedi.parser.tree.search_ancestor

View File

@@ -1,10 +0,0 @@
.. include:: ../global.rst
.. _plugin-api-classes:
API Return Classes
------------------
.. automodule:: jedi.api.classes
:members:
:undoc-members:

View File

@@ -1,100 +0,0 @@
.. include:: ../global.rst
The Plugin API
==============
.. currentmodule:: jedi
Note: This documentation is for plugin developers who want to improve their
editor/IDE autocompletion.
If you want to use |jedi|, you first need to ``import jedi``. You then have
direct access to the :class:`.Script`. You can then call the functions
documented here. These functions return :ref:`API classes
<plugin-api-classes>`.
Deprecations
------------
The deprecation process is as follows:
1. A deprecation is announced in the next major/minor release.
2. We wait either at least a year & at least two minor releases until we remove
the deprecated functionality.
API documentation
-----------------
API Interface
~~~~~~~~~~~~~
.. automodule:: jedi.api
:members:
:undoc-members:
Examples
--------
Completions:
.. sourcecode:: python
>>> import jedi
>>> source = '''import json; json.l'''
>>> script = jedi.Script(source, 1, 19, '')
>>> script
<jedi.api.Script object at 0x2121b10>
>>> completions = script.completions()
>>> completions
[<Completion: load>, <Completion: loads>]
>>> completions[1]
<Completion: loads>
>>> completions[1].complete
'oads'
>>> completions[1].name
'loads'
Definitions / Goto:
.. sourcecode:: python
>>> import jedi
>>> source = '''def my_func():
... print 'called'
...
... alias = my_func
... my_list = [1, None, alias]
... inception = my_list[2]
...
... inception()'''
>>> script = jedi.Script(source, 8, 1, '')
>>>
>>> script.goto_assignments()
[<Definition inception=my_list[2]>]
>>>
>>> script.goto_definitions()
[<Definition def my_func>]
Related names:
.. sourcecode:: python
>>> import jedi
>>> source = '''x = 3
... if 1 == 2:
... x = 4
... else:
... del x'''
>>> script = jedi.Script(source, 5, 8, '')
>>> rns = script.related_names()
>>> rns
[<RelatedName x@3,4>, <RelatedName x@1,0>]
>>> rns[0].start_pos
(3, 4)
>>> rns[0].is_keyword
False
>>> rns[0].text
'x'

View File

@@ -1,106 +0,0 @@
This file is the start of the documentation of how static analysis works.
Below is a list of parser names that are used within nodes_to_execute.
------------ cared for:
global_stmt
exec_stmt # no priority
assert_stmt
if_stmt
while_stmt
for_stmt
try_stmt
(except_clause)
with_stmt
(with_item)
(with_var)
print_stmt
del_stmt
return_stmt
raise_stmt
yield_expr
file_input
funcdef
param
old_lambdef
lambdef
import_name
import_from
(import_as_name)
(dotted_as_name)
(import_as_names)
(dotted_as_names)
(dotted_name)
classdef
comp_for
(comp_if) ?
decorator
----------- add basic
test
or_test
and_test
not_test
expr
xor_expr
and_expr
shift_expr
arith_expr
term
factor
power
atom
comparison
expr_stmt
testlist
testlist1
testlist_safe
----------- special care:
# mostly depends on how we handle the other ones.
testlist_star_expr # should probably just work with expr_stmt
star_expr
exprlist # just ignore? then names are just resolved. Strange anyway, bc expr is not really allowed in the list, typically.
----------- ignore:
suite
subscriptlist
subscript
simple_stmt
?? sliceop # can probably just be added.
testlist_comp # prob ignore and care about it with atom.
dictorsetmaker
trailer
decorators
decorated
# always execute function arguments? -> no problem with stars.
# Also arglist and argument are different in different grammars.
arglist
argument
----------- remove:
tname # only exists in current Jedi parser. REMOVE!
tfpdef # python 2: tuple assignment; python 3: annotation
vfpdef # reduced in python 3 and therefore not existing.
tfplist # not in 3
vfplist # not in 3
--------- not existing with parser reductions.
small_stmt
import_stmt
flow_stmt
compound_stmt
stmt
pass_stmt
break_stmt
continue_stmt
comp_op
augassign
old_test
typedargslist # afaik becomes [param]
varargslist # dito
vname
comp_iter
test_nocond

View File

@@ -3,18 +3,14 @@
Jedi Testing
============
The test suite depends on ``tox`` and ``pytest``::
The test suite depends on ``pytest``::
pip install tox pytest
pip install pytest
To run the tests for all supported Python versions::
tox
If you want to test only a specific Python version (e.g. Python 2.7), it's as
If you want to test only a specific Python version (e.g. Python 3.8), it is as
easy as::
tox -e py27
python3.8 -m pytest
Tests are also run automatically on `Travis CI
<https://travis-ci.org/davidhalter/jedi/>`_.
@@ -28,8 +24,8 @@ simple and readable testing structure.
.. _blackbox:
Blackbox Tests (run.py)
~~~~~~~~~~~~~~~~~~~~~~~
Integration Tests (run.py)
~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: test.run

View File

@@ -1,15 +1,12 @@
.. include:: ../global.rst
End User Usage
==============
Using Jedi
==========
If you are not an IDE developer, the odds are that you just want to use
|jedi| as a browser plugin or in the shell. Yes that's :ref:`also possible
<repl-completion>`!
|jedi| can be used with a variety of plugins and software. It is also possible
to use |jedi| in the :ref:`Python shell or with IPython <repl-completion>`.
|jedi| is relatively young and can be used in a variety of Plugins and
Software. If your Editor/IDE is not among them, recommend |jedi| to your IDE
developers.
Below you can also find a list of :ref:`recipes for type hinting <recipes>`.
.. _editor-plugins:
@@ -17,64 +14,72 @@ developers.
Editor Plugins
--------------
Vim:
Vim
~~~
- jedi-vim_
- YouCompleteMe_
- deoplete-jedi_
Emacs:
Visual Studio Code
~~~~~~~~~~~~~~~~~~
- `Python Extension`_
Emacs
~~~~~
- Jedi.el_
- elpy_
- anaconda-mode_
Sublime Text 2/3:
Sublime Text 2/3
~~~~~~~~~~~~~~~~
- SublimeJEDI_ (ST2 & ST3)
- anaconda_ (only ST3)
SynWrite:
SynWrite
~~~~~~~~
- SynJedi_
TextMate:
TextMate
~~~~~~~~
- Textmate_ (Not sure if it's actually working)
Kate:
Kate
~~~~
- Kate_ version 4.13+ `supports it natively
<https://projects.kde.org/projects/kde/applications/kate/repository/entry/addons/kate/pate/src/plugins/python_autocomplete_jedi.py?rev=KDE%2F4.13>`__,
you have to enable it, though.
Visual Studio Code:
- `Python Extension`_
Atom:
Atom
~~~~
- autocomplete-python-jedi_
SourceLair:
- SourceLair_
GNOME Builder:
GNOME Builder
~~~~~~~~~~~~~
- `GNOME Builder`_ `supports it natively
<https://git.gnome.org/browse/gnome-builder/tree/plugins/jedi>`__,
and is enabled by default.
Gedit:
Gedit
~~~~~
- gedi_
Eric IDE:
Eric IDE
~~~~~~~~
- `Eric IDE`_ (Available as a plugin)
Web Debugger:
Web Debugger
~~~~~~~~~~~~
- wdb_
@@ -85,11 +90,14 @@ and many more!
Tab Completion in the Python Shell
----------------------------------
Starting with IPython `6.0.0`, Jedi is a dependency of IPython. Autocompletion
in IPython is therefore possible without additional configuration.
Jedi is a dependency of IPython. Autocompletion in IPython is therefore
possible without additional configuration.
Here is an `example video <https://vimeo.com/122332037>`_ of how REPL
completion can look in a different shell.
There are two different options for how you can use Jedi autocompletion in
your Python interpreter. One with your custom ``$HOME/.pythonrc.py`` file
your ``python`` interpreter. One with your custom ``$HOME/.pythonrc.py`` file
and one that uses ``PYTHONSTARTUP``.
Using ``PYTHONSTARTUP``
@@ -97,13 +105,139 @@ Using ``PYTHONSTARTUP``
.. automodule:: jedi.api.replstartup
Using a custom ``$HOME/.pythonrc.py``
Using a Custom ``$HOME/.pythonrc.py``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autofunction:: jedi.utils.setup_readline
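In other words, a minimal ``$HOME/.pythonrc.py`` could look roughly like this
(Python only runs it if ``PYTHONSTARTUP`` points at the file)::

    try:
        from jedi.utils import setup_readline
    except ImportError:
        # Fall back to the standard library completer if Jedi is not installed.
        import readline
        import rlcompleter
        readline.parse_and_bind('tab: complete')
    else:
        setup_readline()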
.. _recipes:
Recipes
-------
Here are some tips on how to use |jedi| efficiently.
.. _type-hinting:
Type Hinting
~~~~~~~~~~~~
If |jedi| cannot detect the type of a function argument correctly (due to the
dynamic nature of Python), you can help it by hinting the type using
one of the docstring/annotation styles below. **Only gradual typing will
always work**; all the docstring solutions are glorified hacks, and more
complicated cases will probably not work.
Official Gradual Typing (Recommended)
+++++++++++++++++++++++++++++++++++++
You can read a lot about Python's gradual typing system in the corresponding
PEPs like:
- `PEP 484 <https://www.python.org/dev/peps/pep-0484/>`_ as an introduction
- `PEP 526 <https://www.python.org/dev/peps/pep-0526/>`_ for variable annotations
- `PEP 589 <https://www.python.org/dev/peps/pep-0589/>`_ for ``TypedDict``
- There are probably more :)
Below you can find a few examples of how you can use this feature.
Function annotations::
def myfunction(node: ProgramNode, foo: str) -> None:
"""Do something with a ``node``.
"""
node.| # complete here
Assignment, for-loop and with-statement type hints::
import typing
x: int = foo()
y: typing.Optional[int] = 3
key: str
value: Employee
for key, value in foo.items():
pass
f: Union[int, float]
with foo() as f:
print(f + 3)
PEP-0484 should be supported in its entirety. Feel free to open issues if that
is not the case. You can also use stub files.
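For example, a minimal stub file could look like this (a hedged sketch;
``utils.py``/``utils.pyi`` are made-up names, and the stub simply sits next to
the module it describes)::

    # utils.pyi -- type stub for utils.py
    from typing import Iterable

    def average(values: Iterable[float]) -> float: ...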
Sphinx style
++++++++++++
http://www.sphinx-doc.org/en/stable/domains.html#info-field-lists
::
def myfunction(node, foo):
"""
Do something with a ``node``.
:type node: ProgramNode
:param str foo: foo parameter description
"""
node.| # complete here
Epydoc
++++++
http://epydoc.sourceforge.net/manual-fields.html
::
def myfunction(node):
"""
Do something with a ``node``.
@type node: ProgramNode
"""
node.| # complete here
Numpydoc
++++++++
https://github.com/numpy/numpy/blob/master/doc/HOWTO_DOCUMENT.rst.txt
In order to support the numpydoc format, you need to install the `numpydoc
<https://pypi.python.org/pypi/numpydoc>`__ package.
::
def foo(var1, var2, long_var_name='hi'):
r"""
A one-line summary that does not use variable names or the
function name.
...
Parameters
----------
var1 : array_like
Array_like means all those objects -- lists, nested lists,
etc. -- that can be converted to an array. We can also
refer to variables like `var1`.
var2 : int
The type above can either refer to an actual Python type
(e.g. ``int``), or describe the type of the variable in more
detail, e.g. ``(N,) ndarray`` or ``array_like``.
long_variable_name : {'hi', 'ho'}, optional
Choices in brackets, default first when optional.
...
"""
var2.| # complete here
.. _jedi-vim: https://github.com/davidhalter/jedi-vim
.. _youcompleteme: http://valloric.github.io/YouCompleteMe/
.. _youcompleteme: https://valloric.github.io/YouCompleteMe/
.. _deoplete-jedi: https://github.com/zchee/deoplete-jedi
.. _Jedi.el: https://github.com/tkf/emacs-jedi
.. _elpy: https://github.com/jorgenschaefer/elpy
@@ -113,10 +247,9 @@ Using a custom ``$HOME/.pythonrc.py``
.. _SynJedi: http://uvviewsoft.com/synjedi/
.. _wdb: https://github.com/Kozea/wdb
.. _TextMate: https://github.com/lawrenceakka/python-jedi.tmbundle
.. _kate: http://kate-editor.org/
.. _kate: https://kate-editor.org/
.. _autocomplete-python-jedi: https://atom.io/packages/autocomplete-python-jedi
.. _SourceLair: https://www.sourcelair.com
.. _GNOME Builder: https://wiki.gnome.org/Apps/Builder/
.. _gedi: https://github.com/isamert/gedi
.. _Eric IDE: http://eric-ide.python-projects.org
.. _Python Extension: https://marketplace.visualstudio.com/items?itemName=donjayamanne.python
.. _Eric IDE: https://eric-ide.python-projects.org
.. _Python Extension: https://marketplace.visualstudio.com/items?itemName=ms-python.python

View File

@@ -1,3 +1,3 @@
:orphan:
.. |jedi| replace:: *Jedi*
.. |jedi| replace:: Jedi

View File

@@ -1,13 +1,44 @@
.. include:: global.rst
Jedi - an awesome autocompletion/static analysis library for Python
===================================================================
.. meta::
:github_url: https://github.com/davidhalter/jedi
Release v\ |release|. (:doc:`Installation <docs/installation>`)
Jedi - an awesome autocompletion, static analysis and refactoring library for Python
====================================================================================
.. image:: https://img.shields.io/github/stars/davidhalter/jedi.svg?style=social&label=Star&maxAge=2592000
:target: https://github.com/davidhalter/jedi
:alt: GitHub stars
.. image:: http://isitmaintained.com/badge/open/davidhalter/jedi.svg
:target: https://github.com/davidhalter/jedi/issues
:alt: The percentage of open issues and pull requests
.. image:: http://isitmaintained.com/badge/resolution/davidhalter/jedi.svg
:target: https://github.com/davidhalter/jedi/issues
:alt: The resolution time is the median time an issue or pull request stays open.
.. image:: https://travis-ci.org/davidhalter/jedi.svg?branch=master
:target: https://travis-ci.org/davidhalter/jedi
:alt: Linux Tests
.. image:: https://ci.appveyor.com/api/projects/status/mgva3bbawyma1new/branch/master?svg=true
:target: https://ci.appveyor.com/project/davidhalter/jedi/branch/master
:alt: Windows Tests
.. image:: https://coveralls.io/repos/davidhalter/jedi/badge.svg?branch=master
:target: https://coveralls.io/r/davidhalter/jedi
:alt: Coverage status
.. image:: https://pepy.tech/badge/jedi
:target: https://pepy.tech/project/jedi
:alt: PyPI Downloads
`Github Repository <https://github.com/davidhalter/jedi>`_
.. automodule:: jedi
Autocompletion can look like this (e.g. VIM plugin):
Autocompletion can for example look like this in jedi-vim:
.. figure:: _screenshots/screenshot_complete.png
@@ -18,16 +49,18 @@ Docs
----
.. toctree::
:maxdepth: 2
:maxdepth: 1
docs/usage
docs/installation
docs/features
docs/plugin-api
docs/plugin-api-classes
docs/api
docs/api-classes
docs/installation
docs/settings
docs/development
docs/testing
docs/acknowledgements
docs/changelog
.. _resources:
@@ -37,4 +70,4 @@ Resources
- `Source Code on Github <https://github.com/davidhalter/jedi>`_
- `Travis Testing <https://travis-ci.org/davidhalter/jedi>`_
- `Python Package Index <http://pypi.python.org/pypi/jedi/>`_
- `Python Package Index <https://pypi.python.org/pypi/jedi/>`_

View File

@@ -1,43 +1,43 @@
"""
Jedi is a static analysis tool for Python that can be used in IDEs/editors. Its
historic focus is autocompletion, but it now does static analysis as well.
Jedi is fast and is very well tested. It understands Python on a deeper level
than all other static analysis frameworks for Python.
Jedi is a static analysis tool for Python that is typically used in
IDE/editor plugins. Jedi has a focus on autocompletion and goto
functionality. Other features include refactoring, code search and finding
references.
Jedi has support for two different goto functions. It's possible to search for
related names and to list all names in a Python file and infer them. Jedi
understands docstrings and you can use Jedi autocompletion in your REPL as
well.
Jedi has a simple API to work with. There is a reference implementation as a
`VIM-Plugin <https://github.com/davidhalter/jedi-vim>`_. Autocompletion in your
REPL is also possible; IPython uses it natively, and for the CPython REPL you
can install it. Jedi is well tested and bugs should be rare.
Jedi uses a very simple API to connect with IDEs. There's a reference
implementation as a `VIM-Plugin <https://github.com/davidhalter/jedi-vim>`_,
which uses Jedi's autocompletion. We encourage you to use Jedi in your IDEs.
It's really easy.
To give you a simple example how you can use the Jedi library, here is an
example for the autocompletion feature:
Here's a simple example of the autocompletion feature:
>>> import jedi
>>> source = '''
... import datetime
... datetime.da'''
>>> script = jedi.Script(source, 3, len('datetime.da'), 'example.py')
... import json
... json.lo'''
>>> script = jedi.Script(source, path='example.py')
>>> script
<Script: 'example.py'>
>>> completions = script.completions()
>>> completions #doctest: +ELLIPSIS
[<Completion: date>, <Completion: datetime>, ...]
<Script: 'example.py' ...>
>>> completions = script.complete(3, len('json.lo'))
>>> completions
[<Completion: load>, <Completion: loads>]
>>> print(completions[0].complete)
te
ad
>>> print(completions[0].name)
date
As you see Jedi is pretty simple and allows you to concentrate on writing a
good text editor, while still having very good IDE features for Python.
load
"""
__version__ = '0.11.0'
__version__ = '0.17.2'
from jedi.api import Script, Interpreter, set_debug_function, \
preload_module, names
from jedi import settings
from jedi.api.environment import find_virtualenvs, find_system_environments, \
get_default_environment, InvalidPythonEnvironment, create_environment, \
get_system_environment, InterpreterEnvironment
from jedi.api.project import Project, get_default_project
from jedi.api.exceptions import InternalError, RefactoringError
# Finally load the internal plugins. This is only internal.
from jedi.plugins import registry
del registry

View File

@@ -27,8 +27,8 @@ def _start_linter():
paths = [path]
try:
for path in paths:
for error in jedi.Script(path=path)._analysis():
for p in paths:
for error in jedi.Script(path=p)._analysis():
print(error)
except Exception:
if '--pdb' in sys.argv:
@@ -40,9 +40,33 @@ def _start_linter():
raise
def _complete():
import jedi
import pdb
if '-d' in sys.argv:
sys.argv.remove('-d')
jedi.set_debug_function()
try:
completions = jedi.Script(sys.argv[2]).complete()
for c in completions:
c.docstring()
c.type
except Exception as e:
print(repr(e))
pdb.post_mortem()
else:
print(completions)
if len(sys.argv) == 2 and sys.argv[1] == 'repl':
# don't want to use __main__ only for repl yet, maybe we want to use it for
# something else. So just use the keyword ``repl`` for now.
print(join(dirname(abspath(__file__)), 'api', 'replstartup.py'))
elif len(sys.argv) > 1 and sys.argv[1] == 'linter':
elif len(sys.argv) > 1 and sys.argv[1] == '_linter':
_start_linter()
elif len(sys.argv) > 1 and sys.argv[1] == '_complete':
_complete()
else:
print('Command not implemented: %s' % sys.argv[1])

View File

@@ -1,28 +1,52 @@
"""
To ensure compatibility from Python ``2.6`` - ``3.3``, a module has been
To ensure compatibility from Python ``2.7`` - ``3.x``, a module has been
created. Clearly there is a huge need to use conforming syntax.
"""
from __future__ import print_function
import atexit
import errno
import functools
import sys
import imp
import os
import re
import pkgutil
import warnings
import subprocess
import weakref
try:
import importlib
except ImportError:
pass
from zipimport import zipimporter
from jedi.file_io import KnownContentFileIO, ZipFileIO
# Cannot use sys.version.major and minor names, because in Python 2.6 it's not
# a namedtuple.
is_py3 = sys.version_info[0] >= 3
is_py33 = is_py3 and sys.version_info[1] >= 3
is_py34 = is_py3 and sys.version_info[1] >= 4
is_py35 = is_py3 and sys.version_info[1] >= 5
is_py26 = not is_py3 and sys.version_info[1] < 7
py_version = int(str(sys.version_info[0]) + str(sys.version_info[1]))
if sys.version_info[:2] < (3, 5):
"""
A super-minimal shim around listdir that behaves like
scandir for the information we need.
"""
class _DirEntry:
def __init__(self, name, basepath):
self.name = name
self.basepath = basepath
def is_dir(self):
path_for_name = os.path.join(self.basepath, self.name)
return os.path.isdir(path_for_name)
def scandir(dir):
return [_DirEntry(name, dir) for name in os.listdir(dir)]
else:
from os import scandir
class DummyFile(object):
def __init__(self, loader, string):
self.loader = loader
@@ -35,28 +59,36 @@ class DummyFile(object):
del self.loader
def find_module_py34(string, path=None, fullname=None):
implicit_namespace_pkg = False
def find_module_py34(string, path=None, full_name=None, is_global_search=True):
spec = None
loader = None
spec = importlib.machinery.PathFinder.find_spec(string, path)
if hasattr(spec, 'origin'):
origin = spec.origin
implicit_namespace_pkg = origin == 'namespace'
for finder in sys.meta_path:
if is_global_search and finder != importlib.machinery.PathFinder:
p = None
else:
p = path
try:
find_spec = finder.find_spec
except AttributeError:
# These are old-school classes that still have a different API, just
# ignore those.
continue
# We try to disambiguate implicit namespace pkgs with non implicit namespace pkgs
if implicit_namespace_pkg:
fullname = string if not path else fullname
implicit_ns_info = ImplicitNSInfo(fullname, spec.submodule_search_locations._path)
return None, implicit_ns_info, False
spec = find_spec(string, p)
if spec is not None:
loader = spec.loader
if loader is None and not spec.has_location:
# This is a namespace package.
full_name = string if not path else full_name
implicit_ns_info = ImplicitNSInfo(full_name, spec.submodule_search_locations._path)
return implicit_ns_info, True
break
# we have found the tail end of the dotted path
if hasattr(spec, 'loader'):
loader = spec.loader
return find_module_py33(string, path, loader)
def find_module_py33(string, path=None, loader=None, fullname=None):
def find_module_py33(string, path=None, loader=None, full_name=None, is_global_search=True):
loader = loader or importlib.machinery.PathFinder.find_module(string, path)
if loader is None and path is None: # Fallback to find builtins
@@ -74,46 +106,91 @@ def find_module_py33(string, path=None, loader=None, fullname=None):
raise ImportError("Originally " + repr(e))
if loader is None:
raise ImportError("Couldn't find a loader for {0}".format(string))
raise ImportError("Couldn't find a loader for {}".format(string))
return _from_loader(loader, string)
def _from_loader(loader, string):
try:
is_package = loader.is_package(string)
if is_package:
if hasattr(loader, 'path'):
module_path = os.path.dirname(loader.path)
else:
# At least zipimporter does not have path attribute
module_path = os.path.dirname(loader.get_filename(string))
if hasattr(loader, 'archive'):
module_file = DummyFile(loader, string)
else:
module_file = None
else:
module_path = loader.get_filename(string)
module_file = DummyFile(loader, string)
is_package_method = loader.is_package
except AttributeError:
# ExtensionLoader has no attribute get_filename; instead it has a
# path attribute that we can use to retrieve the module path
try:
module_path = loader.path
module_file = DummyFile(loader, string)
except AttributeError:
module_path = string
module_file = None
finally:
is_package = False
is_package = False
else:
is_package = is_package_method(string)
try:
get_filename = loader.get_filename
except AttributeError:
return None, is_package
else:
module_path = cast_path(get_filename(string))
if hasattr(loader, 'archive'):
module_path = loader.archive
# To avoid unicode and read bytes, "overwrite" loader.get_source if
# possible.
try:
f = type(loader).get_source
except AttributeError:
raise ImportError("get_source was not defined on loader")
return module_file, module_path, is_package
if is_py3 and f is not importlib.machinery.SourceFileLoader.get_source:
# Unfortunately we are reading unicode here, not bytes.
# It seems hard to get bytes, because the zip importer
# logic just unpacks the zip file and returns a file descriptor
# that we cannot as easily access. Therefore we just read it as
# a string in the cases where get_source was overwritten.
code = loader.get_source(string)
else:
code = _get_source(loader, string)
if code is None:
return None, is_package
if isinstance(loader, zipimporter):
return ZipFileIO(module_path, code, cast_path(loader.archive)), is_package
return KnownContentFileIO(module_path, code), is_package
def find_module_pre_py33(string, path=None, fullname=None):
def _get_source(loader, fullname):
"""
This method is here as a replacement for SourceLoader.get_source. That
method returns unicode, but we prefer bytes.
"""
path = loader.get_filename(fullname)
try:
return loader.get_data(path)
except OSError:
raise ImportError('source not available through get_data()',
name=fullname)
def find_module_pre_py3(string, path=None, full_name=None, is_global_search=True):
# This import is here, because in other places it will raise a
# DeprecationWarning.
import imp
try:
module_file, module_path, description = imp.find_module(string, path)
module_type = description[2]
return module_file, module_path, module_type is imp.PKG_DIRECTORY
is_package = module_type is imp.PKG_DIRECTORY
if is_package:
# In Python 2 directory package imports are returned as folder
# paths, not __init__.py paths.
p = os.path.join(module_path, '__init__.py')
try:
module_file = open(p)
module_path = p
except FileNotFoundError:
pass
elif module_type != imp.PY_SOURCE:
if module_file is not None:
module_file.close()
module_file = None
if module_file is None:
return None, is_package
with module_file:
code = module_file.read()
return KnownContentFileIO(cast_path(module_path), code), is_package
except ImportError:
pass
@@ -122,34 +199,13 @@ def find_module_pre_py33(string, path=None, fullname=None):
for item in path:
loader = pkgutil.get_importer(item)
if loader:
try:
loader = loader.find_module(string)
if loader:
is_package = loader.is_package(string)
is_archive = hasattr(loader, 'archive')
try:
module_path = loader.get_filename(string)
except AttributeError:
# fallback for py26
try:
module_path = loader._get_filename(string)
except AttributeError:
continue
if is_package:
module_path = os.path.dirname(module_path)
if is_archive:
module_path = loader.archive
file = None
if not is_package or is_archive:
file = DummyFile(loader, string)
return (file, module_path, is_package)
except ImportError:
pass
raise ImportError("No module named {0}".format(string))
loader = loader.find_module(string)
if loader is not None:
return _from_loader(loader, string)
raise ImportError("No module named {}".format(string))
find_module = find_module_py33 if is_py33 else find_module_pre_py33
find_module = find_module_py34 if is_py34 else find_module
find_module = find_module_py34 if is_py3 else find_module_pre_py3
find_module.__doc__ = """
Provides information about a module.
@@ -167,6 +223,16 @@ class ImplicitNSInfo(object):
self.name = name
self.paths = paths
if is_py3:
all_suffixes = importlib.machinery.all_suffixes
else:
def all_suffixes():
# Is deprecated and raises a warning in Python 3.6.
import imp
return [suffix for suffix, _, _ in imp.get_suffixes()]
# unicode function
try:
unicode = unicode
@@ -174,14 +240,6 @@ except NameError:
unicode = str
# exec function
if is_py3:
def exec_function(source, global_map):
exec(source, global_map)
else:
eval(compile("""def exec_function(source, global_map):
exec source in global_map """, 'blub', 'exec'))
# re-raise function
if is_py3:
def reraise(exception, traceback):
@@ -201,22 +259,12 @@ Usage::
"""
class Python3Method(object):
def __init__(self, func):
self.func = func
def __get__(self, obj, objtype):
if obj is None:
return lambda *args, **kwargs: self.func(*args, **kwargs)
else:
return lambda *args, **kwargs: self.func(obj, *args, **kwargs)
def use_metaclass(meta, *bases):
""" Create a class with a metaclass. """
if not bases:
bases = (object,)
return meta("HackClass", bases, {})
return meta("Py2CompatibilityMetaClass", bases, {})
try:
@@ -227,47 +275,77 @@ except AttributeError:
encoding = 'ascii'
def u(string):
def u(string, errors='strict'):
"""Cast to unicode DAMMIT!
Written because Python2 repr always implicitly casts to a string, so we
have to cast back to a unicode (and we know that we always deal with valid
unicode, because we check that in the beginning).
"""
if is_py3:
return str(string)
if not isinstance(string, unicode):
return unicode(str(string), 'UTF-8')
if isinstance(string, bytes):
return unicode(string, encoding='UTF-8', errors=errors)
return string
def cast_path(obj):
"""
Take a bytes or str path and cast it to unicode.
Apparently it is perfectly fine to pass both byte and unicode objects into
the sys.path. This probably means that byte paths are normal at other
places as well.
Since this just really complicates everything and Python 2.7 will be EOL
soon anyway, just go with always strings.
"""
return u(obj, errors='replace')
def force_unicode(obj):
# Intentionally don't mix those two up, because those two code paths might
# be different in the future (maybe windows?).
return cast_path(obj)
try:
import builtins # module name in python 3
except ImportError:
import __builtin__ as builtins
import __builtin__ as builtins # noqa: F401
import ast
import ast # noqa: F401
def literal_eval(string):
# py3.0, py3.1 and py3.2 don't support unicode literals. Support those; I
# don't want to write two versions of the tokenizer.
if is_py3 and sys.version_info.minor < 3:
if re.match('[uU][\'"]', string):
string = string[1:]
return ast.literal_eval(string)
try:
from itertools import zip_longest
except ImportError:
from itertools import izip_longest as zip_longest # Python 2
from itertools import izip_longest as zip_longest # Python 2 # noqa: F401
try:
FileNotFoundError = FileNotFoundError
except NameError:
FileNotFoundError = IOError
try:
IsADirectoryError = IsADirectoryError
except NameError:
IsADirectoryError = IOError
try:
PermissionError = PermissionError
except NameError:
PermissionError = IOError
try:
NotADirectoryError = NotADirectoryError
except NameError:
class NotADirectoryError(Exception):
# Don't implement this for Python 2 anymore.
pass
def no_unicode_pprint(dct):
"""
@@ -297,3 +375,257 @@ def utf8_repr(func):
return func
else:
return wrapper
if is_py3:
import queue
else:
import Queue as queue # noqa: F401
try:
# Attempt to load the C implementation of pickle on Python 2 as it is way
# faster.
import cPickle as pickle
except ImportError:
import pickle
def pickle_load(file):
try:
if is_py3:
return pickle.load(file, encoding='bytes')
return pickle.load(file)
# Python on Windows doesn't throw EOF errors for pipes. So reraise them with
# the correct type, which is caught upwards.
except OSError:
if sys.platform == 'win32':
raise EOFError()
raise
def _python2_dct_keys_to_unicode(data):
"""
Python 2 stores object __dict__ entries as bytes, not unicode, correct it
here. Python 2 can deal with both, Python 3 expects unicode.
"""
if isinstance(data, tuple):
return tuple(_python2_dct_keys_to_unicode(x) for x in data)
elif isinstance(data, list):
return list(_python2_dct_keys_to_unicode(x) for x in data)
elif hasattr(data, '__dict__') and type(data.__dict__) == dict:
data.__dict__ = {unicode(k): v for k, v in data.__dict__.items()}
return data
def pickle_dump(data, file, protocol):
try:
if not is_py3:
data = _python2_dct_keys_to_unicode(data)
pickle.dump(data, file, protocol)
# On Python 3.3, flush sometimes throws an error even though the write
# operation should have completed.
file.flush()
# Python on Windows doesn't throw EPIPE errors for pipes. So reraise them with
# the correct type and error number.
except OSError:
if sys.platform == 'win32':
raise IOError(errno.EPIPE, "Broken pipe")
raise
# Determine the highest pickle protocol version compatible with a given list
# of Python versions.
def highest_pickle_protocol(python_versions):
protocol = 4
for version in python_versions:
if version[0] == 2:
# The minimum protocol version for the versions of Python that we
# support (2.7 and 3.3+) is 2.
return 2
if version[1] < 4:
protocol = 3
return protocol
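# Illustrative sketch of the selection logic above (the version tuples are
# hypothetical): any Python 2 in the list forces protocol 2, a 3.x below 3.4
# caps the protocol at 3, otherwise protocol 4 is used.
#
#     highest_pickle_protocol([(3, 8, 0), (2, 7, 15)])  # -> 2
#     highest_pickle_protocol([(3, 3, 0), (3, 7, 4)])   # -> 3
#     highest_pickle_protocol([(3, 6, 9), (3, 8, 0)])   # -> 4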
try:
from inspect import Parameter
except ImportError:
class Parameter(object):
POSITIONAL_ONLY = object()
POSITIONAL_OR_KEYWORD = object()
VAR_POSITIONAL = object()
KEYWORD_ONLY = object()
VAR_KEYWORD = object()
class GeneralizedPopen(subprocess.Popen):
def __init__(self, *args, **kwargs):
if os.name == 'nt':
try:
# Was introduced in Python 3.7.
CREATE_NO_WINDOW = subprocess.CREATE_NO_WINDOW
except AttributeError:
CREATE_NO_WINDOW = 0x08000000
kwargs['creationflags'] = CREATE_NO_WINDOW
# The child process doesn't need file descriptors except 0, 1, 2.
# This is unix only.
kwargs['close_fds'] = 'posix' in sys.builtin_module_names
super(GeneralizedPopen, self).__init__(*args, **kwargs)
# shutil.which is not available on Python 2.7.
def which(cmd, mode=os.F_OK | os.X_OK, path=None):
"""Given a command, mode, and a PATH string, return the path which
conforms to the given mode on the PATH, or None if there is no such
file.
`mode` defaults to os.F_OK | os.X_OK. `path` defaults to the result
of os.environ.get("PATH"), or can be overridden with a custom search
path.
"""
# Check that a given file can be accessed with the correct mode.
# Additionally check that `file` is not a directory, as on Windows
# directories pass the os.access check.
def _access_check(fn, mode):
return (os.path.exists(fn) and os.access(fn, mode)
and not os.path.isdir(fn))
# If we're given a path with a directory part, look it up directly rather
# than referring to PATH directories. This includes checking relative to the
# current directory, e.g. ./script
if os.path.dirname(cmd):
if _access_check(cmd, mode):
return cmd
return None
if path is None:
path = os.environ.get("PATH", os.defpath)
if not path:
return None
path = path.split(os.pathsep)
if sys.platform == "win32":
# The current directory takes precedence on Windows.
if os.curdir not in path:
path.insert(0, os.curdir)
# PATHEXT is necessary to check on Windows.
pathext = os.environ.get("PATHEXT", "").split(os.pathsep)
# See if the given file matches any of the expected path extensions.
# This will allow us to short circuit when given "python.exe".
# If it does match, only test that one, otherwise we have to try
# others.
if any(cmd.lower().endswith(ext.lower()) for ext in pathext):
files = [cmd]
else:
files = [cmd + ext for ext in pathext]
else:
# On other platforms you don't have things like PATHEXT to tell you
# what file suffixes are executable, so just pass on cmd as-is.
files = [cmd]
seen = set()
for dir in path:
normdir = os.path.normcase(dir)
if normdir not in seen:
seen.add(normdir)
for thefile in files:
name = os.path.join(dir, thefile)
if _access_check(name, mode):
return name
return None
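# Minimal usage sketch for the backport above (the paths shown are
# hypothetical and platform dependent):
#
#     which('python3')                       # e.g. '/usr/bin/python3' or None
#     which('python3', path='/nonexistent')  # -> None, nothing found there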
if not is_py3:
# Simplified backport of Python 3 weakref.finalize:
# https://github.com/python/cpython/blob/ded4737989316653469763230036b04513cb62b3/Lib/weakref.py#L502-L662
class finalize(object):
"""Class for finalization of weakrefable objects.
finalize(obj, func, *args, **kwargs) returns a callable finalizer
object which will be called when obj is garbage collected. The
first time the finalizer is called it evaluates func(*arg, **kwargs)
and returns the result. After this the finalizer is dead, and
calling it just returns None.
When the program exits any remaining finalizers will be run.
"""
# Finalizer objects don't have any state of their own.
# This ensures that they cannot be part of a ref-cycle.
__slots__ = ()
_registry = {}
def __init__(self, obj, func, *args, **kwargs):
info = functools.partial(func, *args, **kwargs)
info.weakref = weakref.ref(obj, self)
self._registry[self] = info
# To me it's an absolute mystery why in Python 2 we need _=None. It
# makes really no sense since it's never really called. Then again it
# might be called by Python 2.7 itself, but weakref.finalize is not
# documented in Python 2 and therefore shouldn't be randomly called.
# We never call this stuff with a parameter and therefore this
# parameter should not be needed. But it is. ~dave
def __call__(self, _=None):
"""Return func(*args, **kwargs) if alive."""
info = self._registry.pop(self, None)
if info:
return info()
@classmethod
def _exitfunc(cls):
if not cls._registry:
return
for finalizer in list(cls._registry):
try:
finalizer()
except Exception:
sys.excepthook(*sys.exc_info())
assert finalizer not in cls._registry
atexit.register(finalize._exitfunc)
weakref.finalize = finalize
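# Usage sketch for the finalize backport above (it mirrors the Python 3
# weakref.finalize API; the names are illustrative):
#
#     class Resource(object):
#         pass
#
#     def _on_collect():
#         pass  # release an external resource here
#
#     r = Resource()
#     weakref.finalize(r, _on_collect)
#     del r  # _on_collect runs once r is garbage collected (or at exit)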
if is_py3 and sys.version_info[1] > 5:
from inspect import unwrap
else:
# Only Python >= 3.6 properly limits the number of unwraps. This is very
# relevant in the case of unittest.mock.patch.
# Below is the implementation of Python 3.7.
def unwrap(func, stop=None):
"""Get the object wrapped by *func*.
Follows the chain of :attr:`__wrapped__` attributes returning the last
object in the chain.
*stop* is an optional callback accepting an object in the wrapper chain
as its sole argument that allows the unwrapping to be terminated early if
the callback returns a true value. If the callback never returns a true
value, the last object in the chain is returned as usual. For example,
:func:`signature` uses this to stop unwrapping if any object in the
chain has a ``__signature__`` attribute defined.
:exc:`ValueError` is raised if a cycle is encountered.
"""
if stop is None:
def _is_wrapper(f):
return hasattr(f, '__wrapped__')
else:
def _is_wrapper(f):
return hasattr(f, '__wrapped__') and not stop(f)
f = func # remember the original func for error reporting
# Memoise by id to tolerate non-hashable objects, but store objects to
# ensure they aren't destroyed, which would allow their IDs to be reused.
memo = {id(f): f}
recursion_limit = sys.getrecursionlimit()
while _is_wrapper(func):
func = func.__wrapped__
id_func = id(func)
if (id_func in memo) or (len(memo) >= recursion_limit):
raise ValueError('wrapper loop when unwrapping {!r}'.format(f))
memo[id_func] = func
return func
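# Sketch of what unwrap does (functools.wraps sets __wrapped__, which unwrap
# follows back to the original function; the decorator here is illustrative):
#
#     import functools
#
#     def decorator(func):
#         @functools.wraps(func)
#         def wrapper(*args, **kwargs):
#             return func(*args, **kwargs)
#         return wrapper
#
#     @decorator
#     def original():
#         pass
#
#     unwrap(original) is original.__wrapped__  # -> True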

File diff suppressed because it is too large.

File diff suppressed because it is too large.


@@ -1,76 +1,111 @@
from parso.python import token
import re
from textwrap import dedent
from parso.python.token import PythonTokenTypes
from parso.python import tree
from parso.tree import search_ancestor, Leaf
from parso import split_lines
from jedi._compatibility import Parameter
from jedi import debug
from jedi import settings
from jedi.api import classes
from jedi.api import helpers
from jedi.evaluate import imports
from jedi.api import keywords
from jedi.evaluate.helpers import evaluate_call_of_leaf
from jedi.evaluate.filters import get_global_filters
from jedi.parser_utils import get_statement_of_position
from jedi.api.strings import complete_dict
from jedi.api.file_name import complete_file_name
from jedi.inference import imports
from jedi.inference.base_value import ValueSet
from jedi.inference.helpers import infer_call_of_leaf, parse_dotted_names
from jedi.inference.context import get_global_filters
from jedi.inference.value import TreeInstance, ModuleValue
from jedi.inference.names import ParamNameWrapper, SubModuleName
from jedi.inference.gradual.conversion import convert_values, convert_names
from jedi.parser_utils import cut_value_at_position
from jedi.plugins import plugin_manager
def get_call_signature_param_names(call_signatures):
# add named params
for call_sig in call_signatures:
for p in call_sig.params:
class ParamNameWithEquals(ParamNameWrapper):
def get_public_name(self):
return self.string_name + '='
def _get_signature_param_names(signatures, positional_count, used_kwargs):
# Add named params
for call_sig in signatures:
for i, p in enumerate(call_sig.params):
# Allow protected access, because it's a public API.
tree_name = p._name.tree_name
# Compiled modules typically don't allow keyword arguments.
if tree_name is not None:
# Allow access on _definition here, because it's a
# public API and we don't want to make the internal
# Name object public.
tree_param = tree.search_ancestor(tree_name, 'param')
if tree_param.star_count == 0: # no *args/**kwargs
yield p._name
# TODO reconsider with Python 2 drop
kind = p._name.get_kind()
if i < positional_count and kind == Parameter.POSITIONAL_OR_KEYWORD:
continue
if kind in (Parameter.POSITIONAL_OR_KEYWORD, Parameter.KEYWORD_ONLY) \
and p.name not in used_kwargs:
yield ParamNameWithEquals(p._name)
def filter_names(evaluator, completion_names, stack, like_name):
comp_dct = {}
def _must_be_kwarg(signatures, positional_count, used_kwargs):
if used_kwargs:
return True
must_be_kwarg = True
for signature in signatures:
for i, p in enumerate(signature.params):
# TODO reconsider with Python 2 drop
kind = p._name.get_kind()
if kind is Parameter.VAR_POSITIONAL:
# In case there were not already kwargs, the next param can
# always be a normal argument.
return False
if i >= positional_count and kind in (Parameter.POSITIONAL_OR_KEYWORD,
Parameter.POSITIONAL_ONLY):
must_be_kwarg = False
break
if not must_be_kwarg:
break
return must_be_kwarg
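# Rough sketch of the decision above (the signature objects are jedi-internal;
# the call shown is only illustrative):
#
#     def f(x, *, key=None): ...
#     f(1, <cursor>)
#
# Here positional_count is 1 and the only remaining parameter is keyword-only,
# so _must_be_kwarg(...) returns True and only `key=` style completions are
# offered. A *args parameter in any signature makes it return False instead.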
def filter_names(inference_state, completion_names, stack, like_name, fuzzy, cached_name):
comp_dct = set()
if settings.case_insensitive_completion:
like_name = like_name.lower()
for name in completion_names:
if settings.case_insensitive_completion \
and name.string_name.lower().startswith(like_name.lower()) \
or name.string_name.startswith(like_name):
string = name.string_name
if settings.case_insensitive_completion:
string = string.lower()
if helpers.match(string, like_name, fuzzy=fuzzy):
new = classes.Completion(
evaluator,
inference_state,
name,
stack,
len(like_name)
len(like_name),
is_fuzzy=fuzzy,
cached_name=cached_name,
)
k = (new.name, new.complete) # key
if k in comp_dct and settings.no_completion_duplicates:
comp_dct[k]._same_name_completions.append(new)
else:
comp_dct[k] = new
if k not in comp_dct:
comp_dct.add(k)
tree_name = name.tree_name
if tree_name is not None:
definition = tree_name.get_definition()
if definition is not None and definition.type == 'del_stmt':
continue
yield new
def get_user_scope(module_context, position):
def _remove_duplicates(completions, other_completions):
names = {d.name for d in other_completions}
return [c for c in completions if c.name not in names]
def get_user_context(module_context, position):
"""
Returns the scope in which the user resides. This includes flows.
"""
user_stmt = get_statement_of_position(module_context.tree_node, position)
if user_stmt is None:
def scan(scope):
for s in scope.children:
if s.start_pos <= position <= s.end_pos:
if isinstance(s, (tree.Scope, tree.Flow)):
return scan(s) or s
elif s.type in ('suite', 'decorated'):
return scan(s)
return None
scanned_node = scan(module_context.tree_node)
if scanned_node:
return module_context.create_context(scanned_node, node_is_context=True)
return module_context
else:
return module_context.create_context(user_stmt)
leaf = module_context.tree_node.get_leaf_for_position(position, include_prefixes=True)
return module_context.create_context(leaf)
def get_flow_scope_node(module_node, position):
@@ -81,33 +116,76 @@ def get_flow_scope_node(module_node, position):
return node
@plugin_manager.decorate()
def complete_param_names(context, function_name, decorator_nodes):
# Basically there's no way to do param completion. The plugins are
# responsible for this.
return []
class Completion:
def __init__(self, evaluator, module, code_lines, position, call_signatures_method):
self._evaluator = evaluator
self._module_context = module
self._module_node = module.tree_node
def __init__(self, inference_state, module_context, code_lines, position,
signatures_callback, fuzzy=False):
self._inference_state = inference_state
self._module_context = module_context
self._module_node = module_context.tree_node
self._code_lines = code_lines
# The first step of completions is to get the name
self._like_name = helpers.get_on_completion_name(self._module_node, code_lines, position)
# The actual cursor position is not what we need to calculate
# everything. We want the start of the name we're on.
self._position = position[0], position[1] - len(self._like_name)
self._call_signatures_method = call_signatures_method
self._original_position = position
self._signatures_callback = signatures_callback
def completions(self):
completion_names = self._get_context_completions()
self._fuzzy = fuzzy
completions = filter_names(self._evaluator, completion_names,
self.stack, self._like_name)
def complete(self):
leaf = self._module_node.get_leaf_for_position(
self._original_position,
include_prefixes=True
)
string, start_leaf, quote = _extract_string_while_in_string(leaf, self._original_position)
return sorted(completions, key=lambda x: (x.name.startswith('__'),
x.name.startswith('_'),
x.name.lower()))
prefixed_completions = complete_dict(
self._module_context,
self._code_lines,
start_leaf or leaf,
self._original_position,
None if string is None else quote + string,
fuzzy=self._fuzzy,
)
def _get_context_completions(self):
if string is not None and not prefixed_completions:
prefixed_completions = list(complete_file_name(
self._inference_state, self._module_context, start_leaf, quote, string,
self._like_name, self._signatures_callback,
self._code_lines, self._original_position,
self._fuzzy
))
if string is not None:
if not prefixed_completions and '\n' in string:
# Complete only multi line strings
prefixed_completions = self._complete_in_string(start_leaf, string)
return prefixed_completions
cached_name, completion_names = self._complete_python(leaf)
completions = list(filter_names(self._inference_state, completion_names,
self.stack, self._like_name,
self._fuzzy, cached_name=cached_name))
return (
# Removing duplicates mostly to remove False/True/None duplicates.
_remove_duplicates(prefixed_completions, completions)
+ sorted(completions, key=lambda x: (x.name.startswith('__'),
x.name.startswith('_'),
x.name.lower()))
)
def _complete_python(self, leaf):
"""
Analyzes the context that a completion is made in and decides what to
Analyzes the current context of a completion and decides what to
return.
Technically this works by generating a parser stack and analysing the
@@ -120,26 +198,32 @@ class Completion:
- In params (also lambda): no completion before =
"""
grammar = self._evaluator.grammar
grammar = self._inference_state.grammar
self.stack = stack = None
self._position = (
self._original_position[0],
self._original_position[1] - len(self._like_name)
)
cached_name = None
try:
self.stack = helpers.get_stack_at_position(
grammar, self._code_lines, self._module_node, self._position
self.stack = stack = helpers.get_stack_at_position(
grammar, self._code_lines, leaf, self._position
)
except helpers.OnErrorLeaf as e:
self.stack = None
if e.error_leaf.value == '.':
value = e.error_leaf.value
if value == '.':
# After ErrorLeaf's that are dots, we will not do any
# completions since this probably just confuses the user.
return []
# If we don't have a context, just use global completion.
return cached_name, []
return self._global_completions()
# If we don't have a value, just use global completion.
return cached_name, self._complete_global_scope()
allowed_keywords, allowed_tokens = \
helpers.get_possible_completion_types(grammar._pgen_grammar, self.stack)
allowed_transitions = \
list(stack._allowed_transition_names_and_token_types())
if 'if' in allowed_keywords:
if 'if' in allowed_transitions:
leaf = self._module_node.get_leaf_for_position(self._position, include_prefixes=True)
previous_leaf = leaf.get_previous_leaf()
@@ -165,127 +249,417 @@ class Completion:
# Compare indents
if stmt.start_pos[1] == indent:
if type_ == 'if_stmt':
allowed_keywords += ['elif', 'else']
allowed_transitions += ['elif', 'else']
elif type_ == 'try_stmt':
allowed_keywords += ['except', 'finally', 'else']
allowed_transitions += ['except', 'finally', 'else']
elif type_ == 'for_stmt':
allowed_keywords.append('else')
allowed_transitions.append('else')
completion_names = list(self._get_keyword_completion_names(allowed_keywords))
completion_names = []
if token.NAME in allowed_tokens or token.INDENT in allowed_tokens:
kwargs_only = False
if any(t in allowed_transitions for t in (PythonTokenTypes.NAME,
PythonTokenTypes.INDENT)):
# This means that we actually have to do type inference.
symbol_names = list(self.stack.get_node_names(grammar._pgen_grammar))
nodes = list(self.stack.get_nodes())
nonterminals = [stack_node.nonterminal for stack_node in stack]
nodes = _gather_nodes(stack)
if nodes and nodes[-1] in ('as', 'def', 'class'):
# No completions for ``with x as foo`` and ``import x as foo``.
# Also true for defining names as a class or function.
return list(self._get_class_context_completions(is_function=True))
elif "import_stmt" in symbol_names:
level, names = self._parse_dotted_names(nodes, "import_from" in symbol_names)
return cached_name, list(self._complete_inherited(is_function=True))
elif "import_stmt" in nonterminals:
level, names = parse_dotted_names(nodes, "import_from" in nonterminals)
only_modules = not ("import_from" in symbol_names and 'import' in nodes)
only_modules = not ("import_from" in nonterminals and 'import' in nodes)
completion_names += self._get_importer_names(
names,
level,
only_modules=only_modules,
)
elif symbol_names[-1] in ('trailer', 'dotted_name') and nodes[-1] == '.':
elif nonterminals[-1] in ('trailer', 'dotted_name') and nodes[-1] == '.':
dot = self._module_node.get_leaf_for_position(self._position)
completion_names += self._trailer_completions(dot.get_previous_leaf())
cached_name, n = self._complete_trailer(dot.get_previous_leaf())
completion_names += n
elif self._is_parameter_completion():
completion_names += self._complete_params(leaf)
else:
completion_names += self._global_completions()
completion_names += self._get_class_context_completions(is_function=False)
# Apparently this looks like it's good enough to filter most cases
# so that signature completions don't randomly appear.
# To understand why this works, three things are important:
# 1. trailer with a `,` in it is either a subscript or an arglist.
# 2. If there's no `,`, it's at the start and only signatures start
# with `(`. Other trailers could start with `.` or `[`.
# 3. Decorators are very primitive and have an optional `(` with
# optional arglist in them.
if nodes[-1] in ['(', ','] \
and nonterminals[-1] in ('trailer', 'arglist', 'decorator'):
signatures = self._signatures_callback(*self._position)
if signatures:
call_details = signatures[0]._call_details
used_kwargs = list(call_details.iter_used_keyword_arguments())
positional_count = call_details.count_positional_arguments()
if 'trailer' in symbol_names:
call_signatures = self._call_signatures_method()
completion_names += get_call_signature_param_names(call_signatures)
completion_names += _get_signature_param_names(
signatures,
positional_count,
used_kwargs,
)
return completion_names
kwargs_only = _must_be_kwarg(signatures, positional_count, used_kwargs)
def _get_keyword_completion_names(self, keywords_):
for k in keywords_:
yield keywords.keyword(self._evaluator, k).name
if not kwargs_only:
completion_names += self._complete_global_scope()
completion_names += self._complete_inherited(is_function=False)
def _global_completions(self):
context = get_user_scope(self._module_context, self._position)
if not kwargs_only:
current_line = self._code_lines[self._position[0] - 1][:self._position[1]]
completion_names += self._complete_keywords(
allowed_transitions,
only_values=not (not current_line or current_line[-1] in ' \t.;'
and current_line[-3:] != '...')
)
return cached_name, completion_names
def _is_parameter_completion(self):
tos = self.stack[-1]
if tos.nonterminal == 'lambdef' and len(tos.nodes) == 1:
# We are at the position `lambda `, where basically the next node
# is a param.
return True
if tos.nonterminal in 'parameters':
# Basically we are at the position `foo(`, there's nothing there
# yet, so we have no `typedargslist`.
return True
# var args is for lambdas and typed args for normal functions
return tos.nonterminal in ('typedargslist', 'varargslist') and tos.nodes[-1] == ','
def _complete_params(self, leaf):
stack_node = self.stack[-2]
if stack_node.nonterminal == 'parameters':
stack_node = self.stack[-3]
if stack_node.nonterminal == 'funcdef':
context = get_user_context(self._module_context, self._position)
node = search_ancestor(leaf, 'error_node', 'funcdef')
if node is not None:
if node.type == 'error_node':
n = node.children[0]
if n.type == 'decorators':
decorators = n.children
elif n.type == 'decorator':
decorators = [n]
else:
decorators = []
else:
decorators = node.get_decorators()
function_name = stack_node.nodes[1]
return complete_param_names(context, function_name.value, decorators)
return []
def _complete_keywords(self, allowed_transitions, only_values):
for k in allowed_transitions:
if isinstance(k, str) and k.isalpha():
if not only_values or k in ('True', 'False', 'None'):
yield keywords.KeywordName(self._inference_state, k)
def _complete_global_scope(self):
context = get_user_context(self._module_context, self._position)
debug.dbg('global completion scope: %s', context)
flow_scope_node = get_flow_scope_node(self._module_node, self._position)
filters = get_global_filters(
self._evaluator,
context,
self._position,
origin_scope=flow_scope_node
flow_scope_node
)
completion_names = []
for filter in filters:
completion_names += filter.values()
return completion_names
def _trailer_completions(self, previous_leaf):
user_context = get_user_scope(self._module_context, self._position)
evaluation_context = self._evaluator.create_context(
self._module_context, previous_leaf
)
contexts = evaluate_call_of_leaf(evaluation_context, previous_leaf)
completion_names = []
debug.dbg('trailer completion contexts: %s', contexts)
for context in contexts:
for filter in context.get_filters(
search_global=False, origin_scope=user_context.tree_node):
completion_names += filter.values()
return completion_names
def _complete_trailer(self, previous_leaf):
inferred_context = self._module_context.create_context(previous_leaf)
values = infer_call_of_leaf(inferred_context, previous_leaf)
debug.dbg('trailer completion values: %s', values, color='MAGENTA')
def _parse_dotted_names(self, nodes, is_import_from):
level = 0
names = []
for node in nodes[1:]:
if node in ('.', '...'):
if not names:
level += len(node.value)
elif node.type == 'dotted_name':
names += node.children[::2]
elif node.type == 'name':
names.append(node)
elif node == ',':
if not is_import_from:
names = []
else:
# Here if the keyword `import` comes along it stops checking
# for names.
break
return level, names
# The cached name simply exists to make speed optimizations for certain
# modules.
cached_name = None
if len(values) == 1:
v, = values
if v.is_module():
if len(v.string_names) == 1:
module_name = v.string_names[0]
if module_name in ('numpy', 'tensorflow', 'matplotlib', 'pandas'):
cached_name = module_name
return cached_name, self._complete_trailer_for_values(values)
def _complete_trailer_for_values(self, values):
user_context = get_user_context(self._module_context, self._position)
return complete_trailer(user_context, values)
def _get_importer_names(self, names, level=0, only_modules=True):
names = [n.value for n in names]
i = imports.Importer(self._evaluator, names, self._module_context, level)
return i.completion_names(self._evaluator, only_modules=only_modules)
i = imports.Importer(self._inference_state, names, self._module_context, level)
return i.completion_names(self._inference_state, only_modules=only_modules)
def _get_class_context_completions(self, is_function=True):
def _complete_inherited(self, is_function=True):
"""
Autocomplete inherited methods when overriding in child class.
"""
leaf = self._module_node.get_leaf_for_position(self._position, include_prefixes=True)
cls = tree.search_ancestor(leaf, 'classdef')
if isinstance(cls, (tree.Class, tree.Function)):
# Complete the methods that are defined in the super classes.
random_context = self._module_context.create_context(
cls,
node_is_context=True
)
else:
if cls is None:
return
# Complete the methods that are defined in the super classes.
class_value = self._module_context.create_value(cls)
if cls.start_pos[1] >= leaf.start_pos[1]:
return
filters = random_context.get_filters(search_global=False, is_instance=True)
filters = class_value.get_filters(is_instance=True)
# The first dict is the dictionary of the class itself.
next(filters)
for filter in filters:
for name in filter.values():
# TODO we should probably check here for properties
if (name.api_type == 'function') == is_function:
yield name
def _complete_in_string(self, start_leaf, string):
"""
To make it possible for people to have completions in doctests or
generally in "Python" code in docstrings, we use the following
heuristic:
- Having an indented block of code
- Having some doctest code that starts with `>>>`
- Having backticks that don't have whitespace inside them
"""
def iter_relevant_lines(lines):
include_next_line = False
for l in code_lines:
if include_next_line or l.startswith('>>>') or l.startswith(' '):
yield re.sub(r'^( *>>> ?| +)', '', l)
else:
yield None
include_next_line = bool(re.match(' *>>>', l))
string = dedent(string)
code_lines = split_lines(string, keepends=True)
relevant_code_lines = list(iter_relevant_lines(code_lines))
if relevant_code_lines[-1] is not None:
# Some code lines might be None, therefore get rid of that.
relevant_code_lines = ['\n' if c is None else c for c in relevant_code_lines]
return self._complete_code_lines(relevant_code_lines)
match = re.search(r'`([^`\s]+)', code_lines[-1])
if match:
return self._complete_code_lines([match.group(1)])
return []
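# Example of the heuristic above (the docstring contents are hypothetical):
# given a multi-line string such as
#
#     '''
#     Usage::
#
#         >>> import os
#         >>> os.pa<cursor>
#     '''
#
# the `>>>` prefixes are stripped, the remaining lines are parsed as a small
# module and completed, so `os.pa` would offer `path` and similar names.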
def _complete_code_lines(self, code_lines):
module_node = self._inference_state.grammar.parse(''.join(code_lines))
module_value = ModuleValue(
self._inference_state,
module_node,
code_lines=code_lines,
)
module_value.parent_context = self._module_context
return Completion(
self._inference_state,
module_value.as_context(),
code_lines=code_lines,
position=module_node.end_pos,
signatures_callback=lambda *args, **kwargs: [],
fuzzy=self._fuzzy
).complete()
def _gather_nodes(stack):
nodes = []
for stack_node in stack:
if stack_node.dfa.from_rule == 'small_stmt':
nodes = []
else:
nodes += stack_node.nodes
return nodes
_string_start = re.compile(r'^\w*(\'{3}|"{3}|\'|")')
def _extract_string_while_in_string(leaf, position):
def return_part_of_leaf(leaf):
kwargs = {}
if leaf.line == position[0]:
kwargs['endpos'] = position[1] - leaf.column
match = _string_start.match(leaf.value, **kwargs)
if not match:
return None, None, None
start = match.group(0)
if leaf.line == position[0] and position[1] < leaf.column + match.end():
return None, None, None
return cut_value_at_position(leaf, position)[match.end():], leaf, start
if position < leaf.start_pos:
return None, None, None
if leaf.type == 'string':
return return_part_of_leaf(leaf)
leaves = []
while leaf is not None:
if leaf.type == 'error_leaf' and ('"' in leaf.value or "'" in leaf.value):
if len(leaf.value) > 1:
return return_part_of_leaf(leaf)
prefix_leaf = None
if not leaf.prefix:
prefix_leaf = leaf.get_previous_leaf()
if prefix_leaf is None or prefix_leaf.type != 'name' \
or not all(c in 'rubf' for c in prefix_leaf.value.lower()):
prefix_leaf = None
return (
''.join(cut_value_at_position(l, position) for l in leaves),
prefix_leaf or leaf,
('' if prefix_leaf is None else prefix_leaf.value)
+ cut_value_at_position(leaf, position),
)
if leaf.line != position[0]:
# Multi line strings are always simple error leaves and contain the
# whole string; single-line error leaves are therefore important
# now and since the line is different, it's not really a single
# line string anymore.
break
leaves.insert(0, leaf)
leaf = leaf.get_previous_leaf()
return None, None, None
def complete_trailer(user_context, values):
completion_names = []
for value in values:
for filter in value.get_filters(origin_scope=user_context.tree_node):
completion_names += filter.values()
if not value.is_stub() and isinstance(value, TreeInstance):
completion_names += _complete_getattr(user_context, value)
python_values = convert_values(values)
for c in python_values:
if c not in values:
for filter in c.get_filters(origin_scope=user_context.tree_node):
completion_names += filter.values()
return completion_names
def _complete_getattr(user_context, instance):
"""
A heuristic to make completion for proxy objects work. This is not
intended to work in all cases. It works exactly in this case:
def __getattr__(self, name):
...
return getattr(any_object, name)
It is important that the return contains getattr directly, otherwise it
won't work anymore. It's really just a stupid heuristic. It will not
work if you write e.g. `return (getattr(o, name))`, because of the
additional parentheses. It will also not work if you move the getattr
to some other place that is not the return statement itself.
It is intentional that it doesn't work in all cases. Generally it's
really hard to do even this case (as you can see below). Most people
will write it like this anyway and the other ones, well they are just
out of luck I guess :) ~dave.
"""
names = (instance.get_function_slot_names(u'__getattr__')
or instance.get_function_slot_names(u'__getattribute__'))
functions = ValueSet.from_sets(
name.infer()
for name in names
)
for func in functions:
tree_node = func.tree_node
if tree_node is None or tree_node.type != 'funcdef':
continue
for return_stmt in tree_node.iter_return_stmts():
# Basically until the next comment we just try to find out if a
# return statement looks exactly like `return getattr(x, name)`.
if return_stmt.type != 'return_stmt':
continue
atom_expr = return_stmt.children[1]
if atom_expr.type != 'atom_expr':
continue
atom = atom_expr.children[0]
trailer = atom_expr.children[1]
if len(atom_expr.children) != 2 or atom.type != 'name' \
or atom.value != 'getattr':
continue
arglist = trailer.children[1]
if arglist.type != 'arglist' or len(arglist.children) < 3:
continue
context = func.as_context()
object_node = arglist.children[0]
# Make sure it's a param: foo in __getattr__(self, foo)
name_node = arglist.children[2]
name_list = context.goto(name_node, name_node.start_pos)
if not any(n.api_type == 'param' for n in name_list):
continue
# Now that we know that these are most probably completion
# objects, we just infer the object and return them as
# completions.
objects = context.infer_node(object_node)
return complete_trailer(user_context, objects)
return []
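# Sketch of the proxy pattern this heuristic is aimed at (the class names are
# illustrative): completing `Proxy().<cursor>` falls through __getattr__ and
# offers the attributes of the wrapped object.
#
#     class Wrapped:
#         def method(self):
#             pass
#
#     class Proxy:
#         def __init__(self):
#             self._wrapped = Wrapped()
#
#         def __getattr__(self, name):
#             return getattr(self._wrapped, name)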
def search_in_module(inference_state, module_context, names, wanted_names,
wanted_type, complete=False, fuzzy=False,
ignore_imports=False, convert=False):
for s in wanted_names[:-1]:
new_names = []
for n in names:
if s == n.string_name:
if n.tree_name is not None and n.api_type == 'module' \
and ignore_imports:
continue
new_names += complete_trailer(
module_context,
n.infer()
)
debug.dbg('dot lookup on search %s from %s', new_names, names[:10])
names = new_names
last_name = wanted_names[-1].lower()
for n in names:
string = n.string_name.lower()
if complete and helpers.match(string, last_name, fuzzy=fuzzy) \
or not complete and string == last_name:
if isinstance(n, SubModuleName):
names = [v.name for v in n.infer()]
else:
names = [n]
if convert:
names = convert_names(names)
for n2 in names:
if complete:
def_ = classes.Completion(
inference_state, n2,
stack=None,
like_name_length=len(last_name),
is_fuzzy=fuzzy,
)
else:
def_ = classes.Name(inference_state, n2)
if not wanted_type or wanted_type == def_.type:
yield def_
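# Usage sketch via the public search API that builds on this helper (the
# searched name is illustrative; Script.search was added in Jedi 0.17):
#
#     import jedi
#
#     script = jedi.Script('import json\n')
#     for name in script.search('json.load'):
#         print(name.name, name.type, name.module_name)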


@@ -0,0 +1,25 @@
_cache = {}
def save_entry(module_name, name, cache):
try:
module_cache = _cache[module_name]
except KeyError:
module_cache = _cache[module_name] = {}
module_cache[name] = cache
def _create_get_from_cache(number):
def _get_from_cache(module_name, name, get_cache_values):
try:
return _cache[module_name][name][number]
except KeyError:
v = get_cache_values()
save_entry(module_name, name, v)
return v[number]
return _get_from_cache
get_type = _create_get_from_cache(0)
get_docstring_signature = _create_get_from_cache(1)
get_docstring = _create_get_from_cache(2)
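# Usage sketch (the callback and values are hypothetical): the callback is
# evaluated once per (module_name, name) pair and must return a
# (type, docstring_signature, docstring) tuple, which the three accessors
# above then read by index.
#
#     def _expensive_lookup():
#         return ('function', 'dumps(obj)', 'Serialize obj to JSON.')
#
#     get_type('json', 'dumps', _expensive_lookup)       # -> 'function'
#     get_docstring('json', 'dumps', _expensive_lookup)  # -> 'Serialize obj to JSON.'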

jedi/api/environment.py (new file, 489 lines)

@@ -0,0 +1,489 @@
"""
Environments are a way to activate different Python versions or Virtualenvs for
static analysis. The Python binary in that environment is going to be executed.
"""
import os
import sys
import hashlib
import filecmp
from collections import namedtuple
from jedi._compatibility import highest_pickle_protocol, which
from jedi.cache import memoize_method, time_cache
from jedi.inference.compiled.subprocess import CompiledSubprocess, \
InferenceStateSameProcess, InferenceStateSubprocess
import parso
_VersionInfo = namedtuple('VersionInfo', 'major minor micro')
_SUPPORTED_PYTHONS = ['3.8', '3.7', '3.6', '3.5', '2.7']
_SAFE_PATHS = ['/usr/bin', '/usr/local/bin']
_CONDA_VAR = 'CONDA_PREFIX'
_CURRENT_VERSION = '%s.%s' % (sys.version_info.major, sys.version_info.minor)
class InvalidPythonEnvironment(Exception):
"""
If you see this exception, the Python executable or Virtualenv you have
been trying to use is probably not a correct Python version.
"""
class _BaseEnvironment(object):
@memoize_method
def get_grammar(self):
version_string = '%s.%s' % (self.version_info.major, self.version_info.minor)
return parso.load_grammar(version=version_string)
@property
def _sha256(self):
try:
return self._hash
except AttributeError:
self._hash = _calculate_sha256_for_file(self.executable)
return self._hash
def _get_info():
return (
sys.executable,
sys.prefix,
sys.version_info[:3],
)
class Environment(_BaseEnvironment):
"""
This class is supposed to be created by internal Jedi architecture. You
should not create it directly. Please use create_environment or the other
functions instead. It is then returned by that function.
"""
_subprocess = None
def __init__(self, executable, env_vars=None):
self._start_executable = executable
self._env_vars = env_vars
# Initialize the environment
self._get_subprocess()
def _get_subprocess(self):
if self._subprocess is not None and not self._subprocess.is_crashed:
return self._subprocess
try:
self._subprocess = CompiledSubprocess(self._start_executable,
env_vars=self._env_vars)
info = self._subprocess._send(None, _get_info)
except Exception as exc:
raise InvalidPythonEnvironment(
"Could not get version information for %r: %r" % (
self._start_executable,
exc))
# Since it could change and might not be the same(?) as the one given,
# set it here.
self.executable = info[0]
"""
The Python executable, matches ``sys.executable``.
"""
self.path = info[1]
"""
The path to an environment, matches ``sys.prefix``.
"""
self.version_info = _VersionInfo(*info[2])
"""
Like :data:`sys.version_info`: a tuple to show the current
Environment's Python version.
"""
# py2 sends bytes via pickle apparently?!
if self.version_info.major == 2:
self.executable = self.executable.decode()
self.path = self.path.decode()
# Adjust pickle protocol according to host and client version.
self._subprocess._pickle_protocol = highest_pickle_protocol([
sys.version_info, self.version_info])
return self._subprocess
def __repr__(self):
version = '.'.join(str(i) for i in self.version_info)
return '<%s: %s in %s>' % (self.__class__.__name__, version, self.path)
def get_inference_state_subprocess(self, inference_state):
return InferenceStateSubprocess(inference_state, self._get_subprocess())
@memoize_method
def get_sys_path(self):
"""
The sys path for this environment. Does not include potential
modifications from e.g. appending to :data:`sys.path`.
:returns: list of str
"""
# It's pretty much impossible to generate the sys path without actually
# executing Python. The sys path (when starting with -S) itself depends
# on how the Python version was compiled (ENV variables).
# If you omit -S when starting Python (normal case), additionally
# site.py gets executed.
return self._get_subprocess().get_sys_path()
class _SameEnvironmentMixin(object):
def __init__(self):
self._start_executable = self.executable = sys.executable
self.path = sys.prefix
self.version_info = _VersionInfo(*sys.version_info[:3])
self._env_vars = None
class SameEnvironment(_SameEnvironmentMixin, Environment):
pass
class InterpreterEnvironment(_SameEnvironmentMixin, _BaseEnvironment):
def get_inference_state_subprocess(self, inference_state):
return InferenceStateSameProcess(inference_state)
def get_sys_path(self):
return sys.path
def _get_virtual_env_from_var(env_var='VIRTUAL_ENV'):
"""Get virtualenv environment from VIRTUAL_ENV environment variable.
It uses `safe=False` with ``create_environment``, because the environment
variable is considered to be safe / controlled by the user solely.
"""
var = os.environ.get(env_var)
if var:
# Under macOS in some cases - notably when using Pipenv - the
# sys.prefix of the virtualenv is /path/to/env/bin/.. instead of
# /path/to/env so we need to fully resolve the paths in order to
# compare them.
if os.path.realpath(var) == os.path.realpath(sys.prefix):
return _try_get_same_env()
try:
return create_environment(var, safe=False)
except InvalidPythonEnvironment:
pass
def _calculate_sha256_for_file(path):
sha256 = hashlib.sha256()
with open(path, 'rb') as f:
for block in iter(lambda: f.read(filecmp.BUFSIZE), b''):
sha256.update(block)
return sha256.hexdigest()
def get_default_environment():
"""
Tries to return an active Virtualenv or conda environment.
If there is no VIRTUAL_ENV variable or no CONDA_PREFIX variable set,
it will return the latest Python version installed on the system. This
makes it possible to use as many new Python features as possible when using
autocompletion and other functionality.
:returns: :class:`.Environment`
"""
virtual_env = _get_virtual_env_from_var()
if virtual_env is not None:
return virtual_env
conda_env = _get_virtual_env_from_var(_CONDA_VAR)
if conda_env is not None:
return conda_env
return _try_get_same_env()
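# Usage sketch with Jedi's public API (the code and output are illustrative):
#
#     import jedi
#
#     env = jedi.get_default_environment()
#     print(env.executable, env.version_info)
#     script = jedi.Script('import json\njson.lo', environment=env)
#     print([c.name for c in script.complete()])  # e.g. ['load', 'loads']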
def _try_get_same_env():
env = SameEnvironment()
if not os.path.basename(env.executable).lower().startswith('python'):
# This tries to counter issues with embedding. In some cases (e.g.
# VIM's Python on Mac/Windows), sys.executable is /foo/bar/vim. This
# happens, because for Mac a function called `_NSGetExecutablePath` is
# used and for Windows `GetModuleFileNameW`. These are both platform
# specific functions. For all other systems sys.executable should be
# alright. However here we try to generalize:
#
# 1. Check if the executable looks like python (heuristic)
# 2. In case it's not try to find the executable
# 3. In case we don't find it use an interpreter environment.
#
# The last option will always work, but leads to potential crashes of
# Jedi - which is ok, because it happens very rarely and even less,
# because the code below should work for most cases.
if os.name == 'nt':
# The first case would be a virtualenv and the second a normal
# Python installation.
checks = (r'Scripts\python.exe', 'python.exe')
else:
# For unix it looks like Python is always in a bin folder.
checks = (
'bin/python%s.%s' % (sys.version_info[0], sys.version_info[1]),
'bin/python%s' % (sys.version_info[0]),
'bin/python',
)
for check in checks:
guess = os.path.join(sys.exec_prefix, check)
if os.path.isfile(guess):
# Bingo - We think we have our Python.
return Environment(guess)
# It looks like there is no reasonable Python to be found.
return InterpreterEnvironment()
# If no virtualenv is found, use the environment we're already
# using.
return env
def get_cached_default_environment():
var = os.environ.get('VIRTUAL_ENV') or os.environ.get(_CONDA_VAR)
environment = _get_cached_default_environment()
# Under macOS in some cases - notably when using Pipenv - the
# sys.prefix of the virtualenv is /path/to/env/bin/.. instead of
# /path/to/env so we need to fully resolve the paths in order to
# compare them.
if var and os.path.realpath(var) != os.path.realpath(environment.path):
_get_cached_default_environment.clear_cache()
return _get_cached_default_environment()
return environment
@time_cache(seconds=10 * 60) # 10 Minutes
def _get_cached_default_environment():
try:
return get_default_environment()
except InvalidPythonEnvironment:
# It's possible that `sys.executable` is wrong. Typically happens
# when Jedi is used in an executable that embeds Python. For further
# information, have a look at:
# https://github.com/davidhalter/jedi/issues/1531
return InterpreterEnvironment()
def find_virtualenvs(paths=None, **kwargs):
"""
:param paths: A list of paths in your file system to be scanned for
Virtualenvs. It will search in these paths and potentially execute the
Python binaries.
:param safe: Default True. In case this is False, it will allow this
function to execute potential `python` environments. An attacker might
be able to drop an executable in a path this function is searching by
default. If the executable has not been installed by root, it will not
be executed.
:param use_environment_vars: Default True. If True, the VIRTUAL_ENV
variable will be checked if it contains a valid VirtualEnv.
CONDA_PREFIX will be checked to see if it contains a valid conda
environment.
:yields: :class:`.Environment`
"""
def py27_comp(paths=None, safe=True, use_environment_vars=True):
if paths is None:
paths = []
_used_paths = set()
if use_environment_vars:
# Using this variable should be safe, because attackers might be
# able to drop files (via git) but not environment variables.
virtual_env = _get_virtual_env_from_var()
if virtual_env is not None:
yield virtual_env
_used_paths.add(virtual_env.path)
conda_env = _get_virtual_env_from_var(_CONDA_VAR)
if conda_env is not None:
yield conda_env
_used_paths.add(conda_env.path)
for directory in paths:
if not os.path.isdir(directory):
continue
directory = os.path.abspath(directory)
for path in os.listdir(directory):
path = os.path.join(directory, path)
if path in _used_paths:
# A path shouldn't be inferred twice.
continue
_used_paths.add(path)
try:
executable = _get_executable_path(path, safe=safe)
yield Environment(executable)
except InvalidPythonEnvironment:
pass
return py27_comp(paths, **kwargs)
def find_system_environments(**kwargs):
"""
Ignores virtualenvs and returns the Python versions that were installed on
your system. This might return nothing, if you're running Python e.g. from
a portable version.
The environments are sorted from latest to oldest Python version.
:yields: :class:`.Environment`
"""
for version_string in _SUPPORTED_PYTHONS:
try:
yield get_system_environment(version_string, **kwargs)
except InvalidPythonEnvironment:
pass
# TODO: this function should probably return a list of environments since
# multiple Python installations can be found on a system for the same version.
def get_system_environment(version, **kwargs):
"""
Return the first Python environment found for a string of the form 'X.Y'
where X and Y are the major and minor versions of Python.
:raises: :exc:`.InvalidPythonEnvironment`
:returns: :class:`.Environment`
"""
exe = which('python' + version)
if exe:
if exe == sys.executable:
return SameEnvironment()
return Environment(exe)
if os.name == 'nt':
for exe in _get_executables_from_windows_registry(version):
try:
return Environment(exe, **kwargs)
except InvalidPythonEnvironment:
pass
raise InvalidPythonEnvironment("Cannot find executable python%s." % version)
def create_environment(path, safe=True, **kwargs):
"""
Make it possible to manually create an Environment object by specifying a
Virtualenv path or an executable path and optional environment variables.
:raises: :exc:`.InvalidPythonEnvironment`
:returns: :class:`.Environment`
TODO: make env_vars a kwarg when Python 2 is dropped. For now, preserve API
"""
return _create_environment(path, safe, **kwargs)
def _create_environment(path, safe=True, env_vars=None):
if os.path.isfile(path):
_assert_safe(path, safe)
return Environment(path, env_vars=env_vars)
return Environment(_get_executable_path(path, safe=safe), env_vars=env_vars)
def _get_executable_path(path, safe=True):
"""
Raises InvalidPythonEnvironment if it's not actually a virtual env.
"""
if os.name == 'nt':
python = os.path.join(path, 'Scripts', 'python.exe')
else:
python = os.path.join(path, 'bin', 'python')
if not os.path.exists(python):
raise InvalidPythonEnvironment("%s seems to be missing." % python)
_assert_safe(python, safe)
return python
def _get_executables_from_windows_registry(version):
# The winreg module is named _winreg on Python 2.
try:
import winreg
except ImportError:
import _winreg as winreg
# TODO: support Python Anaconda.
sub_keys = [
r'SOFTWARE\Python\PythonCore\{version}\InstallPath',
r'SOFTWARE\Wow6432Node\Python\PythonCore\{version}\InstallPath',
r'SOFTWARE\Python\PythonCore\{version}-32\InstallPath',
r'SOFTWARE\Wow6432Node\Python\PythonCore\{version}-32\InstallPath'
]
for root_key in [winreg.HKEY_CURRENT_USER, winreg.HKEY_LOCAL_MACHINE]:
for sub_key in sub_keys:
sub_key = sub_key.format(version=version)
try:
with winreg.OpenKey(root_key, sub_key) as key:
prefix = winreg.QueryValueEx(key, '')[0]
exe = os.path.join(prefix, 'python.exe')
if os.path.isfile(exe):
yield exe
except WindowsError:
pass
def _assert_safe(executable_path, safe):
if safe and not _is_safe(executable_path):
raise InvalidPythonEnvironment(
"The python binary is potentially unsafe.")
def _is_safe(executable_path):
# Resolve sym links. A venv typically is a symlink to a known Python
# binary. Only virtualenvs copy symlinks around.
real_path = os.path.realpath(executable_path)
if _is_unix_safe_simple(real_path):
return True
# Just check the list of known Python versions. If it's not in there,
# it's likely an attacker or some Python that was not properly
# installed in the system.
for environment in find_system_environments():
if environment.executable == real_path:
return True
# If the versions don't match, just compare the binary files. If we
# don't do that, only venvs will be working and not virtualenvs.
# venvs are symlinks while virtualenvs are actual copies of the
# Python files.
# This still means that if the system Python is updated and the
# virtualenv's Python is not (which is probably never going to get
# upgraded), it will not work with Jedi. IMO that's fine, because
# people should just be using venv. ~ dave
if environment._sha256 == _calculate_sha256_for_file(real_path):
return True
return False
def _is_unix_safe_simple(real_path):
if _is_unix_admin():
# In case we are root, just be conservative and
# only execute known paths.
return any(real_path.startswith(p) for p in _SAFE_PATHS)
uid = os.stat(real_path).st_uid
# The interpreter needs to be owned by root. This means that it wasn't
# written by a user and therefore attacking Jedi is not as simple.
# The attack could look like the following:
# 1. A user clones a repository.
# 2. The repository has an innocent looking folder called foobar. jedi
# searches for the folder and executes foobar/bin/python --version if
# there's also a foobar/bin/activate.
# 3. The attacker has gained code execution, since he controls
# foobar/bin/python.
return uid == 0
def _is_unix_admin():
try:
return os.getuid() == 0
except AttributeError:
return False # Windows

jedi/api/errors.py (new file, 46 lines)

@@ -0,0 +1,46 @@
"""
This file is about errors in Python files and not about exception handling in
Jedi.
"""
def parso_to_jedi_errors(grammar, module_node):
return [SyntaxError(e) for e in grammar.iter_errors(module_node)]
class SyntaxError(object):
"""
Syntax errors are generated by :meth:`.Script.get_syntax_errors`.
"""
def __init__(self, parso_error):
self._parso_error = parso_error
@property
def line(self):
"""The line where the error starts (starting with 1)."""
return self._parso_error.start_pos[0]
@property
def column(self):
"""The column where the error starts (starting with 0)."""
return self._parso_error.start_pos[1]
@property
def until_line(self):
"""The line where the error ends (starting with 1)."""
return self._parso_error.end_pos[0]
@property
def until_column(self):
"""The column where the error ends (starting with 0)."""
return self._parso_error.end_pos[1]
def get_message(self):
return self._parso_error.message
def __repr__(self):
return '<%s from=%s to=%s>' % (
self.__class__.__name__,
self._parso_error.start_pos,
self._parso_error.end_pos,
)
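# Usage sketch via the public API referenced in the docstring above (the code
# snippet being checked is illustrative):
#
#     import jedi
#
#     for error in jedi.Script('def broken(:\n    pass\n').get_syntax_errors():
#         print(error.line, error.column, error.get_message())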

jedi/api/exceptions.py (new file, 31 lines)

@@ -0,0 +1,31 @@
class _JediError(Exception):
pass
class InternalError(_JediError):
"""
This error might happen when a subprocess crashes. The reason for this is
usually broken C code in third party libraries. This is not a very common
thing and it is safe to use Jedi again. However using the same calls might
result in the same error again.
"""
class WrongVersion(_JediError):
"""
This error is reserved for the future; it shouldn't really happen at the
moment.
"""
class RefactoringError(_JediError):
"""
Refactorings can fail for various reasons. So if you work with refactorings
like :meth:`.Script.rename`, :meth:`.Script.inline`,
:meth:`.Script.extract_variable` and :meth:`.Script.extract_function`, make
sure to catch these. The descriptions in the errors are usually valuable
for end users.
A typical ``RefactoringError`` would tell the user that inlining is not
possible if no name is under the cursor.
"""

jedi/api/file_name.py

@@ -0,0 +1,156 @@
import os
from jedi._compatibility import FileNotFoundError, force_unicode, scandir
from jedi.api import classes
from jedi.api.strings import StringName, get_quote_ending
from jedi.api.helpers import match
from jedi.inference.helpers import get_str_or_none
class PathName(StringName):
api_type = u'path'
def complete_file_name(inference_state, module_context, start_leaf, quote, string,
like_name, signatures_callback, code_lines, position, fuzzy):
# First we want to find out what can actually be changed as a name.
like_name_length = len(os.path.basename(string))
addition = _get_string_additions(module_context, start_leaf)
if string.startswith('~'):
string = os.path.expanduser(string)
if addition is None:
return
string = addition + string
# Here we use basename again, because if strings are added like
# `'foo' + 'bar`, it should complete to `foobar/`.
must_start_with = os.path.basename(string)
string = os.path.dirname(string)
sigs = signatures_callback(*position)
is_in_os_path_join = sigs and all(s.full_name == 'os.path.join' for s in sigs)
if is_in_os_path_join:
to_be_added = _add_os_path_join(module_context, start_leaf, sigs[0].bracket_start)
if to_be_added is None:
is_in_os_path_join = False
else:
string = to_be_added + string
base_path = os.path.join(inference_state.project.path, string)
try:
listed = sorted(scandir(base_path), key=lambda e: e.name)
# OSError: [Errno 36] File name too long: '...'
except (FileNotFoundError, OSError):
return
quote_ending = get_quote_ending(quote, code_lines, position)
for entry in listed:
name = entry.name
if match(name, must_start_with, fuzzy=fuzzy):
if is_in_os_path_join or not entry.is_dir():
name += quote_ending
else:
name += os.path.sep
yield classes.Completion(
inference_state,
PathName(inference_state, name[len(must_start_with) - like_name_length:]),
stack=None,
like_name_length=like_name_length,
is_fuzzy=fuzzy,
)
def _get_string_additions(module_context, start_leaf):
def iterate_nodes():
node = addition.parent
was_addition = True
for child_node in reversed(node.children[:node.children.index(addition)]):
if was_addition:
was_addition = False
yield child_node
continue
if child_node != '+':
break
was_addition = True
addition = start_leaf.get_previous_leaf()
if addition != '+':
return ''
context = module_context.create_context(start_leaf)
return _add_strings(context, reversed(list(iterate_nodes())))
def _add_strings(context, nodes, add_slash=False):
string = ''
first = True
for child_node in nodes:
values = context.infer_node(child_node)
if len(values) != 1:
return None
c, = values
s = get_str_or_none(c)
if s is None:
return None
if not first and add_slash:
string += os.path.sep
string += force_unicode(s)
first = False
return string
def _add_os_path_join(module_context, start_leaf, bracket_start):
def check(maybe_bracket, nodes):
if maybe_bracket.start_pos != bracket_start:
return None
if not nodes:
return ''
context = module_context.create_context(nodes[0])
return _add_strings(context, nodes, add_slash=True) or ''
if start_leaf.type == 'error_leaf':
# Unfinished string literal, like `join('`
value_node = start_leaf.parent
index = value_node.children.index(start_leaf)
if index > 0:
error_node = value_node.children[index - 1]
if error_node.type == 'error_node' and len(error_node.children) >= 2:
index = -2
if error_node.children[-1].type == 'arglist':
arglist_nodes = error_node.children[-1].children
index -= 1
else:
arglist_nodes = []
return check(error_node.children[index + 1], arglist_nodes[::2])
return None
# Maybe an arglist or some weird error case. Therefore checked below.
searched_node_child = start_leaf
while searched_node_child.parent is not None \
and searched_node_child.parent.type not in ('arglist', 'trailer', 'error_node'):
searched_node_child = searched_node_child.parent
if searched_node_child.get_first_leaf() is not start_leaf:
return None
searched_node = searched_node_child.parent
if searched_node is None:
return None
index = searched_node.children.index(searched_node_child)
arglist_nodes = searched_node.children[:index]
if searched_node.type == 'arglist':
trailer = searched_node.parent
if trailer.type == 'error_node':
trailer_index = trailer.children.index(searched_node)
assert trailer_index >= 2
assert trailer.children[trailer_index - 1] == '('
return check(trailer.children[trailer_index - 1], arglist_nodes[::2])
elif trailer.type == 'trailer':
return check(trailer.children[0], arglist_nodes[::2])
elif searched_node.type == 'trailer':
return check(searched_node.children[0], [])
elif searched_node.type == 'error_node':
# Stuff like `join(""`
return check(arglist_nodes[-1], [])
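As a rough usage sketch, the code above is what provides completions when the cursor sits inside a string that looks like a path (the directory layout implied by the comments is an assumption):

import jedi

source = "value = open('./src/ut"
completions = jedi.Script(source).complete(line=1, column=len(source))
# Each completion is a PathName; directories get os.sep appended, plain
# files get the closing quote appended (see complete_file_name above).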


@@ -4,22 +4,47 @@ Helpers for the API
import re
from collections import namedtuple
from textwrap import dedent
from itertools import chain
from functools import wraps
from parso.python.parser import Parser
from parso.python import tree
from parso import split_lines
from jedi._compatibility import u
from jedi.evaluate.helpers import evaluate_call_of_leaf
from jedi.cache import time_cache
from jedi._compatibility import u, Parameter
from jedi.inference.base_value import NO_VALUES
from jedi.inference.syntax_tree import infer_atom
from jedi.inference.helpers import infer_call_of_leaf
from jedi.inference.compiled import get_string_value_set
from jedi.cache import signature_time_cache, memoize_method
from jedi.parser_utils import get_parent_scope
CompletionParts = namedtuple('CompletionParts', ['path', 'has_dot', 'name'])
def _start_match(string, like_name):
return string.startswith(like_name)
def _fuzzy_match(string, like_name):
if len(like_name) <= 1:
return like_name in string
pos = string.find(like_name[0])
if pos >= 0:
return _fuzzy_match(string[pos + 1:], like_name[1:])
return False
def match(string, like_name, fuzzy=False):
if fuzzy:
return _fuzzy_match(string, like_name)
else:
return _start_match(string, like_name)
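A quick illustration of the matching helpers above; fuzzy matching only requires the characters of like_name to appear in order:

from jedi.api.helpers import match

assert match('collections', 'coll')                # prefix match
assert match('collections', 'cltn', fuzzy=True)    # c..l..t..n appear in order
assert not match('collections', 'xyz', fuzzy=True)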
def sorted_definitions(defs):
# Note: `or ''` below is required because `module_path` could be None.
return sorted(defs, key=lambda x: (x.module_path or '', x.line or 0, x.column or 0))
return sorted(defs, key=lambda x: (x.module_path or '', x.line or 0, x.column or 0, x.name))
def get_on_completion_name(module_node, lines, position):
@@ -43,7 +68,7 @@ def _get_code(code_lines, start_pos, end_pos):
lines[-1] = lines[-1][:end_pos[1]]
# Remove first line indentation.
lines[0] = lines[0][start_pos[1]:]
return '\n'.join(lines)
return ''.join(lines)
class OnErrorLeaf(Exception):
@@ -52,28 +77,10 @@ class OnErrorLeaf(Exception):
return self.args[0]
def _is_on_comment(leaf, position):
comment_lines = split_lines(leaf.prefix)
difference = leaf.start_pos[0] - position[0]
prefix_start_pos = leaf.get_start_pos_of_prefix()
if difference == 0:
indent = leaf.start_pos[1]
elif position[0] == prefix_start_pos[0]:
indent = prefix_start_pos[1]
else:
indent = 0
line = comment_lines[-difference - 1][:position[1] - indent]
return '#' in line
def _get_code_for_stack(code_lines, module_node, position):
leaf = module_node.get_leaf_for_position(position, include_prefixes=True)
def _get_code_for_stack(code_lines, leaf, position):
# It might happen that we're on whitespace or on a comment. This means
# that we would not get the right leaf.
if leaf.start_pos >= position:
if _is_on_comment(leaf, position):
return u('')
# If we're not on a comment simply get the previous leaf and proceed.
leaf = leaf.get_previous_leaf()
if leaf is None:
@@ -103,14 +110,14 @@ def _get_code_for_stack(code_lines, module_node, position):
if is_after_newline:
if user_stmt.start_pos[1] > position[1]:
# This means that it's actually a dedent and that means that we
# start without context (part of a suite).
# start without value (part of a suite).
return u('')
# This is basically getting the relevant lines.
return _get_code(code_lines, user_stmt.get_start_pos_of_prefix(), position)
def get_stack_at_position(grammar, code_lines, module_node, pos):
def get_stack_at_position(grammar, code_lines, leaf, pos):
"""
Returns the possible node names (e.g. import_from, xor_test or yield_stmt).
"""
@@ -121,99 +128,229 @@ def get_stack_at_position(grammar, code_lines, module_node, pos):
# TODO This is for now not an official parso API that exists purely
# for Jedi.
tokens = grammar._tokenize(code)
for token_ in tokens:
if token_.string == safeword:
for token in tokens:
if token.string == safeword:
raise EndMarkerReached()
elif token.prefix.endswith(safeword):
# This happens with comments.
raise EndMarkerReached()
elif token.string.endswith(safeword):
yield token # Probably an f-string literal that was not finished.
raise EndMarkerReached()
else:
yield token_
yield token
# The code might be indented; just remove the indentation.
code = dedent(_get_code_for_stack(code_lines, module_node, pos))
code = dedent(_get_code_for_stack(code_lines, leaf, pos))
# We use a word to tell Jedi when we have reached the start of the
# completion.
# Use Z as a prefix because it's not part of a number suffix.
safeword = 'ZZZ_USER_WANTS_TO_COMPLETE_HERE_WITH_JEDI'
code = code + safeword
code = code + ' ' + safeword
p = Parser(grammar._pgen_grammar, error_recovery=True)
try:
p.parse(tokens=tokenize_without_endmarker(code))
except EndMarkerReached:
return Stack(p.pgen_parser.stack)
raise SystemError("This really shouldn't happen. There's a bug in Jedi.")
return p.stack
raise SystemError(
"This really shouldn't happen. There's a bug in Jedi:\n%s"
% list(tokenize_without_endmarker(code))
)
class Stack(list):
def get_node_names(self, grammar):
for dfa, state, (node_number, nodes) in self:
yield grammar.number2symbol[node_number]
def get_nodes(self):
for dfa, state, (node_number, nodes) in self:
for node in nodes:
yield node
def get_possible_completion_types(pgen_grammar, stack):
def add_results(label_index):
try:
grammar_labels.append(inversed_tokens[label_index])
except KeyError:
try:
keywords.append(inversed_keywords[label_index])
except KeyError:
t, v = pgen_grammar.labels[label_index]
assert t >= 256
# See if it's a symbol and if we're in its first set
inversed_keywords
itsdfa = pgen_grammar.dfas[t]
itsstates, itsfirst = itsdfa
for first_label_index in itsfirst.keys():
add_results(first_label_index)
inversed_keywords = dict((v, k) for k, v in pgen_grammar.keywords.items())
inversed_tokens = dict((v, k) for k, v in pgen_grammar.tokens.items())
keywords = []
grammar_labels = []
def scan_stack(index):
dfa, state, node = stack[index]
states, first = dfa
arcs = states[state]
for label_index, new_state in arcs:
if label_index == 0:
# An accepting state, check the stack below.
scan_stack(index - 1)
else:
add_results(label_index)
scan_stack(-1)
return keywords, grammar_labels
def evaluate_goto_definition(evaluator, context, leaf):
def infer(inference_state, context, leaf):
if leaf.type == 'name':
# In case of a name we can just use goto_definition which does all the
# magic itself.
return evaluator.goto_definitions(context, leaf)
return inference_state.infer(context, leaf)
parent = leaf.parent
definitions = NO_VALUES
if parent.type == 'atom':
return context.eval_node(leaf.parent)
# e.g. `(a + b)`
definitions = context.infer_node(leaf.parent)
elif parent.type == 'trailer':
return evaluate_call_of_leaf(context, leaf)
# e.g. `a()`
definitions = infer_call_of_leaf(context, leaf)
elif isinstance(leaf, tree.Literal):
return context.evaluator.eval_atom(context, leaf)
return []
# e.g. `"foo"` or `1.0`
return infer_atom(context, leaf)
elif leaf.type in ('fstring_string', 'fstring_start', 'fstring_end'):
return get_string_value_set(inference_state)
return definitions
CallSignatureDetails = namedtuple(
'CallSignatureDetails',
['bracket_leaf', 'call_index', 'keyword_name_str']
)
def filter_follow_imports(names, follow_builtin_imports=False):
for name in names:
if name.is_import():
new_names = list(filter_follow_imports(
name.goto(),
follow_builtin_imports=follow_builtin_imports,
))
found_builtin = False
if follow_builtin_imports:
for new_name in new_names:
if new_name.start_pos is None:
found_builtin = True
if found_builtin:
yield name
else:
for new_name in new_names:
yield new_name
else:
yield name
class CallDetails(object):
def __init__(self, bracket_leaf, children, position):
['bracket_leaf', 'call_index', 'keyword_name_str']
self.bracket_leaf = bracket_leaf
self._children = children
self._position = position
@property
def index(self):
return _get_index_and_key(self._children, self._position)[0]
@property
def keyword_name_str(self):
return _get_index_and_key(self._children, self._position)[1]
@memoize_method
def _list_arguments(self):
return list(_iter_arguments(self._children, self._position))
def calculate_index(self, param_names):
positional_count = 0
used_names = set()
star_count = -1
args = self._list_arguments()
if not args:
if param_names:
return 0
else:
return None
is_kwarg = False
for i, (star_count, key_start, had_equal) in enumerate(args):
is_kwarg |= had_equal | (star_count == 2)
if star_count:
pass # For now do nothing, we don't know what's in there here.
else:
if i + 1 != len(args): # Not last
if had_equal:
used_names.add(key_start)
else:
positional_count += 1
for i, param_name in enumerate(param_names):
kind = param_name.get_kind()
if not is_kwarg:
if kind == Parameter.VAR_POSITIONAL:
return i
if kind in (Parameter.POSITIONAL_OR_KEYWORD, Parameter.POSITIONAL_ONLY):
if i == positional_count:
return i
if key_start is not None and not star_count == 1 or star_count == 2:
if param_name.string_name not in used_names \
and (kind == Parameter.KEYWORD_ONLY
or kind == Parameter.POSITIONAL_OR_KEYWORD
and positional_count <= i):
if star_count:
return i
if had_equal:
if param_name.string_name == key_start:
return i
else:
if param_name.string_name.startswith(key_start):
return i
if kind == Parameter.VAR_KEYWORD:
return i
return None
def iter_used_keyword_arguments(self):
for star_count, key_start, had_equal in list(self._list_arguments()):
if had_equal and key_start:
yield key_start
def count_positional_arguments(self):
count = 0
for star_count, key_start, had_equal in self._list_arguments()[:-1]:
if star_count:
break
count += 1
return count
def _iter_arguments(nodes, position):
def remove_after_pos(name):
if name.type != 'name':
return None
return name.value[:position[1] - name.start_pos[1]]
# Returns Generator[Tuple[star_count, Optional[key_start: str], had_equal]]
nodes_before = [c for c in nodes if c.start_pos < position]
if nodes_before[-1].type == 'arglist':
for x in _iter_arguments(nodes_before[-1].children, position):
yield x # Python 2 :(
return
previous_node_yielded = False
stars_seen = 0
for i, node in enumerate(nodes_before):
if node.type == 'argument':
previous_node_yielded = True
first = node.children[0]
second = node.children[1]
if second == '=':
if second.start_pos < position:
yield 0, first.value, True
else:
yield 0, remove_after_pos(first), False
elif first in ('*', '**'):
yield len(first.value), remove_after_pos(second), False
else:
# Must be a Comprehension
first_leaf = node.get_first_leaf()
if first_leaf.type == 'name' and first_leaf.start_pos >= position:
yield 0, remove_after_pos(first_leaf), False
else:
yield 0, None, False
stars_seen = 0
elif node.type in ('testlist', 'testlist_star_expr'): # testlist is Python 2
for n in node.children[::2]:
if n.type == 'star_expr':
stars_seen = 1
n = n.children[1]
yield stars_seen, remove_after_pos(n), False
stars_seen = 0
# The count of children is even if there's a comma at the end.
previous_node_yielded = bool(len(node.children) % 2)
elif isinstance(node, tree.PythonLeaf) and node.value == ',':
if not previous_node_yielded:
yield stars_seen, '', False
stars_seen = 0
previous_node_yielded = False
elif isinstance(node, tree.PythonLeaf) and node.value in ('*', '**'):
stars_seen = len(node.value)
elif node == '=' and nodes_before[-1]:
previous_node_yielded = True
before = nodes_before[i - 1]
if before.type == 'name':
yield 0, before.value, True
else:
yield 0, None, False
# Just ignore the star that is probably a syntax error.
stars_seen = 0
if not previous_node_yielded:
if nodes_before[-1].type == 'name':
yield stars_seen, remove_after_pos(nodes_before[-1]), False
else:
yield stars_seen, '', False
def _get_index_and_key(nodes, position):
@@ -222,22 +359,22 @@ def _get_index_and_key(nodes, position):
"""
nodes_before = [c for c in nodes if c.start_pos < position]
if nodes_before[-1].type == 'arglist':
nodes_before = [c for c in nodes_before[-1].children if c.start_pos < position]
return _get_index_and_key(nodes_before[-1].children, position)
key_str = None
if nodes_before:
last = nodes_before[-1]
if last.type == 'argument' and last.children[1].end_pos <= position:
# Checked if the argument
key_str = last.children[0].value
elif last == '=':
key_str = nodes_before[-2].value
last = nodes_before[-1]
if last.type == 'argument' and last.children[1] == '=' \
and last.children[1].end_pos <= position:
# Checked if the argument
key_str = last.children[0].value
elif last == '=':
key_str = nodes_before[-2].value
return nodes_before.count(','), key_str
def _get_call_signature_details_from_error_node(node, position):
def _get_signature_details_from_error_node(node, additional_children, position):
for index, element in reversed(list(enumerate(node.children))):
# `index > 0` means that it's a trailer and not an atom.
if element == '(' and element.end_pos <= position and index > 0:
@@ -248,59 +385,72 @@ def _get_call_signature_details_from_error_node(node, position):
if name is None:
continue
if name.type == 'name' or name.parent.type in ('trailer', 'atom'):
return CallSignatureDetails(
element,
*_get_index_and_key(children, position)
)
return CallDetails(element, children + additional_children, position)
def get_call_signature_details(module, position):
def get_signature_details(module, position):
leaf = module.get_leaf_for_position(position, include_prefixes=True)
# It's easier to deal with the previous token than the next one in this
# case.
if leaf.start_pos >= position:
# Whitespace / comments after the leaf count towards the previous leaf.
leaf = leaf.get_previous_leaf()
if leaf is None:
return None
if leaf == ')':
if leaf.end_pos == position:
leaf = leaf.get_next_leaf()
# Now that we know where we are in the syntax tree, we start to look at
# parents for possible function definitions.
node = leaf.parent
while node is not None:
if node.type in ('funcdef', 'classdef'):
# Don't show call signatures if there's stuff before it that just
# makes it feel strange to have a call signature.
if node.type in ('funcdef', 'classdef', 'decorated', 'async_stmt'):
# Don't show signatures if there's stuff before it that just
# makes it feel strange to have a signature.
return None
for n in node.children[::-1]:
if n.start_pos < position and n.type == 'error_node':
result = _get_call_signature_details_from_error_node(n, position)
if result is not None:
return result
additional_children = []
for n in reversed(node.children):
if n.start_pos < position:
if n.type == 'error_node':
result = _get_signature_details_from_error_node(
n, additional_children, position
)
if result is not None:
return result
if node.type == 'trailer' and node.children[0] == '(':
leaf = node.get_previous_leaf()
if leaf is None:
return None
return CallSignatureDetails(
node.children[0], *_get_index_and_key(node.children, position))
additional_children[0:0] = n.children
continue
additional_children.insert(0, n)
# Find a valid trailer
if node.type == 'trailer' and node.children[0] == '(' \
or node.type == 'decorator' and node.children[2] == '(':
# Additionally we have to check that an ending parenthesis isn't
# interpreted wrong. There are two cases:
# 1. Cursor before paren -> The current signature is good
# 2. Cursor after paren -> We need to skip the current signature
if not (leaf is node.children[-1] and position >= leaf.end_pos):
leaf = node.get_previous_leaf()
if leaf is None:
return None
return CallDetails(
node.children[0] if node.type == 'trailer' else node.children[2],
node.children,
position
)
node = node.parent
return None
@time_cache("call_signatures_validity")
def cache_call_signatures(evaluator, context, bracket_leaf, code_lines, user_pos):
@signature_time_cache("call_signatures_validity")
def cache_signatures(inference_state, context, bracket_leaf, code_lines, user_pos):
"""This function calculates the cache key."""
index = user_pos[0] - 1
line_index = user_pos[0] - 1
before_cursor = code_lines[index][:user_pos[1]]
other_lines = code_lines[bracket_leaf.start_pos[0]:index]
whole = '\n'.join(other_lines + [before_cursor])
before_cursor = code_lines[line_index][:user_pos[1]]
other_lines = code_lines[bracket_leaf.start_pos[0]:line_index]
whole = ''.join(other_lines + [before_cursor])
before_bracket = re.match(r'.*\(', whole, re.DOTALL)
module_path = context.get_root_context().py__file__()
@@ -308,8 +458,65 @@ def cache_call_signatures(evaluator, context, bracket_leaf, code_lines, user_pos
yield None # Don't cache!
else:
yield (module_path, before_bracket, bracket_leaf.start_pos)
yield evaluate_goto_definition(
evaluator,
yield infer(
inference_state,
context,
bracket_leaf.get_previous_leaf()
bracket_leaf.get_previous_leaf(),
)
def validate_line_column(func):
@wraps(func)
def wrapper(self, line=None, column=None, *args, **kwargs):
line = max(len(self._code_lines), 1) if line is None else line
if not (0 < line <= len(self._code_lines)):
raise ValueError('`line` parameter is not in a valid range.')
line_string = self._code_lines[line - 1]
line_len = len(line_string)
if line_string.endswith('\r\n'):
line_len -= 2
elif line_string.endswith('\n'):
line_len -= 1
column = line_len if column is None else column
if not (0 <= column <= line_len):
raise ValueError('`column` parameter (%d) is not in a valid range '
'(0-%d) for line %d (%r).' % (
column, line_len, line, line_string))
return func(self, line, column, *args, **kwargs)
return wrapper
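A small behavior sketch for the decorator above: omitted line/column default to the end of the source, while out-of-range values raise ValueError (the source string is an assumption):

import jedi

script = jedi.Script('x = 1\ny = 2\n')
script.complete()            # line/column are filled in by validate_line_column
try:
    script.complete(line=99)
except ValueError as e:
    print(e)                 # "`line` parameter is not in a valid range."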
def get_module_names(module, all_scopes, definitions=True, references=False):
"""
Returns the names used in the module, optionally restricted to the module
level and filtered to definitions and/or references.
"""
def def_ref_filter(name):
is_def = name.is_definition()
return definitions and is_def or references and not is_def
names = list(chain.from_iterable(module.get_used_names().values()))
if not all_scopes:
# We have to filter all the names that don't have the module as a
# parent_scope. There's None as a parent, because nodes in the module
# node have the parent module and not suite as all the others.
# Therefore it's important to catch that case.
def is_module_scope_name(name):
parent_scope = get_parent_scope(name)
# async functions have an extra wrapper. Strip it.
if parent_scope and parent_scope.type == 'async_stmt':
parent_scope = parent_scope.parent
return parent_scope in (module, None)
names = [n for n in names if is_module_scope_name(n)]
return filter(def_ref_filter, names)
def split_search_string(name):
type, _, dotted_names = name.rpartition(' ')
if type == 'def':
type = 'function'
return type, dotted_names.split('.')
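For illustration, the helper above splits a search string into an optional API type and the dotted name parts:

from jedi.api.helpers import split_search_string

assert split_search_string('foo.bar') == ('', ['foo', 'bar'])
assert split_search_string('def foo.bar.baz') == ('function', ['foo', 'bar', 'baz'])
assert split_search_string('class foo.Bar') == ('class', ['foo', 'Bar'])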


@@ -2,10 +2,19 @@
TODO Some parts of this module are still not well documented.
"""
from jedi.evaluate.representation import ModuleContext
from jedi.evaluate import compiled
from jedi.evaluate.compiled import mixed
from jedi.evaluate.context import Context
from jedi.inference import compiled
from jedi.inference.base_value import ValueSet
from jedi.inference.filters import ParserTreeFilter, MergedFilter
from jedi.inference.names import TreeNameDefinition
from jedi.inference.compiled import mixed
from jedi.inference.compiled.access import create_access_path
from jedi.inference.context import ModuleContext
def _create(inference_state, obj):
return compiled.create_from_access_path(
inference_state, create_access_path(inference_state, obj)
)
class NamespaceObject(object):
@@ -13,35 +22,54 @@ class NamespaceObject(object):
self.__dict__ = dct
class MixedModuleContext(Context):
resets_positions = True
type = 'mixed_module'
class MixedTreeName(TreeNameDefinition):
def infer(self):
"""
In an IPython notebook it is typical that some parts of the provided code
were already executed. In that case, if something cannot be inferred from
the source, it should still be inferred from the variables that are already
known.
"""
inferred = super(MixedTreeName, self).infer()
if not inferred:
for compiled_value in self.parent_context.mixed_values:
for f in compiled_value.get_filters():
values = ValueSet.from_sets(
n.infer() for n in f.get(self.string_name)
)
if values:
return values
return inferred
def __init__(self, evaluator, tree_module, namespaces, path):
self.evaluator = evaluator
self._namespaces = namespaces
self._namespace_objects = [NamespaceObject(n) for n in namespaces]
self._module_context = ModuleContext(evaluator, tree_module, path=path)
self.tree_node = tree_module
class MixedParserTreeFilter(ParserTreeFilter):
name_class = MixedTreeName
def get_node(self):
return self.tree_node
def get_filters(self, *args, **kwargs):
for filter in self._module_context.get_filters(*args, **kwargs):
yield filter
class MixedModuleContext(ModuleContext):
def __init__(self, tree_module_value, namespaces):
super(MixedModuleContext, self).__init__(tree_module_value)
self.mixed_values = [
self._get_mixed_object(
_create(self.inference_state, NamespaceObject(n))
) for n in namespaces
]
for namespace_obj in self._namespace_objects:
compiled_object = compiled.create(self.evaluator, namespace_obj)
mixed_object = mixed.MixedObject(
self.evaluator,
def _get_mixed_object(self, compiled_value):
return mixed.MixedObject(
compiled_value=compiled_value,
tree_value=self._value
)
def get_filters(self, until_position=None, origin_scope=None):
yield MergedFilter(
MixedParserTreeFilter(
parent_context=self,
compiled_object=compiled_object,
tree_context=self._module_context
)
for filter in mixed_object.get_filters(*args, **kwargs):
yield filter
until_position=until_position,
origin_scope=origin_scope
),
self.get_global_filter(),
)
def __getattr__(self, name):
return getattr(self._module_context, name)
for mixed_object in self.mixed_values:
for filter in mixed_object.get_filters(until_position, origin_scope):
yield filter
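A usage sketch for the mixed module above: jedi.Interpreter merges a live namespace with the source being completed, which is the Jupyter/IPython case described in the MixedTreeName docstring (the namespace contents are assumptions):

import jedi

namespace = {'loaded_data': {'rows': [1, 2, 3]}}
interp = jedi.Interpreter('loaded_data.', [namespace])
print([c.name for c in interp.complete()])   # dict methods such as 'items', 'keys'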


@@ -1,10 +1,7 @@
import pydoc
import keyword
from jedi._compatibility import is_py3, is_py35
from jedi import common
from jedi.evaluate.filters import AbstractNameDefinition
from parso.python.tree import Leaf
from jedi.inference.utils import ignored
from jedi.inference.names import AbstractArbitraryName
try:
from pydoc_data import topics as pydoc_topics
@@ -17,98 +14,12 @@ except ImportError:
# pydoc_data module in its file python3x.zip.
pydoc_topics = None
if is_py3:
if is_py35:
# In Python 3.5, async and await are not proper keywords, but for
# completion purposes they should act as though they are.
keys = keyword.kwlist + ["async", "await"]
else:
keys = keyword.kwlist
else:
keys = keyword.kwlist + ['None', 'False', 'True']
class KeywordName(AbstractArbitraryName):
api_type = u'keyword'
def has_inappropriate_leaf_keyword(pos, module):
relevant_errors = filter(
lambda error: error.first_pos[0] == pos[0],
module.error_statement_stacks)
for error in relevant_errors:
if error.next_token in keys:
return True
return False
def completion_names(evaluator, stmt, pos, module):
keyword_list = all_keywords(evaluator)
if not isinstance(stmt, Leaf) or has_inappropriate_leaf_keyword(pos, module):
keyword_list = filter(
lambda keyword: not keyword.only_valid_as_leaf,
keyword_list
)
return [keyword.name for keyword in keyword_list]
def all_keywords(evaluator, pos=(0, 0)):
return set([Keyword(evaluator, k, pos) for k in keys])
def keyword(evaluator, string, pos=(0, 0)):
if string in keys:
return Keyword(evaluator, string, pos)
else:
return None
def get_operator(evaluator, string, pos):
return Keyword(evaluator, string, pos)
keywords_only_valid_as_leaf = (
'continue',
'break',
)
class KeywordName(AbstractNameDefinition):
api_type = 'keyword'
def __init__(self, evaluator, name):
self.evaluator = evaluator
self.string_name = name
self.parent_context = evaluator.BUILTINS
def eval(self):
return set()
def infer(self):
return [Keyword(self.evaluator, self.string_name, (0, 0))]
class Keyword(object):
api_type = 'keyword'
def __init__(self, evaluator, name, pos):
self.name = KeywordName(evaluator, name)
self.start_pos = pos
self.parent = evaluator.BUILTINS
@property
def only_valid_as_leaf(self):
return self.name.value in keywords_only_valid_as_leaf
@property
def names(self):
""" For a `parsing.Name` like comparision """
return [self.name]
def py__doc__(self, include_call_signature=False):
return imitate_pydoc(self.name.string_name)
def __repr__(self):
return '<%s: %s>' % (type(self).__name__, self.name)
def py__doc__(self):
return imitate_pydoc(self.string_name)
def imitate_pydoc(string):
@@ -123,12 +34,14 @@ def imitate_pydoc(string):
# with unicode strings)
string = str(string)
h = pydoc.help
with common.ignored(KeyError):
with ignored(KeyError):
# try to access symbols
string = h.symbols[string]
string, _, related = string.partition(' ')
get_target = lambda s: h.topics.get(s, h.keywords.get(s))
def get_target(s):
return h.topics.get(s, h.keywords.get(s))
while isinstance(string, str):
string = get_target(string)

jedi/api/project.py

@@ -0,0 +1,427 @@
"""
Projects are a way to handle Python projects within Jedi. For simpler plugins
you might not want to deal with projects, but if you want to give the user more
flexibility to define sys paths and Python interpreters for a project,
:class:`.Project` is the perfect way to allow for that.
Projects can be saved to disk and loaded again, to allow project definitions to
be used across repositories.
"""
import os
import errno
import json
import sys
from jedi._compatibility import FileNotFoundError, PermissionError, \
IsADirectoryError, NotADirectoryError
from jedi import debug
from jedi.api.environment import get_cached_default_environment, create_environment
from jedi.api.exceptions import WrongVersion
from jedi.api.completion import search_in_module
from jedi.api.helpers import split_search_string, get_module_names
from jedi._compatibility import force_unicode
from jedi.inference.imports import load_module_from_path, \
load_namespace_from_path, iter_module_names
from jedi.inference.sys_path import discover_buildout_paths
from jedi.inference.cache import inference_state_as_method_param_cache
from jedi.inference.references import recurse_find_python_folders_and_files, search_in_file_ios
from jedi.file_io import FolderIO
from jedi.common import traverse_parents
_CONFIG_FOLDER = '.jedi'
_CONTAINS_POTENTIAL_PROJECT = \
'setup.py', '.git', '.hg', 'requirements.txt', 'MANIFEST.in', 'pyproject.toml'
_SERIALIZER_VERSION = 1
def _try_to_skip_duplicates(func):
def wrapper(*args, **kwargs):
found_tree_nodes = []
found_modules = []
for definition in func(*args, **kwargs):
tree_node = definition._name.tree_name
if tree_node is not None and tree_node in found_tree_nodes:
continue
if definition.type == 'module' and definition.module_path is not None:
if definition.module_path in found_modules:
continue
found_modules.append(definition.module_path)
yield definition
found_tree_nodes.append(tree_node)
return wrapper
def _remove_duplicates_from_path(path):
used = set()
for p in path:
if p in used:
continue
used.add(p)
yield p
def _force_unicode_list(lst):
return list(map(force_unicode, lst))
class Project(object):
"""
Projects are a simple way to manage Python folders and define how Jedi does
import resolution. It is mostly used as a parameter to :class:`.Script`.
Additionally there are functions to search a whole project.
"""
_environment = None
@staticmethod
def _get_config_folder_path(base_path):
return os.path.join(base_path, _CONFIG_FOLDER)
@staticmethod
def _get_json_path(base_path):
return os.path.join(Project._get_config_folder_path(base_path), 'project.json')
@classmethod
def load(cls, path):
"""
Loads a project from a specific path. You should not provide the path
to ``.jedi/project.json``, but rather the path to the project folder.
:param path: The path of the directory you want to use as a project.
"""
with open(cls._get_json_path(path)) as f:
version, data = json.load(f)
if version == 1:
return cls(**data)
else:
raise WrongVersion(
"The Jedi version of this project seems newer than what we can handle."
)
def save(self):
"""
Saves the project configuration in the project in ``.jedi/project.json``.
"""
data = dict(self.__dict__)
data.pop('_environment', None)
data.pop('_django', None) # TODO make django setting public?
data = {k.lstrip('_'): v for k, v in data.items()}
# TODO when dropping Python 2 use pathlib.Path.mkdir(parents=True, exist_ok=True)
try:
os.makedirs(self._get_config_folder_path(self._path))
except OSError as e:
if e.errno != errno.EEXIST:
raise
with open(self._get_json_path(self._path), 'w') as f:
return json.dump((_SERIALIZER_VERSION, data), f)
def __init__(self, path, **kwargs):
"""
:param path: The base path for this project.
:param environment_path: The Python executable path, typically the path
of a virtual environment.
:param load_unsafe_extensions: Default False. Loads extensions that are not in the
sys path and in the local directories. With this option enabled,
this is potentially unsafe if you clone a git repository and
analyze its code, because those compiled extensions will be
imported and therefore have execution privileges.
:param sys_path: list of str. You can override the sys path if you
want. By default the ``sys.path`` is generated by the
environment (virtualenvs, etc).
:param added_sys_path: list of str. Adds these paths at the end of the
sys path.
:param smart_sys_path: If this is enabled (default), adds paths from
local directories. Otherwise you will have to rely on your packages
being properly configured on the ``sys.path``.
"""
def py2_comp(path, environment_path=None, load_unsafe_extensions=False,
sys_path=None, added_sys_path=(), smart_sys_path=True):
self._path = os.path.abspath(path)
self._environment_path = environment_path
self._sys_path = sys_path
self._smart_sys_path = smart_sys_path
self._load_unsafe_extensions = load_unsafe_extensions
self._django = False
self.added_sys_path = list(added_sys_path)
"""The sys path that is going to be added at the end of the """
py2_comp(path, **kwargs)
@property
def path(self):
"""
The base path for this project.
"""
return self._path
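A round-trip sketch for load()/save() and the constructor above (all paths are assumptions):

from jedi import Project

project = Project(
    '/home/me/repos/myproj',
    environment_path='/home/me/venvs/myproj/bin/python',
    added_sys_path=['/home/me/repos/myproj/plugins'],
)
project.save()                                   # writes .jedi/project.json
same_project = Project.load('/home/me/repos/myproj')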
@inference_state_as_method_param_cache()
def _get_base_sys_path(self, inference_state):
# The sys path has not been set explicitly.
sys_path = list(inference_state.environment.get_sys_path())
try:
sys_path.remove('')
except ValueError:
pass
return sys_path
@inference_state_as_method_param_cache()
def _get_sys_path(self, inference_state, add_parent_paths=True, add_init_paths=False):
"""
Keep this method private for all users of jedi. However internally this
one is used like a public method.
"""
suffixed = list(self.added_sys_path)
prefixed = []
if self._sys_path is None:
sys_path = list(self._get_base_sys_path(inference_state))
else:
sys_path = list(self._sys_path)
if self._smart_sys_path:
prefixed.append(self._path)
if inference_state.script_path is not None:
suffixed += discover_buildout_paths(inference_state, inference_state.script_path)
if add_parent_paths:
# Collect directories in upward search by:
# 1. Skipping directories with __init__.py
# 2. Stopping immediately when above self._path
traversed = []
for parent_path in traverse_parents(inference_state.script_path):
if parent_path == self._path or not parent_path.startswith(self._path):
break
if not add_init_paths \
and os.path.isfile(os.path.join(parent_path, "__init__.py")):
continue
traversed.append(parent_path)
# AFAIK some libraries have imports like `foo.foo.bar`, which
# leads to the conclusion that, by default, longer paths should be
# preferred over shorter ones.
suffixed += reversed(traversed)
if self._django:
prefixed.append(self._path)
path = prefixed + sys_path + suffixed
return list(_force_unicode_list(_remove_duplicates_from_path(path)))
def get_environment(self):
if self._environment is None:
if self._environment_path is not None:
self._environment = create_environment(self._environment_path, safe=False)
else:
self._environment = get_cached_default_environment()
return self._environment
def search(self, string, **kwargs):
"""
Searches for a name in the whole project. If the project is very big,
Jedi will stop searching at some point. However, it's also very much
recommended not to exhaust the generator. Just display the first ten
results to the user.
There are currently three different search patterns:
- ``foo`` to search for a definition foo in any file or a file called
``foo.py`` or ``foo.pyi``.
- ``foo.bar`` to search for the ``foo`` and then an attribute ``bar``
in it.
- ``class foo.bar.Bar`` or ``def foo.bar.baz`` to search for a specific
API type.
:param bool all_scopes: Default False; searches not only for
definitions on the top level of a module, but also in
functions and classes.
:yields: :class:`.Name`
"""
return self._search(string, **kwargs)
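A sketch of the search patterns described above; following the advice, only the first few results are consumed (the project path and the searched name are assumptions):

from itertools import islice
from jedi import Project

project = Project('/home/me/repos/myproj')
for name in islice(project.search('def parse_config'), 10):
    print(name.module_path, name.line)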
def complete_search(self, string, **kwargs):
"""
Like :meth:`.Script.search`, but completes that string. An empty string
lists all definitions in a project, so be careful with that.
:param bool all_scopes: Default False; searches not only for
definitions on the top level of a module, but also in
functions and classes.
:yields: :class:`.Completion`
"""
return self._search_func(string, complete=True, **kwargs)
def _search(self, string, all_scopes=False): # Python 2..
return self._search_func(string, all_scopes=all_scopes)
@_try_to_skip_duplicates
def _search_func(self, string, complete=False, all_scopes=False):
# Using a Script is the easiest way to get an empty module context.
from jedi import Script
s = Script('', project=self)
inference_state = s._inference_state
empty_module_context = s._get_module_context()
if inference_state.grammar.version_info < (3, 6) or sys.version_info < (3, 6):
raise NotImplementedError(
"No support for refactorings/search on Python 2/3.5"
)
debug.dbg('Search for string %s, complete=%s', string, complete)
wanted_type, wanted_names = split_search_string(string)
name = wanted_names[0]
stub_folder_name = name + '-stubs'
ios = recurse_find_python_folders_and_files(FolderIO(self._path))
file_ios = []
# 1. Search for modules in the current project
for folder_io, file_io in ios:
if file_io is None:
file_name = folder_io.get_base_name()
if file_name == name or file_name == stub_folder_name:
f = folder_io.get_file_io('__init__.py')
try:
m = load_module_from_path(inference_state, f).as_context()
except FileNotFoundError:
f = folder_io.get_file_io('__init__.pyi')
try:
m = load_module_from_path(inference_state, f).as_context()
except FileNotFoundError:
m = load_namespace_from_path(inference_state, folder_io).as_context()
else:
continue
else:
file_ios.append(file_io)
file_name = os.path.basename(file_io.path)
if file_name in (name + '.py', name + '.pyi'):
m = load_module_from_path(inference_state, file_io).as_context()
else:
continue
debug.dbg('Search of a specific module %s', m)
for x in search_in_module(
inference_state,
m,
names=[m.name],
wanted_type=wanted_type,
wanted_names=wanted_names,
complete=complete,
convert=True,
ignore_imports=True,
):
yield x # Python 2...
# 2. Search for identifiers in the project.
for module_context in search_in_file_ios(inference_state, file_ios, name):
names = get_module_names(module_context.tree_node, all_scopes=all_scopes)
names = [module_context.create_name(n) for n in names]
names = _remove_imports(names)
for x in search_in_module(
inference_state,
module_context,
names=names,
wanted_type=wanted_type,
wanted_names=wanted_names,
complete=complete,
ignore_imports=True,
):
yield x # Python 2...
# 3. Search for modules on sys.path
sys_path = [
p for p in self._get_sys_path(inference_state)
# Exclude folders that are already handled by recursing through the
# project's Python folders.
if not p.startswith(self._path)
]
names = list(iter_module_names(inference_state, empty_module_context, sys_path))
for x in search_in_module(
inference_state,
empty_module_context,
names=names,
wanted_type=wanted_type,
wanted_names=wanted_names,
complete=complete,
convert=True,
):
yield x # Python 2...
def __repr__(self):
return '<%s: %s>' % (self.__class__.__name__, self._path)
def _is_potential_project(path):
for name in _CONTAINS_POTENTIAL_PROJECT:
if os.path.exists(os.path.join(path, name)):
return True
return False
def _is_django_path(directory):
""" Detects the path of the very well known Django library (if used) """
try:
with open(os.path.join(directory, 'manage.py'), 'rb') as f:
return b"DJANGO_SETTINGS_MODULE" in f.read()
except (FileNotFoundError, IsADirectoryError, PermissionError):
return False
def get_default_project(path=None):
"""
If a project is not defined by the user, Jedi tries to define a project by
itself as well as possible. Jedi traverses folders until it finds one of
the following:
1. A ``.jedi/project.json`` (written by :meth:`.Project.save`)
2. One of the following files/folders: ``setup.py``, ``.git``, ``.hg``,
``requirements.txt``, ``MANIFEST.in`` and ``pyproject.toml``.
"""
if path is None:
path = os.getcwd()
check = os.path.realpath(path)
probable_path = None
first_no_init_file = None
for dir in traverse_parents(check, include_current=True):
try:
return Project.load(dir)
except (FileNotFoundError, IsADirectoryError, PermissionError):
pass
except NotADirectoryError:
continue
if first_no_init_file is None:
if os.path.exists(os.path.join(dir, '__init__.py')):
# In the case that an __init__.py exists, it's in 99% of cases just a
# Python package and the project sits at least one level above.
continue
else:
first_no_init_file = dir
if _is_django_path(dir):
project = Project(dir)
project._django = True
return project
if probable_path is None and _is_potential_project(dir):
probable_path = dir
if probable_path is not None:
# TODO search for setup.py etc
return Project(probable_path)
if first_no_init_file is not None:
return Project(first_no_init_file)
curdir = path if os.path.isdir(path) else os.path.dirname(path)
return Project(curdir)
def _remove_imports(names):
return [
n for n in names
if n.tree_name is None or n.api_type != 'module'
]


@@ -0,0 +1,249 @@
from os.path import dirname, basename, join, relpath
import os
import re
import difflib
from parso import split_lines
from jedi.api.exceptions import RefactoringError
EXPRESSION_PARTS = (
'or_test and_test not_test comparison '
'expr xor_expr and_expr shift_expr arith_expr term factor power atom_expr'
).split()
class ChangedFile(object):
def __init__(self, inference_state, from_path, to_path,
module_node, node_to_str_map):
self._inference_state = inference_state
self._from_path = from_path
self._to_path = to_path
self._module_node = module_node
self._node_to_str_map = node_to_str_map
def get_diff(self):
old_lines = split_lines(self._module_node.get_code(), keepends=True)
new_lines = split_lines(self.get_new_code(), keepends=True)
# Add a newline at the end if it's missing. Otherwise the diff will be
# very weird. A `diff -u file1 file2` would show the string:
#
# \ No newline at end of file
#
# This is not necessary IMO, because Jedi does not really play with
# newlines and the ending newline does not really matter in Python
# files. ~dave
if old_lines[-1] != '':
old_lines[-1] += '\n'
if new_lines[-1] != '':
new_lines[-1] += '\n'
project_path = self._inference_state.project.path
if self._from_path is None:
from_p = ''
else:
from_p = relpath(self._from_path, project_path)
if self._to_path is None:
to_p = ''
else:
to_p = relpath(self._to_path, project_path)
diff = difflib.unified_diff(
old_lines, new_lines,
fromfile=from_p,
tofile=to_p,
)
# Apparently there's a space at the end of the diff - for whatever
# reason.
return ''.join(diff).rstrip(' ')
def get_new_code(self):
return self._inference_state.grammar.refactor(self._module_node, self._node_to_str_map)
def apply(self):
if self._from_path is None:
raise RefactoringError(
'Cannot apply a refactoring on a Script with path=None'
)
with open(self._from_path, 'w', newline='') as f:
f.write(self.get_new_code())
def __repr__(self):
return '<%s: %s>' % (self.__class__.__name__, self._from_path)
class Refactoring(object):
def __init__(self, inference_state, file_to_node_changes, renames=()):
self._inference_state = inference_state
self._renames = renames
self._file_to_node_changes = file_to_node_changes
def get_changed_files(self):
"""
Returns a path to ``ChangedFile`` map.
"""
def calculate_to_path(p):
if p is None:
return p
for from_, to in renames:
if p.startswith(from_):
p = to + p[len(from_):]
return p
renames = self.get_renames()
return {
path: ChangedFile(
self._inference_state,
from_path=path,
to_path=calculate_to_path(path),
module_node=next(iter(map_)).get_root_node(),
node_to_str_map=map_
) for path, map_ in sorted(self._file_to_node_changes.items())
}
def get_renames(self):
"""
Files can be renamed in a refactoring.
Returns ``Iterable[Tuple[str, str]]``.
"""
return sorted(self._renames)
def get_diff(self):
text = ''
project_path = self._inference_state.project.path
for from_, to in self.get_renames():
text += 'rename from %s\nrename to %s\n' \
% (relpath(from_, project_path), relpath(to, project_path))
return text + ''.join(f.get_diff() for f in self.get_changed_files().values())
def apply(self):
"""
Applies the whole refactoring to the files, which includes renames.
"""
for f in self.get_changed_files().values():
f.apply()
for old, new in self.get_renames():
os.rename(old, new)
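A workflow sketch for the Refactoring object above: inspect the diff before applying it (the source and cursor position are assumptions):

import jedi

code = 'value = 40 + 2\nprint(value)\n'
refactoring = jedi.Script(code, path='example.py').inline(line=1, column=0)
print(refactoring.get_renames())   # no file renames for an inline
print(refactoring.get_diff())      # unified diff of the change
# refactoring.apply()              # would write the files and perform renames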
def _calculate_rename(path, new_name):
name = basename(path)
dir_ = dirname(path)
if name in ('__init__.py', '__init__.pyi'):
parent_dir = dirname(dir_)
return dir_, join(parent_dir, new_name)
ending = re.search(r'\.pyi?$', name).group(0)
return path, join(dir_, new_name + ending)
def rename(inference_state, definitions, new_name):
file_renames = set()
file_tree_name_map = {}
if not definitions:
raise RefactoringError("There is no name under the cursor")
for d in definitions:
tree_name = d._name.tree_name
if d.type == 'module' and tree_name is None:
file_renames.add(_calculate_rename(d.module_path, new_name))
else:
# This private access is ok in a way. It's not public to
# protect Jedi users from seeing it.
if tree_name is not None:
fmap = file_tree_name_map.setdefault(d.module_path, {})
fmap[tree_name] = tree_name.prefix + new_name
return Refactoring(inference_state, file_tree_name_map, file_renames)
def inline(inference_state, names):
if not names:
raise RefactoringError("There is no name under the cursor")
if any(n.api_type == 'module' for n in names):
raise RefactoringError("Cannot inline imports or modules")
if any(n.tree_name is None for n in names):
raise RefactoringError("Cannot inline builtins/extensions")
definitions = [n for n in names if n.tree_name.is_definition()]
if len(definitions) == 0:
raise RefactoringError("No definition found to inline")
if len(definitions) > 1:
raise RefactoringError("Cannot inline a name with multiple definitions")
if len(names) == 1:
raise RefactoringError("There are no references to this name")
tree_name = definitions[0].tree_name
expr_stmt = tree_name.get_definition()
if expr_stmt.type != 'expr_stmt':
type_ = dict(
funcdef='function',
classdef='class',
).get(expr_stmt.type, expr_stmt.type)
raise RefactoringError("Cannot inline a %s" % type_)
if len(expr_stmt.get_defined_names(include_setitem=True)) > 1:
raise RefactoringError("Cannot inline a statement with multiple definitions")
first_child = expr_stmt.children[1]
if first_child.type == 'annassign' and len(first_child.children) == 4:
first_child = first_child.children[2]
if first_child != '=':
if first_child.type == 'annassign':
raise RefactoringError(
'Cannot inline a statement that is defined by an annotation'
)
else:
raise RefactoringError(
'Cannot inline a statement with "%s"'
% first_child.get_code(include_prefix=False)
)
rhs = expr_stmt.get_rhs()
replace_code = rhs.get_code(include_prefix=False)
references = [n for n in names if not n.tree_name.is_definition()]
file_to_node_changes = {}
for name in references:
tree_name = name.tree_name
path = name.get_root_context().py__file__()
s = replace_code
if rhs.type == 'testlist_star_expr' \
or tree_name.parent.type in EXPRESSION_PARTS \
or tree_name.parent.type == 'trailer' \
and tree_name.parent.get_next_sibling() is not None:
s = '(' + replace_code + ')'
of_path = file_to_node_changes.setdefault(path, {})
n = tree_name
prefix = n.prefix
par = n.parent
if par.type == 'trailer' and par.children[0] == '.':
prefix = par.parent.children[0].prefix
n = par
for some_node in par.parent.children[:par.parent.children.index(par)]:
of_path[some_node] = ''
of_path[n] = prefix + s
path = definitions[0].get_root_context().py__file__()
changes = file_to_node_changes.setdefault(path, {})
changes[expr_stmt] = _remove_indent_of_prefix(expr_stmt.get_first_leaf().prefix)
next_leaf = expr_stmt.get_next_leaf()
# Most of the time we have to remove the newline at the end of the
# statement, but if there's a comment we might not need to.
if next_leaf.prefix.strip(' \t') == '' \
and (next_leaf.type == 'newline' or next_leaf == ';'):
changes[next_leaf] = ''
return Refactoring(inference_state, file_to_node_changes)
def _remove_indent_of_prefix(prefix):
r"""
Removes the last indentation of a prefix, e.g. " \n \n " becomes " \n \n".
"""
return ''.join(split_lines(prefix, keepends=True)[:-1])


@@ -0,0 +1,387 @@
from textwrap import dedent
from parso import split_lines
from jedi import debug
from jedi.api.exceptions import RefactoringError
from jedi.api.refactoring import Refactoring, EXPRESSION_PARTS
from jedi.common import indent_block
from jedi.parser_utils import function_is_classmethod, function_is_staticmethod
_DEFINITION_SCOPES = ('suite', 'file_input')
_VARIABLE_EXCTRACTABLE = EXPRESSION_PARTS + \
('atom testlist_star_expr testlist test lambdef lambdef_nocond '
'keyword name number string fstring').split()
def extract_variable(inference_state, path, module_node, name, pos, until_pos):
nodes = _find_nodes(module_node, pos, until_pos)
debug.dbg('Extracting nodes: %s', nodes)
is_expression, message = _is_expression_with_error(nodes)
if not is_expression:
raise RefactoringError(message)
generated_code = name + ' = ' + _expression_nodes_to_string(nodes)
file_to_node_changes = {path: _replace(nodes, name, generated_code, pos)}
return Refactoring(inference_state, file_to_node_changes)
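A usage sketch for extract_variable through the public API; the column offsets assume the one-line source below and are meant to select the parenthesized expression:

import jedi

code = 'result = (3 * 7) + 1\n'
script = jedi.Script(code, path='example.py')
refactoring = script.extract_variable(
    line=1, column=9, new_name='product',
    until_line=1, until_column=16,
)
print(refactoring.get_diff())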
def _is_expression_with_error(nodes):
"""
Returns a tuple (is_expression, error_string).
"""
if any(node.type == 'name' and node.is_definition() for node in nodes):
return False, 'Cannot extract a name that defines something'
if nodes[0].type not in _VARIABLE_EXCTRACTABLE:
return False, 'Cannot extract a "%s"' % nodes[0].type
return True, ''
def _find_nodes(module_node, pos, until_pos):
"""
Looks up a module and tries to find the appropriate amount of nodes that
are in there.
"""
start_node = module_node.get_leaf_for_position(pos, include_prefixes=True)
if until_pos is None:
if start_node.type == 'operator':
next_leaf = start_node.get_next_leaf()
if next_leaf is not None and next_leaf.start_pos == pos:
start_node = next_leaf
if _is_not_extractable_syntax(start_node):
start_node = start_node.parent
if start_node.parent.type == 'trailer':
start_node = start_node.parent.parent
while start_node.parent.type in EXPRESSION_PARTS:
start_node = start_node.parent
nodes = [start_node]
else:
# Get the next leaf if we are at the end of a leaf
if start_node.end_pos == pos:
next_leaf = start_node.get_next_leaf()
if next_leaf is not None:
start_node = next_leaf
# Some syntax is not extractable, just use its parent
if _is_not_extractable_syntax(start_node):
start_node = start_node.parent
# Find the end
end_leaf = module_node.get_leaf_for_position(until_pos, include_prefixes=True)
if end_leaf.start_pos > until_pos:
end_leaf = end_leaf.get_previous_leaf()
if end_leaf is None:
raise RefactoringError('Cannot extract anything from that')
parent_node = start_node
while parent_node.end_pos < end_leaf.end_pos:
parent_node = parent_node.parent
nodes = _remove_unwanted_expression_nodes(parent_node, pos, until_pos)
# If the user marks just a return statement, we return the expression
# instead of the whole statement, because the user obviously wants to
# extract that part.
if len(nodes) == 1 and start_node.type in ('return_stmt', 'yield_expr'):
return [nodes[0].children[1]]
return nodes
def _replace(nodes, expression_replacement, extracted, pos,
insert_before_leaf=None, remaining_prefix=None):
# Now try to replace the nodes found with a variable and move the code
# before the current statement.
definition = _get_parent_definition(nodes[0])
if insert_before_leaf is None:
insert_before_leaf = definition.get_first_leaf()
first_node_leaf = nodes[0].get_first_leaf()
lines = split_lines(insert_before_leaf.prefix, keepends=True)
if first_node_leaf is insert_before_leaf:
if remaining_prefix is not None:
# The remaining prefix has already been calculated.
lines[:-1] = remaining_prefix
lines[-1:-1] = [indent_block(extracted, lines[-1]) + '\n']
extracted_prefix = ''.join(lines)
replacement_dct = {}
if first_node_leaf is insert_before_leaf:
replacement_dct[nodes[0]] = extracted_prefix + expression_replacement
else:
if remaining_prefix is None:
p = first_node_leaf.prefix
else:
p = remaining_prefix + _get_indentation(nodes[0])
replacement_dct[nodes[0]] = p + expression_replacement
replacement_dct[insert_before_leaf] = extracted_prefix + insert_before_leaf.value
for node in nodes[1:]:
replacement_dct[node] = ''
return replacement_dct
def _expression_nodes_to_string(nodes):
return ''.join(n.get_code(include_prefix=i != 0) for i, n in enumerate(nodes))
def _suite_nodes_to_string(nodes, pos):
n = nodes[0]
prefix, part_of_code = _split_prefix_at(n.get_first_leaf(), pos[0] - 1)
code = part_of_code + n.get_code(include_prefix=False) \
+ ''.join(n.get_code() for n in nodes[1:])
return prefix, code
def _split_prefix_at(leaf, until_line):
"""
Returns a tuple of the leaf's prefix, split at the until_line
position.
"""
# second means the second returned part
second_line_count = leaf.start_pos[0] - until_line
lines = split_lines(leaf.prefix, keepends=True)
return ''.join(lines[:-second_line_count]), ''.join(lines[-second_line_count:])
def _get_indentation(node):
return split_lines(node.get_first_leaf().prefix)[-1]
def _get_parent_definition(node):
"""
Returns the statement where a node is defined.
"""
while node is not None:
if node.parent.type in _DEFINITION_SCOPES:
return node
node = node.parent
raise NotImplementedError('We should never even get here')
def _remove_unwanted_expression_nodes(parent_node, pos, until_pos):
"""
This function makes it possible to extract `2 + 3` from `1 * 2 + 3`, even
though `2 + 3` is not a single node of the syntax tree.
"""
typ = parent_node.type
is_suite_part = typ in ('suite', 'file_input')
if typ in EXPRESSION_PARTS or is_suite_part:
nodes = parent_node.children
for i, n in enumerate(nodes):
if n.end_pos > pos:
start_index = i
if n.type == 'operator':
start_index -= 1
break
for i, n in reversed(list(enumerate(nodes))):
if n.start_pos < until_pos:
end_index = i
if n.type == 'operator':
end_index += 1
# Something like `not foo or bar` should not be cut after not
for n2 in nodes[i:]:
if _is_not_extractable_syntax(n2):
end_index += 1
else:
break
break
nodes = nodes[start_index:end_index + 1]
if not is_suite_part:
nodes[0:1] = _remove_unwanted_expression_nodes(nodes[0], pos, until_pos)
nodes[-1:] = _remove_unwanted_expression_nodes(nodes[-1], pos, until_pos)
return nodes
return [parent_node]
def _is_not_extractable_syntax(node):
return node.type == 'operator' \
or node.type == 'keyword' and node.value not in ('None', 'True', 'False')
def extract_function(inference_state, path, module_context, name, pos, until_pos):
nodes = _find_nodes(module_context.tree_node, pos, until_pos)
assert len(nodes)
is_expression, _ = _is_expression_with_error(nodes)
context = module_context.create_context(nodes[0])
is_bound_method = context.is_bound_method()
params, return_variables = list(_find_inputs_and_outputs(module_context, context, nodes))
# Find variables
# Is a class method / method
if context.is_module():
insert_before_leaf = None # Leaf will be determined later
else:
node = _get_code_insertion_node(context.tree_node, is_bound_method)
insert_before_leaf = node.get_first_leaf()
if is_expression:
code_block = 'return ' + _expression_nodes_to_string(nodes) + '\n'
remaining_prefix = None
has_ending_return_stmt = False
else:
has_ending_return_stmt = _is_node_ending_return_stmt(nodes[-1])
if not has_ending_return_stmt:
# Find the actually used variables (of the defined ones). If none are
# used (e.g. if the range covers the whole function), return the last
# defined variable.
return_variables = list(_find_needed_output_variables(
context,
nodes[0].parent,
nodes[-1].end_pos,
return_variables
)) or [return_variables[-1]] if return_variables else []
remaining_prefix, code_block = _suite_nodes_to_string(nodes, pos)
after_leaf = nodes[-1].get_next_leaf()
first, second = _split_prefix_at(after_leaf, until_pos[0])
code_block += first
code_block = dedent(code_block)
if not has_ending_return_stmt:
output_var_str = ', '.join(return_variables)
code_block += 'return ' + output_var_str + '\n'
# Check if we have to raise RefactoringError
_check_for_non_extractables(nodes[:-1] if has_ending_return_stmt else nodes)
decorator = ''
self_param = None
if is_bound_method:
if not function_is_staticmethod(context.tree_node):
function_param_names = context.get_value().get_param_names()
if len(function_param_names):
self_param = function_param_names[0].string_name
params = [p for p in params if p != self_param]
if function_is_classmethod(context.tree_node):
decorator = '@classmethod\n'
else:
code_block += '\n'
function_code = '%sdef %s(%s):\n%s' % (
decorator,
name,
', '.join(params if self_param is None else [self_param] + params),
indent_block(code_block)
)
function_call = '%s(%s)' % (
('' if self_param is None else self_param + '.') + name,
', '.join(params)
)
if is_expression:
replacement = function_call
else:
if has_ending_return_stmt:
replacement = 'return ' + function_call + '\n'
else:
replacement = output_var_str + ' = ' + function_call + '\n'
replacement_dct = _replace(nodes, replacement, function_code, pos,
insert_before_leaf, remaining_prefix)
if not is_expression:
replacement_dct[after_leaf] = second + after_leaf.value
file_to_node_changes = {path: replacement_dct}
return Refactoring(inference_state, file_to_node_changes)
def _check_for_non_extractables(nodes):
for n in nodes:
try:
children = n.children
except AttributeError:
if n.value == 'return':
raise RefactoringError(
'Can only extract return statements if they are at the end.')
if n.value == 'yield':
raise RefactoringError('Cannot extract yield statements.')
else:
_check_for_non_extractables(children)
def _is_name_input(module_context, names, first, last):
for name in names:
if name.api_type == 'param' or not name.parent_context.is_module():
if name.get_root_context() is not module_context:
return True
if name.start_pos is None or not (first <= name.start_pos < last):
return True
return False
def _find_inputs_and_outputs(module_context, context, nodes):
first = nodes[0].start_pos
last = nodes[-1].end_pos
inputs = []
outputs = []
for name in _find_non_global_names(nodes):
if name.is_definition():
if name not in outputs:
outputs.append(name.value)
else:
if name.value not in inputs:
name_definitions = context.goto(name, name.start_pos)
if not name_definitions \
or _is_name_input(module_context, name_definitions, first, last):
inputs.append(name.value)
# Check if outputs are really needed:
return inputs, outputs
def _find_non_global_names(nodes):
for node in nodes:
try:
children = node.children
except AttributeError:
if node.type == 'name':
yield node
else:
# We only want to check foo in foo.bar
if node.type == 'trailer' and node.children[0] == '.':
continue
for x in _find_non_global_names(children): # Python 2...
yield x
def _get_code_insertion_node(node, is_bound_method):
if not is_bound_method or function_is_staticmethod(node):
while node.parent.type != 'file_input':
node = node.parent
while node.parent.type in ('async_funcdef', 'decorated', 'async_stmt'):
node = node.parent
return node
def _find_needed_output_variables(context, search_node, at_least_pos, return_variables):
"""
Searches everything after at_least_pos in a node and checks if any of the
return_variables are used in there and returns those.
"""
for node in search_node.children:
if node.start_pos < at_least_pos:
continue
return_variables = set(return_variables)
for name in _find_non_global_names([node]):
if not name.is_definition() and name.value in return_variables:
return_variables.remove(name.value)
yield name.value
def _is_node_ending_return_stmt(node):
t = node.type
if t == 'simple_stmt':
return _is_node_ending_return_stmt(node.children[0])
return t == 'return_stmt'
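As an aside, the prefix-splitting trick used by _split_prefix_at above is easy to picture on plain strings. The sketch below mirrors that logic outside of parso (split_prefix_at, prefix, leaf_line and until_line are invented stand-ins; like the caller above, it assumes the split point lies strictly before the leaf's line):
def split_prefix_at(prefix, leaf_line, until_line):
    # "prefix" stands in for a parso leaf prefix (the whitespace/comments
    # in front of a leaf); leaf_line/until_line are 1-based line numbers.
    lines = prefix.splitlines(keepends=True)
    second_line_count = leaf_line - until_line   # assumed >= 1, as above
    return ''.join(lines[:-second_line_count]), ''.join(lines[-second_line_count:])
# A comment plus a blank line sit in front of a leaf on line 6; splitting at
# line 5 keeps the comment in the first part and the newline in the second.
first, second = split_prefix_at('# a comment\n\n', leaf_line=6, until_line=5)
assert (first, second) == ('# a comment\n', '\n')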


@@ -1,6 +1,8 @@
"""
To use Jedi completion in Python interpreter, add the following in your shell
setup (e.g., ``.bashrc``)::
setup (e.g., ``.bashrc``). This works only on Linux/Mac, because readline is
not available on Windows. If you still want Jedi autocompletion in your REPL,
just use IPython instead::
export PYTHONSTARTUP="$(python -m jedi repl)"
@@ -11,15 +13,15 @@ Then you will be able to use Jedi completer in your Python interpreter::
[GCC 4.6.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> os.path.join().split().in<TAB> # doctest: +SKIP
os.path.join().split().index os.path.join().split().insert
>>> os.path.join('a', 'b').split().in<TAB> # doctest: +SKIP
..dex ..sert
"""
import jedi.utils
from jedi import __version__ as __jedi_version__
print('REPL completion using Jedi %s' % __jedi_version__)
jedi.utils.setup_readline()
jedi.utils.setup_readline(fuzzy=False)
del jedi
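A hedged aside: since the hunk above now passes a fuzzy keyword to setup_readline, fuzzy REPL completion can presumably be switched on by adjusting that call in your own PYTHONSTARTUP file (untested sketch, same module as above):
import jedi.utils
jedi.utils.setup_readline(fuzzy=True)  # fuzzy keyword taken from the diff above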

jedi/api/strings.py

@@ -0,0 +1,109 @@
"""
This module is here for string completions. This means mostly stuff where
strings are returned, like `foo = dict(bar=3); foo["ba` would complete to
`"bar"]`.
It however does the same for numbers. The difference between string completions
and other completions is mostly that this module doesn't return defined
names in a module, but pretty much an arbitrary string.
"""
import re
from jedi._compatibility import unicode
from jedi.inference.names import AbstractArbitraryName
from jedi.inference.helpers import infer_call_of_leaf
from jedi.api.classes import Completion
from jedi.parser_utils import cut_value_at_position
_sentinel = object()
class StringName(AbstractArbitraryName):
api_type = u'string'
is_value_name = False
def complete_dict(module_context, code_lines, leaf, position, string, fuzzy):
bracket_leaf = leaf
if bracket_leaf != '[':
bracket_leaf = leaf.get_previous_leaf()
cut_end_quote = ''
if string:
cut_end_quote = get_quote_ending(string, code_lines, position, invert_result=True)
if bracket_leaf == '[':
if string is None and leaf is not bracket_leaf:
string = cut_value_at_position(leaf, position)
context = module_context.create_context(bracket_leaf)
before_bracket_leaf = bracket_leaf.get_previous_leaf()
if before_bracket_leaf.type in ('atom', 'trailer', 'name'):
values = infer_call_of_leaf(context, before_bracket_leaf)
return list(_completions_for_dicts(
module_context.inference_state,
values,
'' if string is None else string,
cut_end_quote,
fuzzy=fuzzy,
))
return []
def _completions_for_dicts(inference_state, dicts, literal_string, cut_end_quote, fuzzy):
for dict_key in sorted(_get_python_keys(dicts), key=lambda x: repr(x)):
dict_key_str = _create_repr_string(literal_string, dict_key)
if dict_key_str.startswith(literal_string):
name = StringName(inference_state, dict_key_str[:-len(cut_end_quote) or None])
yield Completion(
inference_state,
name,
stack=None,
like_name_length=len(literal_string),
is_fuzzy=fuzzy
)
def _create_repr_string(literal_string, dict_key):
if not isinstance(dict_key, (unicode, bytes)) or not literal_string:
return repr(dict_key)
r = repr(dict_key)
prefix, quote = _get_string_prefix_and_quote(literal_string)
if quote is None:
return r
if quote == r[0]:
return prefix + r
return prefix + quote + r[1:-1] + quote
def _get_python_keys(dicts):
for dct in dicts:
if dct.array_type == 'dict':
for key in dct.get_key_values():
dict_key = key.get_safe_value(default=_sentinel)
if dict_key is not _sentinel:
yield dict_key
def _get_string_prefix_and_quote(string):
match = re.match(r'(\w*)("""|\'{3}|"|\')', string)
if match is None:
return None, None
return match.group(1), match.group(2)
def _matches_quote_at_position(code_lines, quote, position):
string = code_lines[position[0] - 1][position[1]:position[1] + len(quote)]
return string == quote
def get_quote_ending(string, code_lines, position, invert_result=False):
_, quote = _get_string_prefix_and_quote(string)
if quote is None:
return ''
# Add a quote only if it's not already there.
if _matches_quote_at_position(code_lines, quote, position) != invert_result:
return ''
return quote
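To make the quote handling above concrete, here is a standalone illustration of the prefix/quote detection; the regex is copied verbatim from _get_string_prefix_and_quote, while the function name and examples are mine:
import re
def get_string_prefix_and_quote(string):
    # '' prefix + '"' quote for a plain double-quoted literal,
    # 'rb' + "'''" for a raw-bytes triple-quoted one, (None, None) otherwise.
    match = re.match(r'(\w*)("""|\'{3}|"|\')', string)
    if match is None:
        return None, None
    return match.group(1), match.group(2)
assert get_string_prefix_and_quote('"ba') == ('', '"')
assert get_string_prefix_and_quote("rb'''x") == ('rb', "'''")
assert get_string_prefix_and_quote('bar') == (None, None)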


@@ -1,75 +0,0 @@
from jedi.api import classes
from parso.python import tree
from jedi.evaluate import imports
from jedi.evaluate.filters import TreeNameDefinition
from jedi.evaluate.representation import ModuleContext
def compare_contexts(c1, c2):
return c1 == c2 or (c1[1] == c2[1] and c1[0].tree_node == c2[0].tree_node)
def usages(evaluator, definition_names, mods):
"""
:param definitions: list of Name
"""
def resolve_names(definition_names):
for name in definition_names:
if name.api_type == 'module':
found = False
for context in name.infer():
if isinstance(context, ModuleContext):
found = True
yield context.name
if not found:
yield name
else:
yield name
def compare_array(definition_names):
""" `definitions` are being compared by module/start_pos, because
sometimes the id's of the objects change (e.g. executions).
"""
return [
(name.get_root_context(), name.start_pos)
for name in resolve_names(definition_names)
]
search_name = list(definition_names)[0].string_name
compare_definitions = compare_array(definition_names)
mods = mods | set([d.get_root_context() for d in definition_names])
definition_names = set(resolve_names(definition_names))
for m in imports.get_modules_containing_name(evaluator, mods, search_name):
if isinstance(m, ModuleContext):
for name_node in m.tree_node.get_used_names().get(search_name, []):
context = evaluator.create_context(m, name_node)
result = evaluator.goto(context, name_node)
if any(compare_contexts(c1, c2)
for c1 in compare_array(result)
for c2 in compare_definitions):
name = TreeNameDefinition(context, name_node)
definition_names.add(name)
# Previous definitions might be imports, so include them
# (because goto might return that import name).
compare_definitions += compare_array([name])
else:
# compiled objects
definition_names.add(m.name)
return [classes.Definition(evaluator, n) for n in definition_names]
def resolve_potential_imports(evaluator, definitions):
""" Adds the modules of the imports """
new = set()
for d in definitions:
if isinstance(d, TreeNameDefinition):
imp_or_stmt = d.tree_name.get_definition()
if isinstance(imp_or_stmt, tree.Import):
new |= resolve_potential_imports(
evaluator,
set(imports.infer_import(
d.parent_context, d.tree_name, is_goto=True
))
)
return set(definitions) | new


@@ -12,7 +12,7 @@ there are global variables, which are holding the cache information. Some of
these variables are being cleaned after every API usage.
"""
import time
import inspect
from functools import wraps
from jedi import settings
from parso.cache import parser_cache
@@ -20,40 +20,6 @@ from parso.cache import parser_cache
_time_caches = {}
def underscore_memoization(func):
"""
Decorator for methods::
class A(object):
def x(self):
if self._x:
self._x = 10
return self._x
Becomes::
class A(object):
@underscore_memoization
def x(self):
return 10
A now has an attribute ``_x`` written by this decorator.
"""
name = '_' + func.__name__
def wrapper(self):
try:
return getattr(self, name)
except AttributeError:
result = func(self)
if inspect.isgenerator(result):
result = list(result)
setattr(self, name, result)
return result
return wrapper
def clear_time_caches(delete_all=False):
""" Jedi caches many things, that should be completed after each completion
finishes.
@@ -77,7 +43,7 @@ def clear_time_caches(delete_all=False):
del tc[key]
def time_cache(time_add_setting):
def signature_time_cache(time_add_setting):
"""
This decorator works as follows: Call it with a setting and after that
use the function with a callable that returns the key.
@@ -109,8 +75,32 @@ def time_cache(time_add_setting):
return _temp
def time_cache(seconds):
def decorator(func):
cache = {}
@wraps(func)
def wrapper(*args, **kwargs):
key = (args, frozenset(kwargs.items()))
try:
created, result = cache[key]
if time.time() < created + seconds:
return result
except KeyError:
pass
result = func(*args, **kwargs)
cache[key] = time.time(), result
return result
wrapper.clear_cache = lambda: cache.clear()
return wrapper
return decorator
def memoize_method(method):
"""A normal memoize function."""
@wraps(method)
def wrapper(self, *args, **kwargs):
cache_dict = self.__dict__.setdefault('_memoize_method_dct', {})
dct = cache_dict.setdefault(method, {})
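A small usage sketch of the seconds-based time_cache decorator added above; the import assumes, as the surrounding symbols suggest, that this hunk belongs to jedi/cache.py, and expensive_lookup is an invented example function:
import time
from jedi.cache import time_cache  # module path inferred from the hunk context
@time_cache(seconds=0.5)
def expensive_lookup(x):
    print('computing', x)
    return x * x
expensive_lookup(3)             # computed and cached
expensive_lookup(3)             # served from the cache for ~0.5 seconds
time.sleep(0.6)
expensive_lookup(3)             # entry expired, computed again
expensive_lookup.clear_cache()  # manual invalidation, as defined above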


@@ -1,81 +1,29 @@
""" A universal module with functions / classes without dependencies. """
import sys
import contextlib
import functools
from jedi._compatibility import reraise
from jedi import settings
import os
from contextlib import contextmanager
class UncaughtAttributeError(Exception):
def traverse_parents(path, include_current=False):
if not include_current:
path = os.path.dirname(path)
previous = None
while previous != path:
yield path
previous = path
path = os.path.dirname(path)
@contextmanager
def monkeypatch(obj, attribute_name, new_value):
"""
Important, because `__getattr__` and `hasattr` catch AttributeErrors
implicitly. This is really evil (mainly because of `__getattr__`).
`hasattr` in Python 2 is even more evil, because it catches ALL exceptions.
Therefore this class originally had to be derived from `BaseException`
instead of `Exception`. But because I removed relevant `hasattr` from
the code base, we can now switch back to `Exception`.
:param base: return values of sys.exc_info().
Like pytest's monkeypatch, but as a context manager.
"""
def safe_property(func):
return property(reraise_uncaught(func))
def reraise_uncaught(func):
"""
Re-throw uncaught `AttributeError`.
Usage: Put ``@rethrow_uncaught`` in front of the function
which does **not** suppose to raise `AttributeError`.
AttributeError is easily caught by `hasattr` and another
``except AttributeError`` clause. This becomes a problem when you use
a lot of "dynamic" attributes (e.g., using ``@property``) because you
can't distinguish whether the property really does not exist or some code
inside the "dynamic" attribute raised that error. In well-written
code such errors should not exist, but getting there is very
difficult. This decorator helps us get there by changing
`AttributeError` to `UncaughtAttributeError` to avoid unexpected catch.
This helps us noticing bugs earlier and facilitates debugging.
.. note:: Treating StopIteration here is easy.
Add that feature when needed.
"""
@functools.wraps(func)
def wrapper(*args, **kwds):
try:
return func(*args, **kwds)
except AttributeError:
exc_info = sys.exc_info()
reraise(UncaughtAttributeError(exc_info[1]), exc_info[2])
return wrapper
class PushBackIterator(object):
def __init__(self, iterator):
self.pushes = []
self.iterator = iterator
self.current = None
def push_back(self, value):
self.pushes.append(value)
def __iter__(self):
return self
def next(self):
""" Python 2 Compatibility """
return self.__next__()
def __next__(self):
if self.pushes:
self.current = self.pushes.pop()
else:
self.current = next(self.iterator)
return self.current
old_value = getattr(obj, attribute_name)
try:
setattr(obj, attribute_name, new_value)
yield
finally:
setattr(obj, attribute_name, old_value)
def indent_block(text, indention=' '):
@@ -86,26 +34,3 @@ def indent_block(text, indention=' '):
text = text[:-1]
lines = text.split('\n')
return '\n'.join(map(lambda s: indention + s, lines)) + temp
@contextlib.contextmanager
def ignored(*exceptions):
"""
Context manager that ignores all of the specified exceptions. This will
be in the standard library starting with Python 3.4.
"""
try:
yield
except exceptions:
pass
def unite(iterable):
"""Turns a two dimensional array into a one dimensional."""
return set(typ for types in iterable for typ in types)
def to_list(func):
def wrapper(*args, **kwargs):
return list(func(*args, **kwargs))
return wrapper
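A standalone sketch of the monkeypatch() context manager kept above, copied here so the snippet runs on its own (Config and flag are invented names):
from contextlib import contextmanager
@contextmanager
def monkeypatch(obj, attribute_name, new_value):
    # Temporarily replace an attribute and restore it afterwards,
    # mirroring the implementation shown above.
    old_value = getattr(obj, attribute_name)
    try:
        setattr(obj, attribute_name, new_value)
        yield
    finally:
        setattr(obj, attribute_name, old_value)
class Config:
    flag = True
config = Config()
with monkeypatch(config, 'flag', False):
    assert config.flag is False
assert config.flag is True  # restored even if the block raised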


@@ -1,17 +1,20 @@
from jedi._compatibility import encoding, is_py3, u
import os
import time
from contextlib import contextmanager
from jedi._compatibility import encoding, is_py3, u
_inited = False
def _lazy_colorama_init():
"""
Lazily init colorama if necessary, not to screw up stdout is debug not
enabled.
Lazily init colorama if necessary, not to screw up stdout if debugging is
not enabled.
This version of the function does nothing.
"""
pass
_inited=False
try:
if os.name == 'nt':
@@ -21,7 +24,8 @@ try:
# Use colorama for nicer console output.
from colorama import Fore, init
from colorama import initialise
def _lazy_colorama_init():
def _lazy_colorama_init(): # noqa: F811
"""
Lazily init colorama if necessary, not to screw up stdout if
debug is not enabled.
@@ -35,7 +39,7 @@ try:
# need this.
initialise.atexit_done = True
try:
init()
init(strip=False)
except Exception:
# Colorama fails with initializing under vim and is buggy in
# version 0.3.6.
@@ -49,6 +53,7 @@ except ImportError:
YELLOW = ''
MAGENTA = ''
RESET = ''
BLUE = ''
NOTICE = object()
WARNING = object()
@@ -73,15 +78,25 @@ def reset_time():
def increase_indent(func):
"""Decorator for makin """
def wrapper(*args, **kwargs):
global _debug_indent
_debug_indent += 1
try:
with increase_indent_cm():
return func(*args, **kwargs)
finally:
_debug_indent -= 1
return wrapper
@contextmanager
def increase_indent_cm(title=None, color='MAGENTA'):
global _debug_indent
if title:
dbg('Start: ' + title, color=color)
_debug_indent += 1
try:
yield
finally:
_debug_indent -= 1
if title:
dbg('End: ' + title, color=color)
def dbg(message, *args, **kwargs):
""" Looks at the stack, to see if a debug message should be printed. """
# Python 2 compatibility, because it doesn't understand default args


@@ -1,629 +0,0 @@
"""
Evaluation of Python code in |jedi| is based on three assumptions:
* The code uses as few side effects as possible. Jedi understands certain
list/tuple/set modifications, but there's no guarantee that Jedi detects
everything (list.append in different modules for example).
* No magic is being used:
- metaclasses
- ``setattr()`` / ``__import__()``
- writing to ``globals()``, ``locals()``, ``object.__dict__``
* The programmer is not a total dick, e.g. like `this
<https://github.com/davidhalter/jedi/issues/24>`_ :-)
The actual algorithm is based on a principle called lazy evaluation. If you
don't know about it, google it. That said, the typical entry point for static
analysis is calling ``eval_statement``. There's separate logic for
autocompletion in the API, the evaluator is all about evaluating an expression.
Now you need to understand what follows after ``eval_statement``. Let's
make an example::
import datetime
datetime.date.toda# <-- cursor here
First of all, this module doesn't care about completion. It really just cares
about ``datetime.date``. At the end of the procedure ``eval_statement`` will
return the ``date`` class.
To *visualize* this (simplified):
- ``Evaluator.eval_statement`` doesn't do much, because there's no assignment.
- ``Evaluator.eval_element`` cares for resolving the dotted path
- ``Evaluator.find_types`` searches for global definitions of datetime, which
it finds in the definition of an import, by scanning the syntax tree.
- Using the import logic, the datetime module is found.
- Now ``find_types`` is called again by ``eval_element`` to find ``date``
inside the datetime module.
Now what would happen if we wanted ``datetime.date.foo.bar``? Two more
calls to ``find_types``. However the second call would be ignored, because the
first one would return nothing (there's no foo attribute in ``date``).
What if the import would contain another ``ExprStmt`` like this::
from foo import bar
Date = bar.baz
Well... You get it. Just another ``eval_statement`` recursion. It's really
easy. Python can obviously get way more complicated than this. To understand
tuple assignments, list comprehensions and everything else, a lot more code had
to be written.
Jedi has been tested very well, so you can just start modifying code. It's best
to write your own test first for your "new" feature. Don't be scared of
breaking stuff. As long as the tests pass, you're most likely to be fine.
I need to mention now that lazy evaluation is really good because it
only *evaluates* what needs to be *evaluated*. All the statements and modules
that are not used are just being ignored.
"""
import copy
import sys
from parso.python import tree
import parso
from jedi import debug
from jedi.common import unite
from jedi.evaluate import representation as er
from jedi.evaluate import imports
from jedi.evaluate import recursion
from jedi.evaluate import iterable
from jedi.evaluate.cache import evaluator_function_cache
from jedi.evaluate import stdlib
from jedi.evaluate import finder
from jedi.evaluate import compiled
from jedi.evaluate import precedence
from jedi.evaluate import param
from jedi.evaluate import helpers
from jedi.evaluate import pep0484
from jedi.evaluate.filters import TreeNameDefinition, ParamName
from jedi.evaluate.instance import AnonymousInstance, BoundMethod
from jedi.evaluate.context import ContextualizedName, ContextualizedNode
from jedi import parser_utils
def _limit_context_infers(func):
"""
This is, for now, the way we keep type inference from going wild. There are
other ways to enforce recursion limits as well. This is mostly necessary
because of instance (self) access that can be quite tricky to limit.
I'm still not sure this is the way to go, but it looks okay for now and we
can still go another way in the future. Tests are there. ~ dave
"""
def wrapper(evaluator, context, *args, **kwargs):
n = context.tree_node
try:
evaluator.inferred_element_counts[n] += 1
if evaluator.inferred_element_counts[n] > 300:
debug.warning('In context %s there were too many inferences.', n)
return set()
except KeyError:
evaluator.inferred_element_counts[n] = 1
return func(evaluator, context, *args, **kwargs)
return wrapper
class Evaluator(object):
def __init__(self, grammar, sys_path=None):
self.grammar = grammar
self.latest_grammar = parso.load_grammar(version='3.6')
self.memoize_cache = {} # for memoize decorators
# To memorize modules -> equals `sys.modules`.
self.modules = {} # like `sys.modules`.
self.compiled_cache = {} # see `evaluate.compiled.create()`
self.inferred_element_counts = {}
self.mixed_cache = {} # see `evaluate.compiled.mixed._create()`
self.analysis = []
self.dynamic_params_depth = 0
self.is_analysis = False
self.python_version = sys.version_info[:2]
if sys_path is None:
sys_path = sys.path
self.sys_path = copy.copy(sys_path)
try:
self.sys_path.remove('')
except ValueError:
pass
self.reset_recursion_limitations()
# Constants
self.BUILTINS = compiled.get_special_object(self, 'BUILTINS')
def reset_recursion_limitations(self):
self.recursion_detector = recursion.RecursionDetector()
self.execution_recursion_detector = recursion.ExecutionRecursionDetector(self)
def find_types(self, context, name_or_str, name_context, position=None,
search_global=False, is_goto=False, analysis_errors=True):
"""
This is the search function. The most important part to debug.
`remove_statements` and `filter_statements` really are the core part of
this completion.
:param position: Position of the last statement -> tuple of line, column
:return: List of Names. Their parents are the types.
"""
f = finder.NameFinder(self, context, name_context, name_or_str,
position, analysis_errors=analysis_errors)
filters = f.get_filters(search_global)
if is_goto:
return f.filter_name(filters)
return f.find(filters, attribute_lookup=not search_global)
@_limit_context_infers
def eval_statement(self, context, stmt, seek_name=None):
with recursion.execution_allowed(self, stmt) as allowed:
if allowed or context.get_root_context() == self.BUILTINS:
return self._eval_stmt(context, stmt, seek_name)
return set()
#@evaluator_function_cache(default=[])
@debug.increase_indent
def _eval_stmt(self, context, stmt, seek_name=None):
"""
The starting point of the completion. A statement always owns a call
list, which are the calls, that a statement does. In case multiple
names are defined in the statement, `seek_name` returns the result for
this name.
:param stmt: A `tree.ExprStmt`.
"""
debug.dbg('eval_statement %s (%s)', stmt, seek_name)
rhs = stmt.get_rhs()
types = self.eval_element(context, rhs)
if seek_name:
c_node = ContextualizedName(context, seek_name)
types = finder.check_tuple_assignments(self, c_node, types)
first_operator = next(stmt.yield_operators(), None)
if first_operator not in ('=', None) and first_operator.type == 'operator':
# `=` is always the last character in aug assignments -> -1
operator = copy.copy(first_operator)
operator.value = operator.value[:-1]
name = stmt.get_defined_names()[0].value
left = context.py__getattribute__(
name, position=stmt.start_pos, search_global=True)
for_stmt = tree.search_ancestor(stmt, 'for_stmt')
if for_stmt is not None and for_stmt.type == 'for_stmt' and types \
and parser_utils.for_stmt_defines_one_name(for_stmt):
# Iterate through result and add the values, that's possible
# only in for loops without clutter, because they are
# predictable. Also only do it, if the variable is not a tuple.
node = for_stmt.get_testlist()
cn = ContextualizedNode(context, node)
ordered = list(iterable.py__iter__(self, cn.infer(), cn))
for lazy_context in ordered:
dct = {for_stmt.children[1].value: lazy_context.infer()}
with helpers.predefine_names(context, for_stmt, dct):
t = self.eval_element(context, rhs)
left = precedence.calculate(self, context, left, operator, t)
types = left
else:
types = precedence.calculate(self, context, left, operator, types)
debug.dbg('eval_statement result %s', types)
return types
def eval_element(self, context, element):
if isinstance(context, iterable.CompForContext):
return self._eval_element_not_cached(context, element)
if_stmt = element
while if_stmt is not None:
if_stmt = if_stmt.parent
if if_stmt.type in ('if_stmt', 'for_stmt'):
break
if parser_utils.is_scope(if_stmt):
if_stmt = None
break
predefined_if_name_dict = context.predefined_names.get(if_stmt)
if predefined_if_name_dict is None and if_stmt and if_stmt.type == 'if_stmt':
if_stmt_test = if_stmt.children[1]
name_dicts = [{}]
# If we already did a check, we don't want to do it again -> If
# context.predefined_names is filled, we stop.
# We don't want to check the if stmt itself, it's just about
# the content.
if element.start_pos > if_stmt_test.end_pos:
# Now we need to check if the names in the if_stmt match the
# names in the suite.
if_names = helpers.get_names_of_node(if_stmt_test)
element_names = helpers.get_names_of_node(element)
str_element_names = [e.value for e in element_names]
if any(i.value in str_element_names for i in if_names):
for if_name in if_names:
definitions = self.goto_definitions(context, if_name)
# Every name that has multiple different definitions
# causes the complexity to rise. The complexity should
# never fall below 1.
if len(definitions) > 1:
if len(name_dicts) * len(definitions) > 16:
debug.dbg('Too many options for if branch evaluation %s.', if_stmt)
# There's only a certain number of branches
# Jedi can evaluate, otherwise it would take too
# long.
name_dicts = [{}]
break
original_name_dicts = list(name_dicts)
name_dicts = []
for definition in definitions:
new_name_dicts = list(original_name_dicts)
for i, name_dict in enumerate(new_name_dicts):
new_name_dicts[i] = name_dict.copy()
new_name_dicts[i][if_name.value] = set([definition])
name_dicts += new_name_dicts
else:
for name_dict in name_dicts:
name_dict[if_name.value] = definitions
if len(name_dicts) > 1:
result = set()
for name_dict in name_dicts:
with helpers.predefine_names(context, if_stmt, name_dict):
result |= self._eval_element_not_cached(context, element)
return result
else:
return self._eval_element_if_evaluated(context, element)
else:
if predefined_if_name_dict:
return self._eval_element_not_cached(context, element)
else:
return self._eval_element_if_evaluated(context, element)
def _eval_element_if_evaluated(self, context, element):
"""
TODO This function is temporary: Merge with eval_element.
"""
parent = element
while parent is not None:
parent = parent.parent
predefined_if_name_dict = context.predefined_names.get(parent)
if predefined_if_name_dict is not None:
return self._eval_element_not_cached(context, element)
return self._eval_element_cached(context, element)
@evaluator_function_cache(default=set())
def _eval_element_cached(self, context, element):
return self._eval_element_not_cached(context, element)
@debug.increase_indent
@_limit_context_infers
def _eval_element_not_cached(self, context, element):
debug.dbg('eval_element %s@%s', element, element.start_pos)
types = set()
typ = element.type
if typ in ('name', 'number', 'string', 'atom'):
types = self.eval_atom(context, element)
elif typ == 'keyword':
# For False/True/None
if element.value in ('False', 'True', 'None'):
types.add(compiled.builtin_from_name(self, element.value))
# else: print e.g. could be evaluated like this in Python 2.7
elif typ == 'lambdef':
types = set([er.FunctionContext(self, context, element)])
elif typ == 'expr_stmt':
types = self.eval_statement(context, element)
elif typ in ('power', 'atom_expr'):
first_child = element.children[0]
if not (first_child.type == 'keyword' and first_child.value == 'await'):
types = self.eval_atom(context, first_child)
for trailer in element.children[1:]:
if trailer == '**': # has a power operation.
right = self.eval_element(context, element.children[2])
types = set(precedence.calculate(self, context, types, trailer, right))
break
types = self.eval_trailer(context, types, trailer)
elif typ in ('testlist_star_expr', 'testlist',):
# The implicit tuple in statements.
types = set([iterable.SequenceLiteralContext(self, context, element)])
elif typ in ('not_test', 'factor'):
types = self.eval_element(context, element.children[-1])
for operator in element.children[:-1]:
types = set(precedence.factor_calculate(self, types, operator))
elif typ == 'test':
# `x if foo else y` case.
types = (self.eval_element(context, element.children[0]) |
self.eval_element(context, element.children[-1]))
elif typ == 'operator':
# Must be an ellipsis, other operators are not evaluated.
# In Python 2 the ellipsis is coded as three single dot tokens, not
# as a single three-dot token.
assert element.value in ('.', '...')
types = set([compiled.create(self, Ellipsis)])
elif typ == 'dotted_name':
types = self.eval_atom(context, element.children[0])
for next_name in element.children[2::2]:
# TODO add search_global=True?
types = unite(
typ.py__getattribute__(next_name, name_context=context)
for typ in types
)
types = types
elif typ == 'eval_input':
types = self._eval_element_not_cached(context, element.children[0])
elif typ == 'annassign':
types = pep0484._evaluate_for_annotation(context, element.children[1])
else:
types = precedence.calculate_children(self, context, element.children)
debug.dbg('eval_element result %s', types)
return types
def eval_atom(self, context, atom):
"""
Basically to process ``atom`` nodes. The parser sometimes doesn't
generate the node (because it has just one child). In that case an atom
might be a name or a literal as well.
"""
if atom.type == 'name':
# This is the first global lookup.
stmt = tree.search_ancestor(
atom, 'expr_stmt', 'lambdef'
) or atom
if stmt.type == 'lambdef':
stmt = atom
return context.py__getattribute__(
name_or_str=atom,
position=stmt.start_pos,
search_global=True
)
elif isinstance(atom, tree.Literal):
string = parser_utils.safe_literal_eval(atom.value)
return set([compiled.create(self, string)])
else:
c = atom.children
if c[0].type == 'string':
# Will be one string.
types = self.eval_atom(context, c[0])
for string in c[1:]:
right = self.eval_atom(context, string)
types = precedence.calculate(self, context, types, '+', right)
return types
# Parentheses without commas are not tuples.
elif c[0] == '(' and not len(c) == 2 \
and not(c[1].type == 'testlist_comp' and
len(c[1].children) > 1):
return self.eval_element(context, c[1])
try:
comp_for = c[1].children[1]
except (IndexError, AttributeError):
pass
else:
if comp_for == ':':
# Dict comprehensions have a colon at the 3rd index.
try:
comp_for = c[1].children[3]
except IndexError:
pass
if comp_for.type == 'comp_for':
return set([iterable.Comprehension.from_atom(self, context, atom)])
# It's a dict/list/tuple literal.
array_node = c[1]
try:
array_node_c = array_node.children
except AttributeError:
array_node_c = []
if c[0] == '{' and (array_node == '}' or ':' in array_node_c):
context = iterable.DictLiteralContext(self, context, atom)
else:
context = iterable.SequenceLiteralContext(self, context, atom)
return set([context])
def eval_trailer(self, context, types, trailer):
trailer_op, node = trailer.children[:2]
if node == ')': # `arglist` is optional.
node = ()
new_types = set()
if trailer_op == '[':
new_types |= iterable.py__getitem__(self, context, types, trailer)
else:
for typ in types:
debug.dbg('eval_trailer: %s in scope %s', trailer, typ)
if trailer_op == '.':
new_types |= typ.py__getattribute__(
name_context=context,
name_or_str=node
)
elif trailer_op == '(':
arguments = param.TreeArguments(self, context, node, trailer)
new_types |= self.execute(typ, arguments)
return new_types
@debug.increase_indent
def execute(self, obj, arguments):
if self.is_analysis:
arguments.eval_all()
debug.dbg('execute: %s %s', obj, arguments)
try:
# Some stdlib functions like super(), namedtuple(), etc. have been
# hard-coded in Jedi to support them.
return stdlib.execute(self, obj, arguments)
except stdlib.NotInStdLib:
pass
try:
func = obj.py__call__
except AttributeError:
debug.warning("no execution possible %s", obj)
return set()
else:
types = func(arguments)
debug.dbg('execute result: %s in %s', types, obj)
return types
def goto_definitions(self, context, name):
def_ = name.get_definition(import_name_always=True)
if def_ is not None:
type_ = def_.type
if type_ == 'classdef':
return [er.ClassContext(self, name.parent, context)]
elif type_ == 'funcdef':
return [er.FunctionContext(self, context, name.parent)]
if type_ == 'expr_stmt':
is_simple_name = name.parent.type not in ('power', 'trailer')
if is_simple_name:
return self.eval_statement(context, def_, name)
if type_ == 'for_stmt':
container_types = self.eval_element(context, def_.children[3])
cn = ContextualizedNode(context, def_.children[3])
for_types = iterable.py__iter__types(self, container_types, cn)
c_node = ContextualizedName(context, name)
return finder.check_tuple_assignments(self, c_node, for_types)
if type_ in ('import_from', 'import_name'):
return imports.infer_import(context, name)
return helpers.evaluate_call_of_leaf(context, name)
def goto(self, context, name):
definition = name.get_definition(import_name_always=True)
if definition is not None:
type_ = definition.type
if type_ == 'expr_stmt':
# Only take the parent, because if it's more complicated than just
# a name it's something you can "goto" again.
is_simple_name = name.parent.type not in ('power', 'trailer')
if is_simple_name:
return [TreeNameDefinition(context, name)]
elif type_ == 'param':
return [ParamName(context, name)]
elif type_ in ('funcdef', 'classdef'):
return [TreeNameDefinition(context, name)]
elif type_ in ('import_from', 'import_name'):
module_names = imports.infer_import(context, name, is_goto=True)
return module_names
par = name.parent
typ = par.type
if typ == 'argument' and par.children[1] == '=' and par.children[0] == name:
# Named param goto.
trailer = par.parent
if trailer.type == 'arglist':
trailer = trailer.parent
if trailer.type != 'classdef':
if trailer.type == 'decorator':
types = self.eval_element(context, trailer.children[1])
else:
i = trailer.parent.children.index(trailer)
to_evaluate = trailer.parent.children[:i]
types = self.eval_element(context, to_evaluate[0])
for trailer in to_evaluate[1:]:
types = self.eval_trailer(context, types, trailer)
param_names = []
for typ in types:
try:
get_param_names = typ.get_param_names
except AttributeError:
pass
else:
for param_name in get_param_names():
if param_name.string_name == name.value:
param_names.append(param_name)
return param_names
elif typ == 'dotted_name': # Is a decorator.
index = par.children.index(name)
if index > 0:
new_dotted = helpers.deep_ast_copy(par)
new_dotted.children[index - 1:] = []
values = self.eval_element(context, new_dotted)
return unite(
value.py__getattribute__(name, name_context=context, is_goto=True)
for value in values
)
if typ == 'trailer' and par.children[0] == '.':
values = helpers.evaluate_call_of_leaf(context, name, cut_own_trailer=True)
return unite(
value.py__getattribute__(name, name_context=context, is_goto=True)
for value in values
)
else:
stmt = tree.search_ancestor(
name, 'expr_stmt', 'lambdef'
) or name
if stmt.type == 'lambdef':
stmt = name
return context.py__getattribute__(
name,
position=stmt.start_pos,
search_global=True, is_goto=True
)
def create_context(self, base_context, node, node_is_context=False, node_is_object=False):
def parent_scope(node):
while True:
node = node.parent
if parser_utils.is_scope(node):
return node
elif node.type in ('argument', 'testlist_comp'):
if node.children[1].type == 'comp_for':
return node.children[1]
elif node.type == 'dictorsetmaker':
for n in node.children[1:4]:
# In dictionaries it can be pretty much anything.
if n.type == 'comp_for':
return n
def from_scope_node(scope_node, child_is_funcdef=None, is_nested=True, node_is_object=False):
if scope_node == base_node:
return base_context
is_funcdef = scope_node.type in ('funcdef', 'lambdef')
parent_scope = parser_utils.get_parent_scope(scope_node)
parent_context = from_scope_node(parent_scope, child_is_funcdef=is_funcdef)
if is_funcdef:
if isinstance(parent_context, AnonymousInstance):
func = BoundMethod(
self, parent_context, parent_context.class_context,
parent_context.parent_context, scope_node
)
else:
func = er.FunctionContext(
self,
parent_context,
scope_node
)
if is_nested and not node_is_object:
return func.get_function_execution()
return func
elif scope_node.type == 'classdef':
class_context = er.ClassContext(self, scope_node, parent_context)
if child_is_funcdef:
# anonymous instance
return AnonymousInstance(self, parent_context, class_context)
else:
return class_context
elif scope_node.type == 'comp_for':
if node.start_pos >= scope_node.children[-1].start_pos:
return parent_context
return iterable.CompForContext.from_comp_for(parent_context, scope_node)
raise Exception("There's a scope that was not managed.")
base_node = base_context.tree_node
if node_is_context and parser_utils.is_scope(node):
scope_node = node
else:
if node.parent.type in ('funcdef', 'classdef') and node.parent.name == node:
# When we're on class/function names/leafs that define the
# object itself and not its contents.
node = node.parent
scope_node = parent_scope(node)
return from_scope_node(scope_node, is_nested=True, node_is_object=node_is_object)
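The _limit_context_infers decorator near the top of the removed file boils down to a per-node call counter. A standalone sketch of that guard (the 300-call cap mirrors the removed code; every other name here is invented):
from collections import Counter
def limit_inferences(func, max_count=300):
    counts = Counter()
    def wrapper(node, *args, **kwargs):
        counts[node] += 1
        if counts[node] > max_count:
            return set()   # give up on this node instead of inferring forever
        return func(node, *args, **kwargs)
    return wrapper
@limit_inferences
def infer(node):
    return {node}
assert infer('some_node') == {'some_node'}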


@@ -1,81 +0,0 @@
"""
- the popular ``_memoize_default`` works like a typical memoize and returns the
default otherwise.
- ``CachedMetaClass`` uses ``_memoize_default`` to do the same with classes.
"""
import inspect
_NO_DEFAULT = object()
def _memoize_default(default=_NO_DEFAULT, evaluator_is_first_arg=False, second_arg_is_evaluator=False):
""" This is a typical memoization decorator, BUT there is one difference:
To prevent recursion it sets defaults.
Preventing recursion is in this case much more important than speed. I
don't think that there is a big speed difference, but there are many cases
where recursion could happen (think about a = b; b = a).
"""
def func(function):
def wrapper(obj, *args, **kwargs):
# TODO These checks are kind of ugly and slow.
if evaluator_is_first_arg:
cache = obj.memoize_cache
elif second_arg_is_evaluator:
cache = args[0].memoize_cache # needed for meta classes
else:
cache = obj.evaluator.memoize_cache
try:
memo = cache[function]
except KeyError:
memo = {}
cache[function] = memo
key = (obj, args, frozenset(kwargs.items()))
if key in memo:
return memo[key]
else:
if default is not _NO_DEFAULT:
memo[key] = default
rv = function(obj, *args, **kwargs)
if inspect.isgenerator(rv):
rv = list(rv)
memo[key] = rv
return rv
return wrapper
return func
def evaluator_function_cache(default=_NO_DEFAULT):
def decorator(func):
return _memoize_default(default=default, evaluator_is_first_arg=True)(func)
return decorator
def evaluator_method_cache(default=_NO_DEFAULT):
def decorator(func):
return _memoize_default(default=default)(func)
return decorator
def _memoize_meta_class():
def decorator(call):
return _memoize_default(second_arg_is_evaluator=True)(call)
return decorator
class CachedMetaClass(type):
"""
This is basically almost the same as the decorator above, it just caches
class initializations. Either you do it this way or with decorators, but
with decorators you lose class access (isinstance, etc).
"""
@_memoize_meta_class()
def __call__(self, *args, **kwargs):
return super(CachedMetaClass, self).__call__(*args, **kwargs)
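The core trick of _memoize_default above is seeding the cache with a default before calling the function, so that recursive cycles terminate. A standalone sketch of that idea (all names invented):
_NO_DEFAULT = object()
def memoize_default(default=_NO_DEFAULT):
    def decorator(function):
        memo = {}
        def wrapper(*args):
            if args in memo:
                return memo[args]
            if default is not _NO_DEFAULT:
                # Seed the cache so a recursive call sees the default
                # instead of recursing endlessly.
                memo[args] = default
            result = function(*args)
            memo[args] = result
            return result
        return wrapper
    return decorator
@memoize_default(default=set())
def infer(name):
    # A self-referential definition (think a = b; b = a) would normally
    # recurse forever; the seeded default cuts the cycle after one level.
    return {name} | infer(name)
assert infer('a') == {'a'}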


@@ -1,637 +0,0 @@
"""
Imitate the parser representation.
"""
import inspect
import re
import sys
import os
import types
from functools import partial
from jedi._compatibility import builtins as _builtins, unicode, py_version
from jedi import debug
from jedi.cache import underscore_memoization, memoize_method
from jedi.evaluate.filters import AbstractFilter, AbstractNameDefinition, \
ContextNameMixin
from jedi.evaluate.context import Context, LazyKnownContext
from jedi.evaluate.compiled.getattr_static import getattr_static
from . import fake
_sep = os.path.sep
if os.path.altsep is not None:
_sep += os.path.altsep
_path_re = re.compile('(?:\.[^{0}]+|[{0}]__init__\.py)$'.format(re.escape(_sep)))
del _sep
# Those types don't exist in typing.
MethodDescriptorType = type(str.replace)
WrapperDescriptorType = type(set.__iter__)
# `object.__subclasshook__` is an already executed descriptor.
object_class_dict = type.__dict__["__dict__"].__get__(object)
ClassMethodDescriptorType = type(object_class_dict['__subclasshook__'])
ALLOWED_DESCRIPTOR_ACCESS = (
types.FunctionType,
types.GetSetDescriptorType,
types.MemberDescriptorType,
MethodDescriptorType,
WrapperDescriptorType,
ClassMethodDescriptorType,
staticmethod,
classmethod,
)
class CheckAttribute(object):
"""Raises an AttributeError if the attribute X isn't available."""
def __init__(self, func):
self.func = func
# Remove the py in front of e.g. py__call__.
self.check_name = func.__name__[2:]
def __get__(self, instance, owner):
# This might raise an AttributeError. That's wanted.
if self.check_name == '__iter__':
# Python iterators are a bit strange, because there's no need for
# the __iter__ function as long as __getitem__ is defined (it will
# just start with __getitem__(0)). This is especially true for
# Python 2 strings, where `str.__iter__` is not even defined.
try:
iter(instance.obj)
except TypeError:
raise AttributeError
else:
getattr(instance.obj, self.check_name)
return partial(self.func, instance)
class CompiledObject(Context):
path = None # modules have this attribute - set it to None.
used_names = lambda self: {} # To be consistent with modules.
def __init__(self, evaluator, obj, parent_context=None, faked_class=None):
super(CompiledObject, self).__init__(evaluator, parent_context)
self.obj = obj
# This attribute will not be set for most classes, except for fakes.
self.tree_node = faked_class
def get_root_node(self):
# To make things a bit easier with filters we add this method here.
return self.get_root_context()
@CheckAttribute
def py__call__(self, params):
if inspect.isclass(self.obj):
from jedi.evaluate.instance import CompiledInstance
return set([CompiledInstance(self.evaluator, self.parent_context, self, params)])
else:
return set(self._execute_function(params))
@CheckAttribute
def py__class__(self):
return create(self.evaluator, self.obj.__class__)
@CheckAttribute
def py__mro__(self):
return (self,) + tuple(create(self.evaluator, cls) for cls in self.obj.__mro__[1:])
@CheckAttribute
def py__bases__(self):
return tuple(create(self.evaluator, cls) for cls in self.obj.__bases__)
def py__bool__(self):
return bool(self.obj)
def py__file__(self):
try:
return self.obj.__file__
except AttributeError:
return None
def is_class(self):
return inspect.isclass(self.obj)
def py__doc__(self, include_call_signature=False):
return inspect.getdoc(self.obj) or ''
def get_param_names(self):
obj = self.obj
try:
if py_version < 33:
raise ValueError("inspect.signature was introduced in 3.3")
if py_version == 34:
# In 3.4 inspect.signature is wrong for str and int. This has
# been fixed in 3.5. The signature of object is returned,
# because no signature was found for str. Here we imitate 3.5
# logic and just ignore the signature if the magic methods
# don't match object.
# 3.3 doesn't even have the logic and returns nothing for str
# and classes that inherit from object.
user_def = inspect._signature_get_user_defined_method
if (inspect.isclass(obj)
and not user_def(type(obj), '__init__')
and not user_def(type(obj), '__new__')
and (obj.__init__ != object.__init__
or obj.__new__ != object.__new__)):
raise ValueError
signature = inspect.signature(obj)
except ValueError: # Has no signature
params_str, ret = self._parse_function_doc()
tokens = params_str.split(',')
if inspect.ismethoddescriptor(obj):
tokens.insert(0, 'self')
for p in tokens:
parts = p.strip().split('=')
yield UnresolvableParamName(self, parts[0])
else:
for signature_param in signature.parameters.values():
yield SignatureParamName(self, signature_param)
def __repr__(self):
return '<%s: %s>' % (self.__class__.__name__, repr(self.obj))
@underscore_memoization
def _parse_function_doc(self):
doc = self.py__doc__()
if doc is None:
return '', ''
return _parse_function_doc(doc)
@property
def api_type(self):
obj = self.obj
if inspect.isclass(obj):
return 'class'
elif inspect.ismodule(obj):
return 'module'
elif inspect.isbuiltin(obj) or inspect.ismethod(obj) \
or inspect.ismethoddescriptor(obj) or inspect.isfunction(obj):
return 'function'
# Everything else...
return 'instance'
@property
def type(self):
"""Imitate the tree.Node.type values."""
cls = self._get_class()
if inspect.isclass(cls):
return 'classdef'
elif inspect.ismodule(cls):
return 'file_input'
elif inspect.isbuiltin(cls) or inspect.ismethod(cls) or \
inspect.ismethoddescriptor(cls):
return 'funcdef'
@underscore_memoization
def _cls(self):
"""
We used to limit the lookups for instantiated objects like list(), but
this is not the case anymore. Python itself
"""
# Ensures that a CompiledObject is returned that is not an instance (like list)
return self
def _get_class(self):
if not fake.is_class_instance(self.obj) or \
inspect.ismethoddescriptor(self.obj): # slots
return self.obj
try:
return self.obj.__class__
except AttributeError:
# happens with numpy.core.umath._UFUNC_API (you get it
# automatically by doing `import numpy`).
return type
def get_filters(self, search_global=False, is_instance=False,
until_position=None, origin_scope=None):
yield self._ensure_one_filter(is_instance)
@memoize_method
def _ensure_one_filter(self, is_instance):
"""
search_global shouldn't change the fact that there's one dict, this way
there's only one `object`.
"""
return CompiledObjectFilter(self.evaluator, self, is_instance)
@CheckAttribute
def py__getitem__(self, index):
if type(self.obj) not in (str, list, tuple, unicode, bytes, bytearray, dict):
# Get rid of side effects, we won't call custom `__getitem__`s.
return set()
return set([create(self.evaluator, self.obj[index])])
@CheckAttribute
def py__iter__(self):
if type(self.obj) not in (str, list, tuple, unicode, bytes, bytearray, dict):
# Get rid of side effects, we won't call custom `__getitem__`s.
return
for i, part in enumerate(self.obj):
if i > 20:
# Should not go crazy with large iterators
break
yield LazyKnownContext(create(self.evaluator, part))
def py__name__(self):
try:
return self._get_class().__name__
except AttributeError:
return None
@property
def name(self):
try:
name = self._get_class().__name__
except AttributeError:
name = repr(self.obj)
return CompiledContextName(self, name)
def _execute_function(self, params):
from jedi.evaluate import docstrings
if self.type != 'funcdef':
return
for name in self._parse_function_doc()[1].split():
try:
bltn_obj = getattr(_builtins, name)
except AttributeError:
continue
else:
if bltn_obj is None:
# We want to evaluate everything except None.
# TODO do we?
continue
bltn_obj = create(self.evaluator, bltn_obj)
for result in self.evaluator.execute(bltn_obj, params):
yield result
for type_ in docstrings.infer_return_types(self):
yield type_
def get_self_attributes(self):
return [] # Instance compatibility
def get_imports(self):
return [] # Builtins don't have imports
def dict_values(self):
return set(create(self.evaluator, v) for v in self.obj.values())
class CompiledName(AbstractNameDefinition):
def __init__(self, evaluator, parent_context, name):
self._evaluator = evaluator
self.parent_context = parent_context
self.string_name = name
def __repr__(self):
try:
name = self.parent_context.name # __name__ is not defined all the time
except AttributeError:
name = None
return '<%s: (%s).%s>' % (self.__class__.__name__, name, self.string_name)
@property
def api_type(self):
return next(iter(self.infer())).api_type
@underscore_memoization
def infer(self):
module = self.parent_context.get_root_context()
return [_create_from_name(self._evaluator, module, self.parent_context, self.string_name)]
class SignatureParamName(AbstractNameDefinition):
api_type = 'param'
def __init__(self, compiled_obj, signature_param):
self.parent_context = compiled_obj.parent_context
self._signature_param = signature_param
@property
def string_name(self):
return self._signature_param.name
def infer(self):
p = self._signature_param
evaluator = self.parent_context.evaluator
types = set()
if p.default is not p.empty:
types.add(create(evaluator, p.default))
if p.annotation is not p.empty:
annotation = create(evaluator, p.annotation)
types |= annotation.execute_evaluated()
return types
class UnresolvableParamName(AbstractNameDefinition):
api_type = 'param'
def __init__(self, compiled_obj, name):
self.parent_context = compiled_obj.parent_context
self.string_name = name
def infer(self):
return set()
class CompiledContextName(ContextNameMixin, AbstractNameDefinition):
def __init__(self, context, name):
self.string_name = name
self._context = context
self.parent_context = context.parent_context
class EmptyCompiledName(AbstractNameDefinition):
"""
Accessing some names will raise an exception. To avoid not having any
completions, just give Jedi the option to return this object. It infers to
nothing.
"""
def __init__(self, evaluator, name):
self.parent_context = evaluator.BUILTINS
self.string_name = name
def infer(self):
return []
class CompiledObjectFilter(AbstractFilter):
name_class = CompiledName
def __init__(self, evaluator, compiled_object, is_instance=False):
self._evaluator = evaluator
self._compiled_object = compiled_object
self._is_instance = is_instance
@memoize_method
def get(self, name):
name = str(name)
obj = self._compiled_object.obj
try:
attr, is_get_descriptor = getattr_static(obj, name)
except AttributeError:
return []
else:
if is_get_descriptor \
and not type(attr) in ALLOWED_DESCRIPTOR_ACCESS:
# In case of descriptors that have get methods we cannot return
# its value, because that would mean code execution.
return [EmptyCompiledName(self._evaluator, name)]
if self._is_instance and name not in dir(obj):
return []
return [self._create_name(name)]
def values(self):
obj = self._compiled_object.obj
names = []
for name in dir(obj):
names += self.get(name)
is_instance = self._is_instance or fake.is_class_instance(obj)
# ``dir`` doesn't include the type names.
if not inspect.ismodule(obj) and (obj is not type) and not is_instance:
for filter in create(self._evaluator, type).get_filters():
names += filter.values()
return names
def _create_name(self, name):
return self.name_class(self._evaluator, self._compiled_object, name)
def dotted_from_fs_path(fs_path, sys_path):
"""
Changes `/usr/lib/python3.4/email/utils.py` to `email.utils`. I.e.
compares the path with sys.path and then returns the dotted_path. If the
path is not in the sys.path, just returns None.
"""
if os.path.basename(fs_path).startswith('__init__.'):
# We are calculating the path. __init__ files are not interesting.
fs_path = os.path.dirname(fs_path)
# prefer
# - UNIX
# /path/to/pythonX.Y/lib-dynload
# /path/to/pythonX.Y/site-packages
# - Windows
# C:\path\to\DLLs
# C:\path\to\Lib\site-packages
# over
# - UNIX
# /path/to/pythonX.Y
# - Windows
# C:\path\to\Lib
path = ''
for s in sys_path:
if (fs_path.startswith(s) and len(path) < len(s)):
path = s
# - Windows
# X:\path\to\lib-dynload/datetime.pyd => datetime
module_path = fs_path[len(path):].lstrip(os.path.sep).lstrip('/')
# - Windows
# Replace like X:\path\to\something/foo/bar.py
return _path_re.sub('', module_path).replace(os.path.sep, '.').replace('/', '.')
def load_module(evaluator, path=None, name=None):
sys_path = evaluator.sys_path
if path is not None:
dotted_path = dotted_from_fs_path(path, sys_path=sys_path)
else:
dotted_path = name
if dotted_path is None:
p, _, dotted_path = path.partition(os.path.sep)
sys_path.insert(0, p)
temp, sys.path = sys.path, sys_path
try:
__import__(dotted_path)
except RuntimeError:
if 'PySide' in dotted_path or 'PyQt' in dotted_path:
# RuntimeError: the PyQt4.QtCore and PyQt5.QtCore modules both wrap
# the QObject class.
# See https://github.com/davidhalter/jedi/pull/483
return None
raise
except ImportError:
# If a module is "corrupt" or not really a Python module or whatever.
debug.warning('Module %s not importable in path %s.', dotted_path, path)
return None
finally:
sys.path = temp
# Just access the cache after import, because of #59 as well as the very
# complicated import structure of Python.
module = sys.modules[dotted_path]
return create(evaluator, module)
docstr_defaults = {
'floating point number': 'float',
'character': 'str',
'integer': 'int',
'dictionary': 'dict',
'string': 'str',
}
def _parse_function_doc(doc):
"""
Takes a function and returns the params and return value as a tuple.
This is nothing more than a docstring parser.
TODO docstrings like utime(path, (atime, mtime)) and a(b [, b]) -> None
TODO docstrings like 'tuple of integers'
"""
# parse round parentheses: def func(a, (b,c))
try:
count = 0
start = doc.index('(')
for i, s in enumerate(doc[start:]):
if s == '(':
count += 1
elif s == ')':
count -= 1
if count == 0:
end = start + i
break
param_str = doc[start + 1:end]
except (ValueError, UnboundLocalError):
# ValueError for doc.index
# UnboundLocalError for undefined end in last line
debug.dbg('no brackets found - no param')
end = 0
param_str = ''
else:
# remove square brackets, that show an optional param ( = None)
def change_options(m):
args = m.group(1).split(',')
for i, a in enumerate(args):
if a and '=' not in a:
args[i] += '=None'
return ','.join(args)
while True:
param_str, changes = re.subn(r' ?\[([^\[\]]+)\]',
change_options, param_str)
if changes == 0:
break
param_str = param_str.replace('-', '_') # see: isinstance.__doc__
# parse return value
r = re.search('-[>-]* ', doc[end:end + 7])
if r is None:
ret = ''
else:
index = end + r.end()
# get result type, which can contain newlines
pattern = re.compile(r'(,\n|[^\n-])+')
ret_str = pattern.match(doc, index).group(0).strip()
# New object -> object()
ret_str = re.sub(r'[nN]ew (.*)', r'\1()', ret_str)
ret = docstr_defaults.get(ret_str, ret_str)
return param_str, ret
def _create_from_name(evaluator, module, compiled_object, name):
obj = compiled_object.obj
faked = None
try:
faked = fake.get_faked(evaluator, module, obj, parent_context=compiled_object, name=name)
if faked.type == 'funcdef':
from jedi.evaluate.representation import FunctionContext
return FunctionContext(evaluator, compiled_object, faked)
except fake.FakeDoesNotExist:
pass
try:
obj = getattr(obj, name)
except AttributeError:
# Happens e.g. in properties of
# PyQt4.QtGui.QStyleOptionComboBox.currentText
# -> just set it to None
obj = None
return create(evaluator, obj, parent_context=compiled_object, faked=faked)
def builtin_from_name(evaluator, string):
bltn_obj = getattr(_builtins, string)
return create(evaluator, bltn_obj)
def _a_generator(foo):
"""Used to have an object to return for generators."""
yield 42
yield foo
_SPECIAL_OBJECTS = {
'FUNCTION_CLASS': type(load_module),
'METHOD_CLASS': type(CompiledObject.is_class),
'MODULE_CLASS': type(os),
'GENERATOR_OBJECT': _a_generator(1.0),
'BUILTINS': _builtins,
}
def get_special_object(evaluator, identifier):
obj = _SPECIAL_OBJECTS[identifier]
return create(evaluator, obj, parent_context=create(evaluator, _builtins))
def compiled_objects_cache(attribute_name):
def decorator(func):
"""
This decorator caches just the ids, as opposed to caching the object itself.
Caching the id has the advantage that an object doesn't need to be
hashable.
"""
def wrapper(evaluator, obj, parent_context=None, module=None, faked=None):
cache = getattr(evaluator, attribute_name)
# Do a very cheap form of caching here.
key = id(obj), id(parent_context)
try:
return cache[key][0]
except KeyError:
# TODO this whole decorator is way too ugly
result = func(evaluator, obj, parent_context, module, faked)
# Need to cache all of them, otherwise the id could be overwritten.
cache[key] = result, obj, parent_context, module, faked
return result
return wrapper
return decorator
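# Self-contained sketch of the id-based caching idea used above; the names are
# hypothetical and not part of the original module. Keying on id() means the
# cached object never needs to be hashable.
def _example_id_based_cache():
    cache = {}

    def wrap(obj, parent=None):
        key = id(obj), id(parent)
        try:
            return cache[key][0]
        except KeyError:
            result = object()  # stand-in for a freshly created CompiledObject
            # Keep obj/parent alive in the cache so their ids cannot be reused.
            cache[key] = result, obj, parent
            return result

    unhashable = {}  # a dict cannot be used as a dict key itself
    assert wrap(unhashable) is wrap(unhashable)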
@compiled_objects_cache('compiled_cache')
def create(evaluator, obj, parent_context=None, module=None, faked=None):
"""
A very weird interface class to this module. The more options provided, the
more accurate the loading of compiled objects is.
"""
if inspect.ismodule(obj):
if parent_context is not None:
# Modules don't have parents, be careful with caching: recurse.
return create(evaluator, obj)
else:
if parent_context is None and obj is not _builtins:
return create(evaluator, obj, create(evaluator, _builtins))
try:
faked = fake.get_faked(evaluator, module, obj, parent_context=parent_context)
if faked.type == 'funcdef':
from jedi.evaluate.representation import FunctionContext
return FunctionContext(evaluator, parent_context, faked)
except fake.FakeDoesNotExist:
pass
return CompiledObject(evaluator, obj, parent_context, faked)


@@ -1,213 +0,0 @@
"""
Loads functions that are mixed in to the standard library. E.g. builtins are
written in C (binaries), but my autocompletion only understands Python code. By
mixing in Python code, the autocompletion should work much better for builtins.
"""
import os
import inspect
import types
from itertools import chain
from parso.python import tree
from jedi._compatibility import is_py3, builtins, unicode, is_py34
modules = {}
MethodDescriptorType = type(str.replace)
# These are not considered classes and access is granted even though they have
# a __class__ attribute.
NOT_CLASS_TYPES = (
types.BuiltinFunctionType,
types.CodeType,
types.FrameType,
types.FunctionType,
types.GeneratorType,
types.GetSetDescriptorType,
types.LambdaType,
types.MemberDescriptorType,
types.MethodType,
types.ModuleType,
types.TracebackType,
MethodDescriptorType
)
if is_py3:
NOT_CLASS_TYPES += (
types.MappingProxyType,
types.SimpleNamespace
)
if is_py34:
NOT_CLASS_TYPES += (types.DynamicClassAttribute,)
class FakeDoesNotExist(Exception):
pass
def _load_faked_module(grammar, module):
module_name = module.__name__
if module_name == '__builtin__' and not is_py3:
module_name = 'builtins'
try:
return modules[module_name]
except KeyError:
path = os.path.dirname(os.path.abspath(__file__))
try:
with open(os.path.join(path, 'fake', module_name) + '.pym') as f:
source = f.read()
except IOError:
modules[module_name] = None
return
modules[module_name] = m = grammar.parse(unicode(source))
if module_name == 'builtins' and not is_py3:
# There are two implementations of `open` for either python 2/3.
# -> Rename the python2 version (`look at fake/builtins.pym`).
open_func = _search_scope(m, 'open')
open_func.children[1].value = 'open_python3'
open_func = _search_scope(m, 'open_python2')
open_func.children[1].value = 'open'
return m
def _search_scope(scope, obj_name):
for s in chain(scope.iter_classdefs(), scope.iter_funcdefs()):
if s.name.value == obj_name:
return s
def get_module(obj):
if inspect.ismodule(obj):
return obj
try:
obj = obj.__objclass__
except AttributeError:
pass
try:
imp_plz = obj.__module__
except AttributeError:
# Unfortunately in some cases like `int` there's no __module__
return builtins
else:
if imp_plz is None:
# Happens for example in `(_ for _ in []).send.__module__`.
return builtins
else:
try:
return __import__(imp_plz)
except ImportError:
# __module__ can be something arbitrary that doesn't exist.
return builtins
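# Illustrative example, not part of the original module: a method descriptor
# such as str.replace is resolved through __objclass__ and __module__ to the
# same module as its class.
def _example_get_module():
    assert get_module(str.replace) is get_module(str)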
def _faked(grammar, module, obj, name):
# Crazy underscore actions to try to escape all the internal madness.
if module is None:
module = get_module(obj)
faked_mod = _load_faked_module(grammar, module)
if faked_mod is None:
return None, None
# Having the module as a `parser.python.tree.Module`, we need to scan
# for methods.
if name is None:
if inspect.isbuiltin(obj) or inspect.isclass(obj):
return _search_scope(faked_mod, obj.__name__), faked_mod
elif not inspect.isclass(obj):
# object is a method or descriptor
try:
objclass = obj.__objclass__
except AttributeError:
return None, None
else:
cls = _search_scope(faked_mod, objclass.__name__)
if cls is None:
return None, None
return _search_scope(cls, obj.__name__), faked_mod
else:
if obj is module:
return _search_scope(faked_mod, name), faked_mod
else:
try:
cls_name = obj.__name__
except AttributeError:
return None, None
cls = _search_scope(faked_mod, cls_name)
if cls is None:
return None, None
return _search_scope(cls, name), faked_mod
return None, None
def memoize_faked(obj):
"""
A typical memoize function that ignores issues with non-hashable arguments.
"""
cache = obj.cache = {}
def memoizer(*args, **kwargs):
key = (obj, args, frozenset(kwargs.items()))
try:
result = cache[key]
except (TypeError, ValueError):
return obj(*args, **kwargs)
except KeyError:
result = obj(*args, **kwargs)
if result is not None:
cache[key] = result
return result
else:
return result
return memoizer
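# Hedged usage sketch with a hypothetical function: hashable arguments are
# cached, unhashable ones silently fall through to a plain call.
@memoize_faked
def _example_lookup(grammar, module, obj, name=None):
    return obj, name


def _example_memoize_faked():
    assert _example_lookup('grammar', 'module', 'obj') == ('obj', None)
    # A dict argument is not hashable, so building the cache key raises
    # TypeError and the call is simply not cached.
    assert _example_lookup('grammar', 'module', {}) == ({}, None)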
@memoize_faked
def _get_faked(grammar, module, obj, name=None):
result, fake_module = _faked(grammar, module, obj, name)
if result is None:
# We're not interested in classes. What we want is functions.
raise FakeDoesNotExist
elif result.type == 'classdef':
return result, fake_module
else:
# Set the docstr which was previously not set (faked modules don't
# contain it).
assert result.type == 'funcdef'
doc = '"""%s"""' % obj.__doc__ # TODO need escapes.
suite = result.children[-1]
string = tree.String(doc, (0, 0), '')
new_line = tree.Newline('\n', (0, 0))
docstr_node = tree.PythonNode('simple_stmt', [string, new_line])
suite.children.insert(1, docstr_node)
return result, fake_module
def get_faked(evaluator, module, obj, name=None, parent_context=None):
if parent_context and parent_context.tree_node is not None:
# Try to search in already clearly defined stuff.
found = _search_scope(parent_context.tree_node, name)
if found is not None:
return found
else:
raise FakeDoesNotExist
faked, fake_module = _get_faked(evaluator.latest_grammar, module and module.obj, obj, name)
if module is not None:
module.get_used_names = fake_module.get_used_names
return faked
def is_class_instance(obj):
"""Like inspect.* methods."""
try:
cls = obj.__class__
except AttributeError:
return False
else:
return cls != type and not issubclass(cls, NOT_CLASS_TYPES)


@@ -1,9 +0,0 @@
class partial():
def __init__(self, func, *args, **keywords):
self.__func = func
self.__args = args
self.__keywords = keywords
def __call__(self, *args, **kwargs):
# Merge the stored keywords with the keywords given at call time.
return self.__func(*(self.__args + args), **dict(self.__keywords, **kwargs))
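# Hedged usage sketch of the fake above: stored positional arguments are
# prepended and stored keywords are passed on, which mirrors functools.partial
# closely enough for completion purposes.
def _example_partial():
    to_int = partial(int, base=2)
    assert to_int('10') == 2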


@@ -1,26 +0,0 @@
def connect(database, timeout=None, isolation_level=None, detect_types=None, factory=None):
return Connection()
class Connection():
def cursor(self):
return Cursor()
class Cursor():
def cursor(self):
return Cursor()
def fetchone(self):
return Row()
def fetchmany(self, size=cursor.arraysize):
return [self.fetchone()]
def fetchall(self):
return [self.fetchone()]
class Row():
def keys(self):
return ['']


@@ -1,99 +0,0 @@
def compile():
class SRE_Match():
endpos = int()
lastgroup = int()
lastindex = int()
pos = int()
string = str()
regs = ((int(), int()),)
def __init__(self, pattern):
self.re = pattern
def start(self):
return int()
def end(self):
return int()
def span(self):
return int(), int()
def expand(self):
return str()
def group(self, nr):
return str()
def groupdict(self):
return {str(): str()}
def groups(self):
return (str(),)
class SRE_Pattern():
flags = int()
groupindex = {}
groups = int()
pattern = str()
def findall(self, string, pos=None, endpos=None):
"""
findall(string[, pos[, endpos]]) --> list.
Return a list of all non-overlapping matches of pattern in string.
"""
return [str()]
def finditer(self, string, pos=None, endpos=None):
"""
finditer(string[, pos[, endpos]]) --> iterator.
Return an iterator over all non-overlapping matches for the
RE pattern in string. For each match, the iterator returns a
match object.
"""
yield SRE_Match(self)
def match(self, string, pos=None, endpos=None):
"""
match(string[, pos[, endpos]]) --> match object or None.
Matches zero or more characters at the beginning of the string
pattern
"""
return SRE_Match(self)
def scanner(self, string, pos=None, endpos=None):
pass
def search(self, string, pos=None, endpos=None):
"""
search(string[, pos[, endpos]]) --> match object or None.
Scan through string looking for a match, and return a corresponding
MatchObject instance. Return None if no position in the string matches.
"""
return SRE_Match(self)
def split(self, string, maxsplit=0):
"""
split(string[, maxsplit = 0]) --> list.
Split string by the occurrences of pattern.
"""
return [str()]
def sub(self, repl, string, count=0):
"""
sub(repl, string[, count = 0]) --> newstring
Return the string obtained by replacing the leftmost non-overlapping
occurrences of pattern in string by the replacement repl.
"""
return str()
def subn(self, repl, string, count=0):
"""
subn(repl, string[, count = 0]) --> (newstring, number of subs)
Return the tuple (new_string, number_of_subs_made) found by replacing
the leftmost non-overlapping occurrences of pattern with the
replacement repl.
"""
return (str(), int())
return SRE_Pattern()


@@ -1,9 +0,0 @@
def proxy(object, callback=None):
return object
class ref():
def __init__(self, object, callback=None):
self.__object = object
def __call__(self):
return self.__object


@@ -1,274 +0,0 @@
"""
Pure Python implementation of some builtins.
This code is not going to be executed anywhere.
These implementations are not always correct, but should work as good as
possible for the auto completion.
"""
def next(iterator, default=None):
if random.choice([0, 1]):
if hasattr(iterator, "next"):
return iterator.next()
else:
return iterator.__next__()
else:
if default is not None:
return default
def iter(collection, sentinel=None):
if sentinel:
yield collection()
else:
for c in collection:
yield c
def range(start, stop=None, step=1):
return [0]
class file():
def __iter__(self):
yield ''
def next(self):
return ''
def readlines(self):
return ['']
def __enter__(self):
return self
class xrange():
# Attention: this function doesn't exist in Py3k (there it is range).
def __iter__(self):
yield 1
def count(self):
return 1
def index(self):
return 1
def open(file, mode='r', buffering=-1, encoding=None, errors=None, newline=None, closefd=True):
import io
return io.TextIOWrapper(file, mode, buffering, encoding, errors, newline, closefd)
def open_python2(name, mode=None, buffering=None):
return file(name, mode, buffering)
#--------------------------------------------------------
# descriptors
#--------------------------------------------------------
class property():
def __init__(self, fget, fset=None, fdel=None, doc=None):
self.fget = fget
self.fset = fset
self.fdel = fdel
self.__doc__ = doc
def __get__(self, obj, cls):
return self.fget(obj)
def __set__(self, obj, value):
self.fset(obj, value)
def __delete__(self, obj):
self.fdel(obj)
def setter(self, func):
self.fset = func
return self
def getter(self, func):
self.fget = func
return self
def deleter(self, func):
self.fdel = func
return self
class staticmethod():
def __init__(self, func):
self.__func = func
def __get__(self, obj, cls):
return self.__func
class classmethod():
def __init__(self, func):
self.__func = func
def __get__(self, obj, cls):
def _method(*args, **kwargs):
return self.__func(cls, *args, **kwargs)
return _method
#--------------------------------------------------------
# array stuff
#--------------------------------------------------------
class list():
def __init__(self, iterable=[]):
self.__iterable = []
for i in iterable:
self.__iterable += [i]
def __iter__(self):
for i in self.__iterable:
yield i
def __getitem__(self, y):
return self.__iterable[y]
def pop(self):
return self.__iterable[int()]
class tuple():
def __init__(self, iterable=[]):
self.__iterable = []
for i in iterable:
self.__iterable += [i]
def __iter__(self):
for i in self.__iterable:
yield i
def __getitem__(self, y):
return self.__iterable[y]
def index(self):
return 1
def count(self):
return 1
class set():
def __init__(self, iterable=[]):
self.__iterable = iterable
def __iter__(self):
for i in self.__iterable:
yield i
def pop(self):
return list(self.__iterable)[-1]
def copy(self):
return self
def difference(self, other):
return self - other
def intersection(self, other):
return self & other
def symmetric_difference(self, other):
return self ^ other
def union(self, other):
return self | other
class frozenset():
def __init__(self, iterable=[]):
self.__iterable = iterable
def __iter__(self):
for i in self.__iterable:
yield i
def copy(self):
return self
class dict():
def __init__(self, **elements):
self.__elements = elements
def clear(self):
# has a strange docstr
pass
def get(self, k, d=None):
# TODO implement
try:
#return self.__elements[k]
pass
except KeyError:
return d
def values(self):
return self.__elements.values()
def setdefault(self, k, d):
# TODO maybe also return the content
return d
class enumerate():
def __init__(self, sequence, start=0):
self.__sequence = sequence
def __iter__(self):
for i in self.__sequence:
yield 1, i
def __next__(self):
return next(self.__iter__())
def next(self):
return next(self.__iter__())
class reversed():
def __init__(self, sequence):
self.__sequence = sequence
def __iter__(self):
for i in self.__sequence:
yield i
def __next__(self):
return next(self.__iter__())
def next(self):
return next(self.__iter__())
def sorted(iterable, cmp=None, key=None, reverse=False):
return iterable
#--------------------------------------------------------
# basic types
#--------------------------------------------------------
class int():
def __init__(self, x, base=None):
pass
class str():
def __init__(self, obj):
pass
def strip(self):
return str()
def split(self):
return [str()]
class type():
def mro():
return [object]


@@ -1,4 +0,0 @@
class datetime():
@staticmethod
def now():
return datetime()


@@ -1,12 +0,0 @@
class TextIOWrapper():
def __next__(self):
return str()
def __iter__(self):
yield str()
def readlines(self):
return ['']
def __enter__(self):
return self


@@ -1,33 +0,0 @@
# Just copied this code from Python 3.6.
class itemgetter:
"""
Return a callable object that fetches the given item(s) from its operand.
After f = itemgetter(2), the call f(r) returns r[2].
After g = itemgetter(2, 5, 3), the call g(r) returns (r[2], r[5], r[3])
"""
__slots__ = ('_items', '_call')
def __init__(self, item, *items):
if not items:
self._items = (item,)
def func(obj):
return obj[item]
self._call = func
else:
self._items = items = (item,) + items
def func(obj):
return tuple(obj[i] for i in items)
self._call = func
def __call__(self, obj):
return self._call(obj)
def __repr__(self):
return '%s.%s(%s)' % (self.__class__.__module__,
self.__class__.__name__,
', '.join(map(repr, self._items)))
def __reduce__(self):
return self.__class__, self._items


@@ -1,5 +0,0 @@
def getcwd():
return ''
def getcwdu():
return ''


@@ -1,232 +0,0 @@
"""
Used only for REPL Completion.
"""
import inspect
import os
from jedi import settings
from jedi.evaluate import compiled
from jedi.cache import underscore_memoization
from jedi.evaluate import imports
from jedi.evaluate.context import Context
from jedi.evaluate.cache import evaluator_function_cache
from jedi.evaluate.compiled.getattr_static import getattr_static
class MixedObject(object):
"""
A ``MixedObject`` is used in two ways:
1. It uses the default logic of ``parser.python.tree`` objects,
2. except for getattr calls. The names dicts are generated in a fashion
like ``CompiledObject``.
This combined logic makes it possible to provide more powerful REPL
completion. It allows side effects that are not noticeable with the default
parser structure to still be completable.
The biggest difference between CompiledObject and MixedObject is that here
we are generally dealing with Python code and not with C code. This leads to
fewer special cases, because in Python you don't have the same freedom to
modify the runtime.
"""
def __init__(self, evaluator, parent_context, compiled_object, tree_context):
self.evaluator = evaluator
self.parent_context = parent_context
self.compiled_object = compiled_object
self._context = tree_context
self.obj = compiled_object.obj
# We have to overwrite everything that has to do with trailers, name
# lookups and filters to make it possible to route name lookups towards
# compiled objects and the rest towards tree node contexts.
def eval_trailer(*args, **kwags):
return Context.eval_trailer(*args, **kwags)
def py__getattribute__(*args, **kwargs):
return Context.py__getattribute__(*args, **kwargs)
def get_filters(self, *args, **kwargs):
yield MixedObjectFilter(self.evaluator, self)
def __repr__(self):
return '<%s: %s>' % (type(self).__name__, repr(self.obj))
def __getattr__(self, name):
return getattr(self._context, name)
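# Self-contained sketch of the "mixed" idea with stdlib helpers only (the
# function name is hypothetical): names come from the live object, while the
# definition is looked up in its source file.
def _example_mixed_idea():
    runtime_names = dir(os.path)                  # runtime side of the lookup
    source_file = inspect.getsourcefile(os.path)  # parser side of the lookup
    assert 'join' in runtime_names and source_file is not None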
class MixedName(compiled.CompiledName):
"""
The ``CompiledName._compiled_object`` is our MixedObject.
"""
@property
def start_pos(self):
contexts = list(self.infer())
if not contexts:
# This means a start_pos that doesn't exist (compiled objects).
return (0, 0)
return contexts[0].name.start_pos
@start_pos.setter
def start_pos(self, value):
# Ignore the __init__'s start_pos setter call.
pass
@underscore_memoization
def infer(self):
obj = self.parent_context.obj
try:
# TODO use logic from compiled.CompiledObjectFilter
obj = getattr(obj, self.string_name)
except AttributeError:
# Happens e.g. in properties of
# PyQt4.QtGui.QStyleOptionComboBox.currentText
# -> just set it to None
obj = None
return [_create(self._evaluator, obj, parent_context=self.parent_context)]
@property
def api_type(self):
return next(iter(self.infer())).api_type
class MixedObjectFilter(compiled.CompiledObjectFilter):
name_class = MixedName
def __init__(self, evaluator, mixed_object, is_instance=False):
super(MixedObjectFilter, self).__init__(
evaluator, mixed_object, is_instance)
self._mixed_object = mixed_object
#def _create(self, name):
#return MixedName(self._evaluator, self._compiled_object, name)
@evaluator_function_cache()
def _load_module(evaluator, path, python_object):
module = evaluator.grammar.parse(
path=path,
cache=True,
diff_cache=True,
cache_path=settings.cache_directory
).get_root_node()
python_module = inspect.getmodule(python_object)
evaluator.modules[python_module.__name__] = module
return module
def _get_object_to_check(python_object):
"""Check if inspect.getfile has a chance to find the source."""
if (inspect.ismodule(python_object) or
inspect.isclass(python_object) or
inspect.ismethod(python_object) or
inspect.isfunction(python_object) or
inspect.istraceback(python_object) or
inspect.isframe(python_object) or
inspect.iscode(python_object)):
return python_object
try:
return python_object.__class__
except AttributeError:
raise TypeError # Prevents computation of `repr` within inspect.
def find_syntax_node_name(evaluator, python_object):
try:
python_object = _get_object_to_check(python_object)
path = inspect.getsourcefile(python_object)
except TypeError:
# The type might not be known (e.g. class_with_dict.__weakref__)
return None, None
if path is None or not os.path.exists(path):
# The path might not exist or be e.g. <stdin>.
return None, None
module = _load_module(evaluator, path, python_object)
if inspect.ismodule(python_object):
# We don't need to check names for modules, because there's not really
# a way to write a module in a module in Python (and also __name__ can
# be something like ``email.utils``).
return module, path
try:
name_str = python_object.__name__
except AttributeError:
# Stuff like python_function.__code__.
return None, None
if name_str == '<lambda>':
return None, None # It's too hard to find lambdas.
# Doesn't always work (e.g. os.stat_result)
try:
names = module.get_used_names()[name_str]
except KeyError:
return None, None
names = [n for n in names if n.is_definition()]
try:
code = python_object.__code__
# By using the line number of a code object we make the lookup in a
# file pretty easy. There's still a possibility of people defining
# stuff like ``a = 3; foo(a); a = 4`` on the same line, but if people
# do so we just don't care.
line_nr = code.co_firstlineno
except AttributeError:
pass
else:
line_names = [name for name in names if name.start_pos[0] == line_nr]
# There's a chance that the object is not available anymore, because
# the code has changed in the background.
if line_names:
return line_names[-1].parent, path
# It's really hard to actually get the right definition, here as a last
# resort we just return the last one. This chance might lead to odd
# completions at some points but will lead to mostly correct type
# inference, because people tend to define a public name in a module only
# once.
return names[-1].parent, path
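# Illustrative sketch with hypothetical names: the two inspect facts the lookup
# above builds on -- a plain Python function knows its source file and the
# first line of its definition.
def _example_source_location():
    def sample():
        pass
    path = inspect.getsourcefile(sample)
    line_nr = sample.__code__.co_firstlineno
    assert path is not None and line_nr > 0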
@compiled.compiled_objects_cache('mixed_cache')
def _create(evaluator, obj, parent_context=None, *args):
tree_node, path = find_syntax_node_name(evaluator, obj)
compiled_object = compiled.create(
evaluator, obj, parent_context=parent_context.compiled_object)
if tree_node is None:
return compiled_object
module_node = tree_node.get_root_node()
if parent_context.tree_node.get_root_node() == module_node:
module_context = parent_context.get_root_context()
else:
from jedi.evaluate.representation import ModuleContext
module_context = ModuleContext(evaluator, module_node, path=path)
# TODO this __name__ is probably wrong.
name = compiled_object.get_root_context().py__name__()
imports.add_module(evaluator, name, module_context)
tree_context = module_context.create_context(
tree_node,
node_is_context=True,
node_is_object=True
)
if tree_node.type == 'classdef':
if not inspect.isclass(obj):
# Is an instance, not a class.
tree_context, = tree_context.execute_evaluated()
return MixedObject(
evaluator,
parent_context,
compiled_object,
tree_context=tree_context
)


@@ -1,206 +0,0 @@
from jedi._compatibility import Python3Method
from jedi.common import unite
from parso.python.tree import ExprStmt, CompFor
from jedi.parser_utils import clean_scope_docstring, get_doc_with_call_signature
class Context(object):
"""
Should be defined, otherwise the API returns empty types.
"""
"""
To be defined by subclasses.
"""
predefined_names = {}
tree_node = None
def __init__(self, evaluator, parent_context=None):
self.evaluator = evaluator
self.parent_context = parent_context
@property
def api_type(self):
# By default just the lowercase name of the class. Can and should be
# overridden.
return self.__class__.__name__.lower()
def get_root_context(self):
context = self
while True:
if context.parent_context is None:
return context
context = context.parent_context
def execute(self, arguments):
return self.evaluator.execute(self, arguments)
def execute_evaluated(self, *value_list):
"""
Execute a function with already executed arguments.
"""
from jedi.evaluate.param import ValuesArguments
arguments = ValuesArguments([[value] for value in value_list])
return self.execute(arguments)
def eval_node(self, node):
return self.evaluator.eval_element(self, node)
def eval_stmt(self, stmt, seek_name=None):
return self.evaluator.eval_statement(self, stmt, seek_name)
def eval_trailer(self, types, trailer):
return self.evaluator.eval_trailer(self, types, trailer)
@Python3Method
def py__getattribute__(self, name_or_str, name_context=None, position=None,
search_global=False, is_goto=False,
analysis_errors=True):
if name_context is None:
name_context = self
return self.evaluator.find_types(
self, name_or_str, name_context, position, search_global, is_goto,
analysis_errors)
def create_context(self, node, node_is_context=False, node_is_object=False):
return self.evaluator.create_context(self, node, node_is_context, node_is_object)
def is_class(self):
return False
def py__bool__(self):
"""
Since Wrapper is a super class for classes, functions and modules,
the return value will always be true.
"""
return True
def py__doc__(self, include_call_signature=False):
try:
self.tree_node.get_doc_node
except AttributeError:
return ''
else:
if include_call_signature:
return get_doc_with_call_signature(self.tree_node)
else:
return clean_scope_docstring(self.tree_node)
return None
class TreeContext(Context):
def __init__(self, evaluator, parent_context=None):
super(TreeContext, self).__init__(evaluator, parent_context)
self.predefined_names = {}
def __repr__(self):
return '<%s: %s>' % (self.__class__.__name__, self.tree_node)
class AbstractLazyContext(object):
def __init__(self, data):
self.data = data
def __repr__(self):
return '<%s: %s>' % (self.__class__.__name__, self.data)
def infer(self):
raise NotImplementedError
class LazyKnownContext(AbstractLazyContext):
"""data is a context."""
def infer(self):
return set([self.data])
class LazyKnownContexts(AbstractLazyContext):
"""data is a set of contexts."""
def infer(self):
return self.data
class LazyUnknownContext(AbstractLazyContext):
def __init__(self):
super(LazyUnknownContext, self).__init__(None)
def infer(self):
return set()
class LazyTreeContext(AbstractLazyContext):
def __init__(self, context, node):
super(LazyTreeContext, self).__init__(node)
self._context = context
# We need to save the predefined names. It's an unfortunate side effect
# that needs to be tracked otherwise results will be wrong.
self._predefined_names = dict(context.predefined_names)
def infer(self):
old, self._context.predefined_names = \
self._context.predefined_names, self._predefined_names
try:
return self._context.eval_node(self.data)
finally:
self._context.predefined_names = old
def get_merged_lazy_context(lazy_contexts):
if len(lazy_contexts) > 1:
return MergedLazyContexts(lazy_contexts)
else:
return lazy_contexts[0]
class MergedLazyContexts(AbstractLazyContext):
"""data is a list of lazy contexts."""
def infer(self):
return unite(l.infer() for l in self.data)
class ContextualizedNode(object):
def __init__(self, context, node):
self.context = context
self._node = node
def get_root_context(self):
return self.context.get_root_context()
def infer(self):
return self.context.eval_node(self._node)
class ContextualizedName(ContextualizedNode):
# TODO merge with TreeNameDefinition?!
@property
def name(self):
return self._node
def assignment_indexes(self):
"""
Returns an array of tuple(int, node) of the indexes that are used in
tuple assignments.
For example if the name is ``y`` in the following code::
x, (y, z) = 2, ''
would result in ``[(1, xyz_node), (0, yz_node)]``.
"""
indexes = []
node = self._node.parent
compare = self._node
while node is not None:
if node.type in ('testlist', 'testlist_comp', 'testlist_star_expr', 'exprlist'):
for i, child in enumerate(node.children):
if child == compare:
indexes.insert(0, (int(i / 2), node))
break
else:
raise LookupError("Couldn't find the assignment.")
elif isinstance(node, (ExprStmt, CompFor)):
break
compare = node
node = node.parent
return indexes


@@ -1,212 +0,0 @@
"""
One of the really important features of |jedi| is to have an option to
understand code like this::
def foo(bar):
bar. # completion here
foo(1)
There's no doubt whether bar is an ``int`` or not, but if there's also a call
like ``foo('str')``, what would happen? Well, we'll just show both. Because
that's what a human would expect.
It works as follows:
- |Jedi| sees a param
- search for function calls named ``foo``
- execute these calls and check the input. This works with a ``ParamListener``.
"""
from parso.python import tree
from jedi import settings
from jedi import debug
from jedi.evaluate.cache import evaluator_function_cache
from jedi.evaluate import imports
from jedi.evaluate.param import TreeArguments, create_default_params
from jedi.evaluate.helpers import is_stdlib_path
from jedi.common import to_list, unite
from jedi.parser_utils import get_parent_scope
MAX_PARAM_SEARCHES = 20
class ParamListener(object):
"""
This listener is used to get the params for a function.
"""
def __init__(self):
self.param_possibilities = []
def execute(self, params):
self.param_possibilities += params
class MergedExecutedParams(object):
"""
Simulates being a parameter while actually just being multiple params.
"""
def __init__(self, executed_params):
self._executed_params = executed_params
def infer(self):
return unite(p.infer() for p in self._executed_params)
@debug.increase_indent
def search_params(evaluator, execution_context, funcdef):
"""
A dynamic search for param values. If you try to complete a type:
>>> def func(foo):
... foo
>>> func(1)
>>> func("")
It is not known what type ``foo`` is without analysing the whole code. You
have to look for all calls to ``func`` to find out what ``foo`` could
possibly be.
"""
if not settings.dynamic_params:
return create_default_params(execution_context, funcdef)
evaluator.dynamic_params_depth += 1
try:
path = execution_context.get_root_context().py__file__()
if path is not None and is_stdlib_path(path):
# We don't want to search for usages in the stdlib. Usually people
# don't work with it (except if you are a core maintainer, sorry).
# This makes everything slower. Just disable it and run the tests,
# you will see the slowdown, especially in 3.6.
return create_default_params(execution_context, funcdef)
debug.dbg('Dynamic param search in %s.', funcdef.name.value, color='MAGENTA')
module_context = execution_context.get_root_context()
function_executions = _search_function_executions(
evaluator,
module_context,
funcdef
)
if function_executions:
zipped_params = zip(*list(
function_execution.get_params()
for function_execution in function_executions
))
params = [MergedExecutedParams(executed_params) for executed_params in zipped_params]
# Evaluate the ExecutedParams to types.
else:
return create_default_params(execution_context, funcdef)
debug.dbg('Dynamic param result finished', color='MAGENTA')
return params
finally:
evaluator.dynamic_params_depth -= 1
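# Self-contained sketch of the search strategy above, using only the standard
# library and hypothetical names: find call sites by function name and union
# the argument types seen there.
def _example_dynamic_param_idea():
    import ast
    source = "def func(foo):\n    return foo\nfunc(1)\nfunc('')\n"
    seen = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and getattr(node.func, 'id', None) == 'func':
            for arg in node.args:
                seen.add(type(ast.literal_eval(arg)).__name__)
    assert seen == set(['int', 'str'])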
@evaluator_function_cache(default=[])
@to_list
def _search_function_executions(evaluator, module_context, funcdef):
"""
Returns a list of function executions in which ``funcdef`` is called.
"""
from jedi.evaluate import representation as er
func_string_name = funcdef.name.value
compare_node = funcdef
if func_string_name == '__init__':
cls = get_parent_scope(funcdef)
if isinstance(cls, tree.Class):
func_string_name = cls.name.value
compare_node = cls
found_executions = False
i = 0
for for_mod_context in imports.get_modules_containing_name(
evaluator, [module_context], func_string_name):
if not isinstance(module_context, er.ModuleContext):
return
for name, trailer in _get_possible_nodes(for_mod_context, func_string_name):
i += 1
# This is a simple way to stop Jedi's dynamic param recursion
# from going wild: the deeper Jedi is in the recursion, the less
# code should be evaluated.
if i * evaluator.dynamic_params_depth > MAX_PARAM_SEARCHES:
return
random_context = evaluator.create_context(for_mod_context, name)
for function_execution in _check_name_for_execution(
evaluator, random_context, compare_node, name, trailer):
found_executions = True
yield function_execution
# If there are results after processing a module, stop searching
# further modules. This is a speed optimization.
if found_executions:
return
def _get_possible_nodes(module_context, func_string_name):
try:
names = module_context.tree_node.get_used_names()[func_string_name]
except KeyError:
return
for name in names:
bracket = name.get_next_leaf()
trailer = bracket.parent
if trailer.type == 'trailer' and bracket == '(':
yield name, trailer
def _check_name_for_execution(evaluator, context, compare_node, name, trailer):
from jedi.evaluate import representation as er, instance
def create_func_excs():
arglist = trailer.children[1]
if arglist == ')':
arglist = ()
args = TreeArguments(evaluator, context, arglist, trailer)
if value_node.type == 'funcdef':
yield value.get_function_execution(args)
else:
created_instance = instance.TreeInstance(
evaluator,
value.parent_context,
value,
args
)
for execution in created_instance.create_init_executions():
yield execution
for value in evaluator.goto_definitions(context, name):
value_node = value.tree_node
if compare_node == value_node:
for func_execution in create_func_excs():
yield func_execution
elif isinstance(value.parent_context, er.FunctionExecutionContext) and \
compare_node.type == 'funcdef':
# Here we're trying to find decorators by checking the first
# parameter. It's not very generic though. Should find a better
# solution that also applies to nested decorators.
params = value.parent_context.get_params()
if len(params) != 1:
continue
values = params[0].infer()
nodes = [v.tree_node for v in values]
if nodes == [compare_node]:
# Found a decorator.
module_context = context.get_root_context()
execution_context = next(create_func_excs())
for name, trailer in _get_possible_nodes(module_context, params[0].string_name):
if value_node.start_pos < name.start_pos < value_node.end_pos:
random_context = evaluator.create_context(execution_context, name)
iterator = _check_name_for_execution(
evaluator,
random_context,
compare_node,
name,
trailer
)
for function_execution in iterator:
yield function_execution


@@ -1,345 +0,0 @@
"""
Filters are objects that you can use to filter names in different scopes. They
are needed for name resolution.
"""
from abc import abstractmethod
from parso.tree import search_ancestor
from jedi.evaluate import flow_analysis
from jedi.common import to_list, unite
from jedi.parser_utils import get_parent_scope
class AbstractNameDefinition(object):
start_pos = None
string_name = None
parent_context = None
tree_name = None
@abstractmethod
def infer(self):
raise NotImplementedError
@abstractmethod
def goto(self):
# Typically names are already definitions and therefore a goto on that
# name will always result on itself.
return set([self])
def get_root_context(self):
return self.parent_context.get_root_context()
def __repr__(self):
if self.start_pos is None:
return '<%s: %s>' % (self.__class__.__name__, self.string_name)
return '<%s: %s@%s>' % (self.__class__.__name__, self.string_name, self.start_pos)
def execute(self, arguments):
return unite(context.execute(arguments) for context in self.infer())
def execute_evaluated(self, *args, **kwargs):
return unite(context.execute_evaluated(*args, **kwargs) for context in self.infer())
@property
def api_type(self):
return self.parent_context.api_type
class AbstractTreeName(AbstractNameDefinition):
def __init__(self, parent_context, tree_name):
self.parent_context = parent_context
self.tree_name = tree_name
def goto(self):
return self.parent_context.evaluator.goto(self.parent_context, self.tree_name)
@property
def string_name(self):
return self.tree_name.value
@property
def start_pos(self):
return self.tree_name.start_pos
class ContextNameMixin(object):
def infer(self):
return set([self._context])
def get_root_context(self):
if self.parent_context is None:
return self._context
return super(ContextNameMixin, self).get_root_context()
@property
def api_type(self):
return self._context.api_type
class ContextName(ContextNameMixin, AbstractTreeName):
def __init__(self, context, tree_name):
super(ContextName, self).__init__(context.parent_context, tree_name)
self._context = context
class TreeNameDefinition(AbstractTreeName):
_API_TYPES = dict(
import_name='module',
import_from='module',
funcdef='function',
param='param',
classdef='class',
)
def infer(self):
# Refactor this, should probably be here.
from jedi.evaluate.finder import _name_to_types
return _name_to_types(self.parent_context.evaluator, self.parent_context, self.tree_name)
@property
def api_type(self):
definition = self.tree_name.get_definition(import_name_always=True)
if definition is None:
return 'statement'
return self._API_TYPES.get(definition.type, 'statement')
class ParamName(AbstractTreeName):
api_type = 'param'
def __init__(self, parent_context, tree_name):
self.parent_context = parent_context
self.tree_name = tree_name
def infer(self):
return self.get_param().infer()
def get_param(self):
params = self.parent_context.get_params()
param_node = search_ancestor(self.tree_name, 'param')
return params[param_node.position_index]
class AnonymousInstanceParamName(ParamName):
def infer(self):
param_node = search_ancestor(self.tree_name, 'param')
# TODO I think this should not belong here. It's not even really true,
# because classmethod and other descriptors can change it.
if param_node.position_index == 0:
# This is a speed optimization, to return the self param (because
# it's known). This only affects anonymous instances.
return set([self.parent_context.instance])
else:
return self.get_param().infer()
class AbstractFilter(object):
_until_position = None
def _filter(self, names):
if self._until_position is not None:
return [n for n in names if n.start_pos < self._until_position]
return names
@abstractmethod
def get(self, name):
raise NotImplementedError
@abstractmethod
def values(self):
raise NotImplementedError
class AbstractUsedNamesFilter(AbstractFilter):
name_class = TreeNameDefinition
def __init__(self, context, parser_scope):
self._parser_scope = parser_scope
self._used_names = self._parser_scope.get_root_node().get_used_names()
self.context = context
def get(self, name):
try:
names = self._used_names[str(name)]
except KeyError:
return []
return self._convert_names(self._filter(names))
def _convert_names(self, names):
return [self.name_class(self.context, name) for name in names]
def values(self):
return self._convert_names(name for name_list in self._used_names.values()
for name in self._filter(name_list))
def __repr__(self):
return '<%s: %s>' % (self.__class__.__name__, self.context)
class ParserTreeFilter(AbstractUsedNamesFilter):
def __init__(self, evaluator, context, node_context=None, until_position=None,
origin_scope=None):
"""
node_context is an option to specify a second context for use cases
like the class mro where the parent class of a new name would be the
context, but for some type inference it's important to have a local
context of the other classes.
"""
if node_context is None:
node_context = context
super(ParserTreeFilter, self).__init__(context, node_context.tree_node)
self._node_context = node_context
self._origin_scope = origin_scope
self._until_position = until_position
def _filter(self, names):
names = super(ParserTreeFilter, self)._filter(names)
names = [n for n in names if self._is_name_reachable(n)]
return list(self._check_flows(names))
def _is_name_reachable(self, name):
if not name.is_definition():
return False
parent = name.parent
if parent.type == 'trailer':
return False
base_node = parent if parent.type in ('classdef', 'funcdef') else name
return get_parent_scope(base_node) == self._parser_scope
def _check_flows(self, names):
for name in sorted(names, key=lambda name: name.start_pos, reverse=True):
check = flow_analysis.reachability_check(
self._node_context, self._parser_scope, name, self._origin_scope
)
if check is not flow_analysis.UNREACHABLE:
yield name
if check is flow_analysis.REACHABLE:
break
class FunctionExecutionFilter(ParserTreeFilter):
param_name = ParamName
def __init__(self, evaluator, context, node_context=None,
until_position=None, origin_scope=None):
super(FunctionExecutionFilter, self).__init__(
evaluator,
context,
node_context,
until_position,
origin_scope
)
@to_list
def _convert_names(self, names):
for name in names:
param = search_ancestor(name, 'param')
if param:
yield self.param_name(self.context, name)
else:
yield TreeNameDefinition(self.context, name)
class AnonymousInstanceFunctionExecutionFilter(FunctionExecutionFilter):
param_name = AnonymousInstanceParamName
class GlobalNameFilter(AbstractUsedNamesFilter):
def __init__(self, context, parser_scope):
super(GlobalNameFilter, self).__init__(context, parser_scope)
@to_list
def _filter(self, names):
for name in names:
if name.parent.type == 'global_stmt':
yield name
class DictFilter(AbstractFilter):
def __init__(self, dct):
self._dct = dct
def get(self, name):
try:
value = self._convert(name, self._dct[str(name)])
except KeyError:
return []
return list(self._filter([value]))
def values(self):
return self._filter(self._convert(*item) for item in self._dct.items())
def _convert(self, name, value):
return value
def get_global_filters(evaluator, context, until_position, origin_scope):
"""
Returns all filters in order of priority for name resolution.
For global name lookups. The filters will handle name resolution
themselves, but here we gather possible filters downwards.
>>> from jedi._compatibility import u, no_unicode_pprint
>>> from jedi import Script
>>> script = Script(u('''
... x = ['a', 'b', 'c']
... def func():
... y = None
... '''))
>>> module_node = script._get_module_node()
>>> scope = next(module_node.iter_funcdefs())
>>> scope
<Function: func@3-5>
>>> context = script._get_module().create_context(scope)
>>> filters = list(get_global_filters(context.evaluator, context, (4, 0), None))
First we get the names from the function scope.
>>> no_unicode_pprint(filters[0])
<ParserTreeFilter: <ModuleContext: @2-5>>
>>> sorted(str(n) for n in filters[0].values())
['<TreeNameDefinition: func@(3, 4)>', '<TreeNameDefinition: x@(2, 0)>']
>>> filters[0]._until_position
(4, 0)
Then it yields the names from one level "lower". In this example, this is
the module scope. As a side note, you can see, that the position in the
filter is now None, because typically the whole module is loaded before the
function is called.
>>> filters[1].values() # global names -> there are none in our example.
[]
>>> list(filters[2].values()) # package modules -> Also empty.
[]
>>> sorted(name.string_name for name in filters[3].values()) # Module attributes
['__doc__', '__file__', '__name__', '__package__']
>>> print(filters[1]._until_position)
None
Finally, it yields the builtin filter.
>>> filters[4].values() #doctest: +ELLIPSIS
[<CompiledName: ...>, ...]
"""
from jedi.evaluate.representation import FunctionExecutionContext
while context is not None:
# Names in methods cannot be resolved within the class.
for filter in context.get_filters(
search_global=True,
until_position=until_position,
origin_scope=origin_scope):
yield filter
if isinstance(context, FunctionExecutionContext):
# The position should be reset if the current scope is a function.
until_position = None
context = context.parent_context
# Add builtins to the global scope.
for filter in evaluator.BUILTINS.get_filters(search_global=True):
yield filter


@@ -1,410 +0,0 @@
"""
Searching for names with given scope and name. This is very central in Jedi and
Python. The name resolution is quite complicated with descriptors,
``__getattribute__``, ``__getattr__``, ``global``, etc.
If you want to understand name resolution, please read the first few chapters
in http://blog.ionelmc.ro/2015/02/09/understanding-python-metaclasses/.
Flow checks
+++++++++++
Flow checks are not really mature. There's only a check for ``isinstance``. It
would check whether a flow has the form of ``if isinstance(a, type_or_tuple)``.
Unfortunately every other thing is being ignored (e.g. a == '' would be easy to
check for -> a is a string). There's big potential in these checks.
"""
from parso.python import tree
from parso.tree import search_ancestor
from jedi import debug
from jedi.common import unite
from jedi import settings
from jedi.evaluate import representation as er
from jedi.evaluate.instance import AbstractInstanceContext
from jedi.evaluate import compiled
from jedi.evaluate import pep0484
from jedi.evaluate import iterable
from jedi.evaluate import imports
from jedi.evaluate import analysis
from jedi.evaluate import flow_analysis
from jedi.evaluate import param
from jedi.evaluate import helpers
from jedi.evaluate.filters import get_global_filters, TreeNameDefinition
from jedi.evaluate.context import ContextualizedName, ContextualizedNode
from jedi.parser_utils import is_scope, get_parent_scope
class NameFinder(object):
def __init__(self, evaluator, context, name_context, name_or_str,
position=None, analysis_errors=True):
self._evaluator = evaluator
# Make sure that it's not just a syntax tree node.
self._context = context
self._name_context = name_context
self._name = name_or_str
if isinstance(name_or_str, tree.Name):
self._string_name = name_or_str.value
else:
self._string_name = name_or_str
self._position = position
self._found_predefined_types = None
self._analysis_errors = analysis_errors
@debug.increase_indent
def find(self, filters, attribute_lookup):
"""
:param bool attribute_lookup: Tells the logic whether we're accessing the
attribute or the contents of e.g. a function.
"""
names = self.filter_name(filters)
if self._found_predefined_types is not None and names:
check = flow_analysis.reachability_check(
self._context, self._context.tree_node, self._name)
if check is flow_analysis.UNREACHABLE:
return set()
return self._found_predefined_types
types = self._names_to_types(names, attribute_lookup)
if not names and self._analysis_errors and not types \
and not (isinstance(self._name, tree.Name) and
isinstance(self._name.parent.parent, tree.Param)):
if isinstance(self._name, tree.Name):
if attribute_lookup:
analysis.add_attribute_error(
self._name_context, self._context, self._name)
else:
message = ("NameError: name '%s' is not defined."
% self._string_name)
analysis.add(self._name_context, 'name-error', self._name, message)
return types
def _get_origin_scope(self):
if isinstance(self._name, tree.Name):
scope = self._name
while scope.parent is not None:
# TODO why if classes?
if not isinstance(scope, tree.Scope):
break
scope = scope.parent
return scope
else:
return None
def get_filters(self, search_global=False):
origin_scope = self._get_origin_scope()
if search_global:
return get_global_filters(self._evaluator, self._context, self._position, origin_scope)
else:
return self._context.get_filters(search_global, self._position, origin_scope=origin_scope)
def filter_name(self, filters):
"""
Searches names that are defined in a scope (the different
``filters``), until a name fits.
"""
names = []
if self._context.predefined_names:
# TODO is this ok? node might not always be a tree.Name
node = self._name
while node is not None and not is_scope(node):
node = node.parent
if node.type in ("if_stmt", "for_stmt", "comp_for"):
try:
name_dict = self._context.predefined_names[node]
types = name_dict[self._string_name]
except KeyError:
continue
else:
self._found_predefined_types = types
break
for filter in filters:
names = filter.get(self._string_name)
if names:
if len(names) == 1:
n, = names
if isinstance(n, TreeNameDefinition):
# Something somewhere went terribly wrong. This
# typically happens when using goto on an import in an
# __init__ file. I think we need a better solution, but
# it's kind of hard, because for Jedi it's not clear
# that that name has not been defined, yet.
if n.tree_name == self._name:
if self._name.get_definition().type == 'import_from':
continue
break
debug.dbg('finder.filter_name "%s" in (%s): %s@%s', self._string_name,
self._context, names, self._position)
return list(names)
def _check_getattr(self, inst):
"""Checks for both __getattr__ and __getattribute__ methods"""
# str is important, because it shouldn't be `Name`!
name = compiled.create(self._evaluator, self._string_name)
# This is a little bit special. `__getattribute__` is in Python
# executed before `__getattr__`. But: I know no use case, where
# this could be practical and where Jedi would return wrong types.
# If you ever find something, let me know!
# We are inverting this, because a hand-crafted `__getattribute__`
# could still call another hand-crafted `__getattr__`, but not the
# other way around.
names = (inst.get_function_slot_names('__getattr__') or
inst.get_function_slot_names('__getattribute__'))
return inst.execute_function_slots(names, name)
def _names_to_types(self, names, attribute_lookup):
types = set()
types = unite(name.infer() for name in names)
debug.dbg('finder._names_to_types: %s -> %s', names, types)
if not names and isinstance(self._context, AbstractInstanceContext):
# handling __getattr__ / __getattribute__
return self._check_getattr(self._context)
# Add isinstance and other if/assert knowledge.
if not types and isinstance(self._name, tree.Name) and \
not isinstance(self._name_context, AbstractInstanceContext):
flow_scope = self._name
base_node = self._name_context.tree_node
if base_node.type == 'comp_for':
return types
while True:
flow_scope = get_parent_scope(flow_scope, include_flows=True)
n = _check_flow_information(self._name_context, flow_scope,
self._name, self._position)
if n is not None:
return n
if flow_scope == base_node:
break
return types
def _name_to_types(evaluator, context, tree_name):
types = []
node = tree_name.get_definition(import_name_always=True)
if node is None:
node = tree_name.parent
if node.type == 'global_stmt':
context = evaluator.create_context(context, tree_name)
finder = NameFinder(evaluator, context, context, tree_name.value)
filters = finder.get_filters(search_global=True)
# For global_stmt lookups, we only need the first possible scope,
# which means the function itself.
filters = [next(filters)]
return finder.find(filters, attribute_lookup=False)
elif node.type not in ('import_from', 'import_name'):
raise ValueError("Should not happen.")
typ = node.type
if typ == 'for_stmt':
types = pep0484.find_type_from_comment_hint_for(context, node, tree_name)
if types:
return types
if typ == 'with_stmt':
types = pep0484.find_type_from_comment_hint_with(context, node, tree_name)
if types:
return types
if typ in ('for_stmt', 'comp_for'):
try:
types = context.predefined_names[node][tree_name.value]
except KeyError:
cn = ContextualizedNode(context, node.children[3])
for_types = iterable.py__iter__types(evaluator, cn.infer(), cn)
c_node = ContextualizedName(context, tree_name)
types = check_tuple_assignments(evaluator, c_node, for_types)
elif typ == 'expr_stmt':
types = _remove_statements(evaluator, context, node, tree_name)
elif typ == 'with_stmt':
context_managers = context.eval_node(node.get_test_node_from_name(tree_name))
enter_methods = unite(
context_manager.py__getattribute__('__enter__')
for context_manager in context_managers
)
types = unite(method.execute_evaluated() for method in enter_methods)
elif typ in ('import_from', 'import_name'):
types = imports.infer_import(context, tree_name)
elif typ in ('funcdef', 'classdef'):
types = _apply_decorators(evaluator, context, node)
elif typ == 'try_stmt':
# TODO an exception can also be a tuple. Check for those.
# TODO check for types that are not classes and add it to
# the static analysis report.
exceptions = context.eval_node(tree_name.get_previous_sibling().get_previous_sibling())
types = unite(
evaluator.execute(t, param.ValuesArguments([]))
for t in exceptions
)
else:
raise ValueError("Should not happen.")
return types
def _apply_decorators(evaluator, context, node):
"""
Returns the function that should be executed in the end.
This is also the place where the decorators are processed.
"""
if node.type == 'classdef':
decoratee_context = er.ClassContext(
evaluator,
parent_context=context,
classdef=node
)
else:
decoratee_context = er.FunctionContext(
evaluator,
parent_context=context,
funcdef=node
)
initial = values = set([decoratee_context])
for dec in reversed(node.get_decorators()):
debug.dbg('decorator: %s %s', dec, values)
dec_values = context.eval_node(dec.children[1])
trailer_nodes = dec.children[2:-1]
if trailer_nodes:
# Create a trailer and evaluate it.
trailer = tree.PythonNode('trailer', trailer_nodes)
trailer.parent = dec
dec_values = evaluator.eval_trailer(context, dec_values, trailer)
if not len(dec_values):
debug.warning('decorator not found: %s on %s', dec, node)
return initial
values = unite(dec_value.execute(param.ValuesArguments([values]))
for dec_value in dec_values)
if not len(values):
debug.warning('not possible to resolve wrappers found %s', node)
return initial
debug.dbg('decorator end %s', values)
return values
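# Illustrative, self-contained sketch with hypothetical decorators, showing why
# the loop above walks the decorators in reversed() order: the decorator
# closest to the function is applied first.
def _example_decorator_order():
    def outer(func):
        return lambda: 'outer(%s)' % func()

    def inner(func):
        return lambda: 'inner(%s)' % func()

    @outer
    @inner
    def value():
        return 'x'

    assert value() == 'outer(inner(x))'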
def _remove_statements(evaluator, context, stmt, name):
"""
This is the part where statements are being stripped.
Due to lazy evaluation, statements like a = func; b = a; b() have to be
evaluated.
"""
types = set()
check_instance = None
pep0484types = \
pep0484.find_type_from_comment_hint_assign(context, stmt, name)
if pep0484types:
return pep0484types
types |= context.eval_stmt(stmt, seek_name=name)
if check_instance is not None:
# class renames
types = set([er.get_instance_el(evaluator, check_instance, a, True)
if isinstance(a, er.Function) else a for a in types])
return types
def _check_flow_information(context, flow, search_name, pos):
""" Try to find out the type of a variable just with the information that
is given by the flows. It is also responsible for assert checks, e.g.::
if isinstance(k, str):
k. # <- completion here
ensures that `k` is a string.
"""
if not settings.dynamic_flow_information:
return None
result = None
if is_scope(flow):
# Check for asserts.
module_node = flow.get_root_node()
try:
names = module_node.get_used_names()[search_name.value]
except KeyError:
return None
names = reversed([
n for n in names
if flow.start_pos <= n.start_pos < (pos or flow.end_pos)
])
for name in names:
ass = search_ancestor(name, 'assert_stmt')
if ass is not None:
result = _check_isinstance_type(context, ass.assertion, search_name)
if result is not None:
return result
if flow.type in ('if_stmt', 'while_stmt'):
potential_ifs = [c for c in flow.children[1::4] if c != ':']
for if_test in reversed(potential_ifs):
if search_name.start_pos > if_test.end_pos:
return _check_isinstance_type(context, if_test, search_name)
return result
def _check_isinstance_type(context, element, search_name):
try:
assert element.type in ('power', 'atom_expr')
# this might be removed if we analyze and, etc
assert len(element.children) == 2
first, trailer = element.children
assert first.type == 'name' and first.value == 'isinstance'
assert trailer.type == 'trailer' and trailer.children[0] == '('
assert len(trailer.children) == 3
# arglist stuff
arglist = trailer.children[1]
args = param.TreeArguments(context.evaluator, context, arglist, trailer)
param_list = list(args.unpack())
# Disallow keyword arguments
assert len(param_list) == 2
(key1, lazy_context_object), (key2, lazy_context_cls) = param_list
assert key1 is None and key2 is None
call = helpers.call_of_leaf(search_name)
is_instance_call = helpers.call_of_leaf(lazy_context_object.data)
# Do a simple get_code comparison. They should just have the same code,
# and everything will be all right.
normalize = context.evaluator.grammar._normalize
assert normalize(is_instance_call) == normalize(call)
except AssertionError:
return None
result = set()
for cls_or_tup in lazy_context_cls.infer():
if isinstance(cls_or_tup, iterable.AbstractSequence) and \
cls_or_tup.array_type == 'tuple':
for lazy_context in cls_or_tup.py__iter__():
for context in lazy_context.infer():
result |= context.execute_evaluated()
else:
result |= cls_or_tup.execute_evaluated()
return result
def check_tuple_assignments(evaluator, contextualized_name, types):
"""
Checks if tuples are assigned.
"""
lazy_context = None
for index, node in contextualized_name.assignment_indexes():
cn = ContextualizedNode(contextualized_name.context, node)
iterated = iterable.py__iter__(evaluator, types, cn)
for _ in range(index + 1):
try:
lazy_context = next(iterated)
except StopIteration:
# We could do this with the default param in next. But this
# would allow this loop to run for a very long time if the
# index number is high. Therefore stop as soon as the
# iterator is exhausted.
return set()
types = lazy_context.infer()
return types
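# Self-contained sketch with plain values standing in for lazy contexts,
# showing the "advance the iterator up to the assignment index" step above.
def _example_tuple_index_walk():
    iterated = iter(('a', 'b', 'c'))
    index = 1  # e.g. the ``y`` in ``x, y, z = ...``
    for _ in range(index + 1):
        item = next(iterated)
    assert item == 'b'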


@@ -1,174 +0,0 @@
import copy
import sys
import re
import os
from itertools import chain
from contextlib import contextmanager
from parso.python import tree
from jedi.parser_utils import get_parent_scope
def is_stdlib_path(path):
# Python standard library paths look like this:
# /usr/lib/python3.5/...
# TODO The implementation below is probably incorrect and not complete.
if 'dist-packages' in path or 'site-packages' in path:
return False
base_path = os.path.join(sys.prefix, 'lib', 'python')
return bool(re.match(re.escape(base_path) + r'\d\.\d', path))
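# Hedged examples with made-up paths: anything under dist-/site-packages is
# never treated as part of the standard library, whatever the prefix is.
def _example_is_stdlib_path():
    assert not is_stdlib_path('/usr/lib/python3.5/site-packages/foo.py')
    assert not is_stdlib_path('/home/user/venv/lib/dist-packages/bar.py')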
def deep_ast_copy(obj):
"""
Much, much faster than copy.deepcopy, but just for parser tree nodes.
"""
# Shallow-copy the node itself; its children are copied recursively below.
new_obj = copy.copy(obj)
# Copy children
new_children = []
for child in obj.children:
if isinstance(child, tree.Leaf):
new_child = copy.copy(child)
new_child.parent = new_obj
else:
new_child = deep_ast_copy(child)
new_child.parent = new_obj
new_children.append(new_child)
new_obj.children = new_children
return new_obj
def evaluate_call_of_leaf(context, leaf, cut_own_trailer=False):
"""
Creates a "call" node that consists of all ``trailer`` and ``power``
objects. E.g. if you call it with ``append``::
list([]).append(3) or None
You would get a node with the content ``list([]).append`` back.
This generates a copy of the original ast node.
If you're using the leaf, e.g. the bracket `)` it will return ``list([])``.
# TODO remove the cut_own_trailer option, since it's always used with it. Just
# ignore it; it's not what we want anyway. Or document it better?
"""
trailer = leaf.parent
# The leaf may not be the last or first child, because there exist three
# different trailers: `( x )`, `[ x ]` and `.x`. In the first two examples
# we should not match anything more than x.
if trailer.type != 'trailer' or leaf not in (trailer.children[0], trailer.children[-1]):
if trailer.type == 'atom':
return context.eval_node(trailer)
return context.eval_node(leaf)
power = trailer.parent
index = power.children.index(trailer)
if cut_own_trailer:
cut = index
else:
cut = index + 1
if power.type == 'error_node':
start = index
while True:
start -= 1
base = power.children[start]
if base.type != 'trailer':
break
trailers = power.children[start + 1: index + 1]
else:
base = power.children[0]
trailers = power.children[1:cut]
values = context.eval_node(base)
for trailer in trailers:
values = context.eval_trailer(values, trailer)
return values
def call_of_leaf(leaf):
"""
Creates a "call" node that consists of all ``trailer`` and ``power``
objects. E.g. if you call it with ``append``::
list([]).append(3) or None
You would get a node with the content ``list([]).append`` back.
This generates a copy of the original ast node.
If you're using the leaf, e.g. the bracket `)` it will return ``list([])``.
"""
# TODO this is the old version of this call. Try to remove it.
trailer = leaf.parent
# The leaf may not be the last or first child, because there exist three
# different trailers: `( x )`, `[ x ]` and `.x`. In the first two examples
# we should not match anything more than x.
if trailer.type != 'trailer' or leaf not in (trailer.children[0], trailer.children[-1]):
if trailer.type == 'atom':
return trailer
return leaf
power = trailer.parent
index = power.children.index(trailer)
new_power = copy.copy(power)
new_power.children = list(new_power.children)
new_power.children[index + 1:] = []
if power.type == 'error_node':
start = index
while True:
start -= 1
if power.children[start].type != 'trailer':
break
transformed = tree.Node('power', power.children[start:])
transformed.parent = power.parent
return transformed
return power
def get_names_of_node(node):
try:
children = node.children
except AttributeError:
if node.type == 'name':
return [node]
else:
return []
else:
return list(chain.from_iterable(get_names_of_node(c) for c in children))
def get_module_names(module, all_scopes):
"""
Returns the name leaves used in the module, optionally restricted to
names defined at module level.
"""
names = chain.from_iterable(module.get_used_names().values())
if not all_scopes:
# We have to filter out all the names whose parent_scope is not the
# module. The parent can also be None, because nodes directly below the
# module node have the module itself (and not a suite) as their parent,
# unlike all the others. Therefore it's important to catch that case.
names = [n for n in names if get_parent_scope(n).parent in (module, None)]
return names
@contextmanager
def predefine_names(context, flow_scope, dct):
predefined = context.predefined_names
if flow_scope in predefined:
raise NotImplementedError('Why does this happen?')
predefined[flow_scope] = dct
try:
yield
finally:
del predefined[flow_scope]
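A small standalone sketch (using parso directly; nothing below is part of helpers.py) of the tree shape call_of_leaf() works on: for the leaf ``append`` in ``list([]).append(3)``, the surrounding trailer chain is the interesting "call", and call_of_leaf() cuts it down to ``list([]).append``.
import parso

tree = parso.parse("list([]).append(3) or None")
leaf = tree.get_first_leaf()
while leaf is not None and leaf.value != 'append':
    leaf = leaf.get_next_leaf()

# `append` sits inside a `.append` trailer, which hangs off the node for the
# whole call chain ('atom_expr' in current grammars, 'power' in older ones).
print(leaf.parent.type)         # trailer
print(leaf.parent.parent.type)  # atom_expr (or power)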


@@ -1,564 +0,0 @@
"""
:mod:`jedi.evaluate.imports` is here to resolve import statements and return
the modules/classes/functions/whatever they stand for. However, no actual
importing is done. This module is about finding modules in the
filesystem. This can be quite tricky sometimes, because Python imports are not
always that simple.
This module uses imp for python up to 3.2 and importlib for python 3.3 on; the
correct implementation is delegated to _compatibility.
This module also supports import autocompletion, which means to complete
statements like ``from datetim`` (cursor at the end would return ``datetime``).
"""
import imp
import os
import pkgutil
import sys
from parso.python import tree
from parso.tree import search_ancestor
from parso.cache import parser_cache
from parso import python_bytes_to_unicode
from jedi._compatibility import find_module, unicode, ImplicitNSInfo
from jedi import debug
from jedi import settings
from jedi.common import unite
from jedi.evaluate import sys_path
from jedi.evaluate import helpers
from jedi.evaluate import compiled
from jedi.evaluate import analysis
from jedi.evaluate.cache import evaluator_method_cache
from jedi.evaluate.filters import AbstractNameDefinition
# This memoization is needed, because otherwise we will infinitely loop on
# certain imports.
@evaluator_method_cache(default=set())
def infer_import(context, tree_name, is_goto=False):
module_context = context.get_root_context()
import_node = search_ancestor(tree_name, 'import_name', 'import_from')
import_path = import_node.get_path_for_name(tree_name)
from_import_name = None
evaluator = context.evaluator
try:
from_names = import_node.get_from_names()
except AttributeError:
# Is an import_name
pass
else:
if len(from_names) + 1 == len(import_path):
# We have to fetch the from_names part first and then check
# if from_names exists in the modules.
from_import_name = import_path[-1]
import_path = from_names
importer = Importer(evaluator, tuple(import_path),
module_context, import_node.level)
types = importer.follow()
#if import_node.is_nested() and not self.nested_resolve:
# scopes = [NestedImportModule(module, import_node)]
if not types:
return set()
if from_import_name is not None:
types = unite(
t.py__getattribute__(
from_import_name,
name_context=context,
is_goto=is_goto,
analysis_errors=False
) for t in types
)
if not types:
path = import_path + [from_import_name]
importer = Importer(evaluator, tuple(path),
module_context, import_node.level)
types = importer.follow()
# goto only accepts `Name`
if is_goto:
types = set(s.name for s in types)
else:
# goto only accepts `Name`
if is_goto:
types = set(s.name for s in types)
debug.dbg('after import: %s', types)
return types
class NestedImportModule(tree.Module):
"""
TODO while there's no use case for nested import module right now, we might
be able to use them for static analysis checks later on.
"""
def __init__(self, module, nested_import):
self._module = module
self._nested_import = nested_import
def _get_nested_import_name(self):
"""
Generates an Import statement that can be used to fake nested imports.
"""
i = self._nested_import
# This is not an existing Import statement. Therefore, set position to
# 0 (0 is not a valid line number).
zero = (0, 0)
names = [unicode(name) for name in i.namespace_names[1:]]
name = helpers.FakeName(names, self._nested_import)
new = tree.Import(i._sub_module, zero, zero, name)
new.parent = self._module
debug.dbg('Generated a nested import: %s', new)
return helpers.FakeName(str(i.namespace_names[1]), new)
def __getattr__(self, name):
return getattr(self._module, name)
def __repr__(self):
return "<%s: %s of %s>" % (self.__class__.__name__, self._module,
self._nested_import)
def _add_error(context, name, message=None):
# Should be a name, not a string!
if hasattr(name, 'parent'):
analysis.add(context, 'import-error', name, message)
def get_init_path(directory_path):
"""
Searches a directory for an ``__init__`` file and returns its path if
found, else None.
"""
for suffix, _, _ in imp.get_suffixes():
path = os.path.join(directory_path, '__init__' + suffix)
if os.path.exists(path):
return path
return None
class ImportName(AbstractNameDefinition):
start_pos = (1, 0)
_level = 0
def __init__(self, parent_context, string_name):
self.parent_context = parent_context
self.string_name = string_name
def infer(self):
return Importer(
self.parent_context.evaluator,
[self.string_name],
self.parent_context,
level=self._level,
).follow()
def goto(self):
return [m.name for m in self.infer()]
def get_root_context(self):
# Not sure if this is correct.
return self.parent_context.get_root_context()
@property
def api_type(self):
return 'module'
class SubModuleName(ImportName):
_level = 1
class Importer(object):
def __init__(self, evaluator, import_path, module_context, level=0):
"""
An implementation similar to ``__import__``. Use `follow`
to actually follow the imports.
*level* specifies whether to use absolute or relative imports. 0 (the
default) means only perform absolute imports. Positive values for level
indicate the number of parent directories to search relative to the
directory of the module calling ``__import__()`` (see PEP 328 for the
details).
:param import_path: List of namespaces (strings or Names).
"""
debug.speed('import %s' % (import_path,))
self._evaluator = evaluator
self.level = level
self.module_context = module_context
try:
self.file_path = module_context.py__file__()
except AttributeError:
# Can be None for certain compiled modules like 'builtins'.
self.file_path = None
if level:
base = module_context.py__package__().split('.')
if base == ['']:
base = []
if level > len(base):
path = module_context.py__file__()
if path is not None:
import_path = list(import_path)
p = path
for i in range(level):
p = os.path.dirname(p)
dir_name = os.path.basename(p)
# This is not the proper way to do relative imports. However, since
# Jedi cannot be sure about the entry point, we just calculate an
# absolute path here.
if dir_name:
# TODO those sys.modules modifications are getting
# really stupid. this is the 3rd time that we're using
# this. We should probably refactor.
if path.endswith(os.path.sep + 'os.py'):
import_path.insert(0, 'os')
else:
import_path.insert(0, dir_name)
else:
_add_error(module_context, import_path[-1])
import_path = []
# TODO add import error.
debug.warning('Attempted relative import beyond top-level package.')
# If no path is defined for the module, we have no idea where we
# are in the file system and therefore cannot know what to do.
# In this case we just leave the path as it is and ignore that
# it's a relative path. Not sure if that's a good idea.
else:
# Here we basically rewrite the level to 0.
base = tuple(base)
if level > 1:
base = base[:-level + 1]
import_path = base + tuple(import_path)
self.import_path = import_path
@property
def str_import_path(self):
"""Returns the import path as pure strings instead of `Name`."""
return tuple(
name.value if isinstance(name, tree.Name) else name
for name in self.import_path)
def sys_path_with_modifications(self):
in_path = []
sys_path_mod = list(sys_path.sys_path_with_modifications(
self._evaluator,
self.module_context
))
if self.file_path is not None:
# If you edit e.g. gunicorn, there will be imports like this:
# `from gunicorn import something`. But gunicorn is not in the
# sys.path. Therefore look if gunicorn is a parent directory, #56.
if self.import_path: # TODO is this check really needed?
for path in sys_path.traverse_parents(self.file_path):
if os.path.basename(path) == self.str_import_path[0]:
in_path.append(os.path.dirname(path))
# Since we know nothing about the call location of the sys.path,
# it's a possibility that the current directory is the origin of
# the Python execution.
sys_path_mod.insert(0, os.path.dirname(self.file_path))
return in_path + sys_path_mod
def follow(self):
if not self.import_path:
return set()
return self._do_import(self.import_path, self.sys_path_with_modifications())
def _do_import(self, import_path, sys_path):
"""
This method is very similar to importlib's `_gcd_import`.
"""
import_parts = [
i.value if isinstance(i, tree.Name) else i
for i in import_path
]
# Handle "magic" Flask extension imports:
# ``flask.ext.foo`` is really ``flask_foo`` or ``flaskext.foo``.
if len(import_path) > 2 and import_parts[:2] == ['flask', 'ext']:
# New style.
ipath = ('flask_' + str(import_parts[2]),) + import_path[3:]
modules = self._do_import(ipath, sys_path)
if modules:
return modules
else:
# Old style
return self._do_import(('flaskext',) + import_path[2:], sys_path)
module_name = '.'.join(import_parts)
try:
return set([self._evaluator.modules[module_name]])
except KeyError:
pass
if len(import_path) > 1:
# This is a recursive way of importing that works great with
# the module cache.
bases = self._do_import(import_path[:-1], sys_path)
if not bases:
return set()
# We can take the first element, because only the os special
# case yields multiple modules, which is not important for
# further imports.
parent_module = list(bases)[0]
# This is a huge exception, we follow a nested import
# ``os.path``, because it's a very important one in Python
# that is being achieved by messing with ``sys.modules`` in
# ``os``.
if import_parts == ['os', 'path']:
return parent_module.py__getattribute__('path')
try:
method = parent_module.py__path__
except AttributeError:
# The module is not a package.
_add_error(self.module_context, import_path[-1])
return set()
else:
paths = method()
debug.dbg('search_module %s in paths %s', module_name, paths)
for path in paths:
# At the moment we are only using one path. So this is
# not important to be correct.
try:
if not isinstance(path, list):
path = [path]
module_file, module_path, is_pkg = \
find_module(import_parts[-1], path, fullname=module_name)
break
except ImportError:
module_path = None
if module_path is None:
_add_error(self.module_context, import_path[-1])
return set()
else:
parent_module = None
try:
debug.dbg('search_module %s in %s', import_parts[-1], self.file_path)
# Override sys.path. It only works well that way.
# Injecting the path directly into `find_module` did not work.
sys.path, temp = sys_path, sys.path
try:
module_file, module_path, is_pkg = \
find_module(import_parts[-1], fullname=module_name)
finally:
sys.path = temp
except ImportError:
# The module is not a package.
_add_error(self.module_context, import_path[-1])
return set()
code = None
if is_pkg:
# In this case, we don't have a file yet. Search for the
# __init__ file.
if module_path.endswith(('.zip', '.egg')):
code = module_file.loader.get_source(module_name)
else:
module_path = get_init_path(module_path)
elif module_file:
code = module_file.read()
module_file.close()
if isinstance(module_path, ImplicitNSInfo):
from jedi.evaluate.representation import ImplicitNamespaceContext
fullname, paths = module_path.name, module_path.paths
module = ImplicitNamespaceContext(self._evaluator, fullname=fullname)
module.paths = paths
elif module_file is None and not module_path.endswith(('.py', '.zip', '.egg')):
module = compiled.load_module(self._evaluator, module_path)
else:
module = _load_module(self._evaluator, module_path, code, sys_path, parent_module)
if module is None:
# The file might, e.g., raise an ImportError and therefore not be
# importable.
return set()
self._evaluator.modules[module_name] = module
return set([module])
def _generate_name(self, name, in_module=None):
# Create a pseudo import to be able to follow them.
if in_module is None:
return ImportName(self.module_context, name)
return SubModuleName(in_module, name)
def _get_module_names(self, search_path=None, in_module=None):
"""
Get the names of all modules in the search_path. This means file names
and not names defined in the files.
"""
names = []
# add builtin module names
if search_path is None and in_module is None:
names += [self._generate_name(name) for name in sys.builtin_module_names]
if search_path is None:
search_path = self.sys_path_with_modifications()
for module_loader, name, is_pkg in pkgutil.iter_modules(search_path):
names.append(self._generate_name(name, in_module=in_module))
return names
def completion_names(self, evaluator, only_modules=False):
"""
:param only_modules: Indicates whether it's possible to import a
definition that is not defined in a module.
"""
from jedi.evaluate.representation import ModuleContext, ImplicitNamespaceContext
names = []
if self.import_path:
# flask
if self.str_import_path == ('flask', 'ext'):
# List Flask extensions like ``flask_foo``
for mod in self._get_module_names():
modname = mod.string_name
if modname.startswith('flask_'):
extname = modname[len('flask_'):]
names.append(self._generate_name(extname))
# Now the old style: ``flaskext.foo``
for dir in self.sys_path_with_modifications():
flaskext = os.path.join(dir, 'flaskext')
if os.path.isdir(flaskext):
names += self._get_module_names([flaskext])
for context in self.follow():
# Non-modules are not completable.
if context.api_type != 'module': # not a module
continue
# namespace packages
if isinstance(context, ModuleContext) and context.py__file__().endswith('__init__.py'):
paths = context.py__path__()
names += self._get_module_names(paths, in_module=context)
# implicit namespace packages
elif isinstance(context, ImplicitNamespaceContext):
paths = context.paths
names += self._get_module_names(paths)
if only_modules:
# In the case of an import like `from x.` we don't need to
# add all the variables.
if ('os',) == self.str_import_path and not self.level:
# os.path is a hardcoded exception, because it's a
# ``sys.modules`` modification.
names.append(self._generate_name('path', context))
continue
for filter in context.get_filters(search_global=False):
names += filter.values()
else:
# Empty import path => completion right after `import`
if not self.level:
names += self._get_module_names()
if self.file_path is not None:
path = os.path.abspath(self.file_path)
for i in range(self.level - 1):
path = os.path.dirname(path)
names += self._get_module_names([path])
return names
def _load_module(evaluator, path=None, code=None, sys_path=None, parent_module=None):
if sys_path is None:
sys_path = evaluator.sys_path
dotted_path = path and compiled.dotted_from_fs_path(path, sys_path)
if path is not None and path.endswith(('.py', '.zip', '.egg')) \
and dotted_path not in settings.auto_import_modules:
module_node = evaluator.grammar.parse(
code=code, path=path, cache=True, diff_cache=True,
cache_path=settings.cache_directory)
from jedi.evaluate.representation import ModuleContext
return ModuleContext(evaluator, module_node, path=path)
else:
return compiled.load_module(evaluator, path)
def add_module(evaluator, module_name, module):
if '.' not in module_name:
# We cannot add paths with dots, because that would collide with
# the separator dots for nested packages. Therefore we return
# `__main__` in ModuleWrapper.py__name__(), which is similar to
# Python behavior.
evaluator.modules[module_name] = module
def get_modules_containing_name(evaluator, modules, name):
"""
Search a name in the directories of modules.
"""
from jedi.evaluate import representation as er
def check_python_file(path):
try:
# TODO I don't think we should use the cache here?!
node_cache_item = parser_cache[evaluator.grammar._hashed][path]
except KeyError:
try:
return check_fs(path)
except IOError:
return None
else:
module_node = node_cache_item.node
return er.ModuleContext(evaluator, module_node, path=path)
def check_fs(path):
with open(path, 'rb') as f:
code = python_bytes_to_unicode(f.read(), errors='replace')
if name in code:
module = _load_module(evaluator, path, code)
module_name = sys_path.dotted_path_in_sys_path(evaluator.sys_path, path)
if module_name is not None:
add_module(evaluator, module_name, module)
return module
# skip non python modules
used_mod_paths = set()
for m in modules:
try:
path = m.py__file__()
except AttributeError:
pass
else:
used_mod_paths.add(path)
yield m
if not settings.dynamic_params_for_other_modules:
return
paths = set(settings.additional_dynamic_modules)
for p in used_mod_paths:
if p is not None:
# We need abspath, because the settings paths might not already
# have been converted to absolute paths.
d = os.path.dirname(os.path.abspath(p))
for file_name in os.listdir(d):
path = os.path.join(d, file_name)
if path not in used_mod_paths and path not in paths:
if file_name.endswith('.py'):
paths.add(path)
# Sort here to make issues less random.
for p in sorted(paths):
# make testing easier, sort it - same results on every interpreter
m = check_python_file(p)
if m is not None and not isinstance(m, compiled.CompiledObject):
yield m
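A standalone sketch of the PEP 328 level handling in Importer.__init__ above. It is simplified: the real code falls back to walking the file path instead of raising when the level exceeds the package depth, and the helper name here is made up.
def absolute_import_path(package, import_path, level):
    # `package` mirrors module_context.py__package__(), e.g. 'a.b.c' for a
    # module a/b/c/mod.py; `level` counts the leading dots of the import.
    base = package.split('.') if package else []
    if level > len(base):
        raise ImportError('attempted relative import beyond top-level package')
    if level > 1:
        base = base[:-(level - 1)]   # the same trimming as `base[:-level + 1]`
    return tuple(base) + tuple(import_path)

print(absolute_import_path('a.b.c', ('x',), 1))   # ('a', 'b', 'c', 'x')
print(absolute_import_path('a.b.c', ('x',), 2))   # ('a', 'b', 'x')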


@@ -1,433 +0,0 @@
from abc import abstractproperty
from jedi._compatibility import is_py3
from jedi.common import unite
from jedi import debug
from jedi.evaluate import compiled
from jedi.evaluate import filters
from jedi.evaluate.context import Context, LazyKnownContext, LazyKnownContexts
from jedi.evaluate.cache import evaluator_method_cache
from jedi.evaluate.param import AbstractArguments, AnonymousArguments
from jedi.cache import memoize_method
from jedi.evaluate import representation as er
from jedi.evaluate import iterable
from jedi.parser_utils import get_parent_scope
class InstanceFunctionExecution(er.FunctionExecutionContext):
def __init__(self, instance, parent_context, function_context, var_args):
self.instance = instance
var_args = InstanceVarArgs(self, var_args)
super(InstanceFunctionExecution, self).__init__(
instance.evaluator, parent_context, function_context, var_args)
class AnonymousInstanceFunctionExecution(er.FunctionExecutionContext):
function_execution_filter = filters.AnonymousInstanceFunctionExecutionFilter
def __init__(self, instance, parent_context, function_context, var_args):
self.instance = instance
super(AnonymousInstanceFunctionExecution, self).__init__(
instance.evaluator, parent_context, function_context, var_args)
class AbstractInstanceContext(Context):
"""
This class is used to evaluate instances.
"""
api_type = 'instance'
function_execution_cls = InstanceFunctionExecution
def __init__(self, evaluator, parent_context, class_context, var_args):
super(AbstractInstanceContext, self).__init__(evaluator, parent_context)
# Generated instances are classes that are just generated by self
# (No var_args) used.
self.class_context = class_context
self.var_args = var_args
def is_class(self):
return False
@property
def py__call__(self):
names = self.get_function_slot_names('__call__')
if not names:
# Means the Instance is not callable.
raise AttributeError
def execute(arguments):
return unite(name.execute(arguments) for name in names)
return execute
def py__class__(self):
return self.class_context
def py__bool__(self):
# Signalize that we don't know about the bool type.
return None
def get_function_slot_names(self, name):
# Python classes don't look at the dictionary of the instance when
# looking up `__call__`. This is something that has to do with Python's
# internal slot system (note: not __slots__, but C slots).
for filter in self.get_filters(include_self_names=False):
names = filter.get(name)
if names:
return names
return []
def execute_function_slots(self, names, *evaluated_args):
return unite(
name.execute_evaluated(*evaluated_args)
for name in names
)
def py__get__(self, obj):
# Arguments in __get__ descriptors are obj, class.
# `method` is the new parent of the array, don't know if that's good.
names = self.get_function_slot_names('__get__')
if names:
if isinstance(obj, AbstractInstanceContext):
return self.execute_function_slots(names, obj, obj.class_context)
else:
none_obj = compiled.create(self.evaluator, None)
return self.execute_function_slots(names, none_obj, obj)
else:
return set([self])
def get_filters(self, search_global=None, until_position=None,
origin_scope=None, include_self_names=True):
if include_self_names:
for cls in self.class_context.py__mro__():
if isinstance(cls, compiled.CompiledObject):
if cls.tree_node is not None:
# In this case we're talking about a fake object, it
# doesn't make sense for normal compiled objects to
# search for self variables.
yield SelfNameFilter(self.evaluator, self, cls, origin_scope)
else:
yield SelfNameFilter(self.evaluator, self, cls, origin_scope)
for cls in self.class_context.py__mro__():
if isinstance(cls, compiled.CompiledObject):
yield CompiledInstanceClassFilter(self.evaluator, self, cls)
else:
yield InstanceClassFilter(self.evaluator, self, cls, origin_scope)
def py__getitem__(self, index):
try:
names = self.get_function_slot_names('__getitem__')
except KeyError:
debug.warning('No __getitem__, cannot access the array.')
return set()
else:
index_obj = compiled.create(self.evaluator, index)
return self.execute_function_slots(names, index_obj)
def py__iter__(self):
iter_slot_names = self.get_function_slot_names('__iter__')
if not iter_slot_names:
debug.warning('No __iter__ on %s.' % self)
return
for generator in self.execute_function_slots(iter_slot_names):
if isinstance(generator, AbstractInstanceContext):
# `__next__` logic.
name = '__next__' if is_py3 else 'next'
iter_slot_names = generator.get_function_slot_names(name)
if iter_slot_names:
yield LazyKnownContexts(
generator.execute_function_slots(iter_slot_names)
)
else:
debug.warning('Instance has no __next__ function in %s.', generator)
else:
for lazy_context in generator.py__iter__():
yield lazy_context
@abstractproperty
def name(self):
pass
def _create_init_execution(self, class_context, func_node):
bound_method = BoundMethod(
self.evaluator, self, class_context, self.parent_context, func_node
)
return self.function_execution_cls(
self,
class_context.parent_context,
bound_method,
self.var_args
)
def create_init_executions(self):
for name in self.get_function_slot_names('__init__'):
if isinstance(name, LazyInstanceName):
yield self._create_init_execution(name.class_context, name.tree_name.parent)
@evaluator_method_cache()
def create_instance_context(self, class_context, node):
if node.parent.type in ('funcdef', 'classdef'):
node = node.parent
scope = get_parent_scope(node)
if scope == class_context.tree_node:
return class_context
else:
parent_context = self.create_instance_context(class_context, scope)
if scope.type == 'funcdef':
if scope.name.value == '__init__' and parent_context == class_context:
return self._create_init_execution(class_context, scope)
else:
bound_method = BoundMethod(
self.evaluator, self, class_context,
parent_context, scope
)
return bound_method.get_function_execution()
elif scope.type == 'classdef':
class_context = er.ClassContext(self.evaluator, scope, parent_context)
return class_context
elif scope.type == 'comp_for':
# Comprehensions currently don't have a special scope in Jedi.
return self.create_instance_context(class_context, scope)
else:
raise NotImplementedError
return class_context
def __repr__(self):
return "<%s of %s(%s)>" % (self.__class__.__name__, self.class_context,
self.var_args)
class CompiledInstance(AbstractInstanceContext):
def __init__(self, *args, **kwargs):
super(CompiledInstance, self).__init__(*args, **kwargs)
# I don't think that dynamic append lookups should happen here. That
# sounds more like something that should go to py__iter__.
if self.class_context.name.string_name in ['list', 'set'] \
and self.parent_context.get_root_context() == self.evaluator.BUILTINS:
# compare the module path with the builtin name.
self.var_args = iterable.get_dynamic_array_instance(self)
@property
def name(self):
return compiled.CompiledContextName(self, self.class_context.name.string_name)
def create_instance_context(self, class_context, node):
if get_parent_scope(node).type == 'classdef':
return class_context
else:
return super(CompiledInstance, self).create_instance_context(class_context, node)
class TreeInstance(AbstractInstanceContext):
def __init__(self, evaluator, parent_context, class_context, var_args):
super(TreeInstance, self).__init__(evaluator, parent_context,
class_context, var_args)
self.tree_node = class_context.tree_node
@property
def name(self):
return filters.ContextName(self, self.class_context.name.tree_name)
class AnonymousInstance(TreeInstance):
function_execution_cls = AnonymousInstanceFunctionExecution
def __init__(self, evaluator, parent_context, class_context):
super(AnonymousInstance, self).__init__(
evaluator,
parent_context,
class_context,
var_args=AnonymousArguments(),
)
class CompiledInstanceName(compiled.CompiledName):
def __init__(self, evaluator, instance, parent_context, name):
super(CompiledInstanceName, self).__init__(evaluator, parent_context, name)
self._instance = instance
def infer(self):
for result_context in super(CompiledInstanceName, self).infer():
if isinstance(result_context, er.FunctionContext):
parent_context = result_context.parent_context
while parent_context.is_class():
parent_context = parent_context.parent_context
yield BoundMethod(
result_context.evaluator, self._instance, self.parent_context,
parent_context, result_context.tree_node
)
else:
if result_context.api_type == 'function':
yield CompiledBoundMethod(result_context)
else:
yield result_context
class CompiledInstanceClassFilter(compiled.CompiledObjectFilter):
name_class = CompiledInstanceName
def __init__(self, evaluator, instance, compiled_object):
super(CompiledInstanceClassFilter, self).__init__(
evaluator,
compiled_object,
is_instance=True,
)
self._instance = instance
def _create_name(self, name):
return self.name_class(
self._evaluator, self._instance, self._compiled_object, name)
class BoundMethod(er.FunctionContext):
def __init__(self, evaluator, instance, class_context, *args, **kwargs):
super(BoundMethod, self).__init__(evaluator, *args, **kwargs)
self._instance = instance
self._class_context = class_context
def get_function_execution(self, arguments=None):
if arguments is None:
arguments = AnonymousArguments()
return AnonymousInstanceFunctionExecution(
self._instance, self.parent_context, self, arguments)
else:
return InstanceFunctionExecution(
self._instance, self.parent_context, self, arguments)
class CompiledBoundMethod(compiled.CompiledObject):
def __init__(self, func):
super(CompiledBoundMethod, self).__init__(
func.evaluator, func.obj, func.parent_context, func.tree_node)
def get_param_names(self):
return list(super(CompiledBoundMethod, self).get_param_names())[1:]
class InstanceNameDefinition(filters.TreeNameDefinition):
def infer(self):
contexts = super(InstanceNameDefinition, self).infer()
for context in contexts:
yield context
class LazyInstanceName(filters.TreeNameDefinition):
"""
This name calculates the parent_context lazily.
"""
def __init__(self, instance, class_context, tree_name):
self._instance = instance
self.class_context = class_context
self.tree_name = tree_name
@property
def parent_context(self):
return self._instance.create_instance_context(self.class_context, self.tree_name)
class LazyInstanceClassName(LazyInstanceName):
def infer(self):
for result_context in super(LazyInstanceClassName, self).infer():
if isinstance(result_context, er.FunctionContext):
# Classes are never used to resolve anything within the
# functions. Only other functions and modules will resolve
# those things.
parent_context = result_context.parent_context
while parent_context.is_class():
parent_context = parent_context.parent_context
yield BoundMethod(
result_context.evaluator, self._instance, self.class_context,
parent_context, result_context.tree_node
)
else:
for c in er.apply_py__get__(result_context, self._instance):
yield c
class InstanceClassFilter(filters.ParserTreeFilter):
name_class = LazyInstanceClassName
def __init__(self, evaluator, context, class_context, origin_scope):
super(InstanceClassFilter, self).__init__(
evaluator=evaluator,
context=context,
node_context=class_context,
origin_scope=origin_scope
)
self._class_context = class_context
def _equals_origin_scope(self):
node = self._origin_scope
while node is not None:
if node == self._parser_scope or node == self.context:
return True
node = get_parent_scope(node)
return False
def _access_possible(self, name):
return not name.value.startswith('__') or name.value.endswith('__') \
or self._equals_origin_scope()
def _filter(self, names):
names = super(InstanceClassFilter, self)._filter(names)
return [name for name in names if self._access_possible(name)]
def _convert_names(self, names):
return [self.name_class(self.context, self._class_context, name) for name in names]
class SelfNameFilter(InstanceClassFilter):
name_class = LazyInstanceName
def _filter(self, names):
names = self._filter_self_names(names)
if isinstance(self._parser_scope, compiled.CompiledObject) and False:
# This would be for builtin skeletons, which are not yet supported.
return list(names)
else:
start, end = self._parser_scope.start_pos, self._parser_scope.end_pos
return [n for n in names if start < n.start_pos < end]
def _filter_self_names(self, names):
for name in names:
trailer = name.parent
if trailer.type == 'trailer' \
and len(trailer.children) == 2 \
and trailer.children[0] == '.':
if name.is_definition() and self._access_possible(name):
yield name
def _check_flows(self, names):
return names
class InstanceVarArgs(AbstractArguments):
def __init__(self, execution_context, var_args):
self._execution_context = execution_context
self._var_args = var_args
@memoize_method
def _get_var_args(self):
return self._var_args
@property
def argument_node(self):
return self._var_args.argument_node
@property
def trailer(self):
return self._var_args.trailer
def unpack(self, func=None):
yield None, LazyKnownContext(self._execution_context.instance)
for values in self._get_var_args().unpack(func):
yield values
def get_calling_nodes(self):
return self._get_var_args().get_calling_nodes()
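A toy model (plain Python, not jedi's context objects) of what InstanceVarArgs.unpack() above does: prepend the instance as the implicit first argument and pass the explicit call arguments through unchanged, which is how BoundMethod executions receive ``self``.
def unpack_with_self(instance, call_args):
    # call_args is a list of (keyword_or_None, value) pairs, mirroring the
    # pairs yielded by AbstractArguments.unpack().
    yield None, instance              # the bound `self`
    for pair in call_args:
        yield pair

print(list(unpack_with_self('<Foo instance>', [(None, 1), ('flag', True)])))
# [(None, '<Foo instance>'), (None, 1), ('flag', True)]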


@@ -1,884 +0,0 @@
"""
Contains all classes and functions to deal with lists, dicts, generators and
iterators in general.
Array modifications
*******************
If the content of an array (``set``/``list``) is requested somewhere, the
current module will be checked for appearances of ``arr.append``,
``arr.insert``, etc. If the ``arr`` name points to an actual array, the
content will be added.
This can be really CPU intensive, as you can imagine. Because |jedi| has to
follow **every** ``append`` and check whether it's the right array. However
this works pretty well, because in *slow* cases, the recursion detector and
other settings will stop this process.
It is important to note that:
1. Array modifications work only in the current module.
2. Jedi only checks Array additions; ``list.pop``, etc are ignored.
"""
from jedi import debug
from jedi import settings
from jedi import common
from jedi.common import unite, safe_property
from jedi._compatibility import unicode, zip_longest, is_py3
from jedi.evaluate import compiled
from jedi.evaluate import helpers
from jedi.evaluate import analysis
from jedi.evaluate import pep0484
from jedi.evaluate import context
from jedi.evaluate import precedence
from jedi.evaluate import recursion
from jedi.evaluate.cache import evaluator_method_cache
from jedi.evaluate.filters import DictFilter, AbstractNameDefinition, \
ParserTreeFilter
from jedi.parser_utils import get_comp_fors
class AbstractSequence(context.Context):
builtin_methods = {}
api_type = 'instance'
def __init__(self, evaluator):
super(AbstractSequence, self).__init__(evaluator, evaluator.BUILTINS)
def get_filters(self, search_global, until_position=None, origin_scope=None):
raise NotImplementedError
@property
def name(self):
return compiled.CompiledContextName(self, self.array_type)
class BuiltinMethod(object):
"""``Generator.__next__`` ``dict.values`` methods and so on."""
def __init__(self, builtin_context, method, builtin_func):
self._builtin_context = builtin_context
self._method = method
self._builtin_func = builtin_func
def py__call__(self, params):
return self._method(self._builtin_context)
def __getattr__(self, name):
return getattr(self._builtin_func, name)
class SpecialMethodFilter(DictFilter):
"""
A filter for methods that are defined in this module on the corresponding
classes like Generator (for __next__, etc).
"""
class SpecialMethodName(AbstractNameDefinition):
api_type = 'function'
def __init__(self, parent_context, string_name, callable_, builtin_context):
self.parent_context = parent_context
self.string_name = string_name
self._callable = callable_
self._builtin_context = builtin_context
def infer(self):
filter = next(self._builtin_context.get_filters())
# We can take the first index, because on builtin methods there's
# always only going to be one name. The same is true for the
# inferred values.
builtin_func = next(iter(filter.get(self.string_name)[0].infer()))
return set([BuiltinMethod(self.parent_context, self._callable, builtin_func)])
def __init__(self, context, dct, builtin_context):
super(SpecialMethodFilter, self).__init__(dct)
self.context = context
self._builtin_context = builtin_context
"""
This context is what will be used to introspect the name, whereas the
other context will be used to execute the function.
We distinguish, because we have to.
"""
def _convert(self, name, value):
return self.SpecialMethodName(self.context, name, value, self._builtin_context)
def has_builtin_methods(cls):
base_dct = {}
# Need to care properly about inheritance. Builtin Methods should not get
# lost, just because they are not mentioned in a class.
for base_cls in reversed(cls.__bases__):
try:
base_dct.update(base_cls.builtin_methods)
except AttributeError:
pass
cls.builtin_methods = base_dct
for func in cls.__dict__.values():
try:
cls.builtin_methods.update(func.registered_builtin_methods)
except AttributeError:
pass
return cls
def register_builtin_method(method_name, python_version_match=None):
def wrapper(func):
if python_version_match and python_version_match != 2 + int(is_py3):
# Some functions only apply to certain versions.
return func
dct = func.__dict__.setdefault('registered_builtin_methods', {})
dct[method_name] = func
return func
return wrapper
@has_builtin_methods
class GeneratorMixin(object):
array_type = None
@register_builtin_method('send')
@register_builtin_method('next', python_version_match=2)
@register_builtin_method('__next__', python_version_match=3)
def py__next__(self):
# TODO add TypeError if params are given.
return unite(lazy_context.infer() for lazy_context in self.py__iter__())
def get_filters(self, search_global, until_position=None, origin_scope=None):
gen_obj = compiled.get_special_object(self.evaluator, 'GENERATOR_OBJECT')
yield SpecialMethodFilter(self, self.builtin_methods, gen_obj)
for filter in gen_obj.get_filters(search_global):
yield filter
def py__bool__(self):
return True
def py__class__(self):
gen_obj = compiled.get_special_object(self.evaluator, 'GENERATOR_OBJECT')
return gen_obj.py__class__()
@property
def name(self):
return compiled.CompiledContextName(self, 'generator')
class Generator(GeneratorMixin, context.Context):
"""Handling of `yield` functions."""
def __init__(self, evaluator, func_execution_context):
super(Generator, self).__init__(evaluator, parent_context=evaluator.BUILTINS)
self._func_execution_context = func_execution_context
def py__iter__(self):
return self._func_execution_context.get_yield_values()
def __repr__(self):
return "<%s of %s>" % (type(self).__name__, self._func_execution_context)
class CompForContext(context.TreeContext):
@classmethod
def from_comp_for(cls, parent_context, comp_for):
return cls(parent_context.evaluator, parent_context, comp_for)
def __init__(self, evaluator, parent_context, comp_for):
super(CompForContext, self).__init__(evaluator, parent_context)
self.tree_node = comp_for
def get_node(self):
return self.tree_node
def get_filters(self, search_global, until_position=None, origin_scope=None):
yield ParserTreeFilter(self.evaluator, self)
class Comprehension(AbstractSequence):
@staticmethod
def from_atom(evaluator, context, atom):
bracket = atom.children[0]
if bracket == '{':
if atom.children[1].children[1] == ':':
cls = DictComprehension
else:
cls = SetComprehension
elif bracket == '(':
cls = GeneratorComprehension
elif bracket == '[':
cls = ListComprehension
return cls(evaluator, context, atom)
def __init__(self, evaluator, defining_context, atom):
super(Comprehension, self).__init__(evaluator)
self._defining_context = defining_context
self._atom = atom
def _get_comprehension(self):
# The atom contains a testlist_comp
return self._atom.children[1]
def _get_comp_for(self):
# The atom contains a testlist_comp
return self._get_comprehension().children[1]
def _eval_node(self, index=0):
"""
The first part `x + 1` of the list comprehension:
[x + 1 for x in foo]
"""
return self._get_comprehension().children[index]
@evaluator_method_cache()
def _get_comp_for_context(self, parent_context, comp_for):
# TODO shouldn't this be part of create_context?
return CompForContext.from_comp_for(parent_context, comp_for)
def _nested(self, comp_fors, parent_context=None):
evaluator = self.evaluator
comp_for = comp_fors[0]
input_node = comp_for.children[3]
parent_context = parent_context or self._defining_context
input_types = parent_context.eval_node(input_node)
cn = context.ContextualizedNode(parent_context, input_node)
iterated = py__iter__(evaluator, input_types, cn)
exprlist = comp_for.children[1]
for i, lazy_context in enumerate(iterated):
types = lazy_context.infer()
dct = unpack_tuple_to_dict(parent_context, types, exprlist)
context_ = self._get_comp_for_context(
parent_context,
comp_for,
)
with helpers.predefine_names(context_, comp_for, dct):
try:
for result in self._nested(comp_fors[1:], context_):
yield result
except IndexError:
iterated = context_.eval_node(self._eval_node())
if self.array_type == 'dict':
yield iterated, context_.eval_node(self._eval_node(2))
else:
yield iterated
@evaluator_method_cache(default=[])
@common.to_list
def _iterate(self):
comp_fors = tuple(get_comp_fors(self._get_comp_for()))
for result in self._nested(comp_fors):
yield result
def py__iter__(self):
for set_ in self._iterate():
yield context.LazyKnownContexts(set_)
def __repr__(self):
return "<%s of %s>" % (type(self).__name__, self._atom)
class ArrayMixin(object):
def get_filters(self, search_global, until_position=None, origin_scope=None):
# `array.type` is a string with the type, e.g. 'list'.
compiled_obj = compiled.builtin_from_name(self.evaluator, self.array_type)
yield SpecialMethodFilter(self, self.builtin_methods, compiled_obj)
for typ in compiled_obj.execute_evaluated(self):
for filter in typ.get_filters():
yield filter
def py__bool__(self):
return None # We don't know the length, because of appends.
def py__class__(self):
return compiled.builtin_from_name(self.evaluator, self.array_type)
@safe_property
def parent(self):
return self.evaluator.BUILTINS
def dict_values(self):
return unite(self._defining_context.eval_node(v) for k, v in self._items())
class ListComprehension(ArrayMixin, Comprehension):
array_type = 'list'
def py__getitem__(self, index):
if isinstance(index, slice):
return set([self])
all_types = list(self.py__iter__())
return all_types[index].infer()
class SetComprehension(ArrayMixin, Comprehension):
array_type = 'set'
@has_builtin_methods
class DictComprehension(ArrayMixin, Comprehension):
array_type = 'dict'
def _get_comp_for(self):
return self._get_comprehension().children[3]
def py__iter__(self):
for keys, values in self._iterate():
yield context.LazyKnownContexts(keys)
def py__getitem__(self, index):
for keys, values in self._iterate():
for k in keys:
if isinstance(k, compiled.CompiledObject):
if k.obj == index:
return values
return self.dict_values()
def dict_values(self):
return unite(values for keys, values in self._iterate())
@register_builtin_method('values')
def _imitate_values(self):
lazy_context = context.LazyKnownContexts(self.dict_values())
return set([FakeSequence(self.evaluator, 'list', [lazy_context])])
@register_builtin_method('items')
def _imitate_items(self):
items = set(
FakeSequence(
self.evaluator, 'tuple',
(context.LazyKnownContexts(keys), context.LazyKnownContexts(values))
) for keys, values in self._iterate()
)
return create_evaluated_sequence_set(self.evaluator, items, sequence_type='list')
class GeneratorComprehension(GeneratorMixin, Comprehension):
pass
class SequenceLiteralContext(ArrayMixin, AbstractSequence):
mapping = {'(': 'tuple',
'[': 'list',
'{': 'set'}
def __init__(self, evaluator, defining_context, atom):
super(SequenceLiteralContext, self).__init__(evaluator)
self.atom = atom
self._defining_context = defining_context
if self.atom.type in ('testlist_star_expr', 'testlist'):
self.array_type = 'tuple'
else:
self.array_type = SequenceLiteralContext.mapping[atom.children[0]]
"""The builtin name of the array (list, set, tuple or dict)."""
def py__getitem__(self, index):
"""Here the index is an int/str. Raises IndexError/KeyError."""
if self.array_type == 'dict':
for key, value in self._items():
for k in self._defining_context.eval_node(key):
if isinstance(k, compiled.CompiledObject) \
and index == k.obj:
return self._defining_context.eval_node(value)
raise KeyError('No key found in dictionary %s.' % self)
# Can raise an IndexError
if isinstance(index, slice):
return set([self])
else:
return self._defining_context.eval_node(self._items()[index])
def py__iter__(self):
"""
While values returns the possible values for any array field, this
function returns the value for a certain index.
"""
if self.array_type == 'dict':
# Get keys.
types = set()
for k, _ in self._items():
types |= self._defining_context.eval_node(k)
# We don't know which dict index comes first, therefore always
# yield all the types.
for _ in types:
yield context.LazyKnownContexts(types)
else:
for node in self._items():
yield context.LazyTreeContext(self._defining_context, node)
for addition in check_array_additions(self._defining_context, self):
yield addition
def _values(self):
"""Returns a list of a list of node."""
if self.array_type == 'dict':
return unite(v for k, v in self._items())
else:
return self._items()
def _items(self):
c = self.atom.children
if self.atom.type in ('testlist_star_expr', 'testlist'):
return c[::2]
array_node = c[1]
if array_node in (']', '}', ')'):
return [] # Direct closing bracket, doesn't contain items.
if array_node.type == 'testlist_comp':
return array_node.children[::2]
elif array_node.type == 'dictorsetmaker':
kv = []
iterator = iter(array_node.children)
for key in iterator:
op = next(iterator, None)
if op is None or op == ',':
kv.append(key) # A set.
else:
assert op == ':' # A dict.
kv.append((key, next(iterator)))
next(iterator, None) # Possible comma.
return kv
else:
return [array_node]
def exact_key_items(self):
"""
Returns a generator of tuples like dict.items(), where the key is
resolved (as a string) and the values are still lazy contexts.
"""
for key_node, value in self._items():
for key in self._defining_context.eval_node(key_node):
if precedence.is_string(key):
yield key.obj, context.LazyTreeContext(self._defining_context, value)
def __repr__(self):
return "<%s of %s>" % (self.__class__.__name__, self.atom)
@has_builtin_methods
class DictLiteralContext(SequenceLiteralContext):
array_type = 'dict'
def __init__(self, evaluator, defining_context, atom):
super(SequenceLiteralContext, self).__init__(evaluator)
self._defining_context = defining_context
self.atom = atom
@register_builtin_method('values')
def _imitate_values(self):
lazy_context = context.LazyKnownContexts(self.dict_values())
return set([FakeSequence(self.evaluator, 'list', [lazy_context])])
@register_builtin_method('items')
def _imitate_items(self):
lazy_contexts = [
context.LazyKnownContext(FakeSequence(
self.evaluator, 'tuple',
(context.LazyTreeContext(self._defining_context, key_node),
context.LazyTreeContext(self._defining_context, value_node))
)) for key_node, value_node in self._items()
]
return set([FakeSequence(self.evaluator, 'list', lazy_contexts)])
class _FakeArray(SequenceLiteralContext):
def __init__(self, evaluator, container, type):
super(SequenceLiteralContext, self).__init__(evaluator)
self.array_type = type
self.atom = container
# TODO is this class really needed?
class FakeSequence(_FakeArray):
def __init__(self, evaluator, array_type, lazy_context_list):
"""
type should be one of "tuple", "list"
"""
super(FakeSequence, self).__init__(evaluator, None, array_type)
self._lazy_context_list = lazy_context_list
def py__getitem__(self, index):
return set(self._lazy_context_list[index].infer())
def py__iter__(self):
return self._lazy_context_list
def py__bool__(self):
return bool(len(self._lazy_context_list))
def __repr__(self):
return "<%s of %s>" % (type(self).__name__, self._lazy_context_list)
class FakeDict(_FakeArray):
def __init__(self, evaluator, dct):
super(FakeDict, self).__init__(evaluator, dct, 'dict')
self._dct = dct
def py__iter__(self):
for key in self._dct:
yield context.LazyKnownContext(compiled.create(self.evaluator, key))
def py__getitem__(self, index):
return self._dct[index].infer()
def dict_values(self):
return unite(lazy_context.infer() for lazy_context in self._dct.values())
def exact_key_items(self):
return self._dct.items()
class MergedArray(_FakeArray):
def __init__(self, evaluator, arrays):
super(MergedArray, self).__init__(evaluator, arrays, arrays[-1].array_type)
self._arrays = arrays
def py__iter__(self):
for array in self._arrays:
for lazy_context in array.py__iter__():
yield lazy_context
def py__getitem__(self, index):
return unite(lazy_context.infer() for lazy_context in self.py__iter__())
def _items(self):
for array in self._arrays:
for a in array._items():
yield a
def __len__(self):
return sum(len(a) for a in self._arrays)
def unpack_tuple_to_dict(context, types, exprlist):
"""
Unpacking tuple assignments in for statements and expr_stmts.
"""
if exprlist.type == 'name':
return {exprlist.value: types}
elif exprlist.type == 'atom' and exprlist.children[0] in '([':
return unpack_tuple_to_dict(context, types, exprlist.children[1])
elif exprlist.type in ('testlist', 'testlist_comp', 'exprlist',
'testlist_star_expr'):
dct = {}
parts = iter(exprlist.children[::2])
n = 0
for lazy_context in py__iter__(context.evaluator, types, exprlist):
n += 1
try:
part = next(parts)
except StopIteration:
# TODO this context is probably not right.
analysis.add(context, 'value-error-too-many-values', part,
message="ValueError: too many values to unpack (expected %s)" % n)
else:
dct.update(unpack_tuple_to_dict(context, lazy_context.infer(), part))
has_parts = next(parts, None)
if types and has_parts is not None:
# TODO this context is probably not right.
analysis.add(context, 'value-error-too-few-values', has_parts,
message="ValueError: need more than %s values to unpack" % n)
return dct
elif exprlist.type == 'power' or exprlist.type == 'atom_expr':
# Something like ``arr[x], var = ...``.
# This is something that is not yet supported, would also be difficult
# to write into a dict.
return {}
elif exprlist.type == 'star_expr': # `a, *b, c = x` type unpackings
# Currently we're not supporting them.
return {}
raise NotImplementedError
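A toy analogue (plain names and values instead of jedi contexts) of the mapping unpack_tuple_to_dict() above builds for nested targets such as ``x, (y, z) = 1, (2, 3)``:
def unpack_names(targets, values):
    result = {}
    for target, value in zip(targets, values):
        if isinstance(target, tuple):          # nested (y, z) style target
            result.update(unpack_names(target, value))
        else:
            result[target] = value
    return result

print(unpack_names(('x', ('y', 'z')), (1, (2, 3))))  # {'x': 1, 'y': 2, 'z': 3}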
def py__iter__(evaluator, types, contextualized_node=None):
debug.dbg('py__iter__')
type_iters = []
for typ in types:
try:
iter_method = typ.py__iter__
except AttributeError:
if contextualized_node is not None:
analysis.add(
contextualized_node.context,
'type-error-not-iterable',
contextualized_node._node,
message="TypeError: '%s' object is not iterable" % typ)
else:
type_iters.append(iter_method())
for lazy_contexts in zip_longest(*type_iters):
yield context.get_merged_lazy_context(
[l for l in lazy_contexts if l is not None]
)
def py__iter__types(evaluator, types, contextualized_node=None):
"""
Calls `py__iter__`, but ignores the ordering in the end and just returns
all types that it contains.
"""
return unite(
lazy_context.infer()
for lazy_context in py__iter__(evaluator, types, contextualized_node)
)
def py__getitem__(evaluator, context, types, trailer):
from jedi.evaluate.representation import ClassContext
from jedi.evaluate.instance import TreeInstance
result = set()
trailer_op, node, trailer_cl = trailer.children
assert trailer_op == "["
assert trailer_cl == "]"
# special case: PEP0484 typing module, see
# https://github.com/davidhalter/jedi/issues/663
for typ in list(types):
if isinstance(typ, (ClassContext, TreeInstance)):
typing_module_types = pep0484.py__getitem__(context, typ, node)
if typing_module_types is not None:
types.remove(typ)
result |= typing_module_types
if not types:
# all consumed by special cases
return result
for index in create_index_types(evaluator, context, node):
if isinstance(index, (compiled.CompiledObject, Slice)):
index = index.obj
if type(index) not in (float, int, str, unicode, slice, type(Ellipsis)):
# If the index is not clearly defined, we have to get all the
# possibilities.
for typ in list(types):
if isinstance(typ, AbstractSequence) and typ.array_type == 'dict':
types.remove(typ)
result |= typ.dict_values()
return result | py__iter__types(evaluator, types)
for typ in types:
# The actual getitem call.
try:
getitem = typ.py__getitem__
except AttributeError:
# TODO this context is probably not right.
analysis.add(context, 'type-error-not-subscriptable', trailer_op,
message="TypeError: '%s' object is not subscriptable" % typ)
else:
try:
result |= getitem(index)
except IndexError:
result |= py__iter__types(evaluator, set([typ]))
except KeyError:
# Must be a dict. Lists don't raise KeyErrors.
result |= typ.dict_values()
return result
def check_array_additions(context, sequence):
""" Just a mapper function for the internal _check_array_additions """
if sequence.array_type not in ('list', 'set'):
# TODO also check for dict updates
return set()
return _check_array_additions(context, sequence)
@evaluator_method_cache(default=set())
@debug.increase_indent
def _check_array_additions(context, sequence):
"""
Checks if a `Array` has "add" (append, insert, extend) statements:
>>> a = [""]
>>> a.append(1)
"""
from jedi.evaluate import param
debug.dbg('Dynamic array search for %s' % sequence, color='MAGENTA')
module_context = context.get_root_context()
if not settings.dynamic_array_additions or isinstance(module_context, compiled.CompiledObject):
debug.dbg('Dynamic array search aborted.', color='MAGENTA')
return set()
def find_additions(context, arglist, add_name):
params = list(param.TreeArguments(context.evaluator, context, arglist).unpack())
result = set()
if add_name in ['insert']:
params = params[1:]
if add_name in ['append', 'add', 'insert']:
for key, lazy_context in params:
result.add(lazy_context)
elif add_name in ['extend', 'update']:
for key, lazy_context in params:
result |= set(py__iter__(context.evaluator, lazy_context.infer()))
return result
temp_param_add, settings.dynamic_params_for_other_modules = \
settings.dynamic_params_for_other_modules, False
is_list = sequence.name.string_name == 'list'
search_names = (['append', 'extend', 'insert'] if is_list else ['add', 'update'])
added_types = set()
for add_name in search_names:
try:
possible_names = module_context.tree_node.get_used_names()[add_name]
except KeyError:
continue
else:
for name in possible_names:
context_node = context.tree_node
if not (context_node.start_pos < name.start_pos < context_node.end_pos):
continue
trailer = name.parent
power = trailer.parent
trailer_pos = power.children.index(trailer)
try:
execution_trailer = power.children[trailer_pos + 1]
except IndexError:
continue
else:
if execution_trailer.type != 'trailer' \
or execution_trailer.children[0] != '(' \
or execution_trailer.children[1] == ')':
continue
random_context = context.create_context(name)
with recursion.execution_allowed(context.evaluator, power) as allowed:
if allowed:
found = helpers.evaluate_call_of_leaf(
random_context,
name,
cut_own_trailer=True
)
if sequence in found:
# The arrays match. Now add the results
added_types |= find_additions(
random_context,
execution_trailer.children[1],
add_name
)
# reset settings
settings.dynamic_params_for_other_modules = temp_param_add
debug.dbg('Dynamic array result %s' % added_types, color='MAGENTA')
return added_types
def get_dynamic_array_instance(instance):
"""Used for set() and list() instances."""
if not settings.dynamic_array_additions:
return instance.var_args
ai = _ArrayInstance(instance)
from jedi.evaluate import param
return param.ValuesArguments([[ai]])
class _ArrayInstance(object):
"""
Used to handle the usage of set() and list().
This is definitely a hack, but a good one :-)
It makes it possible to use set/list conversions.
In contrast to Array, ListComprehension and all other iterable types, this
is something that is only used inside `evaluate/compiled/fake/builtins.py`
and therefore doesn't need filters, `py__bool__` and so on, because
we don't use these operations in `builtins.py`.
"""
def __init__(self, instance):
self.instance = instance
self.var_args = instance.var_args
def py__iter__(self):
var_args = self.var_args
try:
_, lazy_context = next(var_args.unpack())
except StopIteration:
pass
else:
for lazy in py__iter__(self.instance.evaluator, lazy_context.infer()):
yield lazy
from jedi.evaluate import param
if isinstance(var_args, param.TreeArguments):
additions = _check_array_additions(var_args.context, self.instance)
for addition in additions:
yield addition
class Slice(context.Context):
def __init__(self, context, start, stop, step):
super(Slice, self).__init__(
context.evaluator,
parent_context=context.evaluator.BUILTINS
)
self._context = context
# all of them are either a Precedence or None.
self._start = start
self._stop = stop
self._step = step
@property
def obj(self):
"""
Imitate CompiledObject.obj behavior and return a ``builtin.slice()``
object.
"""
def get(element):
if element is None:
return None
result = self._context.eval_node(element)
if len(result) != 1:
# For simplicity, we want slices to be clearly defined with just
# one type. Otherwise we will return an empty slice object.
raise IndexError
try:
return list(result)[0].obj
except AttributeError:
return None
try:
return slice(get(self._start), get(self._stop), get(self._step))
except IndexError:
return slice(None, None, None)
def create_index_types(evaluator, context, index):
"""
Handles slices in subscript nodes.
"""
if index == ':':
# Like array[:]
return set([Slice(context, None, None, None)])
elif index.type == 'subscript' and not index.children[0] == '.':
# subscript basically implies a slice operation, except for Python 2's
# Ellipsis.
# e.g. array[:3]
result = []
for el in index.children:
if el == ':':
if not result:
result.append(None)
elif el.type == 'sliceop':
if len(el.children) == 2:
result.append(el.children[1])
else:
result.append(el)
result += [None] * (3 - len(result))
return set([Slice(context, *result)])
# No slices
return context.eval_node(index)


@@ -1,100 +0,0 @@
"""
This module is not intended to be used in jedi; rather, it is fed to the
jedi parser to replace the classes of the typing module.
"""
try:
from collections import abc
except ImportError:
# python 2
import collections as abc
def factory(typing_name, indextypes):
class Iterable(abc.Iterable):
def __iter__(self):
while True:
yield indextypes[0]()
class Iterator(Iterable, abc.Iterator):
def next(self):
""" needed for python 2 """
return self.__next__()
def __next__(self):
return indextypes[0]()
class Sequence(abc.Sequence):
def __getitem__(self, index):
return indextypes[0]()
class MutableSequence(Sequence, abc.MutableSequence):
pass
class List(MutableSequence, list):
pass
class Tuple(Sequence, tuple):
def __getitem__(self, index):
if indextypes[1] == Ellipsis:
# https://www.python.org/dev/peps/pep-0484/#the-typing-module
# Tuple[int, ...] means a tuple of ints of indeterminate length
return indextypes[0]()
else:
return indextypes[index]()
class AbstractSet(Iterable, abc.Set):
pass
class MutableSet(AbstractSet, abc.MutableSet):
pass
class KeysView(Iterable, abc.KeysView):
pass
class ValuesView(abc.ValuesView):
def __iter__(self):
while True:
yield indextypes[1]()
class ItemsView(abc.ItemsView):
def __iter__(self):
while True:
yield (indextypes[0](), indextypes[1]())
class Mapping(Iterable, abc.Mapping):
def __getitem__(self, item):
return indextypes[1]()
def keys(self):
return KeysView()
def values(self):
return ValuesView()
def items(self):
return ItemsView()
class MutableMapping(Mapping, abc.MutableMapping):
pass
class Dict(MutableMapping, dict):
pass
dct = {
"Sequence": Sequence,
"MutableSequence": MutableSequence,
"List": List,
"Iterable": Iterable,
"Iterator": Iterator,
"AbstractSet": AbstractSet,
"MutableSet": MutableSet,
"Mapping": Mapping,
"MutableMapping": MutableMapping,
"Tuple": Tuple,
"KeysView": KeysView,
"ItemsView": ItemsView,
"ValuesView": ValuesView,
"Dict": Dict,
}
return dct[typing_name]


@@ -1,433 +0,0 @@
from collections import defaultdict
from jedi._compatibility import zip_longest
from jedi import debug
from jedi import common
from parso.python import tree
from jedi.evaluate import iterable
from jedi.evaluate import analysis
from jedi.evaluate import context
from jedi.evaluate import docstrings
from jedi.evaluate import pep0484
from jedi.evaluate.filters import ParamName
def add_argument_issue(parent_context, error_name, lazy_context, message):
if isinstance(lazy_context, context.LazyTreeContext):
node = lazy_context.data
if node.parent.type == 'argument':
node = node.parent
analysis.add(parent_context, error_name, node, message)
def try_iter_content(types, depth=0):
"""Helper method for static analysis."""
if depth > 10:
# It's possible that a loop has references on itself (especially with
# CompiledObject). Therefore don't loop infinitely.
return
for typ in types:
try:
f = typ.py__iter__
except AttributeError:
pass
else:
for lazy_context in f():
try_iter_content(lazy_context.infer(), depth + 1)
class AbstractArguments():
context = None
def eval_argument_clinic(self, parameters):
"""Uses a list with argument clinic information (see PEP 436)."""
iterator = self.unpack()
for i, (name, optional, allow_kwargs) in enumerate(parameters):
key, argument = next(iterator, (None, None))
if key is not None:
raise NotImplementedError
if argument is None and not optional:
debug.warning('TypeError: %s expected at least %s arguments, got %s',
name, len(parameters), i)
raise ValueError
values = set() if argument is None else argument.infer()
if not values and not optional:
# For the stdlib we always want values. If we don't get them,
# that's ok, maybe something is too hard to resolve, however,
# we will not proceed with the evaluation of that function.
debug.warning('argument_clinic "%s" not resolvable.', name)
raise ValueError
yield values
def eval_all(self, funcdef=None):
"""
Evaluates all arguments as a support for static analysis
(normally Jedi).
"""
for key, lazy_context in self.unpack():
types = lazy_context.infer()
try_iter_content(types)
def get_calling_nodes(self):
raise NotImplementedError
def unpack(self, funcdef=None):
raise NotImplementedError
def get_params(self, execution_context):
return get_params(execution_context, self)
class AnonymousArguments(AbstractArguments):
def get_params(self, execution_context):
from jedi.evaluate.dynamic import search_params
return search_params(
execution_context.evaluator,
execution_context,
execution_context.tree_node
)
class TreeArguments(AbstractArguments):
def __init__(self, evaluator, context, argument_node, trailer=None):
"""
The argument_node is either a parser node or a list of evaluated
objects. Those evaluated objects may be lists of evaluated objects
themselves (one list for the first argument, one for the second, etc).
:param argument_node: May be an argument_node or a list of nodes.
"""
self.argument_node = argument_node
self.context = context
self._evaluator = evaluator
self.trailer = trailer # Can be None, e.g. in a class definition.
def _split(self):
if isinstance(self.argument_node, (tuple, list)):
for el in self.argument_node:
yield 0, el
else:
if not (self.argument_node.type == 'arglist' or (
# in python 3.5 **arg is an argument, not arglist
(self.argument_node.type == 'argument') and
self.argument_node.children[0] in ('*', '**'))):
yield 0, self.argument_node
return
iterator = iter(self.argument_node.children)
for child in iterator:
if child == ',':
continue
elif child in ('*', '**'):
yield len(child.value), next(iterator)
elif child.type == 'argument' and \
child.children[0] in ('*', '**'):
assert len(child.children) == 2
yield len(child.children[0].value), child.children[1]
else:
yield 0, child
def unpack(self, funcdef=None):
named_args = []
for star_count, el in self._split():
if star_count == 1:
arrays = self.context.eval_node(el)
iterators = [_iterate_star_args(self.context, a, el, funcdef)
for a in arrays]
iterators = list(iterators)
for values in list(zip_longest(*iterators)):
# TODO zip_longest yields None, that means this would raise
# an exception?
yield None, context.get_merged_lazy_context(
[v for v in values if v is not None]
)
elif star_count == 2:
arrays = self._evaluator.eval_element(self.context, el)
for dct in arrays:
for key, values in _star_star_dict(self.context, dct, el, funcdef):
yield key, values
else:
if el.type == 'argument':
c = el.children
if len(c) == 3: # Keyword argument.
named_args.append((c[0].value, context.LazyTreeContext(self.context, c[2]),))
else: # Generator comprehension.
# Include the brackets with the parent.
comp = iterable.GeneratorComprehension(
self._evaluator, self.context, self.argument_node.parent)
yield None, context.LazyKnownContext(comp)
else:
yield None, context.LazyTreeContext(self.context, el)
# Reordering var_args is necessary, because star args sometimes appear
# after named arguments, but in the actual order they're prepended.
for named_arg in named_args:
yield named_arg
def as_tree_tuple_objects(self):
for star_count, argument in self._split():
if argument.type == 'argument':
argument, default = argument.children[::2]
else:
default = None
yield argument, default, star_count
def __repr__(self):
return '<%s: %s>' % (self.__class__.__name__, self.argument_node)
def get_calling_nodes(self):
from jedi.evaluate.dynamic import MergedExecutedParams
old_arguments_list = []
arguments = self
while arguments not in old_arguments_list:
if not isinstance(arguments, TreeArguments):
break
old_arguments_list.append(arguments)
for name, default, star_count in reversed(list(arguments.as_tree_tuple_objects())):
if not star_count or not isinstance(name, tree.Name):
continue
names = self._evaluator.goto(arguments.context, name)
if len(names) != 1:
break
if not isinstance(names[0], ParamName):
break
param = names[0].get_param()
if isinstance(param, MergedExecutedParams):
# For dynamic searches we don't even want to see errors.
return []
if not isinstance(param, ExecutedParam):
break
if param.var_args is None:
break
arguments = param.var_args
break
return [arguments.argument_node or arguments.trailer]
class ValuesArguments(AbstractArguments):
def __init__(self, values_list):
self._values_list = values_list
def unpack(self, funcdef=None):
for values in self._values_list:
yield None, context.LazyKnownContexts(values)
def get_calling_nodes(self):
return []
def __repr__(self):
return '<%s: %s>' % (self.__class__.__name__, self._values_list)
class ExecutedParam(object):
"""Fake a param and give it values."""
def __init__(self, execution_context, param_node, lazy_context):
self._execution_context = execution_context
self._param_node = param_node
self._lazy_context = lazy_context
self.string_name = param_node.name.value
def infer(self):
pep0484_hints = pep0484.infer_param(self._execution_context, self._param_node)
doc_params = docstrings.infer_param(self._execution_context, self._param_node)
if pep0484_hints or doc_params:
return list(set(pep0484_hints) | set(doc_params))
return self._lazy_context.infer()
@property
def var_args(self):
return self._execution_context.var_args
def __repr__(self):
return '<%s: %s>' % (self.__class__.__name__, self.string_name)
def get_params(execution_context, var_args):
result_params = []
param_dict = {}
funcdef = execution_context.tree_node
parent_context = execution_context.parent_context
for param in funcdef.get_params():
param_dict[param.name.value] = param
unpacked_va = list(var_args.unpack(funcdef))
var_arg_iterator = common.PushBackIterator(iter(unpacked_va))
non_matching_keys = defaultdict(lambda: [])
keys_used = {}
keys_only = False
had_multiple_value_error = False
for param in funcdef.get_params():
# The value and key can both be None. In that case, the defaults apply.
# args / kwargs will just be empty arrays / dicts, respectively.
# Wrong value counts are just ignored. If you try cases that are not
# allowed in Python, Jedi may simply not show any completions.
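# Hedged example (comment added, not part of the original file): for
# ``def f(a, b=3, *args)`` called as ``f(1, 2, 4)``, the matching below pairs
# a -> 1 and b -> 2 and wraps the remaining ``4`` into a FakeSequence for
# ``*args``; calling ``f()`` instead leaves ``a`` as a LazyUnknownContext
# and records a 'type-error-too-few-arguments' issue.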
key, argument = next(var_arg_iterator, (None, None))
while key is not None:
keys_only = True
try:
key_param = param_dict[key]
except KeyError:
non_matching_keys[key] = argument
else:
if key in keys_used:
had_multiple_value_error = True
m = ("TypeError: %s() got multiple values for keyword argument '%s'."
% (funcdef.name, key))
for node in var_args.get_calling_nodes():
analysis.add(parent_context, 'type-error-multiple-values',
node, message=m)
else:
keys_used[key] = ExecutedParam(execution_context, key_param, argument)
key, argument = next(var_arg_iterator, (None, None))
try:
result_params.append(keys_used[param.name.value])
continue
except KeyError:
pass
if param.star_count == 1:
# *args param
lazy_context_list = []
if argument is not None:
lazy_context_list.append(argument)
for key, argument in var_arg_iterator:
# Iterate until a key argument is found.
if key:
var_arg_iterator.push_back((key, argument))
break
lazy_context_list.append(argument)
seq = iterable.FakeSequence(execution_context.evaluator, 'tuple', lazy_context_list)
result_arg = context.LazyKnownContext(seq)
elif param.star_count == 2:
# **kwargs param
dct = iterable.FakeDict(execution_context.evaluator, dict(non_matching_keys))
result_arg = context.LazyKnownContext(dct)
non_matching_keys = {}
else:
# normal param
if argument is None:
# No value: Return an empty container
if param.default is None:
result_arg = context.LazyUnknownContext()
if not keys_only:
for node in var_args.get_calling_nodes():
m = _error_argument_count(funcdef, len(unpacked_va))
analysis.add(parent_context, 'type-error-too-few-arguments',
node, message=m)
else:
result_arg = context.LazyTreeContext(parent_context, param.default)
else:
result_arg = argument
result_params.append(ExecutedParam(execution_context, param, result_arg))
if not isinstance(result_arg, context.LazyUnknownContext):
keys_used[param.name.value] = result_params[-1]
if keys_only:
# All arguments should be handed over to the next function. It's not
# about the values inside, it's about the names. Jedi needs to know that
# there's nothing to find for certain names.
for k in set(param_dict) - set(keys_used):
param = param_dict[k]
if not (non_matching_keys or had_multiple_value_error or
param.star_count or param.default):
# add a warning only if there's not another one.
for node in var_args.get_calling_nodes():
m = _error_argument_count(funcdef, len(unpacked_va))
analysis.add(parent_context, 'type-error-too-few-arguments',
node, message=m)
for key, lazy_context in non_matching_keys.items():
m = "TypeError: %s() got an unexpected keyword argument '%s'." \
% (funcdef.name, key)
add_argument_issue(
parent_context,
'type-error-keyword-argument',
lazy_context,
message=m
)
remaining_arguments = list(var_arg_iterator)
if remaining_arguments:
m = _error_argument_count(funcdef, len(unpacked_va))
# Just report an error for the first param that is not needed (like
# cPython).
first_key, lazy_context = remaining_arguments[0]
if var_args.get_calling_nodes():
# There might not be a valid calling node so check for that first.
add_argument_issue(parent_context, 'type-error-too-many-arguments', lazy_context, message=m)
return result_params
def _iterate_star_args(context, array, input_node, funcdef=None):
try:
iter_ = array.py__iter__
except AttributeError:
if funcdef is not None:
# TODO this funcdef should not be needed.
m = "TypeError: %s() argument after * must be a sequence, not %s" \
% (funcdef.name.value, array)
analysis.add(context, 'type-error-star', input_node, message=m)
else:
for lazy_context in iter_():
yield lazy_context
def _star_star_dict(context, array, input_node, funcdef):
from jedi.evaluate.instance import CompiledInstance
if isinstance(array, CompiledInstance) and array.name.string_name == 'dict':
# For now ignore this case. In the future add proper iterators and just
# make one call without crazy isinstance checks.
return {}
elif isinstance(array, iterable.AbstractSequence) and array.array_type == 'dict':
return array.exact_key_items()
else:
if funcdef is not None:
m = "TypeError: %s argument after ** must be a mapping, not %s" \
% (funcdef.name.value, array)
analysis.add(context, 'type-error-star-star', input_node, message=m)
return {}
def _error_argument_count(funcdef, actual_count):
params = funcdef.get_params()
default_arguments = sum(1 for p in params if p.default or p.star_count)
if default_arguments == 0:
before = 'exactly '
else:
before = 'from %s to ' % (len(params) - default_arguments)
return ('TypeError: %s() takes %s%s arguments (%s given).'
% (funcdef.name, before, len(params), actual_count))
def _create_default_param(execution_context, param):
if param.star_count == 1:
result_arg = context.LazyKnownContext(
iterable.FakeSequence(execution_context.evaluator, 'tuple', [])
)
elif param.star_count == 2:
result_arg = context.LazyKnownContext(
iterable.FakeDict(execution_context.evaluator, {})
)
elif param.default is None:
result_arg = context.LazyUnknownContext()
else:
result_arg = context.LazyTreeContext(execution_context.parent_context, param.default)
return ExecutedParam(execution_context, param, result_arg)
def create_default_params(execution_context, funcdef):
return [_create_default_param(execution_context, p)
for p in funcdef.get_params()]


@@ -1,6 +0,0 @@
from jedi.evaluate.cache import evaluator_function_cache
@evaluator_function_cache()
def get_yield_exprs(evaluator, funcdef):
return list(funcdef.iter_yield_exprs())


@@ -1,224 +0,0 @@
"""
PEP 0484 ( https://www.python.org/dev/peps/pep-0484/ ) describes type hints
through function annotations. There is a strong suggestion in this document
that only the kind of type hinting defined in PEP 0484 should be allowed
as annotations in future python versions.
The (initial / probably incomplete) implementation todo list for pep-0484:
v Function parameter annotations with builtin/custom type classes
v Function returntype annotations with builtin/custom type classes
v Function parameter annotations with strings (forward reference)
v Function return type annotations with strings (forward reference)
v Local variable type hints
v Assigned types: `Url = str\ndef get(url:Url) -> str:`
v Type hints in `with` statements
x Stub files support
x support `@no_type_check` and `@no_type_check_decorator`
x support for typing.cast() operator
x support for type hint comments for functions, `# type: (int, str) -> int`.
See comment from Guido https://github.com/davidhalter/jedi/issues/662
"""
import itertools
import os
import re
from parso import ParserSyntaxError
from parso.python import tree
from jedi.common import unite
from jedi.evaluate.cache import evaluator_method_cache
from jedi.evaluate import compiled
from jedi.evaluate.context import LazyTreeContext
from jedi import debug
from jedi import _compatibility
from jedi import parser_utils
def _evaluate_for_annotation(context, annotation, index=None):
"""
Evaluates an annotation node (which may be a string forward reference).
If index is not None, the annotation is expected to be a tuple
and we're interested in that index.
"""
if annotation is not None:
definitions = context.eval_node(
_fix_forward_reference(context, annotation))
if index is not None:
definitions = list(itertools.chain.from_iterable(
definition.py__getitem__(index) for definition in definitions
if definition.array_type == 'tuple' and
len(list(definition.py__iter__())) >= index))
return unite(d.execute_evaluated() for d in definitions)
else:
return set()
def _fix_forward_reference(context, node):
evaled_nodes = context.eval_node(node)
if len(evaled_nodes) != 1:
debug.warning("Eval'ed typing index %s should lead to 1 object, "
" not %s" % (node, evaled_nodes))
return node
evaled_node = list(evaled_nodes)[0]
if isinstance(evaled_node, compiled.CompiledObject) and \
isinstance(evaled_node.obj, str):
try:
new_node = context.evaluator.grammar.parse(
_compatibility.unicode(evaled_node.obj),
start_symbol='eval_input',
error_recovery=False
)
except ParserSyntaxError:
debug.warning('Annotation not parsed: %s' % evaled_node.obj)
return node
else:
module = node.get_root_node()
parser_utils.move(new_node, module.end_pos[0])
new_node.parent = context.tree_node
return new_node
else:
return node
@evaluator_method_cache()
def infer_param(execution_context, param):
annotation = param.annotation
module_context = execution_context.get_root_context()
return _evaluate_for_annotation(module_context, annotation)
def py__annotations__(funcdef):
return_annotation = funcdef.annotation
if return_annotation:
dct = {'return': return_annotation}
else:
dct = {}
for function_param in funcdef.get_params():
param_annotation = function_param.annotation
if param_annotation is not None:
dct[function_param.name.value] = param_annotation
return dct
@evaluator_method_cache()
def infer_return_types(function_context):
annotation = py__annotations__(function_context.tree_node).get("return", None)
module_context = function_context.get_root_context()
return _evaluate_for_annotation(module_context, annotation)
_typing_module = None
def _get_typing_replacement_module(grammar):
"""
The idea is to return our jedi replacement for the PEP-0484 typing module
as discussed at https://github.com/davidhalter/jedi/issues/663
"""
global _typing_module
if _typing_module is None:
typing_path = \
os.path.abspath(os.path.join(__file__, "../jedi_typing.py"))
with open(typing_path) as f:
code = _compatibility.unicode(f.read())
_typing_module = grammar.parse(code)
return _typing_module
def py__getitem__(context, typ, node):
if not typ.get_root_context().name.string_name == "typing":
return None
# We assume that any class using [] in a module called "typing" with a name
# for which we have a replacement should be replaced by that class. This is
# not 100% airtight, but there is no better way to check that it's actually
# the PEP-0484 typing module and not some other module with the same name.
if node.type == "subscriptlist":
nodes = node.children[::2] # skip the commas
else:
nodes = [node]
del node
nodes = [_fix_forward_reference(context, node) for node in nodes]
type_name = typ.name.string_name
# hacked in Union and Optional, since it's hard to do nicely in parsed code
if type_name in ("Union", '_Union'):
# In Python 3.6 it's still called typing.Union but it's an instance
# called _Union.
return unite(context.eval_node(node) for node in nodes)
if type_name in ("Optional", '_Optional'):
# Here we have the same issue like in Union. Therefore we also need to
# check for the instance typing._Optional (Python 3.6).
return context.eval_node(nodes[0])
from jedi.evaluate.representation import ModuleContext
typing = ModuleContext(
context.evaluator,
module_node=_get_typing_replacement_module(context.evaluator.latest_grammar),
path=None
)
factories = typing.py__getattribute__("factory")
assert len(factories) == 1
factory = list(factories)[0]
assert factory
function_body_nodes = factory.tree_node.children[4].children
valid_classnames = set(child.name.value
for child in function_body_nodes
if isinstance(child, tree.Class))
if type_name not in valid_classnames:
return None
compiled_classname = compiled.create(context.evaluator, type_name)
from jedi.evaluate.iterable import FakeSequence
args = FakeSequence(
context.evaluator,
"tuple",
[LazyTreeContext(context, n) for n in nodes]
)
result = factory.execute_evaluated(compiled_classname, args)
return result
def find_type_from_comment_hint_for(context, node, name):
return _find_type_from_comment_hint(context, node, node.children[1], name)
def find_type_from_comment_hint_with(context, node, name):
assert len(node.children[1].children) == 3, \
"Can only be here when children[1] is 'foo() as f'"
varlist = node.children[1].children[2]
return _find_type_from_comment_hint(context, node, varlist, name)
def find_type_from_comment_hint_assign(context, node, name):
return _find_type_from_comment_hint(context, node, node.children[0], name)
def _find_type_from_comment_hint(context, node, varlist, name):
index = None
if varlist.type in ("testlist_star_expr", "exprlist", "testlist"):
# something like "a, b = 1, 2"
index = 0
for child in varlist.children:
if child == name:
break
if child.type == "operator":
continue
index += 1
else:
return []
comment = parser_utils.get_following_comment_same_line(node)
if comment is None:
return []
match = re.match(r"^#\s*type:\s*([^#]*)", comment)
if not match:
return []
annotation = tree.String(
repr(str(match.group(1).strip())),
node.start_pos)
annotation.parent = node.parent
return _evaluate_for_annotation(context, annotation, index)


@@ -1,178 +0,0 @@
"""
Handles operator precedence.
"""
import operator as op
from jedi._compatibility import unicode
from jedi import debug
from jedi.evaluate.compiled import CompiledObject, create, builtin_from_name
from jedi.evaluate import analysis
# Maps Python syntax to the operator module.
COMPARISON_OPERATORS = {
'==': op.eq,
'!=': op.ne,
'is': op.is_,
'is not': op.is_not,
'<': op.lt,
'<=': op.le,
'>': op.gt,
'>=': op.ge,
}
def literals_to_types(evaluator, result):
# Changes literals ('a', 1, 1.0, etc) to their type instances (str(),
# int(), float(), etc).
new_result = set()
for typ in result:
if is_literal(typ):
# Literals are only valid as long as the operations are
# correct. Otherwise add a value-free instance.
cls = builtin_from_name(evaluator, typ.name.string_name)
new_result |= cls.execute_evaluated()
else:
new_result.add(typ)
return new_result
def calculate_children(evaluator, context, children):
"""
Evaluate a sequence of child nodes that are joined by binary operators.
"""
iterator = iter(children)
types = context.eval_node(next(iterator))
for operator in iterator:
right = next(iterator)
if operator.type == 'comp_op': # not in / is not
operator = ' '.join(c.value for c in operator.children)
# handle lazy evaluation of and/or here.
if operator in ('and', 'or'):
left_bools = set([left.py__bool__() for left in types])
if left_bools == set([True]):
if operator == 'and':
types = context.eval_node(right)
elif left_bools == set([False]):
if operator != 'and':
types = context.eval_node(right)
# Otherwise continue, because of uncertainty.
else:
types = calculate(evaluator, context, types, operator,
context.eval_node(right))
debug.dbg('calculate_children types %s', types)
return types
def calculate(evaluator, context, left_result, operator, right_result):
result = set()
if not left_result or not right_result:
# illegal slices e.g. cause left/right_result to be None
result = (left_result or set()) | (right_result or set())
result = literals_to_types(evaluator, result)
else:
# I don't think there's a reasonable chance that a string
# operation is still correct, once we pass something like six
# objects.
if len(left_result) * len(right_result) > 6:
result = literals_to_types(evaluator, left_result | right_result)
else:
for left in left_result:
for right in right_result:
result |= _element_calculate(evaluator, context, left, operator, right)
return result
def factor_calculate(evaluator, types, operator):
"""
Calculates `+`, `-`, `~` and `not` prefixes.
"""
for typ in types:
if operator == '-':
if _is_number(typ):
yield create(evaluator, -typ.obj)
elif operator == 'not':
value = typ.py__bool__()
if value is None: # Uncertainty.
return
yield create(evaluator, not value)
else:
yield typ
def _is_number(obj):
return isinstance(obj, CompiledObject) \
and isinstance(obj.obj, (int, float))
def is_string(obj):
return isinstance(obj, CompiledObject) \
and isinstance(obj.obj, (str, unicode))
def is_literal(obj):
return _is_number(obj) or is_string(obj)
def _is_tuple(obj):
from jedi.evaluate import iterable
return isinstance(obj, iterable.AbstractSequence) and obj.array_type == 'tuple'
def _is_list(obj):
from jedi.evaluate import iterable
return isinstance(obj, iterable.AbstractSequence) and obj.array_type == 'list'
def _element_calculate(evaluator, context, left, operator, right):
from jedi.evaluate import iterable, instance
l_is_num = _is_number(left)
r_is_num = _is_number(right)
if operator == '*':
# for iterables, ignore * operations
if isinstance(left, iterable.AbstractSequence) or is_string(left):
return set([left])
elif isinstance(right, iterable.AbstractSequence) or is_string(right):
return set([right])
elif operator == '+':
if l_is_num and r_is_num or is_string(left) and is_string(right):
return set([create(evaluator, left.obj + right.obj)])
elif _is_tuple(left) and _is_tuple(right) or _is_list(left) and _is_list(right):
return set([iterable.MergedArray(evaluator, (left, right))])
elif operator == '-':
if l_is_num and r_is_num:
return set([create(evaluator, left.obj - right.obj)])
elif operator == '%':
# With strings and numbers the left type typically remains. Except for
# `int() % float()`.
return set([left])
elif operator in COMPARISON_OPERATORS:
operation = COMPARISON_OPERATORS[operator]
if isinstance(left, CompiledObject) and isinstance(right, CompiledObject):
# Possible, because the return is not an option. Just compare.
left = left.obj
right = right.obj
try:
result = operation(left, right)
except TypeError:
# Could be True or False.
return set([create(evaluator, True), create(evaluator, False)])
else:
return set([create(evaluator, result)])
elif operator == 'in':
return set()
def check(obj):
"""Checks if a Jedi object is either a float or an int."""
return isinstance(obj, instance.CompiledInstance) and \
obj.name.string_name in ('int', 'float')
# Static analysis, one is a number, the other one is not.
if operator in ('+', '-') and l_is_num != r_is_num \
and not (check(left) or check(right)):
message = "TypeError: unsupported operand type(s) for +: %s and %s"
analysis.add(context, 'type-error-operation', operator,
message % (left, right))
return set([left, right])


@@ -1,684 +0,0 @@
"""
Like described in the :mod:`parso.python.tree` module,
there's a need for an ast like module to represent the states of parsed
modules.
But now there are also structures in Python that need a little bit more than
that. An ``Instance`` for example is only a ``Class`` before it is
instantiated. This class represents these cases.
So, why is there also a ``Class`` class here? Well, there are decorators and
they change classes in Python 3.
Representation modules also define "magic methods". Those methods look like
``py__foo__`` and are typically mappable to the Python equivalents ``__call__``
and others. Here's a list:
====================================== ========================================
**Method**                             **Description**
-------------------------------------- ----------------------------------------
py__call__(params: Array)              On callable objects, returns types.
py__bool__()                           Returns True/False/None; None means that
                                       there's no certainty.
py__bases__()                          Returns a list of base classes.
py__mro__()                            Returns a list of classes (the mro).
py__iter__()                           Returns a generator of a set of types.
py__class__()                          Returns the class of an instance.
py__getitem__(index: int/str)          Returns a set of types for the given
                                       index. Can raise an IndexError/KeyError.
py__file__()                           Only on modules. Returns None if it does
                                       not exist.
py__package__()                        Only on modules. For the import system.
py__path__()                           Only on modules. For the import system.
py__get__(call_object)                 Only on instances. Simulates
                                       descriptors.
py__doc__(include_call_signature:      Returns the docstring for a context.
          bool)
====================================== ========================================
"""
import os
import pkgutil
import imp
import re
from itertools import chain
from parso.python import tree
from parso import python_bytes_to_unicode
from jedi._compatibility import use_metaclass
from jedi import debug
from jedi.evaluate.cache import evaluator_method_cache, CachedMetaClass
from jedi.evaluate import compiled
from jedi.evaluate import recursion
from jedi.evaluate import iterable
from jedi.evaluate import docstrings
from jedi.evaluate import pep0484
from jedi.evaluate import param
from jedi.evaluate import flow_analysis
from jedi.evaluate import imports
from jedi.evaluate import helpers
from jedi.evaluate.filters import ParserTreeFilter, FunctionExecutionFilter, \
GlobalNameFilter, DictFilter, ContextName, AbstractNameDefinition, \
ParamName, AnonymousInstanceParamName, TreeNameDefinition, \
ContextNameMixin
from jedi.evaluate import context
from jedi.evaluate.context import ContextualizedNode
from jedi import parser_utils
from jedi.evaluate.parser_cache import get_yield_exprs
def apply_py__get__(context, base_context):
try:
method = context.py__get__
except AttributeError:
yield context
else:
for descriptor_context in method(base_context):
yield descriptor_context
class ClassName(TreeNameDefinition):
def __init__(self, parent_context, tree_name, name_context):
super(ClassName, self).__init__(parent_context, tree_name)
self._name_context = name_context
def infer(self):
# TODO this _name_to_types might get refactored and be a part of the
# parent class. Once it is, we can probably just overwrite method to
# achieve this.
from jedi.evaluate.finder import _name_to_types
inferred = _name_to_types(
self.parent_context.evaluator, self._name_context, self.tree_name)
for result_context in inferred:
for c in apply_py__get__(result_context, self.parent_context):
yield c
class ClassFilter(ParserTreeFilter):
name_class = ClassName
def _convert_names(self, names):
return [self.name_class(self.context, name, self._node_context)
for name in names]
class ClassContext(use_metaclass(CachedMetaClass, context.TreeContext)):
"""
This class is not only important for extending `tree.Class`, it is also
important for descriptors (whether descriptor methods are evaluated or not).
"""
api_type = 'class'
def __init__(self, evaluator, classdef, parent_context):
super(ClassContext, self).__init__(evaluator, parent_context=parent_context)
self.tree_node = classdef
@evaluator_method_cache(default=())
def py__mro__(self):
def add(cls):
if cls not in mro:
mro.append(cls)
mro = [self]
# TODO Do a proper mro resolution. Currently we are just listing
# classes. However, it's a complicated algorithm.
for lazy_cls in self.py__bases__():
# TODO there's multiple different mro paths possible if this yields
# multiple possibilities. Could be changed to be more correct.
for cls in lazy_cls.infer():
# TODO detect for TypeError: duplicate base class str,
# e.g. `class X(str, str): pass`
try:
mro_method = cls.py__mro__
except AttributeError:
# TODO add a TypeError like:
"""
>>> class Y(lambda: test): pass
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: function() argument 1 must be code, not str
>>> class Y(1): pass
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: int() takes at most 2 arguments (3 given)
"""
pass
else:
add(cls)
for cls_new in mro_method():
add(cls_new)
return tuple(mro)
@evaluator_method_cache(default=())
def py__bases__(self):
arglist = self.tree_node.get_super_arglist()
if arglist:
args = param.TreeArguments(self.evaluator, self, arglist)
return [value for key, value in args.unpack() if key is None]
else:
return [context.LazyKnownContext(compiled.create(self.evaluator, object))]
def py__call__(self, params):
from jedi.evaluate.instance import TreeInstance
return set([TreeInstance(self.evaluator, self.parent_context, self, params)])
def py__class__(self):
return compiled.create(self.evaluator, type)
def get_params(self):
from jedi.evaluate.instance import AnonymousInstance
anon = AnonymousInstance(self.evaluator, self.parent_context, self)
return [AnonymousInstanceParamName(anon, param.name) for param in self.funcdef.get_params()]
def get_filters(self, search_global, until_position=None, origin_scope=None, is_instance=False):
if search_global:
yield ParserTreeFilter(
self.evaluator,
context=self,
until_position=until_position,
origin_scope=origin_scope
)
else:
for cls in self.py__mro__():
if isinstance(cls, compiled.CompiledObject):
for filter in cls.get_filters(is_instance=is_instance):
yield filter
else:
yield ClassFilter(
self.evaluator, self, node_context=cls,
origin_scope=origin_scope)
def is_class(self):
return True
def get_function_slot_names(self, name):
for filter in self.get_filters(search_global=False):
names = filter.get(name)
if names:
return names
return []
def get_param_names(self):
for name in self.get_function_slot_names('__init__'):
for context_ in name.infer():
try:
method = context_.get_param_names
except AttributeError:
pass
else:
return list(method())[1:]
return []
@property
def name(self):
return ContextName(self, self.tree_node.name)
class LambdaName(AbstractNameDefinition):
string_name = '<lambda>'
def __init__(self, lambda_context):
self._lambda_context = lambda_context
self.parent_context = lambda_context.parent_context
def start_pos(self):
return self._lambda_context.tree_node.start_pos
def infer(self):
return set([self._lambda_context])
class FunctionContext(use_metaclass(CachedMetaClass, context.TreeContext)):
"""
Needed because of decorators. Decorators are evaluated here.
"""
api_type = 'function'
def __init__(self, evaluator, parent_context, funcdef):
""" This should not be called directly """
super(FunctionContext, self).__init__(evaluator, parent_context)
self.tree_node = funcdef
def get_filters(self, search_global, until_position=None, origin_scope=None):
if search_global:
yield ParserTreeFilter(
self.evaluator,
context=self,
until_position=until_position,
origin_scope=origin_scope
)
else:
scope = self.py__class__()
for filter in scope.get_filters(search_global=False, origin_scope=origin_scope):
yield filter
def infer_function_execution(self, function_execution):
"""
Created to be used by inheritance.
"""
yield_exprs = get_yield_exprs(self.evaluator, self.tree_node)
if yield_exprs:
return set([iterable.Generator(self.evaluator, function_execution)])
else:
return function_execution.get_return_values()
def get_function_execution(self, arguments=None):
if arguments is None:
arguments = param.AnonymousArguments()
return FunctionExecutionContext(self.evaluator, self.parent_context, self, arguments)
def py__call__(self, arguments):
function_execution = self.get_function_execution(arguments)
return self.infer_function_execution(function_execution)
def py__class__(self):
# This differentiation is only necessary for Python2. Python3 does not
# use a different method class.
if isinstance(parser_utils.get_parent_scope(self.tree_node), tree.Class):
name = 'METHOD_CLASS'
else:
name = 'FUNCTION_CLASS'
return compiled.get_special_object(self.evaluator, name)
@property
def name(self):
if self.tree_node.type == 'lambdef':
return LambdaName(self)
return ContextName(self, self.tree_node.name)
def get_param_names(self):
function_execution = self.get_function_execution()
return [ParamName(function_execution, param.name)
for param in self.tree_node.get_params()]
class FunctionExecutionContext(context.TreeContext):
"""
This class is used to evaluate functions and their returns.
This is the most complicated class, because it contains the logic to
transfer parameters. It is even more complicated, because there may be
multiple calls to functions and recursion has to be avoided. But this is
the responsibility of the decorators.
"""
function_execution_filter = FunctionExecutionFilter
def __init__(self, evaluator, parent_context, function_context, var_args):
super(FunctionExecutionContext, self).__init__(evaluator, parent_context)
self.function_context = function_context
self.tree_node = function_context.tree_node
self.var_args = var_args
@evaluator_method_cache(default=set())
@recursion.execution_recursion_decorator()
def get_return_values(self, check_yields=False):
funcdef = self.tree_node
if funcdef.type == 'lambdef':
return self.evaluator.eval_element(self, funcdef.children[-1])
if check_yields:
types = set()
returns = get_yield_exprs(self.evaluator, funcdef)
else:
returns = funcdef.iter_return_stmts()
types = set(docstrings.infer_return_types(self.function_context))
types |= set(pep0484.infer_return_types(self.function_context))
for r in returns:
check = flow_analysis.reachability_check(self, funcdef, r)
if check is flow_analysis.UNREACHABLE:
debug.dbg('Return unreachable: %s', r)
else:
if check_yields:
types |= set(self._eval_yield(r))
else:
try:
children = r.children
except AttributeError:
types.add(compiled.create(self.evaluator, None))
else:
types |= self.eval_node(children[1])
if check is flow_analysis.REACHABLE:
debug.dbg('Return reachable: %s', r)
break
return types
def _eval_yield(self, yield_expr):
if yield_expr.type == 'keyword':
# `yield` just yields None.
yield context.LazyKnownContext(compiled.create(self.evaluator, None))
return
node = yield_expr.children[1]
if node.type == 'yield_arg': # It must be a yield from.
cn = ContextualizedNode(self, node.children[1])
for lazy_context in iterable.py__iter__(self.evaluator, cn.infer(), cn):
yield lazy_context
else:
yield context.LazyTreeContext(self, node)
@recursion.execution_recursion_decorator(default=iter([]))
def get_yield_values(self):
for_parents = [(y, tree.search_ancestor(y, 'for_stmt', 'funcdef',
'while_stmt', 'if_stmt'))
for y in get_yield_exprs(self.evaluator, self.tree_node)]
# Calculate if the yields are placed within the same for loop.
yields_order = []
last_for_stmt = None
for yield_, for_stmt in for_parents:
# For really simple for loops we can predict the order. Otherwise
# we just ignore it.
parent = for_stmt.parent
if parent.type == 'suite':
parent = parent.parent
if for_stmt.type == 'for_stmt' and parent == self.tree_node \
and parser_utils.for_stmt_defines_one_name(for_stmt): # Simplicity for now.
if for_stmt == last_for_stmt:
yields_order[-1][1].append(yield_)
else:
yields_order.append((for_stmt, [yield_]))
elif for_stmt == self.tree_node:
yields_order.append((None, [yield_]))
else:
types = self.get_return_values(check_yields=True)
if types:
yield context.get_merged_lazy_context(list(types))
return
last_for_stmt = for_stmt
evaluator = self.evaluator
for for_stmt, yields in yields_order:
if for_stmt is None:
# No for_stmt, just normal yields.
for yield_ in yields:
for result in self._eval_yield(yield_):
yield result
else:
input_node = for_stmt.get_testlist()
cn = ContextualizedNode(self, input_node)
ordered = iterable.py__iter__(evaluator, cn.infer(), cn)
ordered = list(ordered)
for lazy_context in ordered:
dct = {str(for_stmt.children[1].value): lazy_context.infer()}
with helpers.predefine_names(self, for_stmt, dct):
for yield_in_same_for_stmt in yields:
for result in self._eval_yield(yield_in_same_for_stmt):
yield result
def get_filters(self, search_global, until_position=None, origin_scope=None):
yield self.function_execution_filter(self.evaluator, self,
until_position=until_position,
origin_scope=origin_scope)
@evaluator_method_cache()
def get_params(self):
return self.var_args.get_params(self)
class ModuleAttributeName(AbstractNameDefinition):
"""
For module attributes like __file__, __name__ and so on.
"""
api_type = 'instance'
def __init__(self, parent_module, string_name):
self.parent_context = parent_module
self.string_name = string_name
def infer(self):
return compiled.create(self.parent_context.evaluator, str).execute(
param.ValuesArguments([])
)
class ModuleName(ContextNameMixin, AbstractNameDefinition):
start_pos = 1, 0
def __init__(self, context, name):
self._context = context
self._name = name
@property
def string_name(self):
return self._name
class ModuleContext(use_metaclass(CachedMetaClass, context.TreeContext)):
api_type = 'module'
parent_context = None
def __init__(self, evaluator, module_node, path):
super(ModuleContext, self).__init__(evaluator, parent_context=None)
self.tree_node = module_node
self._path = path
def get_filters(self, search_global, until_position=None, origin_scope=None):
yield ParserTreeFilter(
self.evaluator,
context=self,
until_position=until_position,
origin_scope=origin_scope
)
yield GlobalNameFilter(self, self.tree_node)
yield DictFilter(self._sub_modules_dict())
yield DictFilter(self._module_attributes_dict())
for star_module in self.star_imports():
yield next(star_module.get_filters(search_global))
# I'm not sure if the star import cache is really that effective anymore
# with all the other really fast import caches. Recheck. Also we would need
# to push the star imports into Evaluator.modules, if we reenable this.
@evaluator_method_cache([])
def star_imports(self):
modules = []
for i in self.tree_node.iter_imports():
if i.is_star_import():
name = i.get_paths()[-1][-1]
new = imports.infer_import(self, name)
for module in new:
if isinstance(module, ModuleContext):
modules += module.star_imports()
modules += new
return modules
@evaluator_method_cache()
def _module_attributes_dict(self):
names = ['__file__', '__package__', '__doc__', '__name__']
# All the additional module attributes are strings.
return dict((n, ModuleAttributeName(self, n)) for n in names)
@property
def _string_name(self):
""" This is used for the goto functions. """
if self._path is None:
return '' # no path -> empty name
else:
sep = (re.escape(os.path.sep),) * 2
r = re.search(r'([^%s]*?)(%s__init__)?(\.py|\.so)?$' % sep, self._path)
# Remove PEP 3149 names
return re.sub('\.[a-z]+-\d{2}[mud]{0,3}$', '', r.group(1))
@property
@evaluator_method_cache()
def name(self):
return ModuleName(self, self._string_name)
def _get_init_directory(self):
"""
:return: The path to the directory of a package. None in case it's not
a package.
"""
for suffix, _, _ in imp.get_suffixes():
ending = '__init__' + suffix
py__file__ = self.py__file__()
if py__file__ is not None and py__file__.endswith(ending):
# Remove the ending, including the separator.
return self.py__file__()[:-len(ending) - 1]
return None
def py__name__(self):
for name, module in self.evaluator.modules.items():
if module == self and name != '':
return name
return '__main__'
def py__file__(self):
"""
In contrast to Python's ``__file__``, this can be None.
"""
if self._path is None:
return None
return os.path.abspath(self._path)
def py__package__(self):
if self._get_init_directory() is None:
return re.sub(r'\.?[^\.]+$', '', self.py__name__())
else:
return self.py__name__()
def _py__path__(self):
search_path = self.evaluator.sys_path
init_path = self.py__file__()
if os.path.basename(init_path) == '__init__.py':
with open(init_path, 'rb') as f:
content = python_bytes_to_unicode(f.read(), errors='replace')
# these are strings that need to be used for namespace packages,
# the first one is ``pkgutil``, the second ``pkg_resources``.
options = ('declare_namespace(__name__)', 'extend_path(__path__')
if options[0] in content or options[1] in content:
# It is a namespace, now try to find the rest of the
# modules on sys_path or whatever the search_path is.
paths = set()
for s in search_path:
other = os.path.join(s, self.name.string_name)
if os.path.isdir(other):
paths.add(other)
if paths:
return list(paths)
# TODO I'm not sure if this is how nested namespace
# packages work. The tests are not really good enough to
# show that.
# Default to this.
return [self._get_init_directory()]
@property
def py__path__(self):
"""
Not seen here, since it's a property. It returns a callback that takes no
arguments, so use it like::
foo.py__path__()
In case of a package, this returns Python's __path__ attribute, which
is a list of paths (strings).
Raises an AttributeError if the module is not a package.
"""
path = self._get_init_directory()
if path is None:
raise AttributeError('Only packages have __path__ attributes.')
else:
return self._py__path__
@evaluator_method_cache()
def _sub_modules_dict(self):
"""
Lists modules in the directory of this module (if this module is a
package).
"""
path = self._path
names = {}
if path is not None and path.endswith(os.path.sep + '__init__.py'):
mods = pkgutil.iter_modules([os.path.dirname(path)])
for module_loader, name, is_pkg in mods:
# It's obviously a relative import to the current module.
names[name] = imports.SubModuleName(self, name)
# TODO add something like this in the future, it's cleaner than the
# import hacks.
# ``os.path`` is a hardcoded exception, because it's a
# ``sys.modules`` modification.
# if str(self.name) == 'os':
# names.append(Name('path', parent_context=self))
return names
def py__class__(self):
return compiled.get_special_object(self.evaluator, 'MODULE_CLASS')
def __repr__(self):
return "<%s: %s@%s-%s>" % (
self.__class__.__name__, self._string_name,
self.tree_node.start_pos[0], self.tree_node.end_pos[0])
class ImplicitNSName(AbstractNameDefinition):
"""
Accessing names for implicit namespace packages should infer to nothing.
This object will prevent Jedi from raising exceptions.
"""
def __init__(self, implicit_ns_context, string_name):
self.implicit_ns_context = implicit_ns_context
self.string_name = string_name
def infer(self):
return []
def get_root_context(self):
return self.implicit_ns_context
class ImplicitNamespaceContext(use_metaclass(CachedMetaClass, context.TreeContext)):
"""
Provides support for implicit namespace packages
"""
api_type = 'module'
parent_context = None
def __init__(self, evaluator, fullname):
super(ImplicitNamespaceContext, self).__init__(evaluator, parent_context=None)
self.evaluator = evaluator
self.fullname = fullname
def get_filters(self, search_global, until_position=None, origin_scope=None):
yield DictFilter(self._sub_modules_dict())
@property
@evaluator_method_cache()
def name(self):
string_name = self.py__package__().rpartition('.')[-1]
return ImplicitNSName(self, string_name)
def py__file__(self):
return None
def py__package__(self):
"""Return the fullname
"""
return self.fullname
@property
def py__path__(self):
return lambda: [self.paths]
@evaluator_method_cache()
def _sub_modules_dict(self):
names = {}
paths = self.paths
file_names = chain.from_iterable(os.listdir(path) for path in paths)
mods = [
file_name.rpartition('.')[0] if '.' in file_name else file_name
for file_name in file_names
if file_name != '__pycache__'
]
for name in mods:
names[name] = imports.SubModuleName(self, name)
return names


@@ -1,110 +0,0 @@
"""An adapted copy of relevant site-packages functionality from Python stdlib.
This file contains some functions related to handling site-packages in Python
with jedi-specific modifications:
- the functions operate on sys_path argument rather than global sys.path
- in .pth files "import ..." lines that allow execution of arbitrary code are
skipped to prevent code injection into jedi interpreter
"""
# Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010,
# 2011, 2012, 2013, 2014, 2015 Python Software Foundation; All Rights Reserved
from __future__ import print_function
import sys
import os
def makepath(*paths):
dir = os.path.join(*paths)
try:
dir = os.path.abspath(dir)
except OSError:
pass
return dir, os.path.normcase(dir)
def _init_pathinfo(sys_path):
"""Return a set containing all existing directory entries from sys_path"""
d = set()
for dir in sys_path:
try:
if os.path.isdir(dir):
dir, dircase = makepath(dir)
d.add(dircase)
except TypeError:
continue
return d
def addpackage(sys_path, sitedir, name, known_paths):
"""Process a .pth file within the site-packages directory:
For each line in the file, combine it with sitedir to a path and add that
to known_paths. Lines starting with 'import ' are skipped here (unlike the
stdlib version, which executes them).
"""
if known_paths is None:
known_paths = _init_pathinfo(sys_path)
reset = 1
else:
reset = 0
fullname = os.path.join(sitedir, name)
try:
f = open(fullname, "r")
except OSError:
return
with f:
for n, line in enumerate(f):
if line.startswith("#"):
continue
try:
if line.startswith(("import ", "import\t")):
# Change by immerrr: don't evaluate import lines to prevent
# code injection into jedi through pth files.
#
# exec(line)
continue
line = line.rstrip()
dir, dircase = makepath(sitedir, line)
if not dircase in known_paths and os.path.exists(dir):
sys_path.append(dir)
known_paths.add(dircase)
except Exception:
print("Error processing line {:d} of {}:\n".format(n+1, fullname),
file=sys.stderr)
import traceback
for record in traceback.format_exception(*sys.exc_info()):
for line in record.splitlines():
print(' '+line, file=sys.stderr)
print("\nRemainder of file ignored", file=sys.stderr)
break
if reset:
known_paths = None
return known_paths
def addsitedir(sys_path, sitedir, known_paths=None):
"""Add 'sitedir' argument to sys_path if missing and handle .pth files in
'sitedir'"""
if known_paths is None:
known_paths = _init_pathinfo(sys_path)
reset = 1
else:
reset = 0
sitedir, sitedircase = makepath(sitedir)
if not sitedircase in known_paths:
sys_path.append(sitedir) # Add path component
known_paths.add(sitedircase)
try:
names = os.listdir(sitedir)
except OSError:
return
names = [name for name in names if name.endswith(".pth")]
for name in sorted(names):
addpackage(sys_path, sitedir, name, known_paths)
if reset:
known_paths = None
return known_paths


@@ -1,323 +0,0 @@
"""
Implementations of standard library functions, because it's not possible to
understand them with Jedi.
To add a new implementation, create a function and add it to the
``_implemented`` dict at the bottom of this module.
Note that this module exists only to implement very specific functionality in
the standard library. The usual way to understand the standard library is the
compiled module that returns the types for C-builtins.
"""
import collections
import re
from jedi.common import unite
from jedi.evaluate import compiled
from jedi.evaluate import representation as er
from jedi.evaluate.instance import InstanceFunctionExecution, \
AbstractInstanceContext, CompiledInstance, BoundMethod, \
AnonymousInstanceFunctionExecution
from jedi.evaluate import iterable
from jedi import debug
from jedi.evaluate import precedence
from jedi.evaluate import param
from jedi.evaluate import analysis
from jedi.evaluate.context import LazyTreeContext, ContextualizedNode
# Now this is all part of fake tuples in Jedi. However super doesn't work on
# __init__ and __new__ doesn't work at all. So adding this to namedtuples is
# just the easiest way.
_NAMEDTUPLE_INIT = """
def __init__(_cls, {arg_list}):
'A helper function for namedtuple.'
self.__iterable = ({arg_list})
def __iter__(self):
for i in self.__iterable:
yield i
def __getitem__(self, y):
return self.__iterable[y]
"""
class NotInStdLib(LookupError):
pass
def execute(evaluator, obj, arguments):
if isinstance(obj, BoundMethod):
raise NotInStdLib()
try:
obj_name = obj.name.string_name
except AttributeError:
pass
else:
if obj.parent_context == evaluator.BUILTINS:
module_name = 'builtins'
elif isinstance(obj.parent_context, er.ModuleContext):
module_name = obj.parent_context.name.string_name
else:
module_name = ''
# for now we just support builtin functions.
try:
func = _implemented[module_name][obj_name]
except KeyError:
pass
else:
return func(evaluator, obj, arguments)
raise NotInStdLib()
def _follow_param(evaluator, arguments, index):
try:
key, lazy_context = list(arguments.unpack())[index]
except IndexError:
return set()
else:
return lazy_context.infer()
def argument_clinic(string, want_obj=False, want_context=False, want_arguments=False):
"""
Works like Argument Clinic (PEP 436), to validate function params.
"""
clinic_args = []
allow_kwargs = False
optional = False
while string:
# Optional arguments have to begin with a bracket and should always be
# at the end of the arguments. This is therefore not a proper argument
# clinic implementation. `range()` for example allows an optional start
# value at the beginning.
match = re.match('(?:(?:(\[),? ?|, ?|)(\w+)|, ?/)\]*', string)
string = string[len(match.group(0)):]
if not match.group(2): # A slash -> allow named arguments
allow_kwargs = True
continue
optional = optional or bool(match.group(1))
word = match.group(2)
clinic_args.append((word, optional, allow_kwargs))
def f(func):
def wrapper(evaluator, obj, arguments):
debug.dbg('builtin start %s' % obj, color='MAGENTA')
try:
lst = list(arguments.eval_argument_clinic(clinic_args))
except ValueError:
return set()
else:
kwargs = {}
if want_context:
kwargs['context'] = arguments.context
if want_obj:
kwargs['obj'] = obj
if want_arguments:
kwargs['arguments'] = arguments
return func(evaluator, *lst, **kwargs)
finally:
debug.dbg('builtin end', color='MAGENTA')
return wrapper
return f
@argument_clinic('iterator[, default], /')
def builtins_next(evaluator, iterators, defaults):
"""
TODO this function is currently not used. It's a stab at implementing next
in a different way than fake objects. This would be a bit more flexible.
"""
if evaluator.python_version[0] == 2:
name = 'next'
else:
name = '__next__'
types = set()
for iterator in iterators:
if isinstance(iterator, AbstractInstanceContext):
for filter in iterator.get_filters(include_self_names=True):
for n in filter.get(name):
for context in n.infer():
types |= context.execute_evaluated()
if types:
return types
return defaults
@argument_clinic('object, name[, default], /')
def builtins_getattr(evaluator, objects, names, defaults=None):
# follow the first param
for obj in objects:
for name in names:
if precedence.is_string(name):
return obj.py__getattribute__(name.obj)
else:
debug.warning('getattr called without str')
continue
return set()
@argument_clinic('object[, bases, dict], /')
def builtins_type(evaluator, objects, bases, dicts):
if bases or dicts:
# It's a type creation... maybe someday...
return set()
else:
return set([o.py__class__() for o in objects])
class SuperInstance(AbstractInstanceContext):
"""To be used like the object ``super`` returns."""
def __init__(self, evaluator, cls):
su = cls.py__mro__()[1]
super(SuperInstance, self).__init__(evaluator, su and su[0] or self)
@argument_clinic('[type[, obj]], /', want_context=True)
def builtins_super(evaluator, types, objects, context):
# TODO make this able to detect multiple inheritance super
if isinstance(context, (InstanceFunctionExecution,
AnonymousInstanceFunctionExecution)):
su = context.instance.py__class__().py__bases__()
return unite(context.execute_evaluated() for context in su[0].infer())
return set()
@argument_clinic('sequence, /', want_obj=True, want_arguments=True)
def builtins_reversed(evaluator, sequences, obj, arguments):
# While we could do without this variable (just by using sequences), we
# want static analysis to work well. Therefore we need to generate the
# values again.
key, lazy_context = next(arguments.unpack())
cn = None
if isinstance(lazy_context, LazyTreeContext):
# TODO access private
cn = ContextualizedNode(lazy_context._context, lazy_context.data)
ordered = list(iterable.py__iter__(evaluator, sequences, cn))
rev = list(reversed(ordered))
# Repack iterator values and then run it the normal way. This is
# necessary, because `reversed` is a function and autocompletion
# would fail in certain cases like `reversed(x).__iter__` if we
# just returned the result directly.
seq = iterable.FakeSequence(evaluator, 'list', rev)
arguments = param.ValuesArguments([[seq]])
return set([CompiledInstance(evaluator, evaluator.BUILTINS, obj, arguments)])
@argument_clinic('obj, type, /', want_arguments=True)
def builtins_isinstance(evaluator, objects, types, arguments):
bool_results = set([])
for o in objects:
try:
mro_func = o.py__class__().py__mro__
except AttributeError:
# This is temporary. Everything should have a class attribute in
# Python?! Maybe we'll leave it here, because some numpy objects or
# whatever might not.
return set([compiled.create(True), compiled.create(False)])
mro = mro_func()
for cls_or_tup in types:
if cls_or_tup.is_class():
bool_results.add(cls_or_tup in mro)
elif cls_or_tup.name.string_name == 'tuple' \
and cls_or_tup.get_root_context() == evaluator.BUILTINS:
# Check for tuples.
classes = unite(
lazy_context.infer()
for lazy_context in cls_or_tup.py__iter__()
)
bool_results.add(any(cls in mro for cls in classes))
else:
_, lazy_context = list(arguments.unpack())[1]
if isinstance(lazy_context, LazyTreeContext):
node = lazy_context.data
message = 'TypeError: isinstance() arg 2 must be a ' \
'class, type, or tuple of classes and types, ' \
'not %s.' % cls_or_tup
analysis.add(lazy_context._context, 'type-error-isinstance', node, message)
return set(compiled.create(evaluator, x) for x in bool_results)
def collections_namedtuple(evaluator, obj, arguments):
"""
Implementation of the namedtuple function.
This has to be done by processing the namedtuple class template and
evaluating the result.
.. note:: |jedi| only supports namedtuples on Python >2.6.
"""
# Namedtuples are not supported on Python 2.6
if not hasattr(collections, '_class_template'):
return set()
# Process arguments
# TODO here we only use one of the types, we should use all.
name = list(_follow_param(evaluator, arguments, 0))[0].obj
_fields = list(_follow_param(evaluator, arguments, 1))[0]
if isinstance(_fields, compiled.CompiledObject):
fields = _fields.obj.replace(',', ' ').split()
elif isinstance(_fields, iterable.AbstractSequence):
fields = [
v.obj
for lazy_context in _fields.py__iter__()
for v in lazy_context.infer() if hasattr(v, 'obj')
]
else:
return set()
base = collections._class_template
base += _NAMEDTUPLE_INIT
# Build source
source = base.format(
typename=name,
field_names=tuple(fields),
num_fields=len(fields),
arg_list=repr(tuple(fields)).replace("'", "")[1:-1],
repr_fmt=', '.join(collections._repr_template.format(name=name) for name in fields),
field_defs='\n'.join(collections._field_template.format(index=index, name=name)
for index, name in enumerate(fields))
)
# Parse source
module = evaluator.grammar.parse(source)
generated_class = next(module.iter_classdefs())
parent_context = er.ModuleContext(evaluator, module, '')
return set([er.ClassContext(evaluator, generated_class, parent_context)])
@argument_clinic('first, /')
def _return_first_param(evaluator, firsts):
return firsts
_implemented = {
'builtins': {
'getattr': builtins_getattr,
'type': builtins_type,
'super': builtins_super,
'reversed': builtins_reversed,
'isinstance': builtins_isinstance,
},
'copy': {
'copy': _return_first_param,
'deepcopy': _return_first_param,
},
'json': {
'load': lambda *args: set(),
'loads': lambda *args: set(),
},
'collections': {
'namedtuple': collections_namedtuple,
},
}


@@ -1,314 +0,0 @@
import glob
import os
import sys
import imp
from jedi.evaluate.site import addsitedir
from jedi._compatibility import exec_function, unicode
from jedi.evaluate.cache import evaluator_function_cache
from jedi.evaluate.compiled import CompiledObject
from jedi.evaluate.context import ContextualizedNode
from jedi import settings
from jedi import debug
from jedi import common
def get_venv_path(venv):
"""Get sys.path for specified virtual environment."""
sys_path = _get_venv_path_dirs(venv)
with common.ignored(ValueError):
sys_path.remove('')
sys_path = _get_sys_path_with_egglinks(sys_path)
# As of now, get_venv_path_dirs does not scan built-in pythonpath and
# user-local site-packages, let's approximate them using path from Jedi
# interpreter.
return sys_path + sys.path
def _get_sys_path_with_egglinks(sys_path):
"""Find all paths including those referenced by egg-links.
Egg-link-referenced directories are inserted into path immediately before
the directory on which their links were found. Such directories are not
taken into consideration by normal import mechanism, but they are traversed
when doing pkg_resources.require.
"""
result = []
for p in sys_path:
# pkg_resources does not define a specific order for egg-link files
# using os.listdir to enumerate them, we're sorting them to have
# reproducible tests.
for egg_link in sorted(glob.glob(os.path.join(p, '*.egg-link'))):
with open(egg_link) as fd:
for line in fd:
line = line.strip()
if line:
result.append(os.path.join(p, line))
# pkg_resources package only interprets the first
# non-empty line in egg-link files.
break
result.append(p)
return result
def _get_venv_path_dirs(venv):
"""Get sys.path for venv without starting up the interpreter."""
venv = os.path.abspath(venv)
sitedir = _get_venv_sitepackages(venv)
sys_path = []
addsitedir(sys_path, sitedir)
return sys_path
def _get_venv_sitepackages(venv):
if os.name == 'nt':
p = os.path.join(venv, 'lib', 'site-packages')
else:
p = os.path.join(venv, 'lib', 'python%d.%d' % sys.version_info[:2],
'site-packages')
return p
def _execute_code(module_path, code):
c = "import os; from os.path import *; result=%s"
variables = {'__file__': module_path}
try:
exec_function(c % code, variables)
except Exception:
debug.warning('sys.path manipulation detected, but failed to evaluate.')
else:
try:
res = variables['result']
if isinstance(res, str):
return [os.path.abspath(res)]
except KeyError:
pass
return []
def _paths_from_assignment(module_context, expr_stmt):
"""
Extracts the assigned strings from an assignment that looks as follows::
>>> sys.path[0:0] = ['module/path', 'another/module/path']
This function is in general pretty tolerant (and therefore 'buggy').
However, it's not a big issue usually to add more paths to Jedi's sys_path,
because it will only affect Jedi in very random situations and by adding
more paths than necessary, it usually benefits the general user.
"""
for assignee, operator in zip(expr_stmt.children[::2], expr_stmt.children[1::2]):
try:
assert operator in ['=', '+=']
assert assignee.type in ('power', 'atom_expr') and \
len(assignee.children) > 1
c = assignee.children
assert c[0].type == 'name' and c[0].value == 'sys'
trailer = c[1]
assert trailer.children[0] == '.' and trailer.children[1].value == 'path'
# TODO Essentially we're not checking details on sys.path
# manipulation. Both assignment of the sys.path and changing/adding
# parts of the sys.path are the same: They get added to the current
# sys.path.
"""
execution = c[2]
assert execution.children[0] == '['
subscript = execution.children[1]
assert subscript.type == 'subscript'
assert ':' in subscript.children
"""
except AssertionError:
continue
from jedi.evaluate.iterable import py__iter__
from jedi.evaluate.precedence import is_string
cn = ContextualizedNode(module_context.create_context(expr_stmt), expr_stmt)
for lazy_context in py__iter__(module_context.evaluator, cn.infer(), cn):
for context in lazy_context.infer():
if is_string(context):
yield context.obj
def _paths_from_list_modifications(module_path, trailer1, trailer2):
""" extract the path from either "sys.path.append" or "sys.path.insert" """
# Guarantee that both are trailers, the first one a name and the second one
# a function execution with at least one param.
if not (trailer1.type == 'trailer' and trailer1.children[0] == '.'
and trailer2.type == 'trailer' and trailer2.children[0] == '('
and len(trailer2.children) == 3):
return []
name = trailer1.children[1].value
if name not in ['insert', 'append']:
return []
arg = trailer2.children[1]
if name == 'insert' and len(arg.children) in (3, 4): # Possible trailing comma.
arg = arg.children[2]
return _execute_code(module_path, arg.get_code())
def _check_module(module_context):
"""
Detect sys.path modifications within module.
"""
def get_sys_path_powers(names):
for name in names:
power = name.parent.parent
if power.type in ('power', 'atom_expr'):
c = power.children
if c[0].type == 'name' and c[0].value == 'sys' \
and c[1].type == 'trailer':
n = c[1].children[1]
if n.type == 'name' and n.value == 'path':
yield name, power
sys_path = list(module_context.evaluator.sys_path) # copy
if isinstance(module_context, CompiledObject):
return sys_path
try:
possible_names = module_context.tree_node.get_used_names()['path']
except KeyError:
pass
else:
for name, power in get_sys_path_powers(possible_names):
expr_stmt = power.parent
if len(power.children) >= 4:
sys_path.extend(
_paths_from_list_modifications(
module_context.py__file__(), *power.children[2:4]
)
)
elif expr_stmt is not None and expr_stmt.type == 'expr_stmt':
sys_path.extend(_paths_from_assignment(module_context, expr_stmt))
return sys_path
@evaluator_function_cache(default=[])
def sys_path_with_modifications(evaluator, module_context):
path = module_context.py__file__()
if path is None:
# Support for modules without a path is bad, therefore return the
# normal path.
return list(evaluator.sys_path)
curdir = os.path.abspath(os.curdir)
#TODO why do we need a chdir?
with common.ignored(OSError):
os.chdir(os.path.dirname(path))
buildout_script_paths = set()
result = _check_module(module_context)
result += _detect_django_path(path)
for buildout_script_path in _get_buildout_script_paths(path):
for path in _get_paths_from_buildout_script(evaluator, buildout_script_path):
buildout_script_paths.add(path)
# cleanup, back to old directory
os.chdir(curdir)
return list(result) + list(buildout_script_paths)
def _get_paths_from_buildout_script(evaluator, buildout_script_path):
try:
module_node = evaluator.grammar.parse(
path=buildout_script_path,
cache=True,
cache_path=settings.cache_directory
)
except IOError:
debug.warning('Error trying to read buildout_script: %s', buildout_script_path)
return
from jedi.evaluate.representation import ModuleContext
for path in _check_module(ModuleContext(evaluator, module_node, buildout_script_path)):
yield path
def traverse_parents(path):
while True:
new = os.path.dirname(path)
if new == path:
return
path = new
yield path
def _get_parent_dir_with_file(path, filename):
for parent in traverse_parents(path):
if os.path.isfile(os.path.join(parent, filename)):
return parent
return None
def _detect_django_path(module_path):
""" Detects the path of the very well known Django library (if used) """
result = []
for parent in traverse_parents(module_path):
with common.ignored(IOError):
with open(parent + os.path.sep + 'manage.py'):
debug.dbg('Found django path: %s', module_path)
result.append(parent)
return result
def _get_buildout_script_paths(module_path):
"""
if there is a 'buildout.cfg' file in one of the parent directories of the
given module it will return a list of all files in the buildout bin
directory that look like python files.
:param module_path: absolute path to the module.
:type module_path: str
"""
project_root = _get_parent_dir_with_file(module_path, 'buildout.cfg')
if not project_root:
return []
bin_path = os.path.join(project_root, 'bin')
if not os.path.exists(bin_path):
return []
extra_module_paths = []
for filename in os.listdir(bin_path):
try:
filepath = os.path.join(bin_path, filename)
with open(filepath, 'r') as f:
firstline = f.readline()
if firstline.startswith('#!') and 'python' in firstline:
extra_module_paths.append(filepath)
except (UnicodeDecodeError, IOError) as e:
# Probably a binary file; or a permission error / race condition because the file got deleted.
# ignore
debug.warning(unicode(e))
continue
return extra_module_paths
def dotted_path_in_sys_path(sys_path, module_path):
"""
Returns the dotted path inside a sys.path.
"""
# First remove the suffix.
for suffix, _, _ in imp.get_suffixes():
if module_path.endswith(suffix):
module_path = module_path[:-len(suffix)]
break
else:
# There should always be a suffix in a valid Python file on the path.
return None
if module_path.startswith(os.path.sep):
# The paths in sys.path most of the times don't end with a slash.
module_path = module_path[1:]
for p in sys_path:
if module_path.startswith(p):
rest = module_path[len(p):]
if rest:
split = rest.split(os.path.sep)
for string in split:
if not string or '.' in string:
return None
return '.'.join(split)
return None

jedi/file_io.py Normal file

@@ -0,0 +1,83 @@
import os
from parso import file_io
class AbstractFolderIO(object):
def __init__(self, path):
self.path = path
def get_base_name(self):
raise NotImplementedError
def list(self):
raise NotImplementedError
def get_file_io(self, name):
raise NotImplementedError
def get_parent_folder(self):
raise NotImplementedError
def __repr__(self):
return '<%s: %s>' % (self.__class__.__name__, self.path)
class FolderIO(AbstractFolderIO):
def get_base_name(self):
return os.path.basename(self.path)
def list(self):
return os.listdir(self.path)
def get_file_io(self, name):
return FileIO(os.path.join(self.path, name))
def get_parent_folder(self):
return FolderIO(os.path.dirname(self.path))
def walk(self):
for root, dirs, files in os.walk(self.path):
root_folder_io = FolderIO(root)
original_folder_ios = [FolderIO(os.path.join(root, d)) for d in dirs]
modified_folder_ios = list(original_folder_ios)
yield (
root_folder_io,
modified_folder_ios,
[FileIO(os.path.join(root, f)) for f in files],
)
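# The caller may prune `modified_folder_ios` in place (mirroring how callers
# prune `dirs` with os.walk); below, any FolderIO removed from that list is
# translated back into a deletion from `dirs`, so os.walk skips that subtree.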
modified_iterator = iter(reversed(modified_folder_ios))
current = next(modified_iterator, None)
i = len(original_folder_ios)
for folder_io in reversed(original_folder_ios):
i -= 1 # Basically enumerate but reversed
if current is folder_io:
current = next(modified_iterator, None)
else:
del dirs[i]
class FileIOFolderMixin(object):
def get_parent_folder(self):
return FolderIO(os.path.dirname(self.path))
class ZipFileIO(file_io.KnownContentFileIO, FileIOFolderMixin):
"""For .zip and .egg archives"""
def __init__(self, path, code, zip_path):
super(ZipFileIO, self).__init__(path, code)
self._zip_path = zip_path
def get_last_modified(self):
try:
return os.path.getmtime(self._zip_path)
except OSError: # Python 3 would probably only need FileNotFoundError
return None
class FileIO(file_io.FileIO, FileIOFolderMixin):
pass
class KnownContentFileIO(file_io.KnownContentFileIO, FileIOFolderMixin):
pass

jedi/inference/__init__.py Normal file

@@ -0,0 +1,197 @@
"""
Type inference of Python code in |jedi| is based on three assumptions:
* The code uses as few side effects as possible. Jedi understands certain
list/tuple/set modifications, but there's no guarantee that Jedi detects
everything (list.append in different modules for example).
* No magic is being used:
- metaclasses
- ``setattr()`` / ``__import__()``
- writing to ``globals()``, ``locals()``, ``object.__dict__``
* The programmer is not a total dick, e.g. like `this
<https://github.com/davidhalter/jedi/issues/24>`_ :-)
The actual algorithm is based on a principle I call lazy type inference. That
said, the typical entry point for static analysis is calling
``infer_expr_stmt``. There's separate logic for autocompletion in the API; the
inference_state is all about inferring an expression.
TODO this paragraph is not what jedi does anymore, it's similar, but not the
same.
Now you need to understand what follows after ``infer_expr_stmt``. Let's
make an example::
import datetime
datetime.date.toda# <-- cursor here
First of all, this module doesn't care about completion. It really just cares
about ``datetime.date``. At the end of the procedure ``infer_expr_stmt`` will
return the ``date`` class.
To *visualize* this (simplified):
- ``InferenceState.infer_expr_stmt`` doesn't do much, because there's no assignment.
- ``Context.infer_node`` cares for resolving the dotted path
- ``InferenceState.find_types`` searches for global definitions of datetime, which
it finds in the definition of an import, by scanning the syntax tree.
- Using the import logic, the datetime module is found.
- Now ``find_types`` is called again by ``infer_node`` to find ``date``
inside the datetime module.
Now what would happen if we wanted ``datetime.date.foo.bar``? Two more
calls to ``find_types``. However the second call would be ignored, because the
first one would return nothing (there's no foo attribute in ``date``).
What if the import would contain another ``ExprStmt`` like this::
from foo import bar
Date = bar.baz
Well... You get it. Just another ``infer_expr_stmt`` recursion. It's really
easy. Python can obviously get way more complicated than this. To understand
tuple assignments, list comprehensions and everything else, a lot more code had
to be written.
Jedi has been tested very well, so you can just start modifying code. It's best
to write your own test first for your "new" feature. Don't be scared of
breaking stuff. As long as the tests pass, you're most likely to be fine.
I need to mention now that lazy type inference is really good because it
only *infers* what needs to be *inferred*. All the statements and modules
that are not used are just being ignored.
"""
import parso
from jedi.file_io import FileIO
from jedi import debug
from jedi import settings
from jedi.inference import imports
from jedi.inference import recursion
from jedi.inference.cache import inference_state_function_cache
from jedi.inference import helpers
from jedi.inference.names import TreeNameDefinition
from jedi.inference.base_value import ContextualizedNode, \
ValueSet, iterate_values
from jedi.inference.value import ClassValue, FunctionValue
from jedi.inference.syntax_tree import infer_expr_stmt, \
check_tuple_assignments, tree_name_to_values
from jedi.inference.imports import follow_error_node_imports_if_possible
from jedi.plugins import plugin_manager
class InferenceState(object):
def __init__(self, project, environment=None, script_path=None):
if environment is None:
environment = project.get_environment()
self.environment = environment
self.script_path = script_path
self.compiled_subprocess = environment.get_inference_state_subprocess(self)
self.grammar = environment.get_grammar()
self.latest_grammar = parso.load_grammar(version='3.7')
self.memoize_cache = {} # for memoize decorators
self.module_cache = imports.ModuleCache() # does the job of `sys.modules`.
self.stub_module_cache = {} # Dict[Tuple[str, ...], Optional[ModuleValue]]
self.compiled_cache = {} # see `inference.compiled.create()`
self.inferred_element_counts = {}
self.mixed_cache = {} # see `inference.compiled.mixed._create()`
self.analysis = []
self.dynamic_params_depth = 0
self.is_analysis = False
self.project = project
self.access_cache = {}
self.allow_descriptor_getattr = False
self.flow_analysis_enabled = True
self.reset_recursion_limitations()
def import_module(self, import_names, sys_path=None, prefer_stubs=True):
return imports.import_module_by_names(
self, import_names, sys_path, prefer_stubs=prefer_stubs)
@staticmethod
@plugin_manager.decorate()
def execute(value, arguments):
debug.dbg('execute: %s %s', value, arguments)
with debug.increase_indent_cm():
value_set = value.py__call__(arguments=arguments)
debug.dbg('execute result: %s in %s', value_set, value)
return value_set
@property
@inference_state_function_cache()
def builtins_module(self):
module_name = u'builtins'
if self.environment.version_info.major == 2:
module_name = u'__builtin__'
builtins_module, = self.import_module((module_name,), sys_path=())
return builtins_module
@property
@inference_state_function_cache()
def typing_module(self):
typing_module, = self.import_module((u'typing',))
return typing_module
def reset_recursion_limitations(self):
self.recursion_detector = recursion.RecursionDetector()
self.execution_recursion_detector = recursion.ExecutionRecursionDetector(self)
def get_sys_path(self, **kwargs):
"""Convenience function"""
return self.project._get_sys_path(self, **kwargs)
def infer(self, context, name):
def_ = name.get_definition(import_name_always=True)
if def_ is not None:
type_ = def_.type
is_classdef = type_ == 'classdef'
if is_classdef or type_ == 'funcdef':
if is_classdef:
c = ClassValue(self, context, name.parent)
else:
c = FunctionValue.from_context(context, name.parent)
return ValueSet([c])
if type_ == 'expr_stmt':
is_simple_name = name.parent.type not in ('power', 'trailer')
if is_simple_name:
return infer_expr_stmt(context, def_, name)
if type_ == 'for_stmt':
container_types = context.infer_node(def_.children[3])
cn = ContextualizedNode(context, def_.children[3])
for_types = iterate_values(container_types, cn)
n = TreeNameDefinition(context, name)
return check_tuple_assignments(n, for_types)
if type_ in ('import_from', 'import_name'):
return imports.infer_import(context, name)
if type_ == 'with_stmt':
return tree_name_to_values(self, context, name)
elif type_ == 'param':
return context.py__getattribute__(name.value, position=name.end_pos)
else:
result = follow_error_node_imports_if_possible(context, name)
if result is not None:
return result
return helpers.infer_call_of_leaf(context, name)
def parse_and_get_code(self, code=None, path=None, encoding='utf-8',
use_latest_grammar=False, file_io=None, **kwargs):
if code is None:
if file_io is None:
file_io = FileIO(path)
code = file_io.read()
# We cannot just use parso, because it doesn't use errors='replace'.
code = parso.python_bytes_to_unicode(code, encoding=encoding, errors='replace')
if len(code) > settings._cropped_file_size:
code = code[:settings._cropped_file_size]
grammar = self.latest_grammar if use_latest_grammar else self.grammar
return grammar.parse(code=code, path=path, file_io=file_io, **kwargs), code
def parse(self, *args, **kwargs):
return self.parse_and_get_code(*args, **kwargs)[0]


@@ -1,9 +1,11 @@
"""
Module for static analysis.
"""
from jedi import debug
from parso.python import tree
from jedi.evaluate.compiled import CompiledObject
from jedi._compatibility import force_unicode
from jedi import debug
from jedi.inference.helpers import is_string
CODES = {
@@ -56,8 +58,8 @@ class Error(object):
return self.__unicode__()
def __eq__(self, other):
return (self.path == other.path and self.name == other.name and
self._start_pos == other._start_pos)
return (self.path == other.path and self.name == other.name
and self._start_pos == other._start_pos)
def __ne__(self, other):
return not self.__eq__(other)
@@ -83,49 +85,45 @@ def add(node_context, error_name, node, message=None, typ=Error, payload=None):
# TODO this path is probably not right
module_context = node_context.get_root_context()
module_path = module_context.py__file__()
instance = typ(error_name, module_path, node.start_pos, message)
debug.warning(str(instance), format=False)
node_context.evaluator.analysis.append(instance)
issue_instance = typ(error_name, module_path, node.start_pos, message)
debug.warning(str(issue_instance), format=False)
node_context.inference_state.analysis.append(issue_instance)
return issue_instance
def _check_for_setattr(instance):
"""
Check if there's any setattr method inside an instance. If so, return True.
"""
from jedi.evaluate.representation import ModuleContext
module = instance.get_root_context()
if not isinstance(module, ModuleContext):
node = module.tree_node
if node is None:
# If it's a compiled module or doesn't have a tree_node
return False
node = module.tree_node
try:
stmts = node.get_used_names()['setattr']
stmt_names = node.get_used_names()['setattr']
except KeyError:
return False
return any(node.start_pos < stmt.start_pos < node.end_pos
for stmt in stmts)
return any(node.start_pos < n.start_pos < node.end_pos
# Check if it's a function called setattr.
and not (n.parent.type == 'funcdef' and n.parent.name == n)
for n in stmt_names)
def add_attribute_error(name_context, lookup_context, name):
message = ('AttributeError: %s has no attribute %s.' % (lookup_context, name))
from jedi.evaluate.instance import AbstractInstanceContext, CompiledInstanceName
def add_attribute_error(name_context, lookup_value, name):
message = ('AttributeError: %s has no attribute %s.' % (lookup_value, name))
# Check for __getattr__/__getattribute__ existance and issue a warning
# instead of an error, if that happens.
typ = Error
if isinstance(lookup_context, AbstractInstanceContext):
slot_names = lookup_context.get_function_slot_names('__getattr__') + \
lookup_context.get_function_slot_names('__getattribute__')
for n in slot_names:
if isinstance(name, CompiledInstanceName) and \
n.parent_context.obj == object:
typ = Warning
break
if lookup_value.is_instance() and not lookup_value.is_compiled():
# TODO maybe make a warning for __getattr__/__getattribute__
if _check_for_setattr(lookup_context):
if _check_for_setattr(lookup_value):
typ = Warning
payload = lookup_context, name
payload = lookup_value, name
add(name_context, 'attribute-error', name, message, typ, payload)
@@ -138,16 +136,20 @@ def _check_for_exception_catch(node_context, jedi_name, exception, payload=None)
Returns True if the exception was caught.
"""
def check_match(cls, exception):
try:
return isinstance(cls, CompiledObject) and issubclass(exception, cls.obj)
except TypeError:
if not cls.is_class():
return False
for python_cls in exception.mro():
if cls.py__name__() == python_cls.__name__ \
and cls.parent_context.is_builtins_module():
return True
return False
def check_try_for_except(obj, exception):
# Only nodes in try
iterator = iter(obj.children)
for branch_type in iterator:
colon = next(iterator)
next(iterator) # The colon
suite = next(iterator)
if branch_type == 'try' \
and not (branch_type.start_pos < jedi_name.start_pos <= suite.end_pos):
@@ -157,14 +159,14 @@ def _check_for_exception_catch(node_context, jedi_name, exception, payload=None)
if node is None:
return True # An exception block that catches everything.
else:
except_classes = node_context.eval_node(node)
except_classes = node_context.infer_node(node)
for cls in except_classes:
from jedi.evaluate import iterable
if isinstance(cls, iterable.AbstractSequence) and \
from jedi.inference.value import iterable
if isinstance(cls, iterable.Sequence) and \
cls.array_type == 'tuple':
# multiple exceptions
for lazy_context in cls.py__iter__():
for typ in lazy_context.infer():
for lazy_value in cls.py__iter__():
for typ in lazy_value.infer():
if check_match(typ, exception):
return True
else:
@@ -181,20 +183,21 @@ def _check_for_exception_catch(node_context, jedi_name, exception, payload=None)
assert trailer.type == 'trailer'
arglist = trailer.children[1]
assert arglist.type == 'arglist'
from jedi.evaluate.param import TreeArguments
args = list(TreeArguments(node_context.evaluator, node_context, arglist).unpack())
from jedi.inference.arguments import TreeArguments
args = TreeArguments(node_context.inference_state, node_context, arglist)
unpacked_args = list(args.unpack())
# Arguments should be very simple
assert len(args) == 2
assert len(unpacked_args) == 2
# Check name
key, lazy_context = args[1]
names = list(lazy_context.infer())
assert len(names) == 1 and isinstance(names[0], CompiledObject)
assert names[0].obj == payload[1].value
key, lazy_value = unpacked_args[1]
names = list(lazy_value.infer())
assert len(names) == 1 and is_string(names[0])
assert force_unicode(names[0].get_safe_value()) == payload[1].value
# Check objects
key, lazy_context = args[0]
objects = lazy_context.infer()
key, lazy_value = unpacked_args[0]
objects = lazy_value.infer()
return payload[0] in objects
except AssertionError:
return False

jedi/inference/arguments.py Normal file

@@ -0,0 +1,343 @@
import re
from parso.python import tree
from jedi._compatibility import zip_longest
from jedi import debug
from jedi.inference.utils import PushBackIterator
from jedi.inference import analysis
from jedi.inference.lazy_value import LazyKnownValue, LazyKnownValues, \
LazyTreeValue, get_merged_lazy_value
from jedi.inference.names import ParamName, TreeNameDefinition, AnonymousParamName
from jedi.inference.base_value import NO_VALUES, ValueSet, ContextualizedNode
from jedi.inference.value import iterable
from jedi.inference.cache import inference_state_as_method_param_cache
def try_iter_content(types, depth=0):
"""Helper method for static analysis."""
if depth > 10:
# It's possible that a loop has references on itself (especially with
# CompiledValue). Therefore don't loop infinitely.
return
for typ in types:
try:
f = typ.py__iter__
except AttributeError:
pass
else:
for lazy_value in f():
try_iter_content(lazy_value.infer(), depth + 1)
class ParamIssue(Exception):
pass
def repack_with_argument_clinic(clinic_string):
"""
Transforms a function or method with arguments to the signature that is
given as an argument clinic notation.
Argument clinic is part of CPython and used for all the functions that are
implemented in C (Python 3.7):
str.split.__text_signature__
# Results in: '($self, /, sep=None, maxsplit=-1)'
"""
def decorator(func):
def wrapper(value, arguments):
try:
args = tuple(iterate_argument_clinic(
value.inference_state,
arguments,
clinic_string,
))
except ParamIssue:
return NO_VALUES
else:
return func(value, *args)
return wrapper
return decorator
def iterate_argument_clinic(inference_state, arguments, clinic_string):
"""Uses a list with argument clinic information (see PEP 436)."""
clinic_args = list(_parse_argument_clinic(clinic_string))
iterator = PushBackIterator(arguments.unpack())
for i, (name, optional, allow_kwargs, stars) in enumerate(clinic_args):
if stars == 1:
lazy_values = []
for key, argument in iterator:
if key is not None:
iterator.push_back((key, argument))
break
lazy_values.append(argument)
yield ValueSet([iterable.FakeTuple(inference_state, lazy_values)])
continue
elif stars == 2:
raise NotImplementedError()
key, argument = next(iterator, (None, None))
if key is not None:
debug.warning('Keyword arguments in argument clinic are currently not supported.')
raise ParamIssue
if argument is None and not optional:
debug.warning('TypeError: %s expected at least %s arguments, got %s',
name, len(clinic_args), i)
raise ParamIssue
value_set = NO_VALUES if argument is None else argument.infer()
if not value_set and not optional:
# For the stdlib we always want values. If we don't get them,
# that's ok, maybe something is too hard to resolve, however,
# we will not proceed with the type inference of that function.
debug.warning('argument_clinic "%s" not resolvable.', name)
raise ParamIssue
yield value_set
def _parse_argument_clinic(string):
allow_kwargs = False
optional = False
while string:
# Optional arguments have to begin with a bracket. And should always be
# at the end of the arguments. This is therefore not a proper argument
# clinic implementation. `range()` for example allows an optional start
# value at the beginning.
match = re.match(r'(?:(?:(\[),? ?|, ?|)(\**\w+)|, ?/)\]*', string)
string = string[len(match.group(0)):]
if not match.group(2): # A slash -> allow named arguments
allow_kwargs = True
continue
optional = optional or bool(match.group(1))
word = match.group(2)
stars = word.count('*')
word = word[stars:]
yield (word, optional, allow_kwargs, stars)
if stars:
allow_kwargs = True
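# Illustrative parse result (assumed): 'object, name[, default], /' yields
# ('object', False, False, 0), ('name', False, False, 0) and
# ('default', True, False, 0); the trailing '/' only toggles allow_kwargs.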
class _AbstractArgumentsMixin(object):
def unpack(self, funcdef=None):
raise NotImplementedError
def get_calling_nodes(self):
return []
class AbstractArguments(_AbstractArgumentsMixin):
context = None
argument_node = None
trailer = None
def unpack_arglist(arglist):
if arglist is None:
return
# Allow testlist here as well for Python2's class inheritance
# definitions.
if not (arglist.type in ('arglist', 'testlist') or (
# in python 3.5 **arg is an argument, not arglist
arglist.type == 'argument' and arglist.children[0] in ('*', '**'))):
yield 0, arglist
return
iterator = iter(arglist.children)
for child in iterator:
if child == ',':
continue
elif child in ('*', '**'):
c = next(iterator, None)
assert c is not None
yield len(child.value), c
elif child.type == 'argument' and \
child.children[0] in ('*', '**'):
assert len(child.children) == 2
yield len(child.children[0].value), child.children[1]
else:
yield 0, child
class TreeArguments(AbstractArguments):
def __init__(self, inference_state, context, argument_node, trailer=None):
"""
:param argument_node: May be an argument_node or a list of nodes.
"""
self.argument_node = argument_node
self.context = context
self._inference_state = inference_state
self.trailer = trailer # Can be None, e.g. in a class definition.
@classmethod
@inference_state_as_method_param_cache()
def create_cached(cls, *args, **kwargs):
return cls(*args, **kwargs)
def unpack(self, funcdef=None):
named_args = []
for star_count, el in unpack_arglist(self.argument_node):
if star_count == 1:
arrays = self.context.infer_node(el)
iterators = [_iterate_star_args(self.context, a, el, funcdef)
for a in arrays]
for values in list(zip_longest(*iterators)):
# TODO zip_longest yields None, that means this would raise
# an exception?
yield None, get_merged_lazy_value(
[v for v in values if v is not None]
)
elif star_count == 2:
arrays = self.context.infer_node(el)
for dct in arrays:
for key, values in _star_star_dict(self.context, dct, el, funcdef):
yield key, values
else:
if el.type == 'argument':
c = el.children
if len(c) == 3: # Keyword argument.
named_args.append((c[0].value, LazyTreeValue(self.context, c[2]),))
else: # Generator comprehension.
# Include the brackets with the parent.
sync_comp_for = el.children[1]
if sync_comp_for.type == 'comp_for':
sync_comp_for = sync_comp_for.children[1]
comp = iterable.GeneratorComprehension(
self._inference_state,
defining_context=self.context,
sync_comp_for_node=sync_comp_for,
entry_node=el.children[0],
)
yield None, LazyKnownValue(comp)
else:
yield None, LazyTreeValue(self.context, el)
# Reordering arguments is necessary, because star args sometimes appear
# after named argument, but in the actual order it's prepended.
for named_arg in named_args:
yield named_arg
def _as_tree_tuple_objects(self):
for star_count, argument in unpack_arglist(self.argument_node):
default = None
if argument.type == 'argument':
if len(argument.children) == 3: # Keyword argument.
argument, default = argument.children[::2]
yield argument, default, star_count
def iter_calling_names_with_star(self):
for name, default, star_count in self._as_tree_tuple_objects():
# TODO this function is a bit strange. probably refactor?
if not star_count or not isinstance(name, tree.Name):
continue
yield TreeNameDefinition(self.context, name)
def __repr__(self):
return '<%s: %s>' % (self.__class__.__name__, self.argument_node)
def get_calling_nodes(self):
old_arguments_list = []
arguments = self
while arguments not in old_arguments_list:
if not isinstance(arguments, TreeArguments):
break
old_arguments_list.append(arguments)
for calling_name in reversed(list(arguments.iter_calling_names_with_star())):
names = calling_name.goto()
if len(names) != 1:
break
if isinstance(names[0], AnonymousParamName):
# Dynamic parameters should not have calling nodes, because
# they are dynamic and extremely random.
return []
if not isinstance(names[0], ParamName):
break
executed_param_name = names[0].get_executed_param_name()
arguments = executed_param_name.arguments
break
if arguments.argument_node is not None:
return [ContextualizedNode(arguments.context, arguments.argument_node)]
if arguments.trailer is not None:
return [ContextualizedNode(arguments.context, arguments.trailer)]
return []
class ValuesArguments(AbstractArguments):
def __init__(self, values_list):
self._values_list = values_list
def unpack(self, funcdef=None):
for values in self._values_list:
yield None, LazyKnownValues(values)
def __repr__(self):
return '<%s: %s>' % (self.__class__.__name__, self._values_list)
class TreeArgumentsWrapper(_AbstractArgumentsMixin):
def __init__(self, arguments):
self._wrapped_arguments = arguments
@property
def context(self):
return self._wrapped_arguments.context
@property
def argument_node(self):
return self._wrapped_arguments.argument_node
@property
def trailer(self):
return self._wrapped_arguments.trailer
def unpack(self, func=None):
raise NotImplementedError
def get_calling_nodes(self):
return self._wrapped_arguments.get_calling_nodes()
def __repr__(self):
return '<%s: %s>' % (self.__class__.__name__, self._wrapped_arguments)
def _iterate_star_args(context, array, input_node, funcdef=None):
if not array.py__getattribute__('__iter__'):
if funcdef is not None:
# TODO this funcdef should not be needed.
m = "TypeError: %s() argument after * must be a sequence, not %s" \
% (funcdef.name.value, array)
analysis.add(context, 'type-error-star', input_node, message=m)
try:
iter_ = array.py__iter__
except AttributeError:
pass
else:
for lazy_value in iter_():
yield lazy_value
def _star_star_dict(context, array, input_node, funcdef):
from jedi.inference.value.instance import CompiledInstance
if isinstance(array, CompiledInstance) and array.name.string_name == 'dict':
# For now ignore this case. In the future add proper iterators and just
# make one call without crazy isinstance checks.
return {}
elif isinstance(array, iterable.Sequence) and array.array_type == 'dict':
return array.exact_key_items()
else:
if funcdef is not None:
m = "TypeError: %s argument after ** must be a mapping, not %s" \
% (funcdef.name.value, array)
analysis.add(context, 'type-error-star-star', input_node, message=m)
return {}


@@ -0,0 +1,556 @@
"""
Values are the "values" that Python would return. However Values are at the
same time also the "values" that a user is currently sitting in.
A ValueSet is typically used to specify the return of a function or any other
static analysis operation. In jedi there are always multiple returns and not
just one.
"""
from functools import reduce
from operator import add
from parso.python.tree import Name
from jedi import debug
from jedi._compatibility import zip_longest, unicode
from jedi.parser_utils import clean_scope_docstring
from jedi.inference.helpers import SimpleGetItemNotFound
from jedi.inference.utils import safe_property
from jedi.inference.cache import inference_state_as_method_param_cache
from jedi.cache import memoize_method
sentinel = object()
class HelperValueMixin(object):
def get_root_context(self):
value = self
if value.parent_context is None:
return value.as_context()
while True:
if value.parent_context is None:
return value
value = value.parent_context
def execute(self, arguments):
return self.inference_state.execute(self, arguments=arguments)
def execute_with_values(self, *value_list):
from jedi.inference.arguments import ValuesArguments
arguments = ValuesArguments([ValueSet([value]) for value in value_list])
return self.inference_state.execute(self, arguments)
def execute_annotation(self):
return self.execute_with_values()
def gather_annotation_classes(self):
return ValueSet([self])
def merge_types_of_iterate(self, contextualized_node=None, is_async=False):
return ValueSet.from_sets(
lazy_value.infer()
for lazy_value in self.iterate(contextualized_node, is_async)
)
def _get_value_filters(self, name_or_str):
origin_scope = name_or_str if isinstance(name_or_str, Name) else None
for f in self.get_filters(origin_scope=origin_scope):
yield f
# This covers the case where stub files are incomplete.
if self.is_stub():
from jedi.inference.gradual.conversion import convert_values
for c in convert_values(ValueSet({self})):
for f in c.get_filters():
yield f
def goto(self, name_or_str, name_context=None, analysis_errors=True):
from jedi.inference import finder
filters = self._get_value_filters(name_or_str)
names = finder.filter_name(filters, name_or_str)
debug.dbg('context.goto %s in (%s): %s', name_or_str, self, names)
return names
def py__getattribute__(self, name_or_str, name_context=None, position=None,
analysis_errors=True):
"""
:param position: Position of the last statement -> tuple of line, column
"""
if name_context is None:
name_context = self
names = self.goto(name_or_str, name_context, analysis_errors)
values = ValueSet.from_sets(name.infer() for name in names)
if not values:
n = name_or_str.value if isinstance(name_or_str, Name) else name_or_str
values = self.py__getattribute__alternatives(n)
if not names and not values and analysis_errors:
if isinstance(name_or_str, Name):
from jedi.inference import analysis
analysis.add_attribute_error(
name_context, self, name_or_str)
debug.dbg('context.names_to_types: %s -> %s', names, values)
return values
def py__await__(self):
await_value_set = self.py__getattribute__(u"__await__")
if not await_value_set:
debug.warning('Tried to run __await__ on value %s', self)
return await_value_set.execute_with_values()
def py__name__(self):
return self.name.string_name
def iterate(self, contextualized_node=None, is_async=False):
debug.dbg('iterate %s', self)
if is_async:
from jedi.inference.lazy_value import LazyKnownValues
# TODO if no __aiter__ values are there, error should be:
# TypeError: 'async for' requires an object with __aiter__ method, got int
return iter([
LazyKnownValues(
self.py__getattribute__('__aiter__').execute_with_values()
.py__getattribute__('__anext__').execute_with_values()
.py__getattribute__('__await__').execute_with_values()
.py__stop_iteration_returns()
) # noqa
])
return self.py__iter__(contextualized_node)
def is_sub_class_of(self, class_value):
with debug.increase_indent_cm('subclass matching of %s <=> %s' % (self, class_value),
color='BLUE'):
for cls in self.py__mro__():
if cls.is_same_class(class_value):
debug.dbg('matched subclass True', color='BLUE')
return True
debug.dbg('matched subclass False', color='BLUE')
return False
def is_same_class(self, class2):
# Class matching should prefer comparisons that are not this function.
if type(class2).is_same_class != HelperValueMixin.is_same_class:
return class2.is_same_class(self)
return self == class2
@memoize_method
def as_context(self, *args, **kwargs):
return self._as_context(*args, **kwargs)
class Value(HelperValueMixin):
"""
To be implemented by subclasses.
"""
tree_node = None
# Possible values: None, tuple, list, dict and set. Here to deal with these
# very important containers.
array_type = None
api_type = 'not_defined_please_report_bug'
def __init__(self, inference_state, parent_context=None):
self.inference_state = inference_state
self.parent_context = parent_context
def py__getitem__(self, index_value_set, contextualized_node):
from jedi.inference import analysis
# TODO this value is probably not right.
analysis.add(
contextualized_node.context,
'type-error-not-subscriptable',
contextualized_node.node,
message="TypeError: '%s' object is not subscriptable" % self
)
return NO_VALUES
def py__simple_getitem__(self, index):
raise SimpleGetItemNotFound
def py__iter__(self, contextualized_node=None):
if contextualized_node is not None:
from jedi.inference import analysis
analysis.add(
contextualized_node.context,
'type-error-not-iterable',
contextualized_node.node,
message="TypeError: '%s' object is not iterable" % self)
return iter([])
def py__next__(self, contextualized_node=None):
return self.py__iter__(contextualized_node)
def get_signatures(self):
return []
def is_class(self):
return False
def is_class_mixin(self):
return False
def is_instance(self):
return False
def is_function(self):
return False
def is_module(self):
return False
def is_namespace(self):
return False
def is_compiled(self):
return False
def is_bound_method(self):
return False
def is_builtins_module(self):
return False
def py__bool__(self):
"""
Since Wrapper is a super class for classes, functions and modules,
the return value will always be true.
"""
return True
def py__doc__(self):
try:
self.tree_node.get_doc_node
except AttributeError:
return ''
else:
return clean_scope_docstring(self.tree_node)
def get_safe_value(self, default=sentinel):
if default is sentinel:
raise ValueError("There exists no safe value for value %s" % self)
return default
def execute_operation(self, other, operator):
debug.warning("%s not possible between %s and %s", operator, self, other)
return NO_VALUES
def py__call__(self, arguments):
debug.warning("no execution possible %s", self)
return NO_VALUES
def py__stop_iteration_returns(self):
debug.warning("Not possible to return the stop iterations of %s", self)
return NO_VALUES
def py__getattribute__alternatives(self, name_or_str):
"""
For now a way to add values in cases like __getattr__.
"""
return NO_VALUES
def py__get__(self, instance, class_value):
debug.warning("No __get__ defined on %s", self)
return ValueSet([self])
def py__get__on_class(self, calling_instance, instance, class_value):
return NotImplemented
def get_qualified_names(self):
# Returns Optional[Tuple[str, ...]]
return None
def is_stub(self):
# The root value knows if it's a stub or not.
return self.parent_context.is_stub()
def _as_context(self):
raise NotImplementedError('Not all values need to be converted to contexts: %s', self)
@property
def name(self):
raise NotImplementedError
def get_type_hint(self, add_class_info=True):
return None
def infer_type_vars(self, value_set):
"""
When the current instance represents a type annotation, this method
tries to find information about undefined type vars and returns a dict
from type var name to value set.
This is for example important to understand what `iter([1])` returns.
According to typeshed, `iter` returns an `Iterator[_T]`:
def iter(iterable: Iterable[_T]) -> Iterator[_T]: ...
This function would generate `int` for `_T` in this case, because it
unpacks the `Iterable`.
Parameters
----------
`self`: represents the annotation of the current parameter to infer the
value for. In the above example, this would initially be the
`Iterable[_T]` of the `iterable` parameter and then, when recursing,
just the `_T` generic parameter.
`value_set`: represents the actual argument passed to the parameter
we're inferring the value for, or (for recursive calls) their types. In the
above example this would first be the representation of the list
`[1]` and then, when recursing, just of `1`.
"""
return {}
def iterate_values(values, contextualized_node=None, is_async=False):
"""
Calls `iterate` on all values, but ignores the ordering and just returns
all values that the iterate functions yield.
"""
return ValueSet.from_sets(
lazy_value.infer()
for lazy_value in values.iterate(contextualized_node, is_async=is_async)
)
class _ValueWrapperBase(HelperValueMixin):
@safe_property
def name(self):
from jedi.inference.names import ValueName
wrapped_name = self._wrapped_value.name
if wrapped_name.tree_name is not None:
return ValueName(self, wrapped_name.tree_name)
else:
from jedi.inference.compiled import CompiledValueName
return CompiledValueName(self, wrapped_name.string_name)
@classmethod
@inference_state_as_method_param_cache()
def create_cached(cls, inference_state, *args, **kwargs):
return cls(*args, **kwargs)
def __getattr__(self, name):
assert name != '_wrapped_value', 'Problem with _get_wrapped_value'
return getattr(self._wrapped_value, name)
class LazyValueWrapper(_ValueWrapperBase):
@safe_property
@memoize_method
def _wrapped_value(self):
with debug.increase_indent_cm('Resolve lazy value wrapper'):
return self._get_wrapped_value()
def __repr__(self):
return '<%s>' % (self.__class__.__name__)
def _get_wrapped_value(self):
raise NotImplementedError
class ValueWrapper(_ValueWrapperBase):
def __init__(self, wrapped_value):
self._wrapped_value = wrapped_value
def __repr__(self):
return '%s(%s)' % (self.__class__.__name__, self._wrapped_value)
class TreeValue(Value):
def __init__(self, inference_state, parent_context, tree_node):
super(TreeValue, self).__init__(inference_state, parent_context)
self.tree_node = tree_node
def __repr__(self):
return '<%s: %s>' % (self.__class__.__name__, self.tree_node)
class ContextualizedNode(object):
def __init__(self, context, node):
self.context = context
self.node = node
def get_root_context(self):
return self.context.get_root_context()
def infer(self):
return self.context.infer_node(self.node)
def __repr__(self):
return '<%s: %s in %s>' % (self.__class__.__name__, self.node, self.context)
def _getitem(value, index_values, contextualized_node):
# The actual getitem call.
result = NO_VALUES
unused_values = set()
for index_value in index_values:
index = index_value.get_safe_value(default=None)
if type(index) in (float, int, str, unicode, slice, bytes):
try:
result |= value.py__simple_getitem__(index)
continue
except SimpleGetItemNotFound:
pass
unused_values.add(index_value)
# The index was somehow not good enough or simply a wrong type.
# Therefore we now iterate through all the values and just take
# all results.
if unused_values or not index_values:
result |= value.py__getitem__(
ValueSet(unused_values),
contextualized_node
)
debug.dbg('py__getitem__ result: %s', result)
return result
class ValueSet(object):
def __init__(self, iterable):
self._set = frozenset(iterable)
for value in iterable:
assert not isinstance(value, ValueSet)
@classmethod
def _from_frozen_set(cls, frozenset_):
self = cls.__new__(cls)
self._set = frozenset_
return self
@classmethod
def from_sets(cls, sets):
"""
Used to work with an iterable of sets.
"""
aggregated = set()
for set_ in sets:
if isinstance(set_, ValueSet):
aggregated |= set_._set
else:
aggregated |= frozenset(set_)
return cls._from_frozen_set(frozenset(aggregated))
def __or__(self, other):
return self._from_frozen_set(self._set | other._set)
def __and__(self, other):
return self._from_frozen_set(self._set & other._set)
def __iter__(self):
for element in self._set:
yield element
def __bool__(self):
return bool(self._set)
def __len__(self):
return len(self._set)
def __repr__(self):
return 'S{%s}' % (', '.join(str(s) for s in self._set))
def filter(self, filter_func):
return self.__class__(filter(filter_func, self._set))
def __getattr__(self, name):
def mapper(*args, **kwargs):
return self.from_sets(
getattr(value, name)(*args, **kwargs)
for value in self._set
)
return mapper
def __eq__(self, other):
return self._set == other._set
def __ne__(self, other):
return not self.__eq__(other)
def __hash__(self):
return hash(self._set)
def py__class__(self):
return ValueSet(c.py__class__() for c in self._set)
def iterate(self, contextualized_node=None, is_async=False):
from jedi.inference.lazy_value import get_merged_lazy_value
type_iters = [c.iterate(contextualized_node, is_async=is_async) for c in self._set]
for lazy_values in zip_longest(*type_iters):
yield get_merged_lazy_value(
[l for l in lazy_values if l is not None]
)
def execute(self, arguments):
return ValueSet.from_sets(c.inference_state.execute(c, arguments) for c in self._set)
def execute_with_values(self, *args, **kwargs):
return ValueSet.from_sets(c.execute_with_values(*args, **kwargs) for c in self._set)
def goto(self, *args, **kwargs):
return reduce(add, [c.goto(*args, **kwargs) for c in self._set], [])
def py__getattribute__(self, *args, **kwargs):
return ValueSet.from_sets(c.py__getattribute__(*args, **kwargs) for c in self._set)
def get_item(self, *args, **kwargs):
return ValueSet.from_sets(_getitem(c, *args, **kwargs) for c in self._set)
def try_merge(self, function_name):
value_set = self.__class__([])
for c in self._set:
try:
method = getattr(c, function_name)
except AttributeError:
pass
else:
value_set |= method()
return value_set
def gather_annotation_classes(self):
return ValueSet.from_sets([c.gather_annotation_classes() for c in self._set])
def get_signatures(self):
return [sig for c in self._set for sig in c.get_signatures()]
def get_type_hint(self, add_class_info=True):
t = [v.get_type_hint(add_class_info=add_class_info) for v in self._set]
type_hints = sorted(filter(None, t))
if len(type_hints) == 1:
return type_hints[0]
optional = 'None' in type_hints
if optional:
type_hints.remove('None')
if len(type_hints) == 0:
return None
elif len(type_hints) == 1:
s = type_hints[0]
else:
s = 'Union[%s]' % ', '.join(type_hints)
if optional:
s = 'Optional[%s]' % s
return s
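# Illustrative example (assumed): values hinting at {int, str, None} would
# collapse to 'Optional[Union[int, str]]', while a single hint is returned as-is.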
def infer_type_vars(self, value_set):
# Circular
from jedi.inference.gradual.annotation import merge_type_var_dicts
type_var_dict = {}
for value in self._set:
merge_type_var_dicts(
type_var_dict,
value.infer_type_vars(value_set),
)
return type_var_dict
NO_VALUES = ValueSet([])
def iterator_to_value_set(func):
def wrapper(*args, **kwargs):
return ValueSet(func(*args, **kwargs))
return wrapper

jedi/inference/cache.py Normal file

@@ -0,0 +1,126 @@
"""
- the popular ``_memoize_default`` works like a typical memoize and returns the
default otherwise.
- ``CachedMetaClass`` uses ``_memoize_default`` to do the same with classes.
"""
from functools import wraps
from jedi import debug
_NO_DEFAULT = object()
_RECURSION_SENTINEL = object()
def _memoize_default(default=_NO_DEFAULT, inference_state_is_first_arg=False,
second_arg_is_inference_state=False):
""" This is a typical memoization decorator, BUT there is one difference:
To prevent recursion it sets defaults.
Preventing recursion is in this case the much bigger use than speed. I
don't think that there is a big speed difference, but there are many cases
where recursion could happen (think about a = b; b = a).
"""
def func(function):
def wrapper(obj, *args, **kwargs):
# TODO These checks are kind of ugly and slow.
if inference_state_is_first_arg:
cache = obj.memoize_cache
elif second_arg_is_inference_state:
cache = args[0].memoize_cache # needed for meta classes
else:
cache = obj.inference_state.memoize_cache
try:
memo = cache[function]
except KeyError:
cache[function] = memo = {}
key = (obj, args, frozenset(kwargs.items()))
if key in memo:
return memo[key]
else:
if default is not _NO_DEFAULT:
memo[key] = default
rv = function(obj, *args, **kwargs)
memo[key] = rv
return rv
return wrapper
return func
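# Hypothetical usage sketch (decorator argument and method are illustrative;
# NO_VALUES is not imported here):
#
#     @_memoize_default(default=NO_VALUES)
#     def infer_something(self, node):
#         ...  # re-entering with the same key returns NO_VALUES instead of recursing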
def inference_state_function_cache(default=_NO_DEFAULT):
def decorator(func):
return _memoize_default(default=default, inference_state_is_first_arg=True)(func)
return decorator
def inference_state_method_cache(default=_NO_DEFAULT):
def decorator(func):
return _memoize_default(default=default)(func)
return decorator
def inference_state_as_method_param_cache():
def decorator(call):
return _memoize_default(second_arg_is_inference_state=True)(call)
return decorator
class CachedMetaClass(type):
"""
This is basically the same as the decorator above; it just caches
class initializations. Either you do it this way or with decorators, but
with decorators you lose class access (isinstance, etc).
"""
@inference_state_as_method_param_cache()
def __call__(self, *args, **kwargs):
return super(CachedMetaClass, self).__call__(*args, **kwargs)
def inference_state_method_generator_cache():
"""
This is a special memoizer. It memoizes generators and also checks for
recursion errors and returns no further iterator elements in that case.
"""
def func(function):
@wraps(function)
def wrapper(obj, *args, **kwargs):
cache = obj.inference_state.memoize_cache
try:
memo = cache[function]
except KeyError:
cache[function] = memo = {}
key = (obj, args, frozenset(kwargs.items()))
if key in memo:
actual_generator, cached_lst = memo[key]
else:
actual_generator = function(obj, *args, **kwargs)
cached_lst = []
memo[key] = actual_generator, cached_lst
i = 0
while True:
try:
next_element = cached_lst[i]
if next_element is _RECURSION_SENTINEL:
debug.warning('Found a generator recursion for %s' % obj)
# This means we have hit a recursion.
return
except IndexError:
cached_lst.append(_RECURSION_SENTINEL)
next_element = next(actual_generator, None)
if next_element is None:
cached_lst.pop()
return
cached_lst[-1] = next_element
yield next_element
i += 1
return wrapper
return func


@@ -0,0 +1,68 @@
from jedi._compatibility import unicode
from jedi.inference.compiled.value import CompiledValue, CompiledName, \
CompiledValueFilter, CompiledValueName, create_from_access_path
from jedi.inference.base_value import LazyValueWrapper
def builtin_from_name(inference_state, string):
typing_builtins_module = inference_state.builtins_module
if string in ('None', 'True', 'False'):
builtins, = typing_builtins_module.non_stub_value_set
filter_ = next(builtins.get_filters())
else:
filter_ = next(typing_builtins_module.get_filters())
name, = filter_.get(string)
value, = name.infer()
return value
class ExactValue(LazyValueWrapper):
"""
This class represents exact values, which make operations like additions
and exact boolean values possible, while still being a "normal" stub.
"""
def __init__(self, compiled_value):
self.inference_state = compiled_value.inference_state
self._compiled_value = compiled_value
def __getattribute__(self, name):
if name in ('get_safe_value', 'execute_operation', 'access_handle',
'negate', 'py__bool__', 'is_compiled'):
return getattr(self._compiled_value, name)
return super(ExactValue, self).__getattribute__(name)
def _get_wrapped_value(self):
instance, = builtin_from_name(
self.inference_state, self._compiled_value.name.string_name).execute_with_values()
return instance
def __repr__(self):
return '<%s: %s>' % (self.__class__.__name__, self._compiled_value)
def create_simple_object(inference_state, obj):
"""
Only allows creation of objects that are easily picklable across Python
versions.
"""
assert type(obj) in (int, float, str, bytes, unicode, slice, complex, bool), obj
compiled_value = create_from_access_path(
inference_state,
inference_state.compiled_subprocess.create_simple_object(obj)
)
return ExactValue(compiled_value)
def get_string_value_set(inference_state):
return builtin_from_name(inference_state, u'str').execute_with_values()
def load_module(inference_state, dotted_name, **kwargs):
# Temporary workaround: some tensorflow builtins cannot be loaded, so they
# are tried again and again, which is really slow.
if dotted_name.startswith('tensorflow.'):
return None
access_path = inference_state.compiled_subprocess.load_module(dotted_name=dotted_name, **kwargs)
if access_path is None:
return None
return create_from_access_path(inference_state, access_path)
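``create_simple_object`` only accepts a small whitelist of primitive types because the value has to travel to the compiled subprocess via pickling. A hypothetical standalone guard showing the same idea (``send_simple_object`` and ``_SIMPLE_TYPES`` are names invented for this sketch, not jedi API):

import pickle

_SIMPLE_TYPES = (int, float, str, bytes, slice, complex, bool, type(None))

def send_simple_object(obj):
    # Refuse anything whose type does not round-trip safely across
    # interpreter versions; mirrors the assert in create_simple_object.
    if type(obj) not in _SIMPLE_TYPES:
        raise TypeError('refusing to serialize %r (type %s)'
                        % (obj, type(obj).__name__))
    return pickle.dumps(obj, protocol=2)

send_simple_object(42)          # fine
send_simple_object(u'hello')    # fine
# send_simple_object(object())  # would raise TypeError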


@@ -0,0 +1,564 @@
from __future__ import print_function
import inspect
import types
import sys
import operator as op
from collections import namedtuple
import warnings
import re
from jedi._compatibility import unicode, is_py3, builtins, \
py_version, force_unicode
from jedi.inference.compiled.getattr_static import getattr_static
ALLOWED_GETITEM_TYPES = (str, list, tuple, unicode, bytes, bytearray, dict)
MethodDescriptorType = type(str.replace)
# These are not considered classes and access is granted even though they have
# a __class__ attribute.
NOT_CLASS_TYPES = (
types.BuiltinFunctionType,
types.CodeType,
types.FrameType,
types.FunctionType,
types.GeneratorType,
types.GetSetDescriptorType,
types.LambdaType,
types.MemberDescriptorType,
types.MethodType,
types.ModuleType,
types.TracebackType,
MethodDescriptorType
)
if is_py3:
NOT_CLASS_TYPES += (
types.MappingProxyType,
types.SimpleNamespace,
types.DynamicClassAttribute,
)
# Those types don't exist in typing.
MethodDescriptorType = type(str.replace)
WrapperDescriptorType = type(set.__iter__)
# `object.__subclasshook__` is an already executed descriptor.
object_class_dict = type.__dict__["__dict__"].__get__(object)
ClassMethodDescriptorType = type(object_class_dict['__subclasshook__'])
_sentinel = object()
# Maps Python syntax to the operator module.
COMPARISON_OPERATORS = {
'==': op.eq,
'!=': op.ne,
'is': op.is_,
'is not': op.is_not,
'<': op.lt,
'<=': op.le,
'>': op.gt,
'>=': op.ge,
}
_OPERATORS = {
'+': op.add,
'-': op.sub,
}
_OPERATORS.update(COMPARISON_OPERATORS)
ALLOWED_DESCRIPTOR_ACCESS = (
types.FunctionType,
types.GetSetDescriptorType,
types.MemberDescriptorType,
MethodDescriptorType,
WrapperDescriptorType,
ClassMethodDescriptorType,
staticmethod,
classmethod,
)
def safe_getattr(obj, name, default=_sentinel):
try:
attr, is_get_descriptor = getattr_static(obj, name)
except AttributeError:
if default is _sentinel:
raise
return default
else:
if isinstance(attr, ALLOWED_DESCRIPTOR_ACCESS):
# In case of descriptors that have get methods we cannot return
# its value, because that would mean code execution.
# Since it's an isinstance call, code execution is still possible,
# but this is not really a security feature; it's much more of a
# safety feature. Code execution is basically always possible when
# a module is imported. This is here so people don't shoot
# themselves in the foot.
return getattr(obj, name)
return attr
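# ---- Illustrative sketch (not part of access.py) -------------------------
# The stdlib inspect.getattr_static offers the same "look, but don't run"
# behaviour that safe_getattr builds on: it fetches the raw attribute from
# the class/instance dicts without invoking __getattr__, __getattribute__
# or a descriptor's __get__, so a property with side effects is never
# executed during inspection. The class name below is invented for the demo.
import inspect

class _LazyDemo(object):
    @property
    def expensive(self):
        raise RuntimeError('side effect!')

raw = inspect.getattr_static(_LazyDemo(), 'expensive')
assert isinstance(raw, property)  # the descriptor itself; its value is never computed
# getattr(_LazyDemo(), 'expensive') would, by contrast, raise RuntimeError.
# ---------------------------------------------------------------------------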
SignatureParam = namedtuple(
'SignatureParam',
'name has_default default default_string has_annotation annotation annotation_string kind_name'
)
def shorten_repr(func):
def wrapper(self):
r = func(self)
if len(r) > 50:
r = r[:50] + '..'
return r
return wrapper
def create_access(inference_state, obj):
return inference_state.compiled_subprocess.get_or_create_access_handle(obj)
def load_module(inference_state, dotted_name, sys_path):
temp, sys.path = sys.path, sys_path
try:
__import__(dotted_name)
except ImportError:
# If a module is "corrupt" or not really a Python module or whatever.
print('Module %s not importable in path %s.' % (dotted_name, sys_path), file=sys.stderr)
return None
except Exception:
# Since __import__ pretty much makes code execution possible, just
# catch any error here and print it.
import traceback
print("Cannot import:\n%s" % traceback.format_exc(), file=sys.stderr)
return None
finally:
sys.path = temp
# Just access the cache after import, because of #59 as well as the very
# complicated import structure of Python.
module = sys.modules[dotted_name]
return create_access_path(inference_state, module)
class AccessPath(object):
def __init__(self, accesses):
self.accesses = accesses
# Writing both of these methods here looks a bit ridiculous. However, with
# the differences between Python 2 and 3 it's actually necessary, because we
# would otherwise have an accesses attribute that is bytes instead of unicode.
def __getstate__(self):
return self.accesses
def __setstate__(self, value):
self.accesses = value
def create_access_path(inference_state, obj):
access = create_access(inference_state, obj)
return AccessPath(access.get_access_path_tuples())
def _force_unicode_decorator(func):
return lambda *args, **kwargs: force_unicode(func(*args, **kwargs))
def get_api_type(obj):
if inspect.isclass(obj):
return u'class'
elif inspect.ismodule(obj):
return u'module'
elif inspect.isbuiltin(obj) or inspect.ismethod(obj) \
or inspect.ismethoddescriptor(obj) or inspect.isfunction(obj):
return u'function'
# Everything else...
return u'instance'
class DirectObjectAccess(object):
def __init__(self, inference_state, obj):
self._inference_state = inference_state
self._obj = obj
def __repr__(self):
return '%s(%s)' % (self.__class__.__name__, self.get_repr())
def _create_access(self, obj):
return create_access(self._inference_state, obj)
def _create_access_path(self, obj):
return create_access_path(self._inference_state, obj)
def py__bool__(self):
return bool(self._obj)
def py__file__(self):
try:
return self._obj.__file__
except AttributeError:
return None
def py__doc__(self):
return force_unicode(inspect.getdoc(self._obj)) or u''
def py__name__(self):
if not _is_class_instance(self._obj) or \
inspect.ismethoddescriptor(self._obj): # slots
cls = self._obj
else:
try:
cls = self._obj.__class__
except AttributeError:
# Happens with numpy.core.umath._UFUNC_API (you get it
# automatically by doing `import numpy`).
return None
try:
return force_unicode(cls.__name__)
except AttributeError:
return None
def py__mro__accesses(self):
return tuple(self._create_access_path(cls) for cls in self._obj.__mro__[1:])
def py__getitem__all_values(self):
if isinstance(self._obj, dict):
return [self._create_access_path(v) for v in self._obj.values()]
return self.py__iter__list()
def py__simple_getitem__(self, index):
if type(self._obj) not in ALLOWED_GETITEM_TYPES:
# Avoid side effects; we won't call custom `__getitem__`s.
return None
return self._create_access_path(self._obj[index])
def py__iter__list(self):
if not hasattr(self._obj, '__getitem__'):
return None
if type(self._obj) not in ALLOWED_GETITEM_TYPES:
# Avoid side effects; we won't call custom `__getitem__`s.
return []
lst = []
for i, part in enumerate(self._obj):
if i > 20:
# Should not go crazy with large iterators
break
lst.append(self._create_access_path(part))
return lst
def py__class__(self):
return self._create_access_path(self._obj.__class__)
def py__bases__(self):
return [self._create_access_path(base) for base in self._obj.__bases__]
def py__path__(self):
paths = getattr(self._obj, '__path__', None)
# Avoid some weird hacks that would just fail, because they cannot be
# used by pickle.
if not isinstance(paths, list) \
or not all(isinstance(p, (bytes, unicode)) for p in paths):
return None
return paths
@_force_unicode_decorator
@shorten_repr
def get_repr(self):
builtins = 'builtins', '__builtin__'
if inspect.ismodule(self._obj):
return repr(self._obj)
# Try to avoid execution of the property.
if safe_getattr(self._obj, '__module__', default='') in builtins:
return repr(self._obj)
type_ = type(self._obj)
if type_ == type:
return type.__repr__(self._obj)
if safe_getattr(type_, '__module__', default='') in builtins:
# Allow direct execution of repr for builtins.
return repr(self._obj)
return object.__repr__(self._obj)
def is_class(self):
return inspect.isclass(self._obj)
def is_function(self):
return inspect.isfunction(self._obj) or inspect.ismethod(self._obj)
def is_module(self):
return inspect.ismodule(self._obj)
def is_instance(self):
return _is_class_instance(self._obj)
def ismethoddescriptor(self):
return inspect.ismethoddescriptor(self._obj)
def get_qualified_names(self):
def try_to_get_name(obj):
return getattr(obj, '__qualname__', getattr(obj, '__name__', None))
if self.is_module():
return ()
name = try_to_get_name(self._obj)
if name is None:
name = try_to_get_name(type(self._obj))
if name is None:
return ()
return tuple(force_unicode(n) for n in name.split('.'))
def dir(self):
return list(map(force_unicode, dir(self._obj)))
def has_iter(self):
try:
iter(self._obj)
return True
except TypeError:
return False
def is_allowed_getattr(self, name, unsafe=False):
# TODO this API is ugly.
if unsafe:
# Unsafe is mostly used to check for __getattr__/__getattribute__.
# getattr_static works for properties, but the underscore methods
# are just ignored (because it's safer and avoids more code
# execution). See also GH #1378.
# Avoid warnings; see the comment in the next function.
with warnings.catch_warnings(record=True):
warnings.simplefilter("always")
try:
return hasattr(self._obj, name), False
except Exception:
# Obviously has an attribute (probably a property) that
# gets executed, so just avoid all exceptions here.
return False, False
try:
attr, is_get_descriptor = getattr_static(self._obj, name)
except AttributeError:
return False, False
else:
if is_get_descriptor and type(attr) not in ALLOWED_DESCRIPTOR_ACCESS:
# In case of descriptors that have get methods we cannot return
# its value, because that would mean code execution.
return True, True
return True, False
def getattr_paths(self, name, default=_sentinel):
try:
# Make sure no warnings are printed here; this is autocompletion and
# warnings should not be shown. See also GH #1383.
with warnings.catch_warnings(record=True):
warnings.simplefilter("always")
return_obj = getattr(self._obj, name)
except Exception as e:
if default is _sentinel:
if isinstance(e, AttributeError):
# Happens e.g. in properties of
# PyQt4.QtGui.QStyleOptionComboBox.currentText
# -> just set it to None
raise
# For any other exception, raise an AttributeError instead. It
# should not crash.
raise AttributeError
return_obj = default
access = self._create_access(return_obj)
if inspect.ismodule(return_obj):
return [access]
try:
module = return_obj.__module__
except AttributeError:
pass
else:
if module is not None:
try:
__import__(module)
# For some modules like _sqlite3, the __module__ for classes is
# different, in this case it's sqlite3. So we have to try to
# load that "original" module, because it's not loaded yet. If
# we don't do that, we don't really have a "parent" module and
# we would fall back to builtins.
except ImportError:
pass
module = inspect.getmodule(return_obj)
if module is None:
module = inspect.getmodule(type(return_obj))
if module is None:
module = builtins
return [self._create_access(module), access]
def get_safe_value(self):
if type(self._obj) in (bool, bytes, float, int, str, unicode, slice) or self._obj is None:
return self._obj
raise ValueError("Object is type %s and not simple" % type(self._obj))
def get_api_type(self):
return get_api_type(self._obj)
def get_array_type(self):
if isinstance(self._obj, dict):
return 'dict'
return None
def get_key_paths(self):
def iter_partial_keys():
# We could use list(keys()), but that might take a lot more memory.
for (i, k) in enumerate(self._obj.keys()):
# Limit key listing at some point. This is artificial, but this
# way we don't get stalled because of slow completions.
if i > 50:
break
yield k
return [self._create_access_path(k) for k in iter_partial_keys()]
def get_access_path_tuples(self):
accesses = [create_access(self._inference_state, o) for o in self._get_objects_path()]
return [(access.py__name__(), access) for access in accesses]
def _get_objects_path(self):
def get():
obj = self._obj
yield obj
try:
obj = obj.__objclass__
except AttributeError:
pass
else:
yield obj
try:
# Returns a dotted string path.
imp_plz = obj.__module__
except AttributeError:
# Unfortunately in some cases like `int` there's no __module__
if not inspect.ismodule(obj):
yield builtins
else:
if imp_plz is None:
# Happens for example in `(_ for _ in []).send.__module__`.
yield builtins
else:
try:
yield sys.modules[imp_plz]
except KeyError:
# __module__ can be something arbitrary that doesn't exist.
yield builtins
return list(reversed(list(get())))
def execute_operation(self, other_access_handle, operator):
other_access = other_access_handle.access
op = _OPERATORS[operator]
return self._create_access_path(op(self._obj, other_access._obj))
def get_annotation_name_and_args(self):
"""
Returns Tuple[Optional[str], Tuple[AccessPath, ...]]
"""
if sys.version_info < (3, 5):
return None, ()
name = None
args = ()
if safe_getattr(self._obj, '__module__', default='') == 'typing':
m = re.match(r'typing.(\w+)\[', repr(self._obj))
if m is not None:
name = m.group(1)
import typing
if sys.version_info >= (3, 8):
args = typing.get_args(self._obj)
else:
args = safe_getattr(self._obj, '__args__', default=None)
return name, tuple(self._create_access_path(arg) for arg in args)
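# ---- Illustrative sketch (not part of access.py) -------------------------
# The version split above can be exercised directly: typing.get_args() exists
# on Python 3.8+, while older interpreters expose the same tuple via __args__.
# _demo_get_args is a name invented for this sketch.
import sys
import typing

def _demo_get_args(tp):
    if sys.version_info >= (3, 8):
        return typing.get_args(tp)
    return getattr(tp, '__args__', ())

assert _demo_get_args(typing.Dict[str, int]) == (str, int)
# ---------------------------------------------------------------------------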
def needs_type_completions(self):
return inspect.isclass(self._obj) and self._obj != type
def _annotation_to_str(self, annotation):
if py_version < 30:
return ''
return inspect.formatannotation(annotation)
def get_signature_params(self):
return [
SignatureParam(
name=p.name,
has_default=p.default is not p.empty,
default=self._create_access_path(p.default),
default_string=repr(p.default),
has_annotation=p.annotation is not p.empty,
annotation=self._create_access_path(p.annotation),
annotation_string=self._annotation_to_str(p.annotation),
kind_name=str(p.kind)
) for p in self._get_signature().parameters.values()
]
def _get_signature(self):
obj = self._obj
if py_version < 33:
raise ValueError("inspect.signature was introduced in 3.3")
try:
return inspect.signature(obj)
except (RuntimeError, TypeError):
# Reading the code of the function in Python 3.6 implies there are
# at least these errors that might occur if something is wrong with
# the signature. In that case we just want a simple escape for now.
raise ValueError
def get_return_annotation(self):
try:
o = self._obj.__annotations__.get('return')
except AttributeError:
return None
if o is None:
return None
try:
# Python 2 doesn't have typing.
import typing
except ImportError:
pass
else:
try:
o = typing.get_type_hints(self._obj).get('return')
except Exception:
pass
return self._create_access_path(o)
def negate(self):
return self._create_access_path(-self._obj)
def get_dir_infos(self):
"""
Used to return a couple of infos that are needed when accessing the sub
objects of an object.
"""
tuples = dict(
(force_unicode(name), self.is_allowed_getattr(name))
for name in self.dir()
)
return self.needs_type_completions(), tuples
def _is_class_instance(obj):
"""Like inspect.* methods."""
try:
cls = obj.__class__
except AttributeError:
return False
else:
# The isinstance check for cls is just there so issubclass doesn't
# raise an exception.
return cls != type and isinstance(cls, type) and not issubclass(cls, NOT_CLASS_TYPES)
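``execute_operation`` above dispatches an operator written as source text through the ``_OPERATORS`` table to the ``operator`` module. The same table-driven dispatch works standalone; a minimal sketch with a reduced copy of that table (function and table names chosen for the example):

import operator as op

OPERATORS = {
    '+': op.add,
    '-': op.sub,
    '==': op.eq,
    '!=': op.ne,
    '<': op.lt,
}

def execute_operation(left, right, operator_string):
    # Look up the callable for the textual operator and apply it.
    return OPERATORS[operator_string](left, right)

assert execute_operation(40, 2, '+') == 42
assert execute_operation(1, 2, '<') is True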

Some files were not shown because too many files have changed in this diff.