forked from VimPlug/jedi

Compare commits


144 Commits
tmp ... v0.18.2

Author SHA1 Message Date
Dave Halter
eaab706038 Prepare the release of 0.18.2 2022-11-21 23:23:46 +01:00
Dave Halter
41455480be Better search for venvs 2022-11-21 23:06:26 +01:00
Dave Halter
0a670d10dd Merge branch 'master' of github.com:davidhalter/jedi 2022-11-21 22:59:48 +01:00
Dave Halter
6b73d5c1bf Probably using the 3.10 grammar is better for stubs for now 2022-11-21 21:07:33 +01:00
Dave Halter
a3fed3b6a6 Remove a TODO that was already implemented 2022-11-14 08:39:11 +01:00
Dave Halter
66c52b4bc7 Try to fix a test for Windows 2022-11-13 23:48:43 +01:00
Dave Halter
89f9a3a7f1 Fix a Django test 2022-11-13 23:38:22 +01:00
Dave Halter
3a30008cc4 Fix keyword argument completion, fixes #1856 2022-11-13 20:26:00 +01:00
Dave Halter
b0d5fc2bd0 Fix errors around docs of namespace packages, fixes #1890, fixes #1822 2022-11-13 19:50:08 +01:00
Dave Halter
6e5db3f479 Fix a weird AttributeError, fixes #1765 2022-11-13 18:26:01 +01:00
Dave Halter
85780111e0 Use the latest grammar from parso for stubs, probably fixes #1864 2022-11-13 17:59:22 +01:00
Dave Halter
0ba48bbb9d Fix an issue with creatin a diff, fixes #1757 2022-11-13 17:51:54 +01:00
Dave Halter
26f7878d97 Revert some of the logic around ClassVar completions, see #1847 2022-11-12 23:15:16 +01:00
Dave Halter
8027e1b162 Remove the ClassVar filter, see also #1847 2022-11-12 22:58:00 +01:00
Dave Halter
78a53bf005 Change a test slightly 2022-11-12 13:59:07 +01:00
Dave Halter
8485df416d Finally fix a Django test 2022-11-11 18:00:17 +01:00
Dave Halter
94e78340e1 Fix a formatting issue in CI 2022-11-11 17:54:57 +01:00
Dave Halter
f454989859 Now that ClassVars work differently fix a Django test 2022-11-11 17:52:35 +01:00
Dave Halter
e779f23ac7 Another small change towards tests 2022-11-11 17:50:05 +01:00
Dave Halter
3c40363a39 Remove another test that depends on specific pytest versions and is well covered by other tests 2022-11-11 17:47:02 +01:00
Dave Halter
a6cf2c338a Remove part of a test that is annoying to develop 2022-11-11 17:44:49 +01:00
Dave Halter
2a7311c1a0 Remove some unrelated things from .gitignore again 2022-11-11 17:15:46 +01:00
Dave Halter
81427e4408 Add a note about pytest entrypoints in CHANGELOG 2022-11-11 17:01:11 +01:00
Dave Halter
804e4b0ca2 Merge pull request #1861 from qmmp123/master
Fix: #1847
2022-11-11 16:00:39 +00:00
Dave Halter
3475ccfbd3 Merge pull request #1870 from Presburger/master
fix autocomplete crash in ycmd
2022-11-11 15:50:10 +00:00
Dave Halter
9723a0eed0 Merge pull request #1879 from marciomazza/find-external-pytest-fixtures
Find external pytest fixtures
2022-11-11 15:46:40 +00:00
Dave Halter
658f80fa1e Just pin all documentation generation dependencies 2022-11-11 16:36:23 +01:00
Dave Halter
31c2c508c3 Try to get jedi.readthedocs.org running again 2022-11-11 16:15:37 +01:00
Dave Halter
6c9cab2f8e Merge pull request #1889 from AndrewAmmerlaan/master
python3.11 compatibility
2022-10-20 19:08:52 +00:00
Andrew Ammerlaan
0a6ad1010c inference/compiled/subprocess/functions.py: Skip python3.11's frozen imports
Bug: https://github.com/davidhalter/jedi/issues/1858
Signed-off-by: Andrew Ammerlaan <andrewammerlaan@gentoo.org>
2022-10-19 16:53:17 +02:00
Dave Halter
3a60943f6e Merge pull request #1885 from asford/attrs_support
Extend dataclass constructor hinting to attrs next-gen apis.
2022-10-13 19:12:59 +00:00
Alex Ford
4d1e00c3ab Skip if attrs not in target environment.
Add check for attrs in test environment and skip if not installed.
This is patterned off the existing django tests.
2022-10-13 00:43:29 -07:00
Alex Ford
e15f51ecc1 Remove mutable from attrs signature tests 2022-10-11 17:55:57 -07:00
Alex Ford
eaa66b3dbb Update setup.py 2022-10-11 17:40:31 -07:00
Alex Ford
239d9e0b22 Add note to changelog 2022-10-11 17:40:31 -07:00
Alex Ford
40e1e3f560 Extend dataclass constructor hinting to attrs next-gen apis.
Trivially extends dataclass constructor hinting to attrs next-gen APIs.

This will stumble in cases where attrs extends beyond the standard
dataclasses API, such as complex use of defaults, converters, et al.
However, it likely covers the vast majority of cases which fall solidly
in the intersection of the two APIs.

Extension beyond these cases could use [PEP0681 dataclass_transforms],
however this is definitely a problem for another day.

[PEP0681 dataclass_transforms]: https://peps.python.org/pep-0681/

https://github.com/davidhalter/jedi/issues/1835
2022-10-11 17:40:31 -07:00
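The change above extends the constructor-signature hinting jedi already does for dataclasses to attrs' next-gen API. A small sketch of the parallel using only the standard library (the attrs spelling is shown in a comment, since attrs may not be installed):

```python
import inspect
from dataclasses import dataclass


@dataclass
class PointDC:
    x: int
    y: int = 0

# The attrs next-gen equivalent would be:
#
#   import attrs
#
#   @attrs.define
#   class PointAttrs:
#       x: int
#       y: int = 0
#
# Both synthesize an __init__(x, y=0) that tools can introspect the same way.
print(inspect.signature(PointDC))
```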
Marcio Mazza
c243608ac6 Add your name to AUTHORS.txt 2022-09-05 17:31:14 -03:00
Marcio Mazza
e25750ecef Make code compatible with python < 3.8 2022-09-05 17:05:11 -03:00
Marcio Mazza
1a306fddbf Fix check pytest fixture from import on the right context 2022-09-04 13:12:13 -03:00
Marcio Mazza
ec425ed2af Add tests to find pytest fixtures from external plugins 2022-09-03 17:16:32 -03:00
Marcio Mazza
fa1e9ce9a7 Simplify entry points enumeration 2022-09-03 17:16:32 -03:00
Marcio Mazza
8447d7f3e4 Discard imports of modules as pytest fixtures 2022-09-03 17:16:32 -03:00
Marcio Mazza
27e13e4072 Allow for multiple returns from goto_import 2022-09-03 17:16:32 -03:00
Marcio Mazza
9fd4aab5da Find pytest fixtures from external plugins registered via setuptools entry points
Using setuptools entry points is probably the main pytest mechanism of
plugin discovery.

See https://docs.pytest.org/en/stable/how-to/writing_plugins.html#setuptools-entry-points

This extends the functionality of #791
and maybe eliminates the need for #1786.
2022-09-03 17:16:32 -03:00
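Entry-point enumeration of the kind this series adds can be sketched with ``importlib.metadata``; pytest plugins register under the ``pytest11`` group. The helper below is hypothetical, not jedi's implementation:

```python
from importlib.metadata import entry_points


def pytest_plugin_modules():
    """List module names registered under pytest's "pytest11" entry point group."""
    try:
        # Python 3.10+: entry_points() accepts a group keyword.
        eps = entry_points(group="pytest11")
    except TypeError:
        # Python 3.8/3.9: entry_points() returns a mapping of group -> entries.
        eps = entry_points().get("pytest11", [])
    # An entry point value looks like "module:attr" or just "module".
    return sorted({ep.value.split(":")[0] for ep in eps})


print(pytest_plugin_modules())
```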
Dave Halter
8b0d391ac1 Merge pull request #1876 from marciomazza/fix-skipped-tests-due-to-python-symlinks
Fix skipped collection of pytest integration test files
2022-09-03 12:36:01 +00:00
Marcio Mazza
fa0c064841 Fix skipped collection of pytest integration test files
On integration tests file collection,
the value of `environment.executable` can also be a symlink
(e.g. in a virtualenv) with a different name than,
but pointing to the same as `sys.executable`
(e.g. .../bin/python3.10 and .../bin/python, respectively).

That causes skipping the collection of `completion/pytest.py`
and `completion/conftest.py` a lot of times, depending on the environment.
(e.g. "60 skipped" before x "23 skipped" after, in a local virtualenv)
2022-09-02 14:23:38 -03:00
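The fix above amounts to comparing interpreter paths after resolving symlinks rather than comparing the raw strings. A minimal sketch of that comparison (a hypothetical helper, not the test suite's actual code):

```python
import os
import sys
from pathlib import Path


def same_interpreter(executable: str) -> bool:
    """Compare an executable with sys.executable, resolving symlinks such as
    a virtualenv's bin/python -> bin/python3.10."""
    a = Path(executable).resolve()
    b = Path(sys.executable).resolve()
    if a == b:
        return True
    try:
        # Filesystem-level check, which also handles hard links.
        return os.path.samefile(a, b)
    except OSError:
        return False


print(same_interpreter(sys.executable))
```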
Dave Halter
9e2089ef1e Merge pull request #1875 from marciomazza/fix-test-home-is-potential-project
Fix test where home could be a potential project
2022-09-02 09:19:52 +00:00
Marcio Mazza
85c7f14562 Fix test where home could be a potential project 2022-09-01 13:01:27 -03:00
Dave Halter
695f0832b4 Merge pull request #1871 from xzz53/fix-gitignore
Improve .gitignore handling
2022-08-22 09:59:53 +00:00
Mikhail Rudenko
cfb7e300af Improve .gitignore handling
At present, .gitignore patterns not starting with '/' are classified
as "ignored names" (opposing to "ignored paths") and not used for
filtering directories. But, according to the spec [1], the situation
is a bit different: all patterns apply to directories (and those
ending with '/' apply to directories only). Besides that, there two
kinds of patterns: those that match only w.r.t the directory where
defining .gitignore is located (they must contain a '/' in the
beginning or in the middle), which we call "absolute", and those that
also match in all subdirectories under the directory where defining
.gitignore is located (they must not contain '/' or contain only
trailing '/'), which we call "relative".

This commit implements handling of both "absolute" and "relative"
.gitignore patterns according to the spec. "Absolute" patterns are
handled mostly like `ignored_paths` were handled in the previous
implementation. "Relative" patterns are collected into a distinct set
containing `(defining_gitignore_dir, pattern)` tuples. For each
traversed `root_folder_io`, all applicable "relative" patterns are
expanded into a set of plain paths, which are then used for filtering
`folder_io`s.

While at it, also fix some minor issues. Explicitly ignore negative
and wildcard patterns, since we don't handle them correctly
anyway. Also, use '/' as a path separator instead of `os.path.sep`
when dealing with .gitignore, since the spec explicitly says that '/'
must be used on all platforms.

[1] https://git-scm.com/docs/gitignore
2022-08-21 21:50:29 +03:00
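The "absolute" vs. "relative" classification the commit message describes can be sketched in a few lines (an illustrative helper, not the patch itself):

```python
def classify_gitignore_pattern(pattern: str) -> str:
    """Classify a .gitignore pattern per https://git-scm.com/docs/gitignore.

    "absolute": contains a '/' at the start or in the middle, so it matches
    only relative to the directory holding the .gitignore file.
    "relative": no '/' (or only a trailing one, which just means
    "directories only"), so it also matches in every subdirectory.
    """
    stripped = pattern.rstrip("/")
    return "absolute" if "/" in stripped else "relative"


print(classify_gitignore_pattern("/dist"), classify_gitignore_pattern("build/"))
```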
Yusheng.Ma
f5faca014f fix autocomplete crash in ycmd
Signed-off-by: Yusheng.Ma <Yusheng.Ma@zilliz.com>
2022-08-17 07:53:35 +00:00
Dave Halter
7ff0d2d595 Merge pull request #1867 from timgates42/bugfix_typos
docs: Fix a few typos
2022-07-15 07:36:27 +00:00
Tim Gates
c28b337278 docs: Fix a few typos
There are small typos in:
- jedi/api/exceptions.py
- jedi/inference/base_value.py
- jedi/inference/compiled/mixed.py
- jedi/inference/value/dynamic_arrays.py

Fixes:
- Should read `usually` rather than `ususally`.
- Should read `modifications` rather than `modfications`.
- Should read `interpreters` rather than `interpreteters`.
- Should read `inferred` rather than `inferrined`.
- Should read `completable` rather than `completeable`.

Signed-off-by: Tim Gates <tim.gates@iress.com>
2022-07-15 17:29:02 +10:00
nedilmark
128695bd8e remove debug changes 2022-07-03 09:42:29 +08:00
nedilmark
e194ab5951 Fix: #1847 2022-06-18 06:13:07 +08:00
Dave Halter
c0ac341750 Replace some type comments with annotations
This was necessary, back when we supported Python 3.5
2022-05-26 23:09:28 +02:00
Dave Halter
486695d479 Merge pull request #1851 from GalaxySnail/pep604
Add a naive implementation for PEP 604
2022-05-13 12:31:54 +02:00
GalaxySnail
8cb1b76ea4 Fix typo 2022-04-14 04:02:20 +08:00
GalaxySnail
e7755651a4 Add some tests for PEP 604 2022-04-14 03:32:43 +08:00
GalaxySnail
0c7384edc3 A naive implementation for PEP 604 2022-04-14 03:32:12 +08:00
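PEP 604 lets ``int | None`` stand in for ``Optional[int]``. A "naive" treatment, in the spirit of this commit, splits a union annotation at each ``|`` and handles the parts separately; the helper below is purely illustrative (it deliberately ignores nesting such as ``list[int | str]``):

```python
def split_naive_union(annotation: str):
    """Naively split a PEP 604 union annotation like "int | str | None"."""
    return [part.strip() for part in annotation.split("|")]


print(split_naive_union("int | str | None"))
```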
Dave Halter
8f15f38949 Revert a change for Python 2.7 compatibility (see also e267f63657) 2021-12-25 14:08:44 +01:00
Dave Halter
96af7e4077 The Python 3.6 requirement is now the lowest supported version 2021-12-25 13:37:35 +01:00
Dave Halter
929fa9b452 Fix a small issue in overload tests 2021-12-25 13:18:58 +01:00
Dave Halter
08c5ab821f Merge pull request #1826 from PeterJCLaw/fix-1801-typed-decorator-on-instance-method
Make typed decorators work for instance methods
2021-12-13 02:05:55 +01:00
Peter Law
b6f761f13c Make typed decorators work for instance methods
This feels incomplete when compared to FunctionMixin.py__get__,
however seems to work at least in the cut-down reported.

Fixes https://github.com/davidhalter/jedi/issues/1801.
2021-12-12 18:18:55 +00:00
Peter Law
72cf41f4c9 Lambdas in comprehensions need parentheses in Python > 3.8
Fixes https://github.com/davidhalter/jedi/issues/1824.
2021-12-12 18:17:53 +00:00
Dave Halter
3602c10916 Merge pull request #1821 from tomaarsen/patch-1
Typo in docstring of `extract_variable`
2021-11-17 13:44:08 +01:00
Dave Halter
601bfb3493 The readthedocs option submodules should not be part of the Python option 2021-11-17 13:39:21 +01:00
Dave Halter
021f081d8a Submodules should be part of the readthedocs build 2021-11-17 13:38:03 +01:00
Dave Halter
54af6fa86d Try to fix docs dependencies
Docs were not building on read the docs, see also: https://github.com/sphinx-doc/sphinx/issues/9788
2021-11-17 13:33:41 +01:00
Tom Aarsen
f193ae67e9 typo: "statemenet" -> "statement" 2021-11-17 12:59:13 +01:00
Dave Halter
fae26fa7a4 Last preparations for v0.18.1 2021-11-17 01:44:27 +01:00
Dave Halter
a276710f66 Merge pull request #1820 from davidhalter/changes
Some Changes for 0.18.1
2021-11-17 01:42:55 +01:00
Dave Halter
aa8eed8da4 Merge pull request #1819 from jerluc/master
Adds support for "async with" via #1818
2021-11-17 01:36:53 +01:00
jerluc
b2e647d598 Removing invalid test for async with open(...)
See explanation in https://github.com/davidhalter/jedi/pull/1819#issuecomment-970776091
2021-11-16 16:12:43 -08:00
Dave Halter
ec9b453379 Handle defined_names for values that have no context, fixes #1744, fixes #1745 2021-11-17 01:07:28 +01:00
Dave Halter
84d086a47b Fix an issue with whitespace after a dot at the end of a file, also part of #1748 2021-11-17 00:31:46 +01:00
Dave Halter
8bc9c8cda2 Fix an issue where a slice is indexed, fixes #1748 2021-11-17 00:14:59 +01:00
Dave Halter
a17b958078 Fix infer_default for params in REPL, fixes #1738 2021-11-16 23:36:22 +01:00
Dave Halter
656ecf502d Prepare CHANGELOG for 0.18.1 2021-11-16 23:27:01 +01:00
Dave Halter
b846043117 Add 3.10 to the supported Python versions 2021-11-16 23:19:21 +01:00
Dave Halter
6fa91726bf Fix a test in Python 3.10 that's not really important anyway 2021-11-16 23:08:05 +01:00
Dave Halter
42508d9309 Fix fixture annotations for pytest
This means mostly these:

@fixture
def foo() -> Generator[int, None, None]: ...
2021-11-16 22:57:25 +01:00
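For a fixture annotated ``-> Generator[int, None, None]`` like the one in the commit message, the value a test receives is the yield type, i.e. the first type argument. A sketch of that extraction with ``typing`` introspection (a hypothetical helper, not jedi's code):

```python
import collections.abc
from typing import Generator, get_args, get_origin


def fixture_value_type(annotation):
    """Return the type a test receives from an annotated pytest fixture."""
    if get_origin(annotation) is collections.abc.Generator:
        # Generator[Y, S, R] yields values of type Y.
        return get_args(annotation)[0]
    return annotation


print(fixture_value_type(Generator[int, None, None]))
```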
jerluc
8847848a03 Adds support for "async with" via #1818 2021-11-16 13:00:24 -08:00
Dave Halter
8bd969c24a Upgrade pytest 2021-11-16 21:51:03 +01:00
Dave Halter
458bb30884 Yaml got me again 2021-11-16 21:46:00 +01:00
Dave Halter
515e07227b Try to enable Python 3.10 in CI 2021-11-16 21:44:29 +01:00
Dave Halter
6cb5804227 Revert "Upgrade Django"
This reverts commit 195695edd3.
2021-11-16 21:32:15 +01:00
Dave Halter
e580d1f4d9 Fix a stub docs issue 2021-11-16 21:27:00 +01:00
Dave Halter
195695edd3 Upgrade Django 2021-11-16 21:10:12 +01:00
Dave Halter
42c5276e04 Merge pull request #1800 from Boerde/pytest_improve_fixture_completion
Improve completion for pytest fixtures
2021-11-16 21:09:35 +01:00
Dave Halter
bb5bed4937 Merge pull request #1805 from kirat-singh/support_nested_namespace_packages
fix(import): support for nested namespace packages
2021-10-09 15:20:59 +02:00
Kirat Singh
d872eef1a7 chore: remove unnecessary for loop 2021-10-06 13:15:20 +00:00
Kirat Singh
53e837055f fix(import): support for nested namespace packages
If multiple directories in sys.path provide a nested namespace
package, then jedi would only visit the first directory which
contained the package.  Fix this by saving the remaining path list in
the ImplicitNamespaceValue and add a test for it.
2021-10-02 04:09:27 +00:00
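Python's own import system already merges every sys.path portion of a namespace package into ``__path__``; the fix makes jedi do the same instead of stopping at the first directory. The behavior can be demonstrated with the standard library (the package name ``_demo_nspkg`` is made up for the example):

```python
import importlib
import sys
import tempfile
from pathlib import Path

# Two sys.path entries each provide a portion of the same namespace
# package "_demo_nspkg" -- no __init__.py anywhere.
base = Path(tempfile.mkdtemp())
for portion in ("site_a", "site_b"):
    (base / portion / "_demo_nspkg" / "nested").mkdir(parents=True)
    sys.path.insert(0, str(base / portion))
importlib.invalidate_caches()

nspkg = importlib.import_module("_demo_nspkg")
nested = importlib.import_module("_demo_nspkg.nested")
# Both portions show up, for the top-level and the nested package alike.
print(len(list(nspkg.__path__)), len(list(nested.__path__)))
```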
Dave Halter
65bc1c117b Merge pull request #1795 from frenzymadness/patch-1
inspect now raises OSError for objects without source file
2021-09-02 11:22:08 +02:00
Lumír 'Frenzy' Balhar
eab1b8be8b inspect now raises OSError for objects without source file
CPython issue: https://bugs.python.org/issue44648
2021-09-01 20:50:54 +02:00
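The linked CPython change means callers should expect ``OSError`` from ``inspect`` when an object has no backing source file, e.g. a function compiled from a string:

```python
import inspect

# A function compiled from a string has no source file on disk.
namespace = {}
exec(compile("def no_source(): pass", "<generated>", "exec"), namespace)

try:
    inspect.getsource(namespace["no_source"])
    raised = False
except OSError:
    raised = True
print(raised)
```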
boerde
3cf98f6ba1 paramters with annotation do not need special pytest handling 2021-08-29 09:17:04 +02:00
boerde
8808b5b64b added test to override fixture return value with annotation 2021-08-29 09:14:29 +02:00
Laurent Soest
fe50352f9c annotations should be preferred even when it is a generator 2021-08-28 21:04:57 +02:00
Laurent Soest
96b4330ef9 testing: added test to override generator with annotation 2021-08-28 21:02:45 +02:00
Dave Halter
1d944943c3 Merge pull request #1794 from PeterJCLaw/fix-quoted-generic-forward-refs
Fix quoted generic annotations
2021-07-25 20:02:38 +02:00
Peter Law
78a95f4751 Handle generics appearing within any quoted annotations
This hoists the solution added for return-type annotations to
also apply for input annotations so they work too.
2021-07-25 16:31:27 +01:00
Peter Law
599a1c3ee1 Handle generics appearing within quoted return annotations
This ensures that these quoted likely forwards references in
return type annotations behave like their non-quoted equivalents.

I suspect there may be other places which will need similar
adjustments, which may mean that we should push the conversion
a layer closer to the parsing (perhaps in `py__annotations__`?).

One case I know that this doesn't solve (but which likely needs
similar adjustment) is generics in return types of comment-style
annotations. They're less likely and may not be worth supporting
since all supported Python versions can use the in-syntax spelling
for annotations at this point.
2021-07-25 15:32:22 +01:00
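The quoted annotations these commits handle are the forward-reference spelling; at runtime ``typing.get_type_hints`` performs the same conversion, evaluating the string so the generic behaves like its unquoted equivalent:

```python
from typing import List, get_type_hints


def make_numbers() -> "List[int]":  # quoted (forward-reference style) annotation
    return [1, 2, 3]


# get_type_hints() evaluates the quoted string in the function's globals,
# so tools see the same generic type as for the unquoted spelling.
print(get_type_hints(make_numbers)["return"])
```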
Peter Law
6814a7336c Hoist common variable for additional re-use 2021-07-25 15:23:51 +01:00
Dave Halter
070f191f55 Merge pull request #1663 from PeterJCLaw/tidyups
Tidyups
2021-07-25 13:44:55 +02:00
Dave Halter
11e67ed319 Merge pull request #1793 from PeterJCLaw/fix-functools-wraps-module-scope
Fix module-scope passthrough function signatures
2021-07-25 13:43:00 +02:00
Peter Law
ab2eb570a8 Use search_ancestor for a more robust search 2021-07-24 17:27:27 +01:00
Peter Law
aa265a44e1 Have all py__file__ methods return a Path 2021-07-24 17:14:25 +01:00
Peter Law
25a3e31ca8 Add a __repr__ 2021-07-24 17:12:34 +01:00
Peter Law
87388ae00f Drop dead line 2021-07-24 17:12:34 +01:00
Peter Law
2d11e02fdb Remove redundant invalid documentation line
This is now replaced by the type signature.
2021-07-24 17:12:34 +01:00
Peter Law
392dcdf015 Fix potential bug passing exception to function excepting str
Found while adding type annotations.
2021-07-24 17:12:34 +01:00
Peter Law
b9fd84e11c Add sanity-check exception
Found by mypy while adding types.
2021-07-24 17:12:34 +01:00
Peter Law
75624f0e3c Convert more things to Python 3 idioms 2021-07-24 17:12:34 +01:00
Peter Law
6ad62e18d2 deque is in collections, not queue
Though it seems that the queue module does use it internally, which
is why this was working.
2021-07-24 17:12:34 +01:00
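As the commit notes, ``deque`` is defined in ``collections`` (the ``queue`` module merely uses it internally), so the correct import is:

```python
from collections import deque  # deque lives in collections, not queue

d = deque([2, 3])
d.appendleft(1)  # O(1) appends at both ends
d.append(4)
print(list(d))
```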
Peter Law
6787719c28 Ensure *args, **kwargs lookthrough works at module scope too
This means that passthrough signatures will be found for top level
functions, which is useful both where they're wrappered by
`functools.wraps` or not.

Fixes https://github.com/davidhalter/jedi/issues/1791.
2021-07-24 16:58:34 +01:00
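The "passthrough signature" being fixed is the pattern below: a ``*args, **kwargs`` wrapper whose real parameters come from the wrapped function. ``functools.wraps`` sets ``__wrapped__``, which ``inspect.signature`` follows, and this commit makes jedi find the same signature for module-level functions too (a generic illustration, not jedi's code):

```python
import functools
import inspect


def deco(func):
    @functools.wraps(func)  # copies metadata and sets wrapper.__wrapped__
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper


@deco
def greet(name: str, punctuation: str = "!") -> str:
    return f"Hello, {name}{punctuation}"


# inspect.signature follows __wrapped__, so the original parameters are
# visible through the *args/**kwargs passthrough.
print(inspect.signature(greet))
```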
Peter Law
bb40390225 Add identifiers to these test strings
This makes it easier to work out which one fails when pytest
reports a failure. Mostly useful when introducing failing tests,
which I'm about to do.
2021-07-24 16:15:41 +01:00
Peter Law
0d15347210 Remove confusing comment
I'm assuming that this is incorrect given that there _are_ arguments
where the comment suggests there aren't any.
2021-07-24 16:14:20 +01:00
Dan Rosén
41652507b3 Fix grammar in features.rst 2021-05-06 00:38:19 +02:00
Dave Halter
41fb6a0cde Merge pull request #1772 from josephbirkner/bugfix/zip-complete
Fixed ZIP import completion.
2021-04-29 23:56:14 +02:00
Joseph Birkner
a340fe077e Fixed ZIP completion. 2021-04-29 09:52:08 +02:00
Dave Halter
dcea842ac2 Revert "Upgrade django-stubs, fixes #1750"
This reverts commit ce5619cabb.
2021-02-26 23:09:22 +01:00
Dave Halter
ce5619cabb Upgrade django-stubs, fixes #1750 2021-02-26 22:30:09 +01:00
Dave Halter
0eb6720c11 Some Python objects suck, fixes #1755 2021-02-26 21:58:47 +01:00
Dave Halter
ee30843f22 Merge pull request #1741 from sfavazza/master
BUGFIX: endless loop in pytest plugin
2021-02-01 00:41:40 +01:00
Samuele FAVAZZA
613cb08325 BUGFIX: prevent an infinite loop seeking for a "conftest.py" file 2021-01-30 16:31:26 +01:00
Aivar Annamaa
9f41153eb2 Allow tweaking Interpreter sys_path (#1734) 2021-01-23 14:38:10 +01:00
Dave Halter
387d73990b Fix issues with getitem on compiled objects that have annotations, see #1719 2021-01-17 13:48:22 +01:00
Dave Halter
47d0318fa6 Paths are the default for modules 2021-01-14 02:00:14 +01:00
Dave Halter
7555dc0d45 Get rid of cast_path 2021-01-14 01:39:51 +01:00
Dave Halter
2a8b212af7 Move the module_injector 2021-01-14 01:35:18 +01:00
Dave Halter
837cb1106a Use Path instead of str if possible 2021-01-14 01:32:57 +01:00
Dave Halter
b6fd81f1e1 Another time avoiding a memory leak, also part of #1723 2021-01-14 01:18:00 +01:00
Dave Halter
0ff532b937 Refactor docstrings 2021-01-14 01:11:50 +01:00
Dave Halter
b9067ccdbb Avoid caching parso objects, fixes #1723 2021-01-14 00:29:34 +01:00
Dave Halter
44d77523b3 Fix a test that depended on correct cwd location an dnot having an x.py in a local directory 2021-01-10 16:31:37 +01:00
Dave Halter
6279791b24 Fix an issue with complete_search 2021-01-10 16:08:17 +01:00
Romain Rigaux
4597c7ebe7 Fix typo in docstring 2021-01-09 10:56:22 +01:00
Dave Halter
e6f18df1d2 unsafe -> not safe 2021-01-03 01:13:17 +01:00
Dave Halter
3428a24af0 Remove an outdated comment 2021-01-02 23:41:38 +01:00
Dave Halter
7a3d1f7cee Run CI on pull request 2021-01-02 23:40:14 +01:00
Dave Halter
8ef2ce232c Hopefully fix a Windows issue 2021-01-02 18:11:59 +01:00
Dave Halter
4ab7a53c19 Fix a compatibility issue for Python < 3.8 2021-01-02 17:37:30 +01:00
Dave Halter
c5fb2985a3 Use clearly defined project for tests to avoid scanning the 2000 typeshed files all the time 2021-01-02 15:31:57 +01:00
88 changed files with 1128 additions and 424 deletions


@@ -1,5 +1,5 @@
 name: ci
-on: push
+on: [push, pull_request]
 
 jobs:
   tests:
@@ -7,8 +7,8 @@ jobs:
     strategy:
       matrix:
         os: [ubuntu-20.04, windows-2019]
-        python-version: [3.9, 3.8, 3.7, 3.6]
-        environment: ['3.8', '3.9', '3.7', '3.6', 'interpreter']
+        python-version: ["3.10", "3.9", "3.8", "3.7", "3.6"]
+        environment: ['3.8', '3.10', '3.9', '3.7', '3.6', 'interpreter']
     steps:
     - name: Checkout code
       uses: actions/checkout@v2
@@ -27,9 +27,6 @@ jobs:
     - name: Install dependencies
       run: 'pip install .[testing]'
-    - name: Setup tmate session
-      uses: mxschmitt/action-tmate@v3
     - name: Run tests
       run: python -m pytest
       env:

.gitignore vendored

@@ -14,3 +14,4 @@ record.json
 /.pytest_cache
 /.mypy_cache
 /venv/
+.nvimrc


@@ -1,2 +1,11 @@
 version: 2
 python:
-  pip_install: true
+  install:
+    - method: pip
+      path: .
+      extra_requirements:
+        - docs
+submodules:
+  include: all


@@ -1,4 +1,4 @@
 Main Authors
 ------------
 
 - David Halter (@davidhalter) <davidhalter88@gmail.com>
@@ -61,6 +61,8 @@ Code Contributors
 - Vladislav Serebrennikov (@endilll)
 - Andrii Kolomoiets (@muffinmad)
 - Leo Ryu (@Leo-Ryu)
+- Joseph Birkner (@josephbirkner)
+- Márcio Mazza (@marciomazza)
 
 And a few more "anonymous" contributors.


@@ -6,7 +6,22 @@ Changelog
 Unreleased
 ++++++++++
 
+0.18.2 (2022-11-21)
++++++++++++++++++++
+
+- Added dataclass-equivalent for attrs.define
+- Find fixtures from Pytest entrypoints; Examples of pytest plugins installed
+  like this are pytest-django, pytest-sugar and Faker.
+- Fixed Project.search, when a venv was involved, which is why for example
+  ``:Pyimport django.db`` did not work in some cases in jedi-vim.
+- And many smaller bugfixes
+
 0.18.1 (2021-11-17)
 +++++++++++++++++++
 
 - Implict namespaces are now a separate types in ``Name().type``
 - Python 3.10 support
 - Mostly bugfixes
 
 0.18.0 (2020-12-25)
 +++++++++++++++++++


@@ -57,7 +57,7 @@ Supported Python Features
 Limitations
 -----------
 
-In general Jedi's limit are quite high, but for very big projects or very
+In general Jedi's limit is quite high, but for very big projects or very
 complex code, sometimes Jedi intentionally stops type inference, to avoid
 hanging for a long time.


@@ -27,7 +27,7 @@ ad
 load
 """
 
-__version__ = '0.18.0'
+__version__ = '0.18.2'
 
 from jedi.api import Script, Interpreter, set_debug_function, preload_module
 from jedi import settings


@@ -7,22 +7,6 @@ import sys
 import pickle
 
-def cast_path(string):
-    """
-    Take a bytes or str path and cast it to unicode.
-
-    Apparently it is perfectly fine to pass both byte and unicode objects into
-    the sys.path. This probably means that byte paths are normal at other
-    places as well.
-
-    Since this just really complicates everything and Python 2.7 will be EOL
-    soon anyway, just go with always strings.
-    """
-    if isinstance(string, bytes):
-        return str(string, encoding='UTF-8', errors='replace')
-    return str(string)
-
 
 def pickle_load(file):
     try:
         return pickle.load(file)


@@ -13,7 +13,6 @@ from pathlib import Path
 import parso
 from parso.python import tree
 
-from jedi._compatibility import cast_path
 from jedi.parser_utils import get_executable_nodes
 from jedi import debug
 from jedi import settings
@@ -100,13 +99,15 @@ class Script:
     """
     def __init__(self, code=None, *, path=None, environment=None, project=None):
         self._orig_path = path
+        # An empty path (also empty string) should always result in no path.
         if isinstance(path, str):
             path = Path(path)
         self.path = path.absolute() if path else None
 
         if code is None:
+            if path is None:
+                raise ValueError("Must provide at least one of code or path")
             # TODO add a better warning than the traceback!
             with open(path, 'rb') as f:
                 code = f.read()
@@ -152,7 +153,7 @@ class Script:
         if self.path is None:
             file_io = None
         else:
-            file_io = KnownContentFileIO(cast_path(self.path), self._code)
+            file_io = KnownContentFileIO(self.path, self._code)
         if self.path is not None and self.path.suffix == '.pyi':
             # We are in a stub file. Try to load the stub properly.
             stub_module = load_proper_stub_module(
@@ -580,7 +581,7 @@ class Script:
     @validate_line_column
     def extract_variable(self, line, column, *, new_name, until_line=None, until_column=None):
         """
-        Moves an expression to a new statemenet.
+        Moves an expression to a new statement.
 
         For example if you have the cursor on ``foo`` and provide a
         ``new_name`` called ``bar``::
@@ -709,7 +710,7 @@ class Interpreter(Script):
     """
     _allow_descriptor_getattr_default = True
 
-    def __init__(self, code, namespaces, **kwds):
+    def __init__(self, code, namespaces, *, project=None, **kwds):
         try:
             namespaces = [dict(n) for n in namespaces]
         except Exception:
@@ -722,16 +723,23 @@ class Interpreter(Script):
         if not isinstance(environment, InterpreterEnvironment):
             raise TypeError("The environment needs to be an InterpreterEnvironment subclass.")
-        super().__init__(code, environment=environment,
-                         project=Project(Path.cwd()), **kwds)
+        if project is None:
+            project = Project(Path.cwd())
+        super().__init__(code, environment=environment, project=project, **kwds)
         self.namespaces = namespaces
         self._inference_state.allow_descriptor_getattr = self._allow_descriptor_getattr_default
 
     @cache.memoize_method
     def _get_module_context(self):
+        if self.path is None:
+            file_io = None
+        else:
+            file_io = KnownContentFileIO(self.path, self._code)
         tree_module_value = ModuleValue(
             self._inference_state, self._module_node,
-            file_io=KnownContentFileIO(str(self.path), self._code),
+            file_io=file_io,
             string_names=('__main__',),
             code_lines=self._code_lines,
         )


@@ -27,7 +27,7 @@ from jedi.inference.compiled.mixed import MixedName
 from jedi.inference.names import ImportName, SubModuleName
 from jedi.inference.gradual.stub_value import StubModuleValue
 from jedi.inference.gradual.conversion import convert_names, convert_values
-from jedi.inference.base_value import ValueSet
+from jedi.inference.base_value import ValueSet, HasNoContext
 from jedi.api.keywords import KeywordName
 from jedi.api import completion_cache
 from jedi.api.helpers import filter_follow_imports
@@ -37,13 +37,17 @@ def _sort_names_by_start_pos(names):
     return sorted(names, key=lambda s: s.start_pos or (0, 0))
 
 
-def defined_names(inference_state, context):
+def defined_names(inference_state, value):
     """
     List sub-definitions (e.g., methods in class).
 
     :type scope: Scope
     :rtype: list of Name
     """
+    try:
+        context = value.as_context()
+    except HasNoContext:
+        return []
     filter = next(context.get_filters())
     names = [name for name in filter.values()]
     return [Name(inference_state, n) for n in _sort_names_by_start_pos(names)]
@@ -759,7 +763,7 @@ class Name(BaseName):
         """
         defs = self._name.infer()
         return sorted(
-            unite(defined_names(self._inference_state, d.as_context()) for d in defs),
+            unite(defined_names(self._inference_state, d) for d in defs),
             key=lambda s: s._name.start_pos or (0, 0)
         )


@@ -18,7 +18,8 @@ from jedi.inference import imports
 from jedi.inference.base_value import ValueSet
 from jedi.inference.helpers import infer_call_of_leaf, parse_dotted_names
 from jedi.inference.context import get_global_filters
-from jedi.inference.value import TreeInstance, ModuleValue
+from jedi.inference.value import TreeInstance
+from jedi.inference.docstring_utils import DocstringModule
 from jedi.inference.names import ParamNameWrapper, SubModuleName
 from jedi.inference.gradual.conversion import convert_values, convert_names
 from jedi.parser_utils import cut_value_at_position
@@ -194,7 +195,6 @@ class Completion:
         - In args: */**: no completion
         - In params (also lambda): no completion before =
         """
-        grammar = self._inference_state.grammar
         self.stack = stack = None
         self._position = (
@@ -277,6 +277,10 @@ class Completion:
             )
         elif nonterminals[-1] in ('trailer', 'dotted_name') and nodes[-1] == '.':
             dot = self._module_node.get_leaf_for_position(self._position)
+            if dot.type == "endmarker":
+                # This is a bit of a weird edge case, maybe we can somehow
+                # generalize this.
+                dot = leaf.get_previous_leaf()
             cached_name, n = self._complete_trailer(dot.get_previous_leaf())
             completion_names += n
         elif self._is_parameter_completion():
@@ -462,12 +466,12 @@ class Completion:
     def _complete_code_lines(self, code_lines):
         module_node = self._inference_state.grammar.parse(''.join(code_lines))
-        module_value = ModuleValue(
-            self._inference_state,
-            module_node,
+        module_value = DocstringModule(
+            in_module_context=self._module_context,
+            inference_state=self._inference_state,
+            module_node=module_node,
             code_lines=code_lines,
         )
-        module_value.parent_context = self._module_context
         return Completion(
             self._inference_state,
             module_value.as_context(),


@@ -17,7 +17,7 @@ import parso
 
 _VersionInfo = namedtuple('VersionInfo', 'major minor micro')
 
-_SUPPORTED_PYTHONS = ['3.9', '3.8', '3.7', '3.6']
+_SUPPORTED_PYTHONS = ['3.10', '3.9', '3.8', '3.7', '3.6']
 _SAFE_PATHS = ['/usr/bin', '/usr/local/bin']
 _CONDA_VAR = 'CONDA_PREFIX'
 _CURRENT_VERSION = '%s.%s' % (sys.version_info.major, sys.version_info.minor)


@@ -23,7 +23,7 @@ class RefactoringError(_JediError):
     Refactorings can fail for various reasons. So if you work with refactorings
     like :meth:`.Script.rename`, :meth:`.Script.inline`,
     :meth:`.Script.extract_variable` and :meth:`.Script.extract_function`, make
-    sure to catch these. The descriptions in the errors are ususally valuable
+    sure to catch these. The descriptions in the errors are usually valuable
     for end users.
 
     A typical ``RefactoringError`` would tell the user that inlining is not


@@ -205,7 +205,6 @@ def filter_follow_imports(names, follow_builtin_imports=False):
 class CallDetails:
     def __init__(self, bracket_leaf, children, position):
-        ['bracket_leaf', 'call_index', 'keyword_name_str']
         self.bracket_leaf = bracket_leaf
         self._children = children
         self._position = position
@@ -281,7 +280,7 @@ class CallDetails:
     def count_positional_arguments(self):
         count = 0
         for star_count, key_start, had_equal in self._list_arguments()[:-1]:
-            if star_count:
+            if star_count or key_start:
                 break
             count += 1
         return count
@@ -307,7 +306,7 @@ def _iter_arguments(nodes, position):
             first = node.children[0]
             second = node.children[1]
             if second == '=':
-                if second.start_pos < position:
+                if second.start_pos < position and first.type == 'name':
                     yield 0, first.value, True
                 else:
                     yield 0, remove_after_pos(first), False


@@ -106,7 +106,16 @@ class Project:
         with open(self._get_json_path(self._path), 'w') as f:
             return json.dump((_SERIALIZER_VERSION, data), f)
 
-    def __init__(self, path, **kwargs):
+    def __init__(
+        self,
+        path,
+        *,
+        environment_path=None,
+        load_unsafe_extensions=False,
+        sys_path=None,
+        added_sys_path=(),
+        smart_sys_path=True,
+    ) -> None:
         """
         :param path: The base path for this project.
         :param environment_path: The Python executable path, typically the path
@@ -125,25 +134,22 @@ class Project:
            local directories. Otherwise you will have to rely on your packages
            being properly configured on the ``sys.path``.
         """
-        def py2_comp(path, environment_path=None, load_unsafe_extensions=False,
-                     sys_path=None, added_sys_path=(), smart_sys_path=True):
-            if isinstance(path, str):
-                path = Path(path).absolute()
-            self._path = path
-            self._environment_path = environment_path
-            if sys_path is not None:
-                # Remap potential pathlib.Path entries
-                sys_path = list(map(str, sys_path))
-            self._sys_path = sys_path
-            self._smart_sys_path = smart_sys_path
-            self._load_unsafe_extensions = load_unsafe_extensions
-            self._django = False
-            # Remap potential pathlib.Path entries
-            self.added_sys_path = list(map(str, added_sys_path))
-            """The sys path that is going to be added at the end of the """
-
-        py2_comp(path, **kwargs)
+        if isinstance(path, str):
+            path = Path(path).absolute()
+        self._path = path
+        self._environment_path = environment_path
+        if sys_path is not None:
+            # Remap potential pathlib.Path entries
+            sys_path = list(map(str, sys_path))
+        self._sys_path = sys_path
+        self._smart_sys_path = smart_sys_path
+        self._load_unsafe_extensions = load_unsafe_extensions
+        self._django = False
+        # Remap potential pathlib.Path entries
+        self.added_sys_path = list(map(str, added_sys_path))
+        """The sys path that is going to be added at the end of the """
 
     @property
     def path(self):
@@ -328,7 +334,8 @@ class Project:
         )
 
         # 2. Search for identifiers in the project.
-        for module_context in search_in_file_ios(inference_state, file_ios, name):
+        for module_context in search_in_file_ios(inference_state, file_ios,
+                                                 name, complete=complete):
             names = get_module_names(module_context.tree_node, all_scopes=all_scopes)
             names = [module_context.create_name(n) for n in names]
             names = _remove_imports(names)
@@ -345,9 +352,8 @@ class Project:
         # 3. Search for modules on sys.path
         sys_path = [
             p for p in self._get_sys_path(inference_state)
-            # Exclude folders that are handled by recursing of the Python
-            # folders.
-            if not p.startswith(str(self._path))
+            # Exclude the current folder which is handled by recursing the folders.
+            if p != self._path
         ]
         names = list(iter_module_names(inference_state, empty_module_context, sys_path))
         yield from search_in_module(
@@ -426,7 +432,6 @@ def get_default_project(path=None):
                 probable_path = dir
 
     if probable_path is not None:
-        # TODO search for setup.py etc
         return Project(probable_path)
 
     if first_no_init_file is not None:

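The rewritten `Project.__init__` replaces the `**kwargs` plus inner `py2_comp` workaround with keyword-only parameters, so a misspelled option fails immediately with a `TypeError` instead of being silently swallowed. A reduced sketch of the pattern (only a subset of the diff's parameters):

```python
from pathlib import Path

class Project:
    def __init__(self, path, *, smart_sys_path=True, added_sys_path=()):
        # Parameters after the bare * can only be passed by keyword.
        if isinstance(path, str):
            path = Path(path).absolute()
        self._path = path
        self._smart_sys_path = smart_sys_path
        self.added_sys_path = list(map(str, added_sys_path))

p = Project('.', smart_sys_path=False)
assert p._smart_sys_path is False

try:
    Project('.', False)  # positional use of a keyword-only option
except TypeError:
    pass
else:
    raise AssertionError("expected TypeError")
```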

@@ -42,11 +42,17 @@ class ChangedFile:
if self._from_path is None:
from_p = ''
else:
from_p = self._from_path.relative_to(project_path)
try:
from_p = self._from_path.relative_to(project_path)
except ValueError:  # Happens if the path is not inside the project_path
from_p = self._from_path
if self._to_path is None:
to_p = ''
else:
to_p = self._to_path.relative_to(project_path)
try:
to_p = self._to_path.relative_to(project_path)
except ValueError:
to_p = self._to_path
diff = difflib.unified_diff(
old_lines, new_lines,
fromfile=str(from_p),

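`PurePath.relative_to` raises `ValueError` when the target is not inside the base path, which is exactly the case the `try`/`except` blocks above guard against:

```python
from pathlib import PurePosixPath

project = PurePosixPath('/home/user/project')
inside = PurePosixPath('/home/user/project/src/main.py')
outside = PurePosixPath('/tmp/other.py')

assert inside.relative_to(project) == PurePosixPath('src/main.py')

try:
    rel = outside.relative_to(project)
except ValueError:
    rel = outside  # fall back to the absolute path, as in the diff
assert rel == outside
```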

@@ -106,10 +106,7 @@ def dbg(message, *args, color='GREEN'):
debug_function(color, i + 'dbg: ' + message % tuple(repr(a) for a in args))
def warning(message, *args, **kwargs):
format = kwargs.pop('format', True)
assert not kwargs
def warning(message, *args, format=True):
if debug_function and enable_warning:
i = ' ' * _debug_indent
if format:


@@ -90,7 +90,7 @@ class InferenceState:
self.compiled_subprocess = environment.get_inference_state_subprocess(self)
self.grammar = environment.get_grammar()
self.latest_grammar = parso.load_grammar(version='3.7')
self.latest_grammar = parso.load_grammar(version='3.10')
self.memoize_cache = {} # for memoize decorators
self.module_cache = imports.ModuleCache() # does the job of `sys.modules`.
self.stub_module_cache = {} # Dict[Tuple[str, ...], Optional[ModuleValue]]
@@ -181,8 +181,6 @@ class InferenceState:
def parse_and_get_code(self, code=None, path=None,
use_latest_grammar=False, file_io=None, **kwargs):
if path is not None:
path = str(path)
if code is None:
if file_io is None:
file_io = FileIO(path)


@@ -22,6 +22,10 @@ from jedi.cache import memoize_method
sentinel = object()
class HasNoContext(Exception):
pass
class HelperValueMixin:
def get_root_context(self):
value = self
@@ -261,7 +265,7 @@ class Value(HelperValueMixin):
return self.parent_context.is_stub()
def _as_context(self):
raise NotImplementedError('Not all values need to be converted to contexts: %s', self)
raise HasNoContext
@property
def name(self):
@@ -293,7 +297,7 @@ class Value(HelperValueMixin):
just the `_T` generic parameter.
`value_set`: represents the actual argument passed to the parameter
we're inferrined for, or (for recursive calls) their types. In the
we're inferred for, or (for recursive calls) their types. In the
above example this would first be the representation of the list
`[1]` and then, when recursing, just of `1`.
"""


@@ -8,6 +8,8 @@ import warnings
import re
import builtins
import typing
from pathlib import Path
from typing import Optional
from jedi.inference.compiled.getattr_static import getattr_static
@@ -179,9 +181,9 @@ class DirectObjectAccess:
def py__bool__(self):
return bool(self._obj)
def py__file__(self):
def py__file__(self) -> Optional[Path]:
try:
return self._obj.__file__
return Path(self._obj.__file__)
except AttributeError:
return None
@@ -211,7 +213,22 @@ class DirectObjectAccess:
def py__getitem__all_values(self):
if isinstance(self._obj, dict):
return [self._create_access_path(v) for v in self._obj.values()]
return self.py__iter__list()
if isinstance(self._obj, (list, tuple)):
return [self._create_access_path(v) for v in self._obj]
if self.is_instance():
cls = DirectObjectAccess(self._inference_state, self._obj.__class__)
return cls.py__getitem__all_values()
try:
getitem = self._obj.__getitem__
except AttributeError:
pass
else:
annotation = DirectObjectAccess(self._inference_state, getitem).get_return_annotation()
if annotation is not None:
return [annotation]
return None
def py__simple_getitem__(self, index):
if type(self._obj) not in ALLOWED_GETITEM_TYPES:
@@ -221,8 +238,14 @@ class DirectObjectAccess:
return self._create_access_path(self._obj[index])
def py__iter__list(self):
if not hasattr(self._obj, '__getitem__'):
try:
iter_method = self._obj.__iter__
except AttributeError:
return None
else:
p = DirectObjectAccess(self._inference_state, iter_method).get_return_annotation()
if p is not None:
return [p]
if type(self._obj) not in ALLOWED_GETITEM_TYPES:
# Get rid of side effects, we won't call custom `__getitem__`s.
@@ -306,9 +329,9 @@ class DirectObjectAccess:
except TypeError:
return False
def is_allowed_getattr(self, name, unsafe=False):
def is_allowed_getattr(self, name, safe=True):
# TODO this API is ugly.
if unsafe:
if not safe:
# Unsafe is mostly used to check for __getattr__/__getattribute__.
# getattr_static works for properties, but the underscore methods
# are just ignored (because it's safer and avoids more code
@@ -361,7 +384,7 @@ class DirectObjectAccess:
except AttributeError:
pass
else:
if module is not None:
if module is not None and isinstance(module, str):
try:
__import__(module)
# For some modules like _sqlite3, the __module__ for classes is

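The `is_allowed_getattr` rename from `unsafe` to `safe` is about `inspect.getattr_static`, which looks up an attribute without triggering descriptors such as properties; that is what makes the "safe" mode safe during REPL completion:

```python
import inspect

class Risky:
    @property
    def attr(self):
        raise RuntimeError("side effect during completion!")

obj = Risky()
# Plain getattr(obj, "attr") would execute the property body.
# getattr_static returns the descriptor itself without running user code.
descriptor = inspect.getattr_static(obj, "attr")
assert isinstance(descriptor, property)
```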

@@ -34,7 +34,7 @@ class MixedObject(ValueWrapper):
This combined logic makes it possible to provide more powerful REPL
completion. It allows side effects that are not noticeable with the default
parser structure to still be completeable.
parser structure to still be completable.
The biggest difference from CompiledValue to MixedObject is that we are
generally dealing with Python code and not with C code. This will generate
@@ -187,7 +187,7 @@ def _find_syntax_node_name(inference_state, python_object):
try:
python_object = _get_object_to_check(python_object)
path = inspect.getsourcefile(python_object)
except TypeError:
except (OSError, TypeError):
# The type might not be known (e.g. class_with_dict.__weakref__)
return None
path = None if path is None else Path(path)
@@ -267,7 +267,7 @@ def _find_syntax_node_name(inference_state, python_object):
@inference_state_function_cache()
def _create(inference_state, compiled_value, module_context):
# TODO accessing this is bad, but it probably doesn't matter that much,
# because we're working with interpreteters only here.
# because we're working with interpreters only here.
python_object = compiled_value.access_handle.access._obj
result = _find_syntax_node_name(inference_state, python_object)
if result is None:


@@ -7,6 +7,7 @@ goals:
2. Make it possible to handle different Python versions as well as virtualenvs.
"""
import collections
import os
import sys
import queue
@@ -168,7 +169,7 @@ class CompiledSubprocess:
def __init__(self, executable, env_vars=None):
self._executable = executable
self._env_vars = env_vars
self._inference_state_deletion_queue = queue.deque()
self._inference_state_deletion_queue = collections.deque()
self._cleanup_callable = lambda: None
def __repr__(self):


@@ -4,10 +4,10 @@ import inspect
import importlib
import warnings
from pathlib import Path
from zipimport import zipimporter
from zipfile import ZipFile
from zipimport import zipimporter, ZipImportError
from importlib.machinery import all_suffixes
from jedi._compatibility import cast_path
from jedi.inference.compiled import access
from jedi import debug
from jedi import parser_utils
@@ -15,7 +15,7 @@ from jedi.file_io import KnownContentFileIO, ZipFileIO
def get_sys_path():
return list(map(cast_path, sys.path))
return sys.path
def load_module(inference_state, **kwargs):
@@ -93,15 +93,22 @@ def _iter_module_names(inference_state, paths):
# Python modules/packages
for path in paths:
try:
dirs = os.scandir(path)
dir_entries = ((entry.name, entry.is_dir()) for entry in os.scandir(path))
except OSError:
# The file might not exist or reading it might lead to an error.
debug.warning("Not possible to list directory: %s", path)
continue
for dir_entry in dirs:
name = dir_entry.name
try:
zip_import_info = zipimporter(path)
# Unfortunately, there is no public way to access zipimporter's
# private _files member. We therefore have to use a
# custom function to iterate over the files.
dir_entries = _zip_list_subdirectory(
zip_import_info.archive, zip_import_info.prefix)
except ZipImportError:
# The file might not exist or reading it might lead to an error.
debug.warning("Not possible to list directory: %s", path)
continue
for name, is_dir in dir_entries:
# First Namespaces then modules/stubs
if dir_entry.is_dir():
if is_dir:
# pycache is obviously not an interesting namespace. Also the
# name must be a valid identifier.
if name != '__pycache__' and name.isidentifier():
@@ -144,7 +151,11 @@ def _find_module(string, path=None, full_name=None, is_global_search=True):
spec = find_spec(string, p)
if spec is not None:
if spec.origin == "frozen":
continue
loader = spec.loader
if loader is None and not spec.has_location:
# This is a namespace package.
full_name = string if not path else full_name
@@ -190,7 +201,7 @@ def _from_loader(loader, string):
except AttributeError:
return None, is_package
else:
module_path = cast_path(get_filename(string))
module_path = get_filename(string)
# To avoid unicode and read bytes, "overwrite" loader.get_source if
# possible.
@@ -212,7 +223,7 @@ def _from_loader(loader, string):
if code is None:
return None, is_package
if isinstance(loader, zipimporter):
return ZipFileIO(module_path, code, Path(cast_path(loader.archive))), is_package
return ZipFileIO(module_path, code, Path(loader.archive)), is_package
return KnownContentFileIO(module_path, code), is_package
@@ -230,6 +241,17 @@ def _get_source(loader, fullname):
name=fullname)
def _zip_list_subdirectory(zip_path, zip_subdir_path):
zip_file = ZipFile(zip_path)
zip_subdir_path = Path(zip_subdir_path)
zip_content_file_paths = zip_file.namelist()
for raw_file_name in zip_content_file_paths:
file_path = Path(raw_file_name)
if file_path.parent == zip_subdir_path:
file_path = file_path.relative_to(zip_subdir_path)
yield file_path.name, raw_file_name.endswith("/")
class ImplicitNSInfo:
"""Stores information returned from an implicit namespace spec"""
def __init__(self, name, paths):

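Since `zipimporter` exposes no public file listing, the diff iterates `ZipFile.namelist()` and filters the entries down to a single subdirectory. A self-contained sketch of the same listing logic (the helper mirrors the diff's `_zip_list_subdirectory`), run against a zip built in memory:

```python
import io
import zipfile
from pathlib import Path

def zip_list_subdirectory(zip_file, zip_subdir_path):
    subdir = Path(zip_subdir_path)
    for raw_name in zip_file.namelist():
        p = Path(raw_name)
        if p.parent == subdir:
            # Directory entries in a zip end with a slash.
            yield p.name, raw_name.endswith('/')

buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as zf:
    zf.writestr('pkg/', '')
    zf.writestr('pkg/mod.py', 'x = 1')
    zf.writestr('pkg/sub/', '')

with zipfile.ZipFile(buf) as zf:
    entries = sorted(zip_list_subdirectory(zf, 'pkg'))
assert entries == [('mod.py', False), ('sub', True)]
```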

@@ -5,10 +5,10 @@ import re
from functools import partial
from inspect import Parameter
from pathlib import Path
from typing import Optional
from jedi import debug
from jedi.inference.utils import to_list
from jedi._compatibility import cast_path
from jedi.cache import memoize_method
from jedi.inference.filters import AbstractFilter
from jedi.inference.names import AbstractNameDefinition, ValueNameMixin, \
@@ -167,7 +167,7 @@ class CompiledValue(Value):
except AttributeError:
return super().py__simple_getitem__(index)
if access is None:
return NO_VALUES
return super().py__simple_getitem__(index)
return ValueSet([create_from_access_path(self.inference_state, access)])
@@ -293,10 +293,7 @@ class CompiledModule(CompiledValue):
return CompiledModuleContext(self)
def py__path__(self):
paths = self.access_handle.py__path__()
if paths is None:
return None
return map(cast_path, paths)
return self.access_handle.py__path__()
def is_package(self):
return self.py__path__() is not None
@@ -309,11 +306,8 @@ class CompiledModule(CompiledValue):
return ()
return tuple(name.split('.'))
def py__file__(self):
path = cast_path(self.access_handle.py__file__())
if path is None:
return None
return Path(path)
def py__file__(self) -> Optional[Path]:
return self.access_handle.py__file__() # type: ignore[no-any-return]
class CompiledName(AbstractNameDefinition):
@@ -440,7 +434,7 @@ class CompiledValueFilter(AbstractFilter):
access_handle = self.compiled_value.access_handle
return self._get(
name,
lambda name, unsafe: access_handle.is_allowed_getattr(name, unsafe),
lambda name, safe: access_handle.is_allowed_getattr(name, safe=safe),
lambda name: name in access_handle.dir(),
check_has_attribute=True
)
@@ -454,7 +448,7 @@ class CompiledValueFilter(AbstractFilter):
has_attribute, is_descriptor = allowed_getattr_callback(
name,
unsafe=self._inference_state.allow_descriptor_getattr
safe=not self._inference_state.allow_descriptor_getattr
)
if check_has_attribute and not has_attribute:
return []
@@ -478,7 +472,7 @@ class CompiledValueFilter(AbstractFilter):
from jedi.inference.compiled import builtin_from_name
names = []
needs_type_completions, dir_infos = self.compiled_value.access_handle.get_dir_infos()
# We could use `unsafe` here as well, especially as a parameter to
# We could use `safe=False` here as well, especially as a parameter to
# get_dir_infos. But this would lead to a lot of property executions
# that are probably not wanted. The drawback for this is that we
# have a different name for `get` and `values`. For `get` we always
@@ -486,7 +480,7 @@ class CompiledValueFilter(AbstractFilter):
for name in dir_infos:
names += self._get(
name,
lambda name, unsafe: dir_infos[name],
lambda name, safe: dir_infos[name],
lambda name: name in dir_infos,
)


@@ -1,5 +1,7 @@
from abc import abstractmethod
from contextlib import contextmanager
from pathlib import Path
from typing import Optional
from parso.tree import search_ancestor
from parso.python.tree import Name
@@ -307,8 +309,8 @@ class FunctionContext(TreeContextMixin, ValueContext):
class ModuleContext(TreeContextMixin, ValueContext):
def py__file__(self):
return self._value.py__file__()
def py__file__(self) -> Optional[Path]:
return self._value.py__file__() # type: ignore[no-any-return]
def get_filters(self, until_position=None, origin_scope=None):
filters = self._value.get_filters(origin_scope)
@@ -325,7 +327,7 @@ class ModuleContext(TreeContextMixin, ValueContext):
yield from filters
def get_global_filter(self):
return GlobalNameFilter(self, self.tree_node)
return GlobalNameFilter(self)
@property
def string_names(self):
@@ -355,8 +357,8 @@ class NamespaceContext(TreeContextMixin, ValueContext):
def string_names(self):
return self._value.string_names
def py__file__(self):
return self._value.py__file__()
def py__file__(self) -> Optional[Path]:
return self._value.py__file__() # type: ignore[no-any-return]
class ClassContext(TreeContextMixin, ValueContext):
@@ -405,8 +407,8 @@ class CompiledModuleContext(CompiledContext):
def string_names(self):
return self._value.string_names
def py__file__(self):
return self._value.py__file__()
def py__file__(self) -> Optional[Path]:
return self._value.py__file__() # type: ignore[no-any-return]
def _get_global_filters_for_name(context, name_or_none, position):


@@ -0,0 +1,21 @@
from jedi.inference.value import ModuleValue
from jedi.inference.context import ModuleContext
class DocstringModule(ModuleValue):
def __init__(self, in_module_context, **kwargs):
super().__init__(**kwargs)
self._in_module_context = in_module_context
def _as_context(self):
return DocstringModuleContext(self, self._in_module_context)
class DocstringModuleContext(ModuleContext):
def __init__(self, module_value, in_module_context):
super().__init__(module_value)
self._in_module_context = in_module_context
def get_filters(self, origin_scope=None, until_position=None):
yield from super().get_filters(until_position=until_position)
yield from self._in_module_context.get_filters()


@@ -17,12 +17,10 @@ annotations.
import re
import warnings
from textwrap import dedent
from parso import parse, ParserSyntaxError
from jedi import debug
from jedi.common import indent_block
from jedi.inference.cache import inference_state_method_cache
from jedi.inference.base_value import iterator_to_value_set, ValueSet, \
NO_VALUES
@@ -182,52 +180,40 @@ def _strip_rst_role(type_str):
def _infer_for_statement_string(module_context, string):
code = dedent("""
def pseudo_docstring_stuff():
'''
Create a pseudo function for docstring statements.
Need this docstring so that if the below part is not valid Python this
is still a function.
'''
{}
""")
if string is None:
return []
for element in re.findall(r'((?:\w+\.)*\w+)\.', string):
# Try to import module part in dotted name.
# (e.g., 'threading' in 'threading.Thread').
string = 'import %s\n' % element + string
potential_imports = re.findall(r'((?:\w+\.)*\w+)\.', string)
# Try to import module part in dotted name.
# (e.g., 'threading' in 'threading.Thread').
imports = "\n".join(f"import {p}" for p in potential_imports)
string = f'{imports}\n{string}'
debug.dbg('Parse docstring code %s', string, color='BLUE')
grammar = module_context.inference_state.grammar
try:
module = grammar.parse(code.format(indent_block(string)), error_recovery=False)
module = grammar.parse(string, error_recovery=False)
except ParserSyntaxError:
return []
try:
funcdef = next(module.iter_funcdefs())
# First pick suite, then simple_stmt and then the node,
# which is also not the last item, because there's a newline.
stmt = funcdef.children[-1].children[-1].children[-2]
# It's not the last item, because that's an end marker.
stmt = module.children[-2]
except (AttributeError, IndexError):
return []
if stmt.type not in ('name', 'atom', 'atom_expr'):
return []
from jedi.inference.value import FunctionValue
function_value = FunctionValue(
module_context.inference_state,
module_context,
funcdef
# Here we basically use a fake module that also uses the filters in
# the actual module.
from jedi.inference.docstring_utils import DocstringModule
m = DocstringModule(
in_module_context=module_context,
inference_state=module_context.inference_state,
module_node=module,
code_lines=[],
)
func_execution_context = function_value.as_context()
# Use the module of the param.
# TODO this module is not the module of the param in case of a function
# call. In that case it's the module of the function call.
# stuffed with content from a function call.
return list(_execute_types_in_stmt(func_execution_context, stmt))
return list(_execute_types_in_stmt(m.as_context(), stmt))
def _execute_types_in_stmt(module_context, stmt):

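The regex `r'((?:\w+\.)*\w+)\.'` in the docstring hunk collects every dotted prefix in a type string, so `threading.Thread` gets an `import threading` prepended before the string is parsed:

```python
import re

def imports_for(type_string):
    # Try to import the module part of each dotted name
    # (e.g., 'threading' in 'threading.Thread').
    potential_imports = re.findall(r'((?:\w+\.)*\w+)\.', type_string)
    return "\n".join(f"import {p}" for p in potential_imports)

assert imports_for("threading.Thread") == "import threading"
assert imports_for("os.path.join") == "import os.path"
assert imports_for("int") == ""
```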

@@ -12,7 +12,7 @@ from parso.python.tree import Name, UsedNamesMapping
from jedi.inference import flow_analysis
from jedi.inference.base_value import ValueSet, ValueWrapper, \
LazyValueWrapper
from jedi.parser_utils import get_cached_parent_scope
from jedi.parser_utils import get_cached_parent_scope, get_parso_cache_node
from jedi.inference.utils import to_list
from jedi.inference.names import TreeNameDefinition, ParamName, \
AnonymousParamName, AbstractNameDefinition, NameWrapper
@@ -54,11 +54,15 @@ class FilterWrapper:
return self.wrap_names(self._wrapped_filter.values())
def _get_definition_names(used_names, name_key):
def _get_definition_names(parso_cache_node, used_names, name_key):
if parso_cache_node is None:
names = used_names.get(name_key, ())
return tuple(name for name in names if name.is_definition(include_setitem=True))
try:
for_module = _definition_name_cache[used_names]
for_module = _definition_name_cache[parso_cache_node]
except KeyError:
for_module = _definition_name_cache[used_names] = {}
for_module = _definition_name_cache[parso_cache_node] = {}
try:
return for_module[name_key]
@@ -70,18 +74,40 @@ def _get_definition_names(used_names, name_key):
return result
class AbstractUsedNamesFilter(AbstractFilter):
class _AbstractUsedNamesFilter(AbstractFilter):
name_class = TreeNameDefinition
def __init__(self, parent_context, parser_scope):
self._parser_scope = parser_scope
self._module_node = self._parser_scope.get_root_node()
self._used_names = self._module_node.get_used_names()
def __init__(self, parent_context, node_context=None):
if node_context is None:
node_context = parent_context
self._node_context = node_context
self._parser_scope = node_context.tree_node
module_context = node_context.get_root_context()
# It is quite hacky that we have to do this. This is for caching
# certain things with a WeakKeyDictionary. However, parso intentionally
# uses slots (to save memory) and therefore we end up with having to
# have a weak reference to the object that caches the tree.
#
# Previously we have tried to solve this by using a weak reference onto
# used_names. However that also does not work, because it has a
# reference from the module, which itself is referenced by any node
# through parents.
path = module_context.py__file__()
if path is None:
# If the path is None, there is no guarantee that parso caches it.
self._parso_cache_node = None
else:
self._parso_cache_node = get_parso_cache_node(
module_context.inference_state.latest_grammar
if module_context.is_stub() else module_context.inference_state.grammar,
path
)
self._used_names = module_context.tree_node.get_used_names()
self.parent_context = parent_context
def get(self, name):
return self._convert_names(self._filter(
_get_definition_names(self._used_names, name),
_get_definition_names(self._parso_cache_node, self._used_names, name),
))
def _convert_names(self, names):
@@ -92,7 +118,7 @@ class AbstractUsedNamesFilter(AbstractFilter):
name
for name_key in self._used_names
for name in self._filter(
_get_definition_names(self._used_names, name_key),
_get_definition_names(self._parso_cache_node, self._used_names, name_key),
)
)
@@ -100,7 +126,7 @@ class AbstractUsedNamesFilter(AbstractFilter):
return '<%s: %s>' % (self.__class__.__name__, self.parent_context)
class ParserTreeFilter(AbstractUsedNamesFilter):
class ParserTreeFilter(_AbstractUsedNamesFilter):
def __init__(self, parent_context, node_context=None, until_position=None,
origin_scope=None):
"""
@@ -109,10 +135,7 @@ class ParserTreeFilter(AbstractUsedNamesFilter):
value, but for some type inference it's important to have a local
value of the other classes.
"""
if node_context is None:
node_context = parent_context
super().__init__(parent_context, node_context.tree_node)
self._node_context = node_context
super().__init__(parent_context, node_context)
self._origin_scope = origin_scope
self._until_position = until_position
@@ -126,7 +149,7 @@ class ParserTreeFilter(AbstractUsedNamesFilter):
if parent.type == 'trailer':
return False
base_node = parent if parent.type in ('classdef', 'funcdef') else name
return get_cached_parent_scope(self._used_names, base_node) == self._parser_scope
return get_cached_parent_scope(self._parso_cache_node, base_node) == self._parser_scope
def _check_flows(self, names):
for name in sorted(names, key=lambda name: name.start_pos, reverse=True):
@@ -182,7 +205,7 @@ class AnonymousFunctionExecutionFilter(_FunctionExecutionFilter):
return AnonymousParamName(self._function_value, name)
class GlobalNameFilter(AbstractUsedNamesFilter):
class GlobalNameFilter(_AbstractUsedNamesFilter):
def get(self, name):
try:
names = self._used_names[name]

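The long comment in `_AbstractUsedNamesFilter` is about a real CPython limitation: instances of classes that use `__slots__` without a `__weakref__` slot cannot be weakly referenced, so they cannot serve as `WeakKeyDictionary` keys, which is why the cache is keyed by a parso cache node instead. A demonstration with hypothetical stand-in classes:

```python
import weakref

class SlottedNode:
    __slots__ = ('value',)  # no '__weakref__' slot -> not weak-referenceable

class CacheNode:
    pass  # regular class, supports weak references

try:
    weakref.ref(SlottedNode())
except TypeError:
    pass  # this is exactly the limitation the comment describes
else:
    raise AssertionError("expected TypeError")

cache = weakref.WeakKeyDictionary()
node = CacheNode()
cache[node] = {'definitions': ()}
assert len(cache) == 1
# The entry disappears automatically once `node` is garbage collected.
```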

@@ -196,13 +196,43 @@ def py__annotations__(funcdef):
return dct
def resolve_forward_references(context, all_annotations):
def resolve(node):
if node is None or node.type != 'string':
return node
node = _get_forward_reference_node(
context,
context.inference_state.compiled_subprocess.safe_literal_eval(
node.value,
),
)
if node is None:
# There was a string, but it's not a valid annotation
return None
# The forward reference tree has an additional root node ('eval_input')
# that we don't want. Extract the node we do want, that is equivalent to
# the nodes returned by `py__annotations__` for a non-quoted node.
node = node.children[0]
return node
return {name: resolve(node) for name, node in all_annotations.items()}
@inference_state_method_cache()
def infer_return_types(function, arguments):
"""
Infers the type of a function's return value,
according to type annotations.
"""
all_annotations = py__annotations__(function.tree_node)
context = function.get_default_param_context()
all_annotations = resolve_forward_references(
context,
py__annotations__(function.tree_node),
)
annotation = all_annotations.get("return", None)
if annotation is None:
# If there is no Python 3-type annotation, look for an annotation
@@ -217,11 +247,10 @@ def infer_return_types(function, arguments):
return NO_VALUES
return _infer_annotation_string(
function.get_default_param_context(),
context,
match.group(1).strip()
).execute_annotation()
context = function.get_default_param_context()
unknown_type_vars = find_unknown_type_vars(context, annotation)
annotation_values = infer_annotation(context, annotation)
if not unknown_type_vars:

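The new `resolve_forward_references` helper evaluates quoted annotations so they behave like unquoted ones. The standard library offers the same resolution for live functions via `typing.get_type_hints`, which illustrates the before/after:

```python
from typing import get_type_hints

def f(x: "int") -> "list":
    return [x]

# Quoted (forward-reference) annotations stay as plain strings.
assert f.__annotations__["x"] == "int"

# get_type_hints evaluates them, much like resolve_forward_references
# turns the string back into a usable annotation node.
hints = get_type_hints(f)
assert hints == {"x": int, "return": list}
```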

@@ -86,6 +86,8 @@ class StubFilter(ParserTreeFilter):
# Imports in stub files are only public if they have an "as"
# export.
definition = name.get_definition()
if definition is None:
return False
if definition.type in ('import_from', 'import_name'):
if name.parent.type not in ('import_as_name', 'dotted_as_name'):
return False


@@ -7,7 +7,6 @@ from pathlib import Path
from jedi import settings
from jedi.file_io import FileIO
from jedi._compatibility import cast_path
from jedi.parser_utils import get_cached_code_lines
from jedi.inference.base_value import ValueSet, NO_VALUES
from jedi.inference.gradual.stub_value import TypingModuleWrapper, StubModuleValue
@@ -44,7 +43,6 @@ def _create_stub_map(directory_path_info):
return
for entry in listed:
entry = cast_path(entry)
path = os.path.join(directory_path_info.path, entry)
if os.path.isdir(path):
init = os.path.join(path, '__init__.pyi')
@@ -169,7 +167,6 @@ def _try_to_load_stub(inference_state, import_names, python_value_set,
if len(import_names) == 1:
# foo-stubs
for p in sys_path:
p = cast_path(p)
init = os.path.join(p, *import_names) + '-stubs' + os.path.sep + '__init__.pyi'
m = _try_to_load_stub_from_file(
inference_state,


@@ -294,6 +294,9 @@ class Callable(BaseTypingInstance):
from jedi.inference.gradual.annotation import infer_return_for_callable
return infer_return_for_callable(arguments, param_values, result_values)
def py__get__(self, instance, class_value):
return ValueSet([self])
class Tuple(BaseTypingInstance):
def _is_homogenous(self):
@@ -431,6 +434,9 @@ class NewType(Value):
from jedi.inference.compiled.value import CompiledValueName
return CompiledValueName(self, 'NewType')
def __repr__(self) -> str:
return '<NewType: %s>%s' % (self.tree_node, self._type_value_set)
class CastFunction(ValueWrapper):
@repack_with_argument_clinic('type, object, /')


@@ -422,20 +422,13 @@ def import_module(inference_state, import_names, parent_module_value, sys_path):
# The module might not be a package.
return NO_VALUES
for path in paths:
# At the moment we are only using one path. So this is
# not important to be correct.
if not isinstance(path, list):
path = [path]
file_io_or_ns, is_pkg = inference_state.compiled_subprocess.get_module_info(
string=import_names[-1],
path=path,
full_name=module_name,
is_global_search=False,
)
if is_pkg is not None:
break
else:
file_io_or_ns, is_pkg = inference_state.compiled_subprocess.get_module_info(
string=import_names[-1],
path=paths,
full_name=module_name,
is_global_search=False,
)
if is_pkg is None:
return NO_VALUES
if isinstance(file_io_or_ns, ImplicitNSInfo):


@@ -248,7 +248,7 @@ class ValueNameMixin:
def get_defining_qualified_value(self):
context = self.parent_context
if context.is_module() or context.is_class():
if context is not None and (context.is_module() or context.is_class()):
return self.parent_context.get_value() # Might be None
return None
@@ -341,6 +341,12 @@ class TreeNameDefinition(AbstractTreeName):
def py__doc__(self):
api_type = self.api_type
if api_type in ('function', 'class', 'property'):
if self.parent_context.get_root_context().is_stub():
from jedi.inference.gradual.conversion import convert_names
names = convert_names([self], prefer_stub_to_compiled=False)
if self not in names:
return _merge_name_docs(names)
# Make sure the names are not TreeNameDefinitions anymore.
return clean_scope_docstring(self.tree_name.get_definition())
@@ -408,6 +414,9 @@ class ParamNameInterface(_ParamMixin):
return 2
return 0
def infer_default(self):
return NO_VALUES
class BaseTreeParamName(ParamNameInterface, AbstractTreeName):
annotation_node = None


@@ -12,7 +12,7 @@ count the function calls.
Settings
~~~~~~~~~~
Recursion settings are important if you don't want extremly
Recursion settings are important if you don't want extremely
recursive python code to go absolutely crazy.
The default values are based on experiments while completing the |jedi| library


@@ -180,26 +180,34 @@ def _check_fs(inference_state, file_io, regex):
return m.as_context()
def gitignored_lines(folder_io, file_io):
ignored_paths = set()
ignored_names = set()
def gitignored_paths(folder_io, file_io):
ignored_paths_abs = set()
ignored_paths_rel = set()
for l in file_io.read().splitlines():
if not l or l.startswith(b'#'):
if not l or l.startswith(b'#') or l.startswith(b'!') or b'*' in l:
continue
p = l.decode('utf-8', 'ignore')
if p.startswith('/'):
name = p[1:]
if name.endswith(os.path.sep):
name = name[:-1]
ignored_paths.add(os.path.join(folder_io.path, name))
p = l.decode('utf-8', 'ignore').rstrip('/')
if '/' in p:
name = p.lstrip('/')
ignored_paths_abs.add(os.path.join(folder_io.path, name))
else:
ignored_names.add(p)
return ignored_paths, ignored_names
name = p
ignored_paths_rel.add((folder_io.path, name))
return ignored_paths_abs, ignored_paths_rel
def expand_relative_ignore_paths(folder_io, relative_paths):
curr_path = folder_io.path
return {os.path.join(curr_path, p[1]) for p in relative_paths if curr_path.startswith(p[0])}
def recurse_find_python_folders_and_files(folder_io, except_paths=()):
except_paths = set(except_paths)
except_paths_relative = set()
for root_folder_io, folder_ios, file_ios in folder_io.walk():
# Delete folders that we don't want to iterate over.
for file_io in file_ios:
@@ -209,14 +217,21 @@ def recurse_find_python_folders_and_files(folder_io, except_paths=()):
yield None, file_io
if path.name == '.gitignore':
ignored_paths, ignored_names = \
gitignored_lines(root_folder_io, file_io)
except_paths |= ignored_paths
ignored_paths_abs, ignored_paths_rel = gitignored_paths(
root_folder_io, file_io
)
except_paths |= ignored_paths_abs
except_paths_relative |= ignored_paths_rel
except_paths_relative_expanded = expand_relative_ignore_paths(
root_folder_io, except_paths_relative
)
folder_ios[:] = [
folder_io
for folder_io in folder_ios
if folder_io.path not in except_paths
and folder_io.path not in except_paths_relative_expanded
and folder_io.get_base_name() not in _IGNORE_FOLDERS
]
for folder_io in folder_ios:
@@ -282,12 +297,13 @@ def get_module_contexts_containing_name(inference_state, module_contexts, name,
limit_reduction=limit_reduction)
def search_in_file_ios(inference_state, file_io_iterator, name, limit_reduction=1):
def search_in_file_ios(inference_state, file_io_iterator, name,
limit_reduction=1, complete=False):
parse_limit = _PARSED_FILE_LIMIT / limit_reduction
open_limit = _OPENED_FILE_LIMIT / limit_reduction
file_io_count = 0
parsed_file_count = 0
regex = re.compile(r'\b' + re.escape(name) + r'\b')
regex = re.compile(r'\b' + re.escape(name) + (r'' if complete else r'\b'))
for file_io in file_io_iterator:
file_io_count += 1
m = _check_fs(inference_state, file_io, regex)
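The `complete` flag above works by dropping the trailing `\b` word boundary, so when completing, a search for `name` also matches identifiers that merely start with it. A quick illustration:

```python
import re

def make_regex(name, complete=False):
    # With complete=True the trailing word boundary is dropped, so the
    # pattern matches any identifier that starts with `name`.
    return re.compile(r'\b' + re.escape(name) + (r'' if complete else r'\b'))

exact = make_regex("foo")
prefix = make_regex("foo", complete=True)
# "foobar" only matches the prefix (completion) variant.
```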


@@ -12,6 +12,8 @@ The signature here for bar should be `bar(b, c)` instead of bar(*args).
"""
from inspect import Parameter
from parso import tree
from jedi.inference.utils import to_list
from jedi.inference.names import ParamNameWrapper
from jedi.inference.helpers import is_big_annoying_library
@@ -22,7 +24,11 @@ def _iter_nodes_for_param(param_name):
from jedi.inference.arguments import TreeArguments
execution_context = param_name.parent_context
function_node = execution_context.tree_node
# Walk up the parso tree to get the FunctionNode we want. We use the parso
# tree rather than going via the execution context so that we're agnostic of
# the specific scope we're evaluating within (i.e: module or function,
# etc.).
function_node = tree.search_ancestor(param_name.tree_name, 'funcdef', 'lambdef')
module_node = function_node.get_root_node()
start = function_node.children[-1].start_pos
end = function_node.children[-1].end_pos


@@ -2,6 +2,7 @@
Functions inferring the syntax tree.
"""
import copy
import itertools
from parso.python import tree
@@ -515,10 +516,20 @@ def _literals_to_types(inference_state, result):
def _infer_comparison(context, left_values, operator, right_values):
state = context.inference_state
if isinstance(operator, str):
operator_str = operator
else:
operator_str = str(operator.value)
if not left_values or not right_values:
# illegal slices e.g. cause left/right_result to be None
result = (left_values or NO_VALUES) | (right_values or NO_VALUES)
return _literals_to_types(state, result)
elif operator_str == "|" and all(
value.is_class() or value.is_compiled()
for value in itertools.chain(left_values, right_values)
):
# ^^^ A naive hack for PEP 604
return ValueSet.from_sets((left_values, right_values))
else:
# I don't think there's a reasonable chance that a string
# operation is still correct, once we pass something like six
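The PEP 604 branch above only merges the two candidate sets when every value on both sides looks like a class. A toy version of that check, using plain Python sets and `type` instead of jedi's `ValueSet` machinery (hypothetical names, not jedi's real API):

```python
def infer_union(left_values, right_values, operator):
    # Naive PEP 604 handling: for `A | B` where both operands are classes,
    # the inferred result is simply the union of both candidate sets.
    if operator == "|" and all(
        isinstance(v, type) for v in left_values | right_values
    ):
        return left_values | right_values
    # Anything else (e.g. `1 | 2`) is ordinary arithmetic, not a type union.
    return set()
```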
@@ -738,6 +749,13 @@ def tree_name_to_values(inference_state, context, tree_name):
types = infer_expr_stmt(context, node, tree_name)
elif typ == 'with_stmt':
value_managers = context.infer_node(node.get_test_node_from_name(tree_name))
if node.parent.type == 'async_stmt':
# In the case of `async with` statements, we need to
# first get the coroutine from the `__aenter__` method,
# then "unwrap" via the `__await__` method
enter_methods = value_managers.py__getattribute__('__aenter__')
coro = enter_methods.execute_with_values()
return coro.py__await__().py__stop_iteration_returns()
enter_methods = value_managers.py__getattribute__('__enter__')
return enter_methods.execute_with_values()
elif typ in ('import_from', 'import_name'):
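The `async with` handling above mirrors what the interpreter does at runtime: call `__aenter__`, then await the coroutine it returns, and bind that awaited value. A runnable analogue:

```python
import asyncio

class AsyncCM:
    async def __aenter__(self):
        # The value bound by `async with ... as x` is the awaited
        # result of __aenter__, not the coroutine itself.
        return 42

    async def __aexit__(self, *exc_info):
        return False

async def main():
    async with AsyncCM() as value:
        return value

result = asyncio.run(main())
```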


@@ -186,7 +186,6 @@ def _get_buildout_script_paths(search_path: Path):
directory that look like python files.
:param search_path: absolute path to the module.
:type search_path: str
"""
project_root = _get_parent_dir_with_file(search_path, 'buildout.cfg')
if not project_root:
@@ -205,7 +204,7 @@ def _get_buildout_script_paths(search_path: Path):
except (UnicodeDecodeError, IOError) as e:
# Probably a binary file; permission error or race cond. because
# file got deleted. Ignore it.
debug.warning(e)
debug.warning(str(e))
continue


@@ -16,7 +16,7 @@ settings will stop this process.
It is important to note that:
1. Array modfications work only in the current module.
1. Array modifications work only in the current module.
2. Jedi only checks Array additions; ``list.pop``, etc are ignored.
"""
from jedi import debug


@@ -344,7 +344,8 @@ class BaseFunctionExecutionContext(ValueContext, TreeContextMixin):
GenericClass(c, TupleGenericManager(generics)) for c in async_classes
).execute_annotation()
else:
if self.is_generator():
# If there are annotations, prefer them over anything else.
if self.is_generator() and not self.infer_annotations():
return ValueSet([iterable.Generator(inference_state, self)])
else:
return self.get_return_values()
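With this change, an explicit return annotation wins over generator inference: a function that yields but is annotated `-> Iterator[float]` is treated as producing floats. The same preference can be observed with the stdlib typing helpers:

```python
from typing import Iterator, get_args, get_type_hints

def annotated_gen() -> Iterator[float]:
    # The body yields an int, but the annotation promises floats; tools
    # that prefer annotations report float for the yielded items.
    yield 1

yielded_type = get_args(get_type_hints(annotated_gen)['return'])[0]
```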


@@ -342,6 +342,8 @@ class SequenceLiteralValue(Sequence):
else:
with reraise_getitem_errors(TypeError, KeyError, IndexError):
node = self.get_tree_entries()[index]
if node == ':' or node.type == 'subscript':
return NO_VALUES
return self._defining_context.infer_node(node)
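Returning `NO_VALUES` for `subscript` nodes matches runtime behavior: builtin sequences reject tuple indices like `b[:, :-1]` (numpy-style multi-dimensional indexing), so there is nothing sensible to infer:

```python
b = [1, 2, 3]

# A bare slice is fine...
sliced = b[:]

# ...but a tuple of slices is a TypeError for builtin lists.
try:
    b[:, :-1]
    raised = False
except TypeError:
    raised = True
```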
def py__iter__(self, contextualized_node=None):
@@ -407,16 +409,6 @@ class SequenceLiteralValue(Sequence):
else:
return [array_node]
def exact_key_items(self):
"""
Returns a generator of tuples like dict.items(), where the key is
resolved (as a string) and the values are still lazy values.
"""
for key_node, value in self.get_tree_entries():
for key in self._defining_context.infer_node(key_node):
if is_string(key):
yield key.get_safe_value(), LazyTreeValue(self._defining_context, value)
def __repr__(self):
return "<%s of %s>" % (self.__class__.__name__, self.atom)
@@ -472,6 +464,16 @@ class DictLiteralValue(_DictMixin, SequenceLiteralValue, _DictKeyMixin):
return ValueSet([FakeList(self.inference_state, lazy_values)])
def exact_key_items(self):
"""
Returns a generator of tuples like dict.items(), where the key is
resolved (as a string) and the values are still lazy values.
"""
for key_node, value in self.get_tree_entries():
for key in self._defining_context.infer_node(key_node):
if is_string(key):
yield key.get_safe_value(), LazyTreeValue(self._defining_context, value)
def _dict_values(self):
return ValueSet.from_sets(
self._defining_context.infer_node(v)


@@ -78,6 +78,8 @@ class ClassName(TreeNameDefinition):
type_ = super().api_type
if type_ == 'function':
definition = self.tree_name.get_definition()
if definition is None:
return type_
if function_is_property(definition):
# This essentially checks if there is an @property before
# the function. @property could be something different, but
@@ -114,25 +116,10 @@ class ClassFilter(ParserTreeFilter):
while node is not None:
if node == self._parser_scope or node == self.parent_context:
return True
node = get_cached_parent_scope(self._used_names, node)
node = get_cached_parent_scope(self._parso_cache_node, node)
return False
def _access_possible(self, name):
# Filter for ClassVar variables
# TODO this is not properly done, yet. It just checks for the string
# ClassVar in the annotation, which can be quite imprecise. If we
# wanted to do this correct, we would have to infer the ClassVar.
if not self._is_instance:
expr_stmt = name.get_definition()
if expr_stmt is not None and expr_stmt.type == 'expr_stmt':
annassign = expr_stmt.children[1]
if annassign.type == 'annassign':
# If there is an =, the variable is obviously also
# defined on the class.
if 'ClassVar' not in annassign.children[1].get_code() \
and '=' not in annassign.children:
return False
# Filter for name mangling of private variables like __foo
return not name.value.startswith('__') or name.value.endswith('__') \
or self._equals_origin_scope()


@@ -64,7 +64,7 @@ class ModuleMixin(SubModuleDictMixin):
parent_context=self.as_context(),
origin_scope=origin_scope
),
GlobalNameFilter(self.as_context(), self.tree_node),
GlobalNameFilter(self.as_context()),
)
yield DictFilter(self.sub_modules_dict())
yield DictFilter(self._module_attributes_dict())
@@ -148,7 +148,7 @@ class ModuleValue(ModuleMixin, TreeValue):
if file_io is None:
self._path: Optional[Path] = None
else:
self._path = Path(file_io.path)
self._path = file_io.path
self.string_names = string_names # Optional[Tuple[str, ...]]
self.code_lines = code_lines
self._is_package = is_package


@@ -1,3 +1,6 @@
from pathlib import Path
from typing import Optional
from jedi.inference.cache import inference_state_method_cache
from jedi.inference.filters import DictFilter
from jedi.inference.names import ValueNameMixin, AbstractNameDefinition
@@ -41,7 +44,7 @@ class ImplicitNamespaceValue(Value, SubModuleDictMixin):
string_name = self.py__package__()[-1]
return ImplicitNSName(self, string_name)
def py__file__(self):
def py__file__(self) -> Optional[Path]:
return None
def py__package__(self):


@@ -216,11 +216,14 @@ def is_scope(node):
def _get_parent_scope_cache(func):
cache = WeakKeyDictionary()
def wrapper(used_names, node, include_flows=False):
def wrapper(parso_cache_node, node, include_flows=False):
if parso_cache_node is None:
return func(node, include_flows)
try:
for_module = cache[used_names]
for_module = cache[parso_cache_node]
except KeyError:
for_module = cache[used_names] = {}
for_module = cache[parso_cache_node] = {}
try:
return for_module[node]
@@ -270,7 +273,18 @@ def get_cached_code_lines(grammar, path):
Basically access the cached code lines in parso. This is not the nicest way
to do this, but we avoid splitting all the lines again.
"""
return parser_cache[grammar._hashed][path].lines
return get_parso_cache_node(grammar, path).lines
def get_parso_cache_node(grammar, path):
"""
This is of course not public. But as long as I control parso, this
shouldn't be a problem. ~ Dave
The reason for this is mostly caching. This is obviously also a sign of a
broken caching architecture.
"""
return parser_cache[grammar._hashed][path]
def cut_value_at_position(leaf, position):


@@ -2,7 +2,7 @@ from pathlib import Path
from parso.tree import search_ancestor
from jedi.inference.cache import inference_state_method_cache
from jedi.inference.imports import load_module_from_path
from jedi.inference.imports import goto_import, load_module_from_path
from jedi.inference.filters import ParserTreeFilter
from jedi.inference.base_value import NO_VALUES, ValueSet
from jedi.inference.helpers import infer_call_of_leaf
@@ -31,7 +31,15 @@ def execute(callback):
def infer_anonymous_param(func):
def get_returns(value):
if value.tree_node.annotation is not None:
return value.execute_with_values()
result = value.execute_with_values()
if any(v.name.get_qualified_names(include_module_names=True)
== ('typing', 'Generator')
for v in result):
return ValueSet.from_sets(
v.py__getattribute__('__next__').execute_annotation()
for v in result
)
return result
# In pytest we need to differentiate between generators and normal
# returns.
@@ -43,6 +51,9 @@ def infer_anonymous_param(func):
return function_context.get_return_values()
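The `typing.Generator` special-casing above pulls the yielded type out of the fixture's annotation — the first type argument, which is also what `__next__` produces. The same extraction with stdlib helpers:

```python
from typing import Generator, get_args, get_type_hints

def fixture_like() -> Generator[float, None, None]:
    yield 1.0

hint = get_type_hints(fixture_like)['return']
# Generator[YieldType, SendType, ReturnType]: the yield type comes first.
yield_type = get_args(hint)[0]
```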
def wrapper(param_name):
# parameters with an annotation do not need special handling
if param_name.annotation_node:
return func(param_name)
is_pytest_param, param_name_is_function_name = \
_is_a_pytest_param_and_inherited(param_name)
if is_pytest_param:
@@ -120,6 +131,17 @@ def _is_pytest_func(func_name, decorator_nodes):
or any('fixture' in n.get_code() for n in decorator_nodes)
def _find_pytest_plugin_modules():
"""
Finds pytest plugin modules hooked by setuptools entry points
See https://docs.pytest.org/en/stable/how-to/writing_plugins.html#setuptools-entry-points
"""
from pkg_resources import iter_entry_points
return [ep.module_name.split(".") for ep in iter_entry_points(group="pytest11")]
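Entry-point discovery like the helper above can be reproduced directly. Note that `pkg_resources` ships with setuptools and has since been deprecated in favor of `importlib.metadata`, so treat this as a sketch of the approach:

```python
from pkg_resources import iter_entry_points

def find_pytest_plugin_modules():
    # pytest plugins register themselves under the "pytest11" entry-point
    # group; each module name is split into its dotted parts.
    return [ep.module_name.split(".")
            for ep in iter_entry_points(group="pytest11")]

modules = find_pytest_plugin_modules()
# The list may well be empty if no pytest plugins are installed.
```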
@inference_state_method_cache()
def _iter_pytest_modules(module_context, skip_own_module=False):
if not skip_own_module:
@@ -129,6 +151,10 @@ def _iter_pytest_modules(module_context, skip_own_module=False):
if file_io is not None:
folder = file_io.get_parent_folder()
sys_path = module_context.inference_state.get_sys_path()
# prevent an infinite loop when reaching the root of the current drive
last_folder = None
while any(folder.path.startswith(p) for p in sys_path):
file_io = folder.get_file_io('conftest.py')
if Path(file_io.path) != module_context.py__file__():
@@ -139,7 +165,12 @@ def _iter_pytest_modules(module_context, skip_own_module=False):
pass
folder = folder.get_parent_folder()
for names in _PYTEST_FIXTURE_MODULES:
# prevent an infinite for loop if the same parent folder is returned twice
if last_folder is not None and folder.path == last_folder.path:
break
last_folder = folder  # keep track of the previously visited parent folder
for names in _PYTEST_FIXTURE_MODULES + _find_pytest_plugin_modules():
for module_value in module_context.inference_state.import_module(names):
yield module_value.as_context()
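The guard added above relies on the fact that walking up a directory tree eventually reaches a fixed point (the filesystem or drive root), at which point the parent stops changing. A minimal analogue of that termination check (posix paths assumed):

```python
import os

def walk_to_root(path):
    # os.path.dirname is a fixed point at the root, so comparing against
    # the previously visited folder (as the diff above does) terminates
    # the loop instead of spinning forever.
    last = None
    while path != last:
        last = path
        path = os.path.dirname(path)
    return last

root = walk_to_root("/a/b/c")
```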
@@ -147,14 +178,28 @@ def _iter_pytest_modules(module_context, skip_own_module=False):
class FixtureFilter(ParserTreeFilter):
def _filter(self, names):
for name in super()._filter(names):
funcdef = name.parent
# Class fixtures are not supported
if funcdef.type == 'funcdef':
decorated = funcdef.parent
if decorated.type == 'decorated' and self._is_fixture(decorated):
# look for fixture definitions of imported names
if name.parent.type == "import_from":
imported_names = goto_import(self.parent_context, name)
if any(
self._is_fixture(iname.parent_context, iname.tree_name)
for iname in imported_names
# discard imports of whole modules, which have no tree_name
if iname.tree_name
):
yield name
def _is_fixture(self, decorated):
elif self._is_fixture(self.parent_context, name):
yield name
def _is_fixture(self, context, name):
funcdef = name.parent
# Class fixtures are not supported
if funcdef.type != "funcdef":
return False
decorated = funcdef.parent
if decorated.type != "decorated":
return False
decorators = decorated.children[0]
if decorators.type == 'decorators':
decorators = decorators.children
@@ -171,11 +216,12 @@ class FixtureFilter(ParserTreeFilter):
last_leaf = last_trailer.get_last_leaf()
if last_leaf == ')':
values = infer_call_of_leaf(
self.parent_context, last_leaf, cut_own_trailer=True)
context, last_leaf, cut_own_trailer=True
)
else:
values = self.parent_context.infer_node(dotted_name)
values = context.infer_node(dotted_name)
else:
values = self.parent_context.infer_node(dotted_name)
values = context.infer_node(dotted_name)
for value in values:
if value.name.get_qualified_names(include_module_names=True) \
== ('_pytest', 'fixtures', 'fixture'):


@@ -803,6 +803,15 @@ _implemented = {
# For now this works at least better than Jedi trying to understand it.
'dataclass': _dataclass
},
# attrs exposes declaration interface roughly compatible with dataclasses
# via attrs.define, attrs.frozen and attrs.mutable
# https://www.attrs.org/en/stable/names.html
'attr': {
'define': _dataclass,
},
'attrs': {
'define': _dataclass,
},
'os.path': {
'dirname': _create_string_input_function(os.path.dirname),
'abspath': _create_string_input_function(os.path.abspath),
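Mapping `attrs.define` onto the existing `_dataclass` handler is reasonable because both generate an `__init__` from annotated class attributes. The stdlib equivalent that jedi already understands:

```python
from dataclasses import dataclass

@dataclass
class Point:
    # Like attrs.define, @dataclass synthesises __init__ (among other
    # methods) from these annotated attributes.
    x: int
    y: int

p = Point(1, 2)
```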


@@ -35,17 +35,46 @@ setup(name='jedi',
install_requires=['parso>=0.8.0,<0.9.0'],
extras_require={
'testing': [
'pytest<6.0.0',
'pytest<7.0.0',
# docopt for sith doctests
'docopt',
# coloroma for colored debug output
'colorama',
'Django<3.1', # For now pin this.
'attrs',
],
'qa': [
'flake8==3.8.3',
'mypy==0.782',
],
'docs': [
# Just pin all of these.
'Jinja2==2.11.3',
'MarkupSafe==1.1.1',
'Pygments==2.8.1',
'alabaster==0.7.12',
'babel==2.9.1',
'chardet==4.0.0',
'commonmark==0.8.1',
'docutils==0.17.1',
'future==0.18.2',
'idna==2.10',
'imagesize==1.2.0',
'mock==1.0.1',
'packaging==20.9',
'pyparsing==2.4.7',
'pytz==2021.1',
'readthedocs-sphinx-ext==2.1.4',
'recommonmark==0.5.0',
'requests==2.25.1',
'six==1.15.0',
'snowballstemmer==2.1.0',
'sphinx==1.8.5',
'sphinx-rtd-theme==0.4.3',
'sphinxcontrib-serializinghtml==1.1.4',
'sphinxcontrib-websupport==1.2.4',
'urllib3==1.26.4',
],
},
package_data={'jedi': ['*.pyi', 'third_party/typeshed/LICENSE',
'third_party/typeshed/README']},
@@ -61,6 +90,7 @@ setup(name='jedi',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
'Programming Language :: Python :: 3.10',
'Topic :: Software Development :: Libraries :: Python Modules',
'Topic :: Text Editors :: Integrated Development Environments (IDE)',
'Topic :: Utilities',


@@ -44,6 +44,8 @@ b[int():]
#? list()
b[:]
#? int()
b[:, :-1]
#? 3
b[:]
@@ -67,6 +69,20 @@ class _StrangeSlice():
#? slice()
_StrangeSlice()[1:2]
for x in b[:]:
#? int()
x
for x in b[:, :-1]:
#?
x
class Foo:
def __getitem__(self, item):
return item
#?
Foo()[:, :-1][0]
# -----------------
# iterable multiplication


@@ -26,11 +26,6 @@ async def y():
x().__await__().__next
return 2
async def x2():
async with open('asdf') as f:
#? ['readlines']
f.readlines
class A():
@staticmethod
async def b(c=1, d=2):
@@ -52,8 +47,6 @@ async def awaitable_test():
#? str()
foo
# python >= 3.6
async def asgen():
yield 1
await asyncio.sleep(0)
@@ -105,3 +98,22 @@ async def f():
f = await C().async_for_classmethod()
#? C()
f
class AsyncCtxMgr:
def some_method():
pass
async def __aenter__(self):
return self
async def __aexit__(self, *args):
pass
async def asyncctxmgr():
async with AsyncCtxMgr() as acm:
#? AsyncCtxMgr()
acm
#? ['some_method']
acm.som


@@ -23,7 +23,7 @@ def inheritance_fixture():
@pytest.fixture
def testdir(testdir):
#? ['chdir']
testdir.chdir
return testdir
def capsysbinary(capsysbinary):
#? ['close']
capsysbinary.clos
return capsysbinary


@@ -5,6 +5,7 @@ import uuid
from django.db import models
from django.contrib.auth.models import User
from django.db.models.query_utils import DeferredAttribute
from django.db.models.manager import BaseManager
class TagManager(models.Manager):


@@ -284,6 +284,13 @@ def doctest_with_space():
import_issu
"""
def doctest_issue_github_1748():
"""From GitHub #1748
#? 10 []
This. Al
"""
pass
def docstring_rst_identifiers():
"""


@@ -1,5 +1,3 @@
# python >= 3.6
class Foo:
bar = 1


@@ -309,3 +309,8 @@ def annotation2() -> Iterator[float]:
next(annotation1())
#? float()
next(annotation2())
# annotations should override generator inference
#? float()
annotation1()


@@ -110,4 +110,4 @@ class Test(object):
# nocond lambdas make no sense at all.
#? int()
[a for a in [1,2] if lambda: 3][0]
[a for a in [1,2] if (lambda: 3)][0]


@@ -0,0 +1 @@
mod1_name = 'mod1'


@@ -0,0 +1 @@
mod2_name = 'mod2'


@@ -0,0 +1,18 @@
import sys
import os
from os.path import dirname
sys.path.insert(0, os.path.join(dirname(__file__), 'namespace2'))
sys.path.insert(0, os.path.join(dirname(__file__), 'namespace1'))
#? ['mod1']
import pkg1.pkg2.mod1
#? ['mod2']
import pkg1.pkg2.mod2
#? ['mod1_name']
pkg1.pkg2.mod1.mod1_name
#? ['mod2_name']
pkg1.pkg2.mod2.mod2_name


@@ -23,11 +23,9 @@ def builtin_test():
import sqlite3
# classes is a local module that has an __init__.py and can therefore not be
# found. test can be found.
# found.
#? []
import classes
#? ['test']
import test
#? ['timedelta']
from datetime import timedel


@@ -0,0 +1,50 @@
""" Pep-0484 type hinted decorators """
from typing import Callable
def decorator(func):
def wrapper(*a, **k):
return str(func(*a, **k))
return wrapper
def typed_decorator(func: Callable[..., int]) -> Callable[..., str]:
...
# Functions
@decorator
def plain_func() -> int:
return 4
#? str()
plain_func()
@typed_decorator
def typed_func() -> int:
return 4
#? str()
typed_func()
# Methods
class X:
@decorator
def plain_method(self) -> int:
return 4
@typed_decorator
def typed_method(self) -> int:
return 4
inst = X()
#? str()
inst.plain_method()
#? str()
inst.typed_method()


@@ -27,13 +27,13 @@ class PlainClass(object):
tpl = ("1", 2)
tpl_typed = ("2", 3) # type: Tuple[str, int]
tpl_typed: Tuple[str, int] = ("2", 3)
collection = {"a": 1}
collection_typed = {"a": 1} # type: Dict[str, int]
collection_typed: Dict[str, int] = {"a": 1}
list_of_ints = [42] # type: List[int]
list_of_funcs = [foo] # type: List[Callable[[T], T]]
list_of_ints: List[int] = [42]
list_of_funcs: List[Callable[[T], T]] = [foo]
custom_generic = CustomGeneric(123.45)


@@ -19,12 +19,12 @@ T_co = TypeVar('T_co', covariant=True)
V = TypeVar('V')
just_float = 42. # type: float
optional_float = 42. # type: Optional[float]
list_of_ints = [42] # type: List[int]
list_of_floats = [42.] # type: List[float]
list_of_optional_floats = [x or None for x in list_of_floats] # type: List[Optional[float]]
list_of_ints_and_strs = [42, 'abc'] # type: List[Union[int, str]]
just_float: float = 42.
optional_float: Optional[float] = 42.
list_of_ints: List[int] = [42]
list_of_floats: List[float] = [42.]
list_of_optional_floats: List[Optional[float]] = [x or None for x in list_of_floats]
list_of_ints_and_strs: List[Union[int, str]] = [42, 'abc']
# Test that simple parameters are handled
def list_t_to_list_t(the_list: List[T]) -> List[T]:
@@ -48,7 +48,7 @@ for z in list_t_to_list_t(list_of_ints_and_strs):
z
list_of_int_type = [int] # type: List[Type[int]]
list_of_int_type: List[Type[int]] = [int]
# Test that nested parameters are handled
def list_optional_t_to_list_t(the_list: List[Optional[T]]) -> List[T]:
@@ -85,7 +85,7 @@ def optional_list_t_to_list_t(x: Optional[List[T]]) -> List[T]:
return x if x is not None else []
optional_list_float = None # type: Optional[List[float]]
optional_list_float: Optional[List[float]] = None
for xc in optional_list_t_to_list_t(optional_list_float):
#? float()
xc
@@ -134,7 +134,7 @@ def list_tuple_t_to_tuple_list_t(the_list: List[Tuple[T]]) -> Tuple[List[T], ...
return tuple(list(x) for x in the_list)
list_of_int_tuples = [(x,) for x in list_of_ints] # type: List[Tuple[int]]
list_of_int_tuples: List[Tuple[int]] = [(x,) for x in list_of_ints]
for b in list_tuple_t_to_tuple_list_t(list_of_int_tuples):
#? int()
@@ -145,7 +145,7 @@ def list_tuple_t_elipsis_to_tuple_list_t(the_list: List[Tuple[T, ...]]) -> Tuple
return tuple(list(x) for x in the_list)
list_of_int_tuple_elipsis = [tuple(list_of_ints)] # type: List[Tuple[int, ...]]
list_of_int_tuple_elipsis: List[Tuple[int, ...]] = [tuple(list_of_ints)]
for b in list_tuple_t_elipsis_to_tuple_list_t(list_of_int_tuple_elipsis):
#? int()
@@ -157,7 +157,7 @@ def foo(x: int) -> int:
return x
list_of_funcs = [foo] # type: List[Callable[[int], int]]
list_of_funcs: List[Callable[[int], int]] = [foo]
def list_func_t_to_list_func_type_t(the_list: List[Callable[[T], T]]) -> List[Callable[[Type[T]], T]]:
def adapt(func: Callable[[T], T]) -> Callable[[Type[T]], T]:
@@ -176,7 +176,7 @@ def bar(*a, **k) -> int:
return len(a) + len(k)
list_of_funcs_2 = [bar] # type: List[Callable[..., int]]
list_of_funcs_2: List[Callable[..., int]] = [bar]
def list_func_t_passthrough(the_list: List[Callable[..., T]]) -> List[Callable[..., T]]:
return the_list
@@ -187,7 +187,7 @@ for b in list_func_t_passthrough(list_of_funcs_2):
b(None, x="x")
mapping_int_str = {42: 'a'} # type: Dict[int, str]
mapping_int_str: Dict[int, str] = {42: 'a'}
# Test that mappings (that have more than one parameter) are handled
def invert_mapping(mapping: Mapping[K, V]) -> Mapping[V, K]:
@@ -210,11 +210,11 @@ first(mapping_int_str)
#? str()
first("abc")
some_str = NotImplemented # type: str
some_str: str = NotImplemented
#? str()
first(some_str)
annotated = [len] # type: List[ Callable[[Sequence[float]], int] ]
annotated: List[ Callable[[Sequence[float]], int] ] = [len]
#? int()
first(annotated)()
@@ -237,7 +237,7 @@ for b in values(mapping_int_str):
#
# Tests that user-defined generic types are handled
#
list_ints = [42] # type: List[int]
list_ints: List[int] = [42]
class CustomGeneric(Generic[T_co]):
def __init__(self, val: T_co) -> None:
@@ -248,7 +248,7 @@ class CustomGeneric(Generic[T_co]):
def custom(x: CustomGeneric[T]) -> T:
return x.val
custom_instance = CustomGeneric(42) # type: CustomGeneric[int]
custom_instance: CustomGeneric[int] = CustomGeneric(42)
#? int()
custom(custom_instance)
@@ -275,7 +275,7 @@ for x5 in wrap_custom(list_ints):
# Test extraction of type from a nested custom generic type
list_custom_instances = [CustomGeneric(42)] # type: List[CustomGeneric[int]]
list_custom_instances: List[CustomGeneric[int]] = [CustomGeneric(42)]
def unwrap_custom(iterable: Iterable[CustomGeneric[T]]) -> List[T]:
return [x.val for x in iterable]
@@ -303,7 +303,7 @@ for xg in unwrap_custom(CustomGeneric(s) for s in 'abc'):
# Test extraction of type from a type parameter nested within a custom generic type
custom_instance_list_int = CustomGeneric([42]) # type: CustomGeneric[List[int]]
custom_instance_list_int: CustomGeneric[List[int]] = CustomGeneric([42])
def unwrap_custom2(instance: CustomGeneric[Iterable[T]]) -> List[T]:
return list(instance.val)
@@ -326,7 +326,7 @@ class Specialised(Mapping[int, str]):
pass
specialised_instance = NotImplemented # type: Specialised
specialised_instance: Specialised = NotImplemented
#? int()
first(specialised_instance)
@@ -341,7 +341,7 @@ class ChildOfSpecialised(Specialised):
pass
child_of_specialised_instance = NotImplemented # type: ChildOfSpecialised
child_of_specialised_instance: ChildOfSpecialised = NotImplemented
#? int()
first(child_of_specialised_instance)
@@ -355,13 +355,13 @@ class CustomPartialGeneric1(Mapping[str, T]):
pass
custom_partial1_instance = NotImplemented # type: CustomPartialGeneric1[int]
custom_partial1_instance: CustomPartialGeneric1[int] = NotImplemented
#? str()
first(custom_partial1_instance)
custom_partial1_unbound_instance = NotImplemented # type: CustomPartialGeneric1
custom_partial1_unbound_instance: CustomPartialGeneric1 = NotImplemented
#? str()
first(custom_partial1_unbound_instance)
@@ -371,7 +371,7 @@ class CustomPartialGeneric2(Mapping[T, str]):
pass
custom_partial2_instance = NotImplemented # type: CustomPartialGeneric2[int]
custom_partial2_instance: CustomPartialGeneric2[int] = NotImplemented
#? int()
first(custom_partial2_instance)
@@ -380,7 +380,7 @@ first(custom_partial2_instance)
values(custom_partial2_instance)[0]
custom_partial2_unbound_instance = NotImplemented # type: CustomPartialGeneric2
custom_partial2_unbound_instance: CustomPartialGeneric2 = NotImplemented
#? []
first(custom_partial2_unbound_instance)


@@ -19,16 +19,16 @@ TTypeAny = TypeVar('TTypeAny', bound=Type[Any])
TCallable = TypeVar('TCallable', bound=Callable[..., Any])
untyped_list_str = ['abc', 'def']
typed_list_str = ['abc', 'def'] # type: List[str]
typed_list_str: List[str] = ['abc', 'def']
untyped_tuple_str = ('abc',)
typed_tuple_str = ('abc',) # type: Tuple[str]
typed_tuple_str: Tuple[str] = ('abc',)
untyped_tuple_str_int = ('abc', 4)
typed_tuple_str_int = ('abc', 4) # type: Tuple[str, int]
typed_tuple_str_int: Tuple[str, int] = ('abc', 4)
variadic_tuple_str = ('abc',) # type: Tuple[str, ...]
variadic_tuple_str_int = ('abc', 4) # type: Tuple[Union[str, int], ...]
variadic_tuple_str: Tuple[str, ...] = ('abc',)
variadic_tuple_str_int: Tuple[Union[str, int], ...] = ('abc', 4)
def untyped_passthrough(x):
@@ -58,6 +58,16 @@ def typed_bound_generic_passthrough(x: TList) -> TList:
return x
# Forward references are more likely with custom types, however this aims to
# test just the handling of the quoted type rather than any other part of the
# machinery.
def typed_quoted_return_generic_passthrough(x: T) -> 'List[T]':
return [x]
def typed_quoted_input_generic_passthrough(x: 'Tuple[T]') -> T:
x
return x[0]
for a in untyped_passthrough(untyped_list_str):
#? str()
@@ -146,6 +156,23 @@ for q in typed_bound_generic_passthrough(typed_list_str):
q
for r in typed_quoted_return_generic_passthrough("something"):
#? str()
r
for s in typed_quoted_return_generic_passthrough(42):
#? int()
s
#? str()
typed_quoted_input_generic_passthrough(("something",))
#? int()
typed_quoted_input_generic_passthrough((42,))
class CustomList(List):
def get_first(self):
return self[0]


@@ -1,10 +1,9 @@
# python >= 3.6
from typing import List, Dict, overload, Tuple, TypeVar
lst: list
list_alias: List
list_str: List[str]
list_str: List[int]
list_int: List[int]
# -------------------------
# With base classes


@@ -2,18 +2,14 @@
Test the typing library, with docstrings and annotations
"""
import typing
from typing import Sequence, MutableSequence, List, Iterable, Iterator, \
AbstractSet, Tuple, Mapping, Dict, Union, Optional
class B:
pass
def we_can_has_sequence(p, q, r, s, t, u):
"""
:type p: typing.Sequence[int]
:type q: typing.Sequence[B]
:type r: typing.Sequence[int]
:type s: typing.Sequence["int"]
:type t: typing.MutableSequence[dict]
:type u: typing.List[float]
"""
def we_can_has_sequence(p: Sequence[int], q: Sequence[B], r: Sequence[int],
s: Sequence["int"], t: MutableSequence[dict], u: List[float]):
#? ["count"]
p.c
#? int()
@@ -43,13 +39,8 @@ def we_can_has_sequence(p, q, r, s, t, u):
#? float()
u[1]
def iterators(ps, qs, rs, ts):
"""
:type ps: typing.Iterable[int]
:type qs: typing.Iterator[str]
:type rs: typing.Sequence["ForwardReference"]
:type ts: typing.AbstractSet["float"]
"""
def iterators(ps: Iterable[int], qs: Iterator[str], rs:
Sequence["ForwardReference"], ts: AbstractSet["float"]):
for p in ps:
#? int()
p
@@ -79,22 +70,13 @@ def iterators(ps, qs, rs, ts):
#? float()
t
def sets(p, q):
"""
:type p: typing.AbstractSet[int]
:type q: typing.MutableSet[float]
"""
def sets(p: AbstractSet[int], q: typing.MutableSet[float]):
#? []
p.a
#? ["add"]
q.a
def tuple(p, q, r):
"""
:type p: typing.Tuple[int]
:type q: typing.Tuple[int, str, float]
:type r: typing.Tuple[B, ...]
"""
def tuple(p: Tuple[int], q: Tuple[int, str, float], r: Tuple[B, ...]):
#? int()
p[0]
#? ['index']
@@ -127,16 +109,14 @@ class Key:
class Value:
pass
def mapping(p, q, d, dd, r, s, t):
"""
:type p: typing.Mapping[Key, Value]
:type q: typing.MutableMapping[Key, Value]
:type d: typing.Dict[Key, Value]
:type dd: typing.DefaultDict[Key, Value]
:type r: typing.KeysView[Key]
:type s: typing.ValuesView[Value]
:type t: typing.ItemsView[Key, Value]
"""
def mapping(
p: Mapping[Key, Value],
q: typing.MutableMapping[Key, Value],
d: Dict[Key, Value],
dd: typing.DefaultDict[Key, Value],
r: typing.KeysView[Key],
s: typing.ValuesView[Value],
t: typing.ItemsView[Key, Value]):
#? []
p.setd
#? ["setdefault"]
@@ -198,14 +178,12 @@ def mapping(p, q, d, dd, r, s, t):
#? Value()
value
def union(p, q, r, s, t):
"""
:type p: typing.Union[int]
:type q: typing.Union[int, int]
:type r: typing.Union[int, str, "int"]
:type s: typing.Union[int, typing.Union[str, "typing.Union['float', 'dict']"]]
:type t: typing.Union[int, None]
"""
def union(
p: Union[int],
q: Union[int, int],
r: Union[int, str, "int"],
s: Union[int, typing.Union[str, "typing.Union['float', 'dict']"]],
t: Union[int, None]):
#? int()
p
#? int()
@@ -217,9 +195,8 @@ def union(p, q, r, s, t):
#? int() None
t
def optional(p):
def optional(p: Optional[int]):
"""
:type p: typing.Optional[int]
Optional does not do anything special. However it should be recognised
as being of that type. Jedi doesn't do anything with the extra info that
it can be None as well
@@ -234,10 +211,7 @@ class TestDict(typing.Dict[str, int]):
def setdud(self):
pass
def testdict(x):
"""
:type x: TestDict
"""
def testdict(x: TestDict):
#? ["setdud", "setdefault"]
x.setd
for key in x.keys():
@@ -262,10 +236,7 @@ y = WrappingType(0) # Per https://github.com/davidhalter/jedi/issues/1015#issuec
#? str()
y
def testnewtype(y):
"""
:type y: WrappingType
"""
def testnewtype(y: WrappingType):
#? str()
y
#? ["upper"]
@@ -273,10 +244,7 @@ def testnewtype(y):
WrappingType2 = typing.NewType()
def testnewtype2(y):
"""
:type y: WrappingType2
"""
def testnewtype2(y: WrappingType2):
#?
y
#? []
@@ -297,10 +265,7 @@ class TestDefaultDict(typing.DefaultDict[str, int]):
def setdud(self):
pass
def testdict(x):
"""
:type x: TestDefaultDict
"""
def testdict(x: TestDefaultDict):
#? ["setdud", "setdefault"]
x.setd
for key in x.keys():


@@ -1,9 +1,6 @@
"""
PEP 526 introduced a new way of using type annotations on variables. It was
introduced in Python 3.6.
PEP 526 introduced a way of using type annotations on variables.
"""
# python >= 3.6
import typing
asdf = ''
@@ -47,7 +44,7 @@ class Foo():
baz: typing.ClassVar[str]
#?
#? int()
Foo.bar
#? int()
Foo().bar
@@ -61,6 +58,7 @@ class VarClass:
var_instance2: float
var_class1: typing.ClassVar[str] = 1
var_class2: typing.ClassVar[bytes]
var_class3 = None
def __init__(self):
#? int()
@@ -73,15 +71,21 @@ class VarClass:
d.var_class2
#? []
d.int
#? ['var_class1', 'var_class2', 'var_instance1', 'var_instance2']
#? ['var_class1', 'var_class2', 'var_instance1', 'var_instance2', 'var_class3']
self.var_
class VarClass2(VarClass):
var_class3: typing.ClassVar[int]
#? ['var_class1', 'var_class2', 'var_instance1']
def __init__(self):
#? int()
self.var_class3
#? ['var_class1', 'var_class2', 'var_instance1', 'var_class3', 'var_instance2']
VarClass.var_
#? int()
VarClass.var_instance1
#?
#? float()
VarClass.var_instance2
#? str()
VarClass.var_class1
@@ -91,7 +95,7 @@ VarClass.var_class2
VarClass.int
d = VarClass()
#? ['var_class1', 'var_class2', 'var_instance1', 'var_instance2']
#? ['var_class1', 'var_class2', 'var_class3', 'var_instance1', 'var_instance2']
d.var_
#? int()
d.var_instance1


@@ -0,0 +1,46 @@
from pep0484_generic_parameters import list_t_to_list_t
list_of_ints_and_strs: list[int | str]
# Test that unions are handled
x2 = list_t_to_list_t(list_of_ints_and_strs)[0]
#? int() str()
x2
for z in list_t_to_list_t(list_of_ints_and_strs):
#? int() str()
z
from pep0484_generic_passthroughs import (
typed_variadic_tuple_generic_passthrough,
)
variadic_tuple_str_int: tuple[int | str, ...]
for m in typed_variadic_tuple_generic_passthrough(variadic_tuple_str_int):
#? str() int()
m
def func_returns_byteslike() -> bytes | bytearray:
pass
#? bytes() bytearray()
func_returns_byteslike()
pep604_optional_1: int | str | None
pep604_optional_2: None | bytes
#? int() str() None
pep604_optional_1
#? None bytes()
pep604_optional_2
pep604_in_str: "int | bytes"
#? int() bytes()
pep604_in_str


@@ -1,3 +1,5 @@
from typing import Generator
import pytest
from pytest import fixture
@@ -64,6 +66,11 @@ def lala(my_fixture):
def lala(my_fixture):
pass
# overriding types of a fixture should be possible
def test_x(my_yield_fixture: str):
#? str()
my_yield_fixture
# -----------------
# completion
# -----------------
@@ -132,9 +139,6 @@ def test_p(monkeypatch):
#? ['capsysbinary']
def test_p(capsysbin
#? ['tmpdir', 'tmpdir_factory']
def test_p(tmpdi
def close_parens():
pass
@@ -164,3 +168,40 @@ def test_inheritance_fixture(inheritance_fixture, caplog):
@pytest.fixture
def caplog(caplog):
yield caplog
# -----------------
# Generator with annotation
# -----------------
@pytest.fixture
def with_annot() -> Generator[float, None, None]:
pass
def test_with_annot(inheritance_fixture, with_annot):
#? float()
with_annot
# -----------------
# pytest external plugins
# -----------------
#? ['admin_user', 'admin_client']
def test_z(admin
#! 15 ['def admin_client']
def test_p(admin_client):
#? ['login', 'logout']
admin_client.log
@pytest.fixture
@some_decorator
#? ['admin_user']
def bla(admin_u
return
@pytest.fixture
@some_decorator
#! 12 ['def admin_user']
def bla(admin_user):
pass


@@ -1,7 +1,6 @@
import os
import sys
import subprocess
from itertools import count
import pytest
@@ -10,9 +9,6 @@ from . import run
from . import refactor
from jedi import InterpreterEnvironment, get_system_environment
from jedi.inference.compiled.value import create_from_access_path
from jedi.inference.imports import _load_python_module
from jedi.file_io import KnownContentFileIO
from jedi.inference.base_value import ValueSet
from jedi.api.interpreter import MixedModuleContext
# For interpreter tests sometimes the path of this directory is in the sys
@@ -163,19 +159,6 @@ def create_compiled_object(inference_state):
)
@pytest.fixture
def module_injector():
counter = count()
def module_injector(inference_state, names, code):
assert isinstance(names, tuple)
file_io = KnownContentFileIO('/foo/bar/module-injector-%s.py' % next(counter), code)
v = _load_python_module(inference_state, file_io, names)
inference_state.module_cache.add(names, ValueSet([v]))
return module_injector
@pytest.fixture(params=[False, True])
def class_findable(monkeypatch, request):
if not request.param:


@@ -0,0 +1,6 @@
from pytest import fixture
@fixture()
def admin_user():
pass


@@ -0,0 +1,16 @@
import pytest
from .fixtures import admin_user # noqa
@pytest.fixture()
def admin_client():
return Client()
class Client:
def login(self, **credentials):
...
def logout(self):
...


@@ -104,10 +104,14 @@ import os
import re
import sys
import operator
from ast import literal_eval
if sys.version_info < (3, 8):
literal_eval = eval
else:
from ast import literal_eval
from io import StringIO
from functools import reduce
from unittest.mock import ANY
from pathlib import Path
import parso
from _pytest.outcomes import Skipped
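The version guard above chooses between `eval` and `ast.literal_eval` for parsing the expected test results. `literal_eval` is the safe option because it accepts only Python literal syntax and refuses to execute anything else:

```python
from ast import literal_eval

# literal_eval parses only literal displays (numbers, strings, tuples,
# lists, dicts, sets, booleans, None) -- never function calls or names.
print(literal_eval("[1, (2, 3), {'a': None}]"))

# Arbitrary expressions are rejected instead of being executed.
try:
    literal_eval("__import__('os').system('echo hi')")
except ValueError as exc:
    print("rejected:", exc)
```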
@@ -122,6 +126,7 @@ from jedi.api.environment import get_default_environment, get_system_environment
from jedi.inference.gradual.conversion import convert_values
from jedi.inference.analysis import Warning
test_dir = Path(__file__).absolute().parent
TEST_COMPLETIONS = 0
TEST_INFERENCE = 1
@@ -173,6 +178,7 @@ class IntegrationTestCase(BaseTestCase):
self.start = start
self.line = line
self.path = path
self._project = jedi.Project(test_dir)
@property
def module_name(self):
@@ -188,7 +194,12 @@ class IntegrationTestCase(BaseTestCase):
self.line_nr_test, self.line.rstrip())
def script(self, environment):
return jedi.Script(self.source, path=self.path, environment=environment)
return jedi.Script(
self.source,
path=self.path,
environment=environment,
project=self._project
)
def run(self, compare_cb, environment=None):
testers = {
@@ -198,7 +209,7 @@ class IntegrationTestCase(BaseTestCase):
TEST_REFERENCES: self.run_get_references,
}
if (self.path.endswith('pytest.py') or self.path.endswith('conftest.py')) \
and environment.executable != os.path.realpath(sys.executable):
and os.path.realpath(environment.executable) != os.path.realpath(sys.executable):
# It's not guaranteed that pytest is installed in the test
# environment if it's not the same environment we're already
# running in, so just skip that case.
@@ -263,7 +274,7 @@ class IntegrationTestCase(BaseTestCase):
self.correct = self.correct.strip()
compare = sorted(
(('stub:' if r.is_stub() else '')
+ re.sub(r'^test\.completion\.', '', r.module_name),
+ re.sub(r'^completion\.', '', r.module_name),
r.line,
r.column)
for r in result


@@ -650,6 +650,7 @@ def test_cursor_after_signature(Script, column):
('abs(chr ( \nclass y: pass', 1, 8, 'abs', 0),
('abs(chr ( \nclass y: pass', 1, 9, 'abs', 0),
('abs(chr ( \nclass y: pass', 1, 10, 'chr', 0),
('abs(foo.bar=3)', 1, 13, 'abs', 0),
]
)
def test_base_signatures(Script, code, line, column, name, index):


@@ -1,11 +1,16 @@
from os.path import join, sep as s, dirname, expanduser
import os
from textwrap import dedent
from itertools import count
from pathlib import Path
import pytest
from ..helpers import root_dir
from jedi.api.helpers import _start_match, _fuzzy_match
from jedi.inference.imports import _load_python_module
from jedi.file_io import KnownContentFileIO
from jedi.inference.base_value import ValueSet
def test_in_whitespace(Script):
@@ -400,6 +405,22 @@ def test_ellipsis_completion(Script):
assert Script('...').complete() == []
@pytest.fixture
def module_injector():
counter = count()
def module_injector(inference_state, names, code):
assert isinstance(names, tuple)
file_io = KnownContentFileIO(
Path('foo/bar/module-injector-%s.py' % next(counter)).absolute(),
code
)
v = _load_python_module(inference_state, file_io, names)
inference_state.module_cache.add(names, ValueSet([v]))
return module_injector
def test_completion_cache(Script, module_injector):
"""
For some modules like numpy, tensorflow or pandas we cache docstrings and
@@ -436,3 +457,7 @@ def test_module_completions(Script, module):
# Just make sure that there are no errors
c.type
c.docstring()
def test_whitespace_at_end_after_dot(Script):
assert 'strip' in [c.name for c in Script('str. ').complete()]


@@ -37,6 +37,17 @@ def test_operator_doc(Script):
assert len(d.docstring()) > 100
@pytest.mark.parametrize(
'code, help_part', [
('str', 'Create a new string object'),
('str.strip', 'Return a copy of the string'),
]
)
def test_stdlib_doc(Script, code, help_part):
h, = Script(code).help()
assert help_part in h.docstring(raw=True)
def test_lambda(Script):
d, = Script('lambda x: x').help(column=0)
assert d.type == 'keyword'


@@ -603,8 +603,11 @@ def test_dict_getitem(code, types):
@pytest.mark.parametrize(
'code, expected', [
('DunderCls()[0]', 'int'),
('dunder[0]', 'int'),
('next(DunderCls())', 'float'),
('next(dunder)', 'float'),
('for x in DunderCls(): x', 'str'),
#('for x in dunder: x', 'str'),
]
)
def test_dunders(class_is_findable, code, expected):
@@ -623,6 +626,8 @@ def test_dunders(class_is_findable, code, expected):
if not class_is_findable:
DunderCls.__name__ = 'asdf'
dunder = DunderCls()
n, = jedi.Interpreter(code, [locals()]).infer()
assert n.name == expected
@@ -706,3 +711,46 @@ def test_negate():
assert x.name == 'int'
value, = x._name.infer()
assert value.get_safe_value() == -3
def test_complete_not_findable_class_source():
class TestClass():
ta=1
ta1=2
# Simulate the environment where the class is defined in
# an interactive session and therefore the inspect module
# cannot find its source code and raises OSError (Py 3.10+) or TypeError.
TestClass.__module__ = "__main__"
# There is a pytest __main__ module we have to remove temporarily.
module = sys.modules.pop("__main__")
try:
interpreter = jedi.Interpreter("TestClass.", [locals()])
completions = interpreter.complete(column=10, line=1)
finally:
sys.modules["__main__"] = module
assert "ta" in [c.name for c in completions]
assert "ta1" in [c.name for c in completions]
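The test above temporarily removes the `__main__` entry from `sys.modules`; its try/finally pattern guarantees the module is restored even if the body raises. A standalone sketch of the same pattern (the module name is illustrative):

```python
import sys
import types

# Register a throwaway module so there is something to remove.
sys.modules["demo_mod"] = types.ModuleType("demo_mod")

saved = sys.modules.pop("demo_mod")   # take it out temporarily
try:
    assert "demo_mod" not in sys.modules
finally:
    sys.modules["demo_mod"] = saved   # always restore it

assert sys.modules["demo_mod"] is saved
del sys.modules["demo_mod"]           # clean up after the demonstration
```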
def test_param_infer_default():
abs_sig, = jedi.Interpreter('abs(', [{'abs': abs}]).get_signatures()
param, = abs_sig.params
assert param.name == 'x'
assert param.infer_default() == []
@pytest.mark.parametrize(
'code, expected', [
("random.triangular(", ['high=', 'low=', 'mode=']),
("random.triangular(low=1, ", ['high=', 'mode=']),
("random.triangular(high=1, ", ['low=', 'mode=']),
("random.triangular(low=1, high=2, ", ['mode=']),
("random.triangular(low=1, mode=2, ", ['high=']),
],
)
def test_keyword_param_completion(code, expected):
import random
completions = jedi.Interpreter(code, [locals()]).complete()
assert expected == [c.name for c in completions if c.name.endswith('=')]


@@ -189,3 +189,9 @@ def test_no_error(get_names):
def test_is_side_effect(get_names, code, index, is_side_effect):
names = get_names(code, references=True, all_scopes=True)
assert names[index].is_side_effect() == is_side_effect
def test_no_defined_names(get_names):
definition, = get_names("x = (1, 2)")
assert not definition.defined_names()


@@ -68,6 +68,10 @@ def test_load_save_project(tmpdir):
dict(all_scopes=True)),
('some_search_test_var', ['test_api.test_project.test_search.some_search_test_var'],
dict(complete=True, all_scopes=True)),
# Make sure that the searched name is not part of the file by
# splitting it up.
('some_search_test_v' + 'a', ['test_api.test_project.test_search.some_search_test_var'],
dict(complete=True, all_scopes=True)),
('sample_int', ['helpers.sample_int'], {}),
('sample_int', ['helpers.sample_int'], dict(all_scopes=True)),
@@ -146,7 +150,7 @@ def test_search(string, full_names, kwargs):
defs = project.complete_search(string, **kwargs)
else:
defs = project.search(string, **kwargs)
assert sorted([('stub:' if d.is_stub() else '') + d.full_name for d in defs]) == full_names
assert sorted([('stub:' if d.is_stub() else '') + (d.full_name or d.name) for d in defs]) == full_names
@pytest.mark.parametrize(
@@ -174,7 +178,7 @@ def test_is_potential_project(path, expected):
if expected is None:
try:
expected = _CONTAINS_POTENTIAL_PROJECT in os.listdir(path)
expected = bool(set(_CONTAINS_POTENTIAL_PROJECT) & set(os.listdir(path)))
except OSError:
expected = False


@@ -1,6 +1,7 @@
import os
from textwrap import dedent
from pathlib import Path
import platform
import pytest
@@ -70,3 +71,23 @@ def test_diff_without_ending_newline(Script):
-a
+c
''')
def test_diff_path_outside_of_project(Script):
if platform.system().lower() == 'windows':
abs_path = r'D:\unknown_dir\file.py'
else:
abs_path = '/unknown_dir/file.py'
script = Script(
code='foo = 1',
path=abs_path,
project=jedi.get_default_project()
)
diff = script.rename(line=1, column=0, new_name='bar').get_diff()
assert diff == dedent(f'''\
--- {abs_path}
+++ {abs_path}
@@ -1 +1 @@
-foo = 1
+bar = 1
''')


@@ -64,6 +64,6 @@ def test_wrong_encoding(Script, tmpdir):
# Use both latin-1 and utf-8 (a really broken file).
x.write_binary('foobar = 1\nä'.encode('latin-1') + 'ä'.encode('utf-8'))
project = Project('.', sys_path=[tmpdir.strpath])
project = Project(tmpdir.strpath)
c, = Script('import x; x.foo', project=project).complete()
assert c.name == 'foobar'


@@ -43,6 +43,9 @@ def test_implicit_namespace_package(Script):
solution = "foo = '%s'" % solution
assert completion.description == solution
c, = script_with_path('import pkg').complete()
assert c.docstring() == ""
def test_implicit_nested_namespace_package(Script):
code = 'from implicit_nested_namespaces.namespace.pkg.module import CONST'


@@ -101,6 +101,16 @@ def test_correct_zip_package_behavior(Script, inference_state, environment, code
assert value.py__package__() == []
@pytest.mark.parametrize("code,names", [
("from pkg.", {"module", "nested", "namespace"}),
("from pkg.nested.", {"nested_module"})
])
def test_zip_package_import_complete(Script, environment, code, names):
sys_path = environment.get_sys_path() + [str(pkg_zip_path)]
completions = Script(code, project=Project('.', sys_path=sys_path)).complete()
assert names == {c.name for c in completions}
def test_find_module_not_package_zipped(Script, inference_state, environment):
path = get_example_dir('zipped_imports', 'not_pkg.zip')
sys_path = environment.get_sys_path() + [path]
@@ -287,7 +297,6 @@ def test_os_issues(Script):
# Github issue #759
s = 'import os, s'
assert 'sys' in import_names(s)
assert 'path' not in import_names(s, column=len(s) - 1)
assert 'os' in import_names(s, column=len(s) - 3)
# Some more checks
@@ -324,12 +333,13 @@ def test_compiled_import_none(monkeypatch, Script):
# context that was initially given, but now we just work with the file
# system.
(os.path.join(THIS_DIR, 'test_docstring.py'), False,
('test', 'test_inference', 'test_imports')),
('test_inference', 'test_imports')),
(os.path.join(THIS_DIR, '__init__.py'), True,
('test', 'test_inference', 'test_imports')),
('test_inference', 'test_imports')),
]
)
def test_get_modules_containing_name(inference_state, path, goal, is_package):
inference_state.project = Project(test_dir)
module = imports._load_python_module(
inference_state,
FileIO(path),


@@ -267,19 +267,19 @@ def test_pow_signature(Script, environment):
@pytest.mark.parametrize(
'code, signature', [
[dedent('''
# identifier:A
import functools
def f(x):
pass
def x(f):
@functools.wraps(f)
def wrapper(*args):
# Have no arguments here, but because of wraps, the signature
# should still be f's.
return f(*args)
return wrapper
x(f)('''), 'f(x, /)'],
[dedent('''
# identifier:B
import functools
def f(x):
pass
@@ -292,6 +292,26 @@ def test_pow_signature(Script, environment):
return wrapper
x(f)('''), 'f()'],
[dedent('''
# identifier:C
import functools
def f(x: int, y: float):
pass
@functools.wraps(f)
def wrapper(*args, **kwargs):
return f(*args, **kwargs)
wrapper('''), 'f(x: int, y: float)'],
[dedent('''
# identifier:D
def f(x: int, y: float):
pass
def wrapper(*args, **kwargs):
return f(*args, **kwargs)
wrapper('''), 'wrapper(x: int, y: float)'],
]
)
def test_wraps_signature(Script, code, signature):
@@ -335,6 +355,48 @@ def test_dataclass_signature(Script, skip_pre_python37, start, start_params):
price, = sig.params[-2].infer()
assert price.name == 'float'
@pytest.mark.parametrize(
'start, start_params', [
['@define\nclass X:', []],
['@frozen\nclass X:', []],
['@define(eq=True)\nclass X:', []],
[dedent('''
class Y():
y: int
@define
class X(Y):'''), []],
[dedent('''
@define
class Y():
y: int
z = 5
@define
class X(Y):'''), ['y']],
]
)
def test_attrs_signature(Script, skip_pre_python37, start, start_params):
has_attrs = bool(Script('import attrs').infer())
if not has_attrs:
raise pytest.skip("attrs needed in target environment to run this test")
code = dedent('''
name: str
foo = 3
price: float
quantity: int = 0.0
X(''')
# attrs exposes two namespaces
code = 'from attrs import define, frozen\n' + start + code
sig, = Script(code).get_signatures()
assert [p.name for p in sig.params] == start_params + ['name', 'price', 'quantity']
quantity, = sig.params[-1].infer()
assert quantity.name == 'int'
price, = sig.params[-2].infer()
assert price.name == 'float'
@pytest.mark.parametrize(
'stmt, expected', [


@@ -1,4 +1,5 @@
import os
from collections import namedtuple
import pytest
@@ -42,6 +43,22 @@ def test_completion(case, monkeypatch, environment, has_django):
if (not has_django) and case.path.endswith('django.py'):
pytest.skip('Needs django to be installed to run this test.')
if case.path.endswith("pytest.py"):
# to test finding pytest fixtures from external plugins
# add a stub pytest plugin to the project sys_path...
pytest_plugin_dir = str(helpers.get_example_dir("pytest_plugin_package"))
case._project.added_sys_path = [pytest_plugin_dir]
# ... and mock setuptools entry points to include it
# see https://docs.pytest.org/en/stable/how-to/writing_plugins.html#setuptools-entry-points
def mock_iter_entry_points(group):
assert group == "pytest11"
EntryPoint = namedtuple("EntryPoint", ["module_name"])
return [EntryPoint("pytest_plugin.plugin")]
monkeypatch.setattr("pkg_resources.iter_entry_points", mock_iter_entry_points)
repo_root = helpers.root_dir
monkeypatch.chdir(os.path.join(repo_root, 'jedi'))
case.run(assert_case_equal, environment)
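The monkeypatching above stands in a fake plugin for `pkg_resources.iter_entry_points`; a lightweight `namedtuple` is enough because the consumer only reads `.module_name`. A stripped-down sketch of the same stubbing idea (names are illustrative):

```python
from collections import namedtuple

# Only the attribute the consumer reads needs to exist on the stub.
EntryPoint = namedtuple("EntryPoint", ["module_name"])

def fake_iter_entry_points(group):
    assert group == "pytest11"          # callers ask for pytest plugins
    return [EntryPoint("pytest_plugin.plugin")]

# Code under test would iterate the entry points like this:
modules = [ep.module_name for ep in fake_iter_entry_points("pytest11")]
print(modules)
```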


@@ -1,5 +1,9 @@
import gc
from pathlib import Path
from jedi import parser_utils
from parso import parse
from parso.cache import parser_cache
from parso.python import tree
import pytest
@@ -67,3 +71,18 @@ def test_get_signature(code, signature):
if node.type == 'simple_stmt':
node = node.children[0]
assert parser_utils.get_signature(node) == signature
def test_parser_cache_clear(Script):
"""
If parso clears its cache, Jedi should not keep those resources, they
should be freed.
"""
script = Script("a = abs\na", path=Path(__file__).parent / 'parser_cache_test_foo.py')
script.complete()
module_id = id(script._module_node)
del parser_cache[script._inference_state.grammar._hashed][script.path]
del script
gc.collect()
assert module_id not in [id(m) for m in gc.get_referrers(tree.Module)]
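`test_parser_cache_clear` checks that a parsed module is actually freed once parso's cache drops it. The core idea can be sketched with `weakref`, a simpler stand-in for walking `gc` referrers:

```python
import gc
import weakref

class CachedItem:
    pass

cache = {"key": CachedItem()}
ref = weakref.ref(cache["key"])  # observe the object without keeping it alive

del cache["key"]                 # the cache drops its entry...
gc.collect()                     # ...and a collection should free the object

print(ref() is None)             # True: nothing kept the item alive
```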


@@ -85,7 +85,7 @@ class TestSetupReadline(unittest.TestCase):
}
# There are quite a few differences, because both Windows and Linux
# (posix and nt) libraries are included.
assert len(difference) < 15
assert len(difference) < 30
def test_local_import(self):
s = 'import test.test_utils'