Compare commits
master...pod (1 commit)

b908cf8f1d  Update submodules  2024-08-05 14:03:58 +08:00

60 changed files with 233 additions and 1382 deletions


@@ -1,27 +1,27 @@
name: ci
on: [push, pull_request, workflow_dispatch]
on: [push, pull_request]
jobs:
tests:
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-24.04, windows-2022]
python-version: ["3.13", "3.12", "3.11", "3.10", "3.9", "3.8"]
environment: ['3.8', '3.13', '3.12', '3.11', '3.10', '3.9', 'interpreter']
os: [ubuntu-20.04, windows-2019]
python-version: ["3.12", "3.11", "3.10", "3.9", "3.8", "3.7", "3.6"]
environment: ['3.8', '3.12', '3.11', '3.10', '3.9', '3.7', '3.6', 'interpreter']
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
submodules: recursive
- uses: actions/setup-python@v5
- uses: actions/setup-python@v4
if: ${{ matrix.environment != 'interpreter' }}
with:
python-version: ${{ matrix.environment }}
allow-prereleases: true
- uses: actions/setup-python@v5
- uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
allow-prereleases: true
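For orientation, each matrix above expands to one job per (os, python-version, environment) combination; a quick sketch of the expansion for the newer axes (list values copied from the hunk):

```python
import itertools

# Axes from the newer half of the matrix hunk above.
os_list = ['ubuntu-24.04', 'windows-2022']
python_versions = ['3.13', '3.12', '3.11', '3.10', '3.9', '3.8']
environments = ['3.8', '3.13', '3.12', '3.11', '3.10', '3.9', 'interpreter']

# One scheduled job per triple.
jobs = list(itertools.product(os_list, python_versions, environments))
```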
@@ -35,7 +35,7 @@ jobs:
JEDI_TEST_ENVIRONMENT: ${{ matrix.environment }}
code-quality:
runs-on: ubuntu-24.04
runs-on: ubuntu-20.04
steps:
- name: Checkout code
uses: actions/checkout@v4
@@ -47,11 +47,11 @@ jobs:
- name: Run tests
run: |
python -m flake8 jedi test setup.py
python -m flake8 jedi setup.py
python -m mypy jedi sith.py setup.py
coverage:
runs-on: ubuntu-24.04
runs-on: ubuntu-20.04
steps:
- name: Checkout code

.gitmodules (vendored)

@@ -1,6 +1,6 @@
[submodule "jedi/third_party/typeshed"]
path = jedi/third_party/typeshed
url = https://github.com/davidhalter/typeshed.git
url = https://git.pick-up.group/VimPlug/typeshed.git
[submodule "jedi/third_party/django-stubs"]
path = jedi/third_party/django-stubs
url = https://github.com/davidhalter/django-stubs
url = https://git.pick-up.group/VimPlug/django-stubs.git


@@ -9,13 +9,3 @@ python:
submodules:
include: all
sphinx:
configuration: docs/conf.py
build:
os: ubuntu-22.04
tools:
python: "3.11"
apt_packages:
- graphviz


@@ -63,9 +63,6 @@ Code Contributors
- Leo Ryu (@Leo-Ryu)
- Joseph Birkner (@josephbirkner)
- Márcio Mazza (@marciomazza)
- Martin Vielsmaier (@moser) <martin@vielsmaier.net>
- TingJia Wu (@WutingjiaX) <wutingjia@bytedance.com>
- Nguyễn Hồng Quân <ng.hong.quan@gmail.com>
And a few more "anonymous" contributors.


@@ -6,11 +6,6 @@ Changelog
Unreleased
++++++++++
0.19.2 (2024-11-10)
+++++++++++++++++++
- Python 3.13 support
0.19.1 (2023-10-02)
+++++++++++++++++++


@@ -2,9 +2,6 @@
Jedi - an awesome autocompletion, static analysis and refactoring library for Python
####################################################################################
**I released the successor to Jedi: A
Mypy-Compatible Python Language Server Built in Rust** - `ZubanLS <https://zubanls.com>`_
.. image:: http://isitmaintained.com/badge/open/davidhalter/jedi.svg
:target: https://github.com/davidhalter/jedi/issues
:alt: The percentage of open issues and pull requests
@@ -13,7 +10,7 @@ Mypy-Compatible Python Language Server Built in Rust** - `ZubanLS <https://zuban
:target: https://github.com/davidhalter/jedi/issues
:alt: The resolution time is the median time an issue or pull request stays open.
.. image:: https://github.com/davidhalter/jedi/actions/workflows/ci.yml/badge.svg?branch=master
.. image:: https://github.com/davidhalter/jedi/workflows/ci/badge.svg?branch=master
:target: https://github.com/davidhalter/jedi/actions
:alt: Tests
@@ -102,7 +99,7 @@ Features and Limitations
Jedi's features are listed here:
`Features <https://jedi.readthedocs.org/en/latest/docs/features.html>`_.
You can run Jedi on Python 3.8+ but it should also
You can run Jedi on Python 3.6+ but it should also
understand code that is older than those versions. Additionally you should be
able to use `Virtualenvs <https://jedi.readthedocs.org/en/latest/docs/api.html#environments>`_
very well.


@@ -2,7 +2,7 @@
If security issues arise, we will try to fix those as soon as possible.
Due to Jedi's nature, Security Issues will probably be extremely rare, but we will of course treat them seriously.
Due to Jedi's nature, Security Issues will probably be extremely rare, but we will neverless treat them seriously.
## Reporting Security Problems


@@ -156,14 +156,6 @@ def jedi_path():
return os.path.dirname(__file__)
@pytest.fixture()
def skip_pre_python311(environment):
if environment.version_info < (3, 11):
# This if is just needed to avoid that tests ever skip way more than
# they should for all Python versions.
pytest.skip()
@pytest.fixture()
def skip_pre_python38(environment):
if environment.version_info < (3, 8):


@@ -16,7 +16,7 @@ Jedi's main API calls and features are:
Basic Features
--------------
- Python 3.8+ support
- Python 3.6+ support
- Ignores syntax errors and wrong indentation
- Can deal with complex module / function / class structures
- Great ``virtualenv``/``venv`` support


@@ -38,7 +38,7 @@ using pip::
If you want to install the current development version (master branch)::
sudo pip install -e git+https://github.com/davidhalter/jedi.git#egg=jedi
sudo pip install -e git://github.com/davidhalter/jedi.git#egg=jedi
System-wide installation via a package manager


@@ -27,7 +27,7 @@ ad
load
"""
__version__ = '0.19.2'
__version__ = '0.19.1'
from jedi.api import Script, Interpreter, set_debug_function, preload_module
from jedi import settings


@@ -5,24 +5,11 @@ different Python versions.
import errno
import sys
import pickle
from typing import Any
class Unpickler(pickle.Unpickler):
def find_class(self, module: str, name: str) -> Any:
# Python 3.13 moved pathlib implementation out of __init__.py as part of
# generalising its implementation. Ensure that we support loading
# pickles from 3.13 on older version of Python. Since 3.13 maintained a
# compatible API, pickles from older Python work natively on the newer
# version.
if module == 'pathlib._local':
module = 'pathlib'
return super().find_class(module, name)
def pickle_load(file):
try:
return Unpickler(file).load()
return pickle.load(file)
# Python on Windows don't throw EOF errors for pipes. So reraise them with
# the correct type, which is caught upwards.
except OSError:
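The `Unpickler` shim in the hunk above exists because Python 3.13 moved pathlib's implementation into `pathlib._local`, so 3.13 pickles can record that module name. A self-contained sketch of the same remapping trick:

```python
import io
import pickle
from pathlib import PurePosixPath

class RenamingUnpickler(pickle.Unpickler):
    """Remap a module that moved between Python versions, mirroring the
    pathlib._local -> pathlib case handled in the hunk above."""
    def find_class(self, module, name):
        if module == 'pathlib._local':   # name written by Python 3.13+
            module = 'pathlib'           # where older Pythons find the class
        return super().find_class(module, name)

# Round-trip a path object; the remap only triggers for 3.13-era pickles.
data = pickle.dumps(PurePosixPath('/tmp/example'))
obj = RenamingUnpickler(io.BytesIO(data)).load()
```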


@@ -216,6 +216,7 @@ class Script:
@validate_line_column
def infer(self, line=None, column=None, *, only_stubs=False, prefer_stubs=False):
self._inference_state.reset_recursion_limitations()
"""
Return the definitions of under the cursor. It is basically a wrapper
around Jedi's type inference.
@@ -231,7 +232,6 @@ class Script:
:param prefer_stubs: Prefer stubs to Python objects for this method.
:rtype: list of :class:`.Name`
"""
self._inference_state.reset_recursion_limitations()
pos = line, column
leaf = self._module_node.get_name_of_position(pos)
if leaf is None:
@@ -262,6 +262,7 @@ class Script:
@validate_line_column
def goto(self, line=None, column=None, *, follow_imports=False, follow_builtin_imports=False,
only_stubs=False, prefer_stubs=False):
self._inference_state.reset_recursion_limitations()
"""
Goes to the name that defined the object under the cursor. Optionally
you can follow imports.
@@ -275,7 +276,6 @@ class Script:
:param prefer_stubs: Prefer stubs to Python objects for this method.
:rtype: list of :class:`.Name`
"""
self._inference_state.reset_recursion_limitations()
tree_name = self._module_node.get_name_of_position((line, column))
if tree_name is None:
# Without a name we really just want to jump to the result e.g.
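The hunks above move `reset_recursion_limitations()` relative to the string literal; the placement matters because any statement ahead of the literal stops Python treating it as the function's docstring. A minimal demonstration:

```python
def documented():
    """I am the docstring."""
    return 1

def undocumented():
    result = 1
    """Only a bare string expression here, not a docstring."""
    return result
```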


@@ -65,15 +65,12 @@ def _must_be_kwarg(signatures, positional_count, used_kwargs):
return must_be_kwarg
def filter_names(inference_state, completion_names, stack, like_name, fuzzy,
imported_names, cached_name):
def filter_names(inference_state, completion_names, stack, like_name, fuzzy, cached_name):
comp_dct = set()
if settings.case_insensitive_completion:
like_name = like_name.lower()
for name in completion_names:
string = name.string_name
if string in imported_names and string != like_name:
continue
if settings.case_insensitive_completion:
string = string.lower()
if helpers.match(string, like_name, fuzzy=fuzzy):
@@ -141,11 +138,6 @@ class Completion:
self._fuzzy = fuzzy
# Return list of completions in this order:
# - Beginning with what user is typing
# - Public (alphabet)
# - Private ("_xxx")
# - Dunder ("__xxx")
def complete(self):
leaf = self._module_node.get_leaf_for_position(
self._original_position,
@@ -177,19 +169,14 @@ class Completion:
cached_name, completion_names = self._complete_python(leaf)
imported_names = []
if leaf.parent is not None and leaf.parent.type in ['import_as_names', 'dotted_as_names']:
imported_names.extend(extract_imported_names(leaf.parent))
completions = list(filter_names(self._inference_state, completion_names,
self.stack, self._like_name,
self._fuzzy, imported_names, cached_name=cached_name))
self._fuzzy, cached_name=cached_name))
return (
# Removing duplicates mostly to remove False/True/None duplicates.
_remove_duplicates(prefixed_completions, completions)
+ sorted(completions, key=lambda x: (not x.name.startswith(self._like_name),
x.name.startswith('__'),
+ sorted(completions, key=lambda x: (x.name.startswith('__'),
x.name.startswith('_'),
x.name.lower()))
)
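The comment removed in the hunk above documents the intended ordering (what the user is typing first, then public, private, dunder), and the fuller sort key implements it. A sketch with illustrative names:

```python
# like_name stands for what the user has typed so far; names are made up.
like_name = 'pa'
names = ['__path__', '_parse', 'path', 'parse', 'Anything']

ordered = sorted(names, key=lambda x: (not x.startswith(like_name),  # typed prefix first
                                       x.startswith('__'),           # dunders sort last
                                       x.startswith('_'),            # privates before them
                                       x.lower()))                   # then alphabetically
```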
@@ -455,7 +442,6 @@ class Completion:
- Having some doctest code that starts with `>>>`
- Having backticks that doesn't have whitespace inside it
"""
def iter_relevant_lines(lines):
include_next_line = False
for l in code_lines:
@@ -678,19 +664,3 @@ def search_in_module(inference_state, module_context, names, wanted_names,
def_ = classes.Name(inference_state, n2)
if not wanted_type or wanted_type == def_.type:
yield def_
def extract_imported_names(node):
imported_names = []
if node.type in ['import_as_names', 'dotted_as_names', 'dotted_as_name', 'import_as_name']:
for index, child in enumerate(node.children):
if child.type == 'name':
if (index > 1 and node.children[index - 1].type == "keyword"
and node.children[index - 1].value == "as"):
continue
imported_names.append(child.value)
elif child.type in ('import_as_name', 'dotted_as_name'):
imported_names.extend(extract_imported_names(child))
return imported_names


@@ -8,7 +8,6 @@ import hashlib
import filecmp
from collections import namedtuple
from shutil import which
from typing import TYPE_CHECKING
from jedi.cache import memoize_method, time_cache
from jedi.inference.compiled.subprocess import CompiledSubprocess, \
@@ -16,13 +15,9 @@ from jedi.inference.compiled.subprocess import CompiledSubprocess, \
import parso
if TYPE_CHECKING:
from jedi.inference import InferenceState
_VersionInfo = namedtuple('VersionInfo', 'major minor micro') # type: ignore[name-match]
_SUPPORTED_PYTHONS = ['3.13', '3.12', '3.11', '3.10', '3.9', '3.8']
_SUPPORTED_PYTHONS = ['3.12', '3.11', '3.10', '3.9', '3.8', '3.7', '3.6']
_SAFE_PATHS = ['/usr/bin', '/usr/local/bin']
_CONDA_VAR = 'CONDA_PREFIX'
_CURRENT_VERSION = '%s.%s' % (sys.version_info.major, sys.version_info.minor)
@@ -107,10 +102,7 @@ class Environment(_BaseEnvironment):
version = '.'.join(str(i) for i in self.version_info)
return '<%s: %s in %s>' % (self.__class__.__name__, version, self.path)
def get_inference_state_subprocess(
self,
inference_state: 'InferenceState',
) -> InferenceStateSubprocess:
def get_inference_state_subprocess(self, inference_state):
return InferenceStateSubprocess(inference_state, self._get_subprocess())
@memoize_method
@@ -142,10 +134,7 @@ class SameEnvironment(_SameEnvironmentMixin, Environment):
class InterpreterEnvironment(_SameEnvironmentMixin, _BaseEnvironment):
def get_inference_state_subprocess(
self,
inference_state: 'InferenceState',
) -> InferenceStateSameProcess:
def get_inference_state_subprocess(self, inference_state):
return InferenceStateSameProcess(inference_state)
def get_sys_path(self):
@@ -384,13 +373,10 @@ def _get_executable_path(path, safe=True):
"""
if os.name == 'nt':
pythons = [os.path.join(path, 'Scripts', 'python.exe'), os.path.join(path, 'python.exe')]
else:
pythons = [os.path.join(path, 'bin', 'python')]
for python in pythons:
if os.path.exists(python):
break
python = os.path.join(path, 'Scripts', 'python.exe')
else:
python = os.path.join(path, 'bin', 'python')
if not os.path.exists(python):
raise InvalidPythonEnvironment("%s seems to be missing." % python)
_assert_safe(python, safe)
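Both versions of `_get_executable_path` above probe fixed interpreter locations inside an environment directory; the longer variant also accepts a bare `python.exe` beside the Windows `Scripts` folder. A standalone sketch of the probe (the function name is mine):

```python
import os

def find_python(env_dir):
    """Return the first existing interpreter in a virtualenv-style layout,
    mirroring the candidate lists in the hunk above."""
    if os.name == 'nt':
        candidates = [os.path.join(env_dir, 'Scripts', 'python.exe'),
                      os.path.join(env_dir, 'python.exe')]
    else:
        candidates = [os.path.join(env_dir, 'bin', 'python')]
    for candidate in candidates:
        if os.path.exists(candidate):
            return candidate
    raise FileNotFoundError('%s seems to be missing.' % candidates[0])
```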


@@ -28,7 +28,7 @@ def clear_time_caches(delete_all: bool = False) -> None:
:param delete_all: Deletes also the cache that is normally not deleted,
like parser cache, which is important for faster parsing.
"""
global _time_caches # noqa: F824
global _time_caches
if delete_all:
for cache in _time_caches.values():


@@ -21,7 +21,7 @@ try:
raise ImportError
else:
# Use colorama for nicer console output.
from colorama import Fore, init # type: ignore[import, unused-ignore]
from colorama import Fore, init # type: ignore[import]
from colorama import initialise
def _lazy_colorama_init(): # noqa: F811


@@ -90,7 +90,7 @@ class InferenceState:
self.compiled_subprocess = environment.get_inference_state_subprocess(self)
self.grammar = environment.get_grammar()
self.latest_grammar = parso.load_grammar(version='3.13')
self.latest_grammar = parso.load_grammar(version='3.12')
self.memoize_cache = {} # for memoize decorators
self.module_cache = imports.ModuleCache() # does the job of `sys.modules`.
self.stub_module_cache = {} # Dict[Tuple[str, ...], Optional[ModuleValue]]
@@ -122,14 +122,14 @@ class InferenceState:
return value_set
# mypy doesn't suppport decorated propeties (https://github.com/python/mypy/issues/1362)
@property
@property # type: ignore[misc]
@inference_state_function_cache()
def builtins_module(self):
module_name = 'builtins'
builtins_module, = self.import_module((module_name,), sys_path=[])
return builtins_module
@property
@property # type: ignore[misc]
@inference_state_function_cache()
def typing_module(self):
typing_module, = self.import_module(('typing',))


@@ -5,23 +5,6 @@ goals:
1. Making it safer - Segfaults and RuntimeErrors as well as stdout/stderr can
be ignored and dealt with.
2. Make it possible to handle different Python versions as well as virtualenvs.
The architecture here is briefly:
- For each Jedi `Environment` there is a corresponding subprocess which
operates within the target environment. If the subprocess dies it is replaced
at this level.
- `CompiledSubprocess` manages exactly one subprocess and handles communication
from the parent side.
- `Listener` runs within the subprocess, processing each request and yielding
results.
- `InterpreterEnvironment` provides an API which matches that of `Environment`,
but runs functionality inline rather than within a subprocess. It is thus
used both directly in places where a subprocess is unnecessary and/or
undesirable and also within subprocesses themselves.
- `InferenceStateSubprocess` (or `InferenceStateSameProcess`) provide high
level access to functionality within the subprocess from within the parent.
Each `InterpreterState` has an instance of one of these, provided by its
environment.
"""
import collections
@@ -33,7 +16,6 @@ import traceback
import weakref
from functools import partial
from threading import Thread
from typing import Dict, TYPE_CHECKING
from jedi._compatibility import pickle_dump, pickle_load
from jedi import debug
@@ -43,9 +25,6 @@ from jedi.inference.compiled.access import DirectObjectAccess, AccessPath, \
SignatureParam
from jedi.api.exceptions import InternalError
if TYPE_CHECKING:
from jedi.inference import InferenceState
_MAIN_PATH = os.path.join(os.path.dirname(__file__), '__main__.py')
PICKLE_PROTOCOL = 4
@@ -104,9 +83,10 @@ def _cleanup_process(process, thread):
class _InferenceStateProcess:
def __init__(self, inference_state: 'InferenceState') -> None:
def __init__(self, inference_state):
self._inference_state_weakref = weakref.ref(inference_state)
self._handles: Dict[int, AccessHandle] = {}
self._inference_state_id = id(inference_state)
self._handles = {}
def get_or_create_access_handle(self, obj):
id_ = id(obj)
@@ -136,49 +116,11 @@ class InferenceStateSameProcess(_InferenceStateProcess):
class InferenceStateSubprocess(_InferenceStateProcess):
"""
API to functionality which will run in a subprocess.
This mediates the interaction between an `InferenceState` and the actual
execution of functionality running within a `CompiledSubprocess`. Available
functions are defined in `.functions`, though should be accessed via
attributes on this class of the same name.
This class is responsible for indicating that the `InferenceState` within
the subprocess can be removed once the corresponding instance in the parent
goes away.
"""
def __init__(
self,
inference_state: 'InferenceState',
compiled_subprocess: 'CompiledSubprocess',
) -> None:
def __init__(self, inference_state, compiled_subprocess):
super().__init__(inference_state)
self._used = False
self._compiled_subprocess = compiled_subprocess
# Opaque id we'll pass to the subprocess to identify the context (an
# `InferenceState`) which should be used for the request. This allows us
# to make subsequent requests which operate on results from previous
# ones, while keeping a single subprocess which can work with several
# contexts in the parent process. Once it is no longer needed(i.e: when
# this class goes away), we also use this id to indicate that the
# subprocess can discard the context.
#
# Note: this id is deliberately coupled to this class (and not to
# `InferenceState`) as this class manages access handle mappings which
# must correspond to those in the subprocess. This approach also avoids
# race conditions from successive `InferenceState`s with the same object
# id (as observed while adding support for Python 3.13).
#
# This value does not need to be the `id()` of this instance, we merely
# need to ensure that it enables the (visible) lifetime of the context
# within the subprocess to match that of this class. We therefore also
# depend on the semantics of `CompiledSubprocess.delete_inference_state`
# for correctness.
self._inference_state_id = id(self)
def __getattr__(self, name):
func = _get_function(name)
@@ -186,7 +128,7 @@ class InferenceStateSubprocess(_InferenceStateProcess):
self._used = True
result = self._compiled_subprocess.run(
self._inference_state_id,
self._inference_state_weakref(),
func,
args=args,
kwargs=kwargs,
@@ -222,17 +164,6 @@ class InferenceStateSubprocess(_InferenceStateProcess):
class CompiledSubprocess:
"""
A subprocess which runs inference within a target environment.
This class manages the interface to a single instance of such a process as
well as the lifecycle of the process itself. See `.__main__` and `Listener`
for the implementation of the subprocess and details of the protocol.
A single live instance of this is maintained by `jedi.api.environment.Environment`,
so that typically a single subprocess is used at a time.
"""
is_crashed = False
def __init__(self, executable, env_vars=None):
@@ -282,18 +213,18 @@ class CompiledSubprocess:
t)
return process
def run(self, inference_state_id, function, args=(), kwargs={}):
def run(self, inference_state, function, args=(), kwargs={}):
# Delete old inference_states.
while True:
try:
delete_id = self._inference_state_deletion_queue.pop()
inference_state_id = self._inference_state_deletion_queue.pop()
except IndexError:
break
else:
self._send(delete_id, None)
self._send(inference_state_id, None)
assert callable(function)
return self._send(inference_state_id, function, args, kwargs)
return self._send(id(inference_state), function, args, kwargs)
def get_sys_path(self):
return self._send(None, functions.get_sys_path, (), {})
@@ -341,65 +272,21 @@ class CompiledSubprocess:
def delete_inference_state(self, inference_state_id):
"""
Indicate that an inference state (in the subprocess) is no longer
needed.
The state corresponding to the given id will become inaccessible and the
id may safely be re-used to refer to a different context.
Note: it is not guaranteed that the corresponding state will actually be
deleted immediately.
Currently we are not deleting inference_state instantly. They only get
deleted once the subprocess is used again. It would probably a better
solution to move all of this into a thread. However, the memory usage
of a single inference_state shouldn't be that high.
"""
# Warning: if changing the semantics of context deletion see the comment
# in `InferenceStateSubprocess.__init__` regarding potential race
# conditions.
# Currently we are not deleting the related state instantly. They only
# get deleted once the subprocess is used again. It would probably a
# better solution to move all of this into a thread. However, the memory
# usage of a single inference_state shouldn't be that high.
# With an argument - the inference_state gets deleted.
self._inference_state_deletion_queue.append(inference_state_id)
class Listener:
"""
Main loop for the subprocess which actually does the inference.
This class runs within the target environment. It listens to instructions
from the parent process, runs inference and returns the results.
The subprocess has a long lifetime and is expected to process several
requests, including for different `InferenceState` instances in the parent.
See `CompiledSubprocess` for the parent half of the system.
Communication is via pickled data sent serially over stdin and stdout.
Stderr is read only if the child process crashes.
The request protocol is a 4-tuple of:
* inference_state_id | None: an opaque identifier of the parent's
`InferenceState`. An `InferenceState` operating over an
`InterpreterEnvironment` is created within this process for each of
these, ensuring that each parent context has a corresponding context
here. This allows context to be persisted between requests. Unless
`None`, the local `InferenceState` will be passed to the given function
as the first positional argument.
* function | None: the function to run. This is expected to be a member of
`.functions`. `None` indicates that the corresponding inference state is
no longer needed and should be dropped.
* args: positional arguments to the `function`. If any of these are
`AccessHandle` instances they will be adapted to the local
`InferenceState` before being passed.
* kwargs: keyword arguments to the `function`. If any of these are
`AccessHandle` instances they will be adapted to the local
`InferenceState` before being passed.
The result protocol is a 3-tuple of either:
* (False, None, function result): if the function returns without error, or
* (True, traceback, exception): if the function raises an exception
"""
def __init__(self):
self._inference_states = {}
# TODO refactor so we don't need to process anymore just handle
# controlling.
self._process = _InferenceStateProcess(Listener)
def _get_inference_state(self, function, inference_state_id):
from jedi.inference import InferenceState
@@ -421,9 +308,6 @@ class Listener:
if inference_state_id is None:
return function(*args, **kwargs)
elif function is None:
# Warning: if changing the semantics of context deletion see the comment
# in `InferenceStateSubprocess.__init__` regarding potential race
# conditions.
del self._inference_states[inference_state_id]
else:
inference_state = self._get_inference_state(function, inference_state_id)
@@ -464,12 +348,7 @@ class Listener:
class AccessHandle:
def __init__(
self,
subprocess: _InferenceStateProcess,
access: DirectObjectAccess,
id_: int,
) -> None:
def __init__(self, subprocess, access, id_):
self.access = access
self._subprocess = subprocess
self.id = id_
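The docstrings removed in this file describe a pickled protocol: a 4-tuple request (state id, function, args, kwargs) and a 3-tuple result. A simplified in-process sketch of that shape (the registry and names are mine; the real code pickles function objects and talks to a subprocess over stdin/stdout):

```python
import pickle

# Stand-in registry; the real protocol sends members of `.functions`.
FUNCTIONS = {'add_one': lambda state, x: x + 1}

def handle_request(payload, states):
    """Process one pickled 4-tuple request and return a pickled 3-tuple
    result: (is_exception, traceback, value)."""
    state_id, function_name, args, kwargs = pickle.loads(payload)
    try:
        if function_name is None:
            states.pop(state_id, None)      # parent says: drop this context
            result = None
        else:
            state = states.setdefault(state_id, {})  # one context per parent id
            result = FUNCTIONS[function_name](state, *args, **kwargs)
        return pickle.dumps((False, None, result))
    except Exception as exc:
        return pickle.dumps((True, 'traceback would go here', exc))
```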


@@ -3,6 +3,10 @@ import sys
from importlib.abc import MetaPathFinder
from importlib.machinery import PathFinder
# Remove the first entry, because it's simply a directory entry that equals
# this directory.
del sys.path[0]
def _get_paths():
# Get the path to jedi.


@@ -48,7 +48,7 @@ def _get_numpy_doc_string_cls():
global _numpy_doc_string_cache
if isinstance(_numpy_doc_string_cache, (ImportError, SyntaxError)):
raise _numpy_doc_string_cache
from numpydoc.docscrape import NumpyDocString # type: ignore[import, unused-ignore]
from numpydoc.docscrape import NumpyDocString # type: ignore[import]
_numpy_doc_string_cache = NumpyDocString
return _numpy_doc_string_cache
@@ -109,7 +109,7 @@ def _expand_typestr(type_str):
yield type_str.split('of')[0]
# Check if type has is a set of valid literal values eg: {'C', 'F', 'A'}
elif type_str.startswith('{'):
node = parse(type_str, version='3.13').children[0]
node = parse(type_str, version='3.7').children[0]
if node.type == 'atom':
for leaf in getattr(node.children[1], "children", []):
if leaf.type == 'number':


@@ -480,7 +480,7 @@ def _load_builtin_module(inference_state, import_names=None, sys_path=None):
if sys_path is None:
sys_path = inference_state.get_sys_path()
if not project._load_unsafe_extensions:
safe_paths = set(project._get_base_sys_path(inference_state))
safe_paths = project._get_base_sys_path(inference_state)
sys_path = [p for p in sys_path if p in safe_paths]
dotted_name = '.'.join(import_names)


@@ -251,8 +251,6 @@ def _infer_node(context, element):
return NO_VALUES
elif typ == 'namedexpr_test':
return context.infer_node(element.children[2])
elif typ == 'star_expr':
return NO_VALUES
else:
return infer_or_test(context, element)
@@ -495,10 +493,8 @@ def infer_factor(value_set, operator):
elif operator == 'not':
b = value.py__bool__()
if b is None: # Uncertainty.
yield list(value.inference_state.builtins_module.py__getattribute__('bool')
.execute_annotation()).pop()
else:
yield compiled.create_simple_object(value.inference_state, not b)
return
yield compiled.create_simple_object(value.inference_state, not b)
else:
yield value
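The `not` hunk above differs in what happens when `py__bool__()` returns `None`, i.e. the value's truthiness is unknown: one side yields an instance of the `bool` type itself rather than guessing a concrete result. The three-valued idea, reduced to plain Python (not Jedi's value API):

```python
def infer_not(truthiness):
    """truthiness is True, False, or None when the value's truth is unknown."""
    if truthiness is None:
        return bool          # uncertain: all we know is the result is some bool
    return not truthiness    # certain: fold the negation immediately
```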
@@ -649,7 +645,7 @@ def _infer_comparison_part(inference_state, context, left, operator, right):
_bool_to_value(inference_state, False)
])
elif str_operator in ('in', 'not in'):
return inference_state.builtins_module.py__getattribute__('bool').execute_annotation()
return NO_VALUES
def check(obj):
"""Checks if a Jedi object is either a float or an int."""
@@ -699,15 +695,8 @@ def tree_name_to_values(inference_state, context, tree_name):
if expr_stmt.type == "expr_stmt" and expr_stmt.children[1].type == "annassign":
correct_scope = parser_utils.get_parent_scope(name) == context.tree_node
ann_assign = expr_stmt.children[1]
if correct_scope:
found_annotation = True
if (
(ann_assign.children[1].type == 'name')
and (ann_assign.children[1].value == tree_name.value)
and context.parent_context
):
context = context.parent_context
value_set |= annotation.infer_annotation(
context, expr_stmt.children[1].children[1]
).execute_annotation()


@@ -36,10 +36,6 @@ py__doc__() Returns the docstring for a value.
====================================== ========================================
"""
from __future__ import annotations
from typing import List, Optional, Tuple
from jedi import debug
from jedi.parser_utils import get_cached_parent_scope, expr_is_dotted, \
function_is_property
@@ -51,15 +47,11 @@ from jedi.inference.filters import ParserTreeFilter
from jedi.inference.names import TreeNameDefinition, ValueName
from jedi.inference.arguments import unpack_arglist, ValuesArguments
from jedi.inference.base_value import ValueSet, iterator_to_value_set, \
NO_VALUES, ValueWrapper
NO_VALUES
from jedi.inference.context import ClassContext
from jedi.inference.value.function import FunctionAndClassBase, FunctionMixin
from jedi.inference.value.decorator import Decoratee
from jedi.inference.value.function import FunctionAndClassBase
from jedi.inference.gradual.generics import LazyGenericManager, TupleGenericManager
from jedi.plugins import plugin_manager
from inspect import Parameter
from jedi.inference.names import BaseTreeParamName
from jedi.inference.signature import AbstractSignature
class ClassName(TreeNameDefinition):
@@ -137,65 +129,6 @@ class ClassFilter(ParserTreeFilter):
return [name for name in names if self._access_possible(name)]
def init_param_value(arg_nodes) -> Optional[bool]:
"""
Returns:
- ``True`` if ``@dataclass(init=True)``
- ``False`` if ``@dataclass(init=False)``
- ``None`` if not specified ``@dataclass()``
"""
for arg_node in arg_nodes:
if (
arg_node.type == "argument"
and arg_node.children[0].value == "init"
):
if arg_node.children[2].value == "False":
return False
elif arg_node.children[2].value == "True":
return True
return None
def get_dataclass_param_names(cls) -> List[DataclassParamName]:
"""
``cls`` is a :class:`ClassMixin`. The type is only documented as mypy would
complain that some fields are missing.
.. code:: python
@dataclass
class A:
a: int
b: str = "toto"
For the previous example, the param names would be ``a`` and ``b``.
"""
param_names = []
filter_ = cls.as_context().get_global_filter()
for name in sorted(filter_.values(), key=lambda name: name.start_pos):
d = name.tree_name.get_definition()
annassign = d.children[1]
if d.type == 'expr_stmt' and annassign.type == 'annassign':
node = annassign.children[1]
if node.type == "atom_expr" and node.children[0].value == "ClassVar":
continue
if len(annassign.children) < 4:
default = None
else:
default = annassign.children[3]
param_names.append(DataclassParamName(
parent_context=cls.parent_context,
tree_name=name.tree_name,
annotation_node=annassign.children[1],
default_node=default,
))
return param_names
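The helpers above synthesize a dataclass-style `__init__` signature from annotated class fields; the runtime-generated equivalent for the docstring's own example can be checked directly with the standard library:

```python
import dataclasses
import inspect

# The example from the get_dataclass_param_names docstring above.
@dataclasses.dataclass
class A:
    a: int
    b: str = "toto"

# The generated __init__ exposes one parameter per field, in declaration order.
sig = inspect.signature(A.__init__)
params = [name for name in sig.parameters if name != 'self']
```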
class ClassMixin:
def is_class(self):
return True
@@ -288,73 +221,6 @@ class ClassMixin:
assert x is not None
yield x
def _has_dataclass_transform_metaclasses(self) -> Tuple[bool, Optional[bool]]:
for meta in self.get_metaclasses(): # type: ignore[attr-defined]
if (
isinstance(meta, Decoratee)
# Internal leakage :|
and isinstance(meta._wrapped_value, DataclassTransformer)
):
return True, meta._wrapped_value.init_mode_from_new()
return False, None
def _get_dataclass_transform_signatures(self) -> List[DataclassSignature]:
"""
Returns: A non-empty list if the class has dataclass semantics else an
empty list.
The dataclass-like semantics will be assumed for any class that directly
or indirectly derives from the decorated class or uses the decorated
class as a metaclass.
"""
param_names = []
is_dataclass_transform = False
default_init_mode: Optional[bool] = None
for cls in reversed(list(self.py__mro__())):
if not is_dataclass_transform:
# If dataclass_transform is applied to a class, dataclass-like semantics
# will be assumed for any class that directly or indirectly derives from
# the decorated class or uses the decorated class as a metaclass.
if (
isinstance(cls, DataclassTransformer)
and cls.init_mode_from_init_subclass
):
is_dataclass_transform = True
default_init_mode = cls.init_mode_from_init_subclass
elif (
# Some object like CompiledValues would not be compatible
isinstance(cls, ClassMixin)
):
is_dataclass_transform, default_init_mode = (
cls._has_dataclass_transform_metaclasses()
)
# Attributes on the decorated class and its base classes are not
# considered to be fields.
if is_dataclass_transform:
continue
# All inherited classes behave like dataclass semantics
if (
is_dataclass_transform
and isinstance(cls, ClassValue)
and (
cls.init_param_mode()
or (cls.init_param_mode() is None and default_init_mode)
)
):
param_names.extend(
get_dataclass_param_names(cls)
)
if is_dataclass_transform:
return [DataclassSignature(cls, param_names)]
else:
return []
def get_signatures(self):
# Since calling staticmethod without a function is illegal, the Jedi
# plugin doesn't return anything. Therefore call directly and get what
@@ -366,12 +232,7 @@ class ClassMixin:
return sigs
args = ValuesArguments([])
init_funcs = self.py__call__(args).py__getattribute__('__init__')
dataclass_sigs = self._get_dataclass_transform_signatures()
if dataclass_sigs:
return dataclass_sigs
else:
return [sig.bind(self) for sig in init_funcs.get_signatures()]
return [sig.bind(self) for sig in init_funcs.get_signatures()]
def _as_context(self):
return ClassContext(self)
@@ -458,158 +319,6 @@ class ClassMixin:
return ValueSet({self})
class DataclassParamName(BaseTreeParamName):
"""
Represent a field declaration on a class with dataclass semantics.
"""
def __init__(self, parent_context, tree_name, annotation_node, default_node):
super().__init__(parent_context, tree_name)
self.annotation_node = annotation_node
self.default_node = default_node
def get_kind(self):
return Parameter.POSITIONAL_OR_KEYWORD
def infer(self):
if self.annotation_node is None:
return NO_VALUES
else:
return self.parent_context.infer_node(self.annotation_node)
class DataclassSignature(AbstractSignature):
"""
It represents the ``__init__`` signature of a class with dataclass semantics.
.. code:: python
@dataclass
class A:
x: int
A(  # the inferred signature here is (x: int)
"""
def __init__(self, value, param_names):
super().__init__(value)
self._param_names = param_names
def get_param_names(self, resolve_stars=False):
return self._param_names
class DataclassDecorator(ValueWrapper, FunctionMixin):
"""
A dataclass(-like) decorator with custom parameters.
.. code:: python
@dataclass(init=True) # this
class A: ...
@dataclass_transform
def create_model(*, init=False): pass
@create_model(init=False) # or this
class B: ...
"""
def __init__(self, function, arguments, default_init: bool = True):
"""
Args:
function: Decoratee | function
arguments: The parameters to the dataclass function decorator
default_init: The default value of ``init`` when the decorator does not pass it explicitly
"""
super().__init__(function)
argument_init = self._init_param_value(arguments)
self.init_param_mode = (
argument_init if argument_init is not None else default_init
)
def _init_param_value(self, arguments) -> Optional[bool]:
if not arguments.argument_node:
return None
arg_nodes = (
arguments.argument_node.children
if arguments.argument_node.type == "arglist"
else [arguments.argument_node]
)
return init_param_value(arg_nodes)
class DataclassTransformer(ValueWrapper, ClassMixin):
"""
A class decorated with the ``dataclass_transform`` decorator. Dataclass-like
semantics will be assumed for any class that directly or indirectly derives
from the decorated class or uses the decorated class as a metaclass.
Attributes on the decorated class and its base classes are not considered to
be fields.
"""
def __init__(self, wrapped_value):
super().__init__(wrapped_value)
def init_mode_from_new(self) -> bool:
"""Default value if missing is ``True``"""
new_methods = self._wrapped_value.py__getattribute__("__new__")
if not new_methods:
return True
new_method = list(new_methods)[0]
for param in new_method.get_param_names():
if (
param.string_name == "init"
and param.default_node
and param.default_node.type == "keyword"
):
if param.default_node.value == "False":
return False
elif param.default_node.value == "True":
return True
return True
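``init_mode_from_new`` reads the default of an ``init`` keyword parameter from ``__new__``, falling back to ``True``. A hypothetical standalone mirror of that lookup, written with ``inspect`` instead of parso tree nodes:

```python
import inspect

# Hypothetical mirror of init_mode_from_new: look up the default of an
# `init` keyword parameter; a missing parameter or default means True.
def init_default(func) -> bool:
    param = inspect.signature(func).parameters.get("init")
    if param is None or param.default is inspect.Parameter.empty:
        return True
    return bool(param.default)

def model_new(cls, *, init=False):
    pass

print(init_default(model_new))  # False
```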
@property
def init_mode_from_init_subclass(self) -> Optional[bool]:
# def __init_subclass__(cls) -> None: ... is hardcoded in typeshed,
# so the extra parameters cannot be inferred.
return True
class DataclassWrapper(ValueWrapper, ClassMixin):
"""
A class with dataclass semantics from a decorator. The ``__init__``
parameters come only from the current class and from parent classes
decorated with ``init=True``.
.. code:: python
@dataclass
class A: ... # this
@dataclass_transform
def create_model(): pass
@create_model()
class B: ... # or this
"""
def __init__(
self, wrapped_value, should_generate_init: bool
):
super().__init__(wrapped_value)
self.should_generate_init = should_generate_init
def get_signatures(self):
param_names = []
for cls in reversed(list(self.py__mro__())):
if (
isinstance(cls, DataclassWrapper)
and cls.should_generate_init
):
param_names.extend(get_dataclass_param_names(cls))
return [DataclassSignature(cls, param_names)]
class ClassValue(ClassMixin, FunctionAndClassBase, metaclass=CachedMetaClass):
api_type = 'class'
@@ -676,19 +385,6 @@ class ClassValue(ClassMixin, FunctionAndClassBase, metaclass=CachedMetaClass):
return values
return NO_VALUES
def init_param_mode(self) -> Optional[bool]:
"""
Returns the value of an ``init`` keyword in the class bases, e.g.
``False`` for ``class X(Base, init=False):``, or ``None`` if it is not given.
"""
bases_arguments = self._get_bases_arguments()
if bases_arguments.argument_node.type != "arglist":
# Without an argument list there are no extra keyword arguments
# (like ``init=...``), so the init behavior is not changed.
return None
return init_param_value(bases_arguments.argument_node.children)
@plugin_manager.decorate()
def get_metaclass_signatures(self, metaclasses):
return []

View File

@@ -80,7 +80,7 @@ class ModuleMixin(SubModuleDictMixin):
def is_stub(self):
return False
@property
@property # type: ignore[misc]
@inference_state_method_cache()
def name(self):
return self._module_name_class(self, self.string_names[-1])
@@ -138,7 +138,7 @@ class ModuleValue(ModuleMixin, TreeValue):
api_type = 'module'
def __init__(self, inference_state, module_node, code_lines, file_io=None,
string_names=None, is_package=False) -> None:
string_names=None, is_package=False):
super().__init__(
inference_state,
parent_context=None,
@@ -149,7 +149,7 @@ class ModuleValue(ModuleMixin, TreeValue):
self._path: Optional[Path] = None
else:
self._path = file_io.path
self.string_names: Optional[tuple[str, ...]] = string_names
self.string_names = string_names # Optional[Tuple[str, ...]]
self.code_lines = code_lines
self._is_package = is_package

View File

@@ -38,7 +38,7 @@ class ImplicitNamespaceValue(Value, SubModuleDictMixin):
def get_qualified_names(self):
return ()
@property
@property # type: ignore[misc]
@inference_state_method_cache()
def name(self):
string_name = self.py__package__()[-1]

View File

@@ -320,7 +320,7 @@ def expr_is_dotted(node):
return node.type == 'name'
def _function_is_x_method(decorator_checker):
def _function_is_x_method(*method_names):
def wrapper(function_node):
"""
This is a heuristic. It will not hold ALL the times, but it will be
@@ -330,16 +330,12 @@ def _function_is_x_method(decorator_checker):
"""
for decorator in function_node.get_decorators():
dotted_name = decorator.children[1]
if decorator_checker(dotted_name.get_code()):
if dotted_name.get_code() in method_names:
return True
return False
return wrapper
function_is_staticmethod = _function_is_x_method(lambda m: m == "staticmethod")
function_is_classmethod = _function_is_x_method(lambda m: m == "classmethod")
function_is_property = _function_is_x_method(
lambda m: m == "property"
or m == "cached_property"
or (m.endswith(".setter"))
)
function_is_staticmethod = _function_is_x_method('staticmethod')
function_is_classmethod = _function_is_x_method('classmethod')
function_is_property = _function_is_x_method('property', 'cached_property')
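The refactor above replaces predicate functions with a plain list of names; note that the name-list form no longer matches ``*.setter`` decorators, which the removed predicate version did. A minimal standalone sketch of the same heuristic over decorator-name strings (hypothetical — the real code walks parso tree nodes):

```python
# Hypothetical standalone version of the heuristic: check a function's
# decorator names against a fixed set of method-kind markers.
def _function_is_x_method(*method_names):
    def wrapper(decorator_names):
        return any(name in method_names for name in decorator_names)
    return wrapper

function_is_staticmethod = _function_is_x_method('staticmethod')
function_is_property = _function_is_x_method('property', 'cached_property')

print(function_is_staticmethod(['staticmethod']))  # True
print(function_is_property(['functools.wraps']))  # False
```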

View File

@@ -181,13 +181,7 @@ def _iter_pytest_modules(module_context, skip_own_module=False):
if Path(file_io.path) != module_context.py__file__():
try:
m = load_module_from_path(module_context.inference_state, file_io)
conftest_module = m.as_context()
yield conftest_module
plugins_list = m.tree_node.get_used_names().get("pytest_plugins")
if plugins_list:
name = conftest_module.create_name(plugins_list[0])
yield from _load_pytest_plugins(module_context, name)
yield m.as_context()
except FileNotFoundError:
pass
folder = folder.get_parent_folder()
@@ -202,19 +196,6 @@ def _iter_pytest_modules(module_context, skip_own_module=False):
yield module_value.as_context()
def _load_pytest_plugins(module_context, name):
from jedi.inference.helpers import get_str_or_none
for inferred in name.infer():
for seq_value in inferred.py__iter__():
for value in seq_value.infer():
fq_name = get_str_or_none(value)
if fq_name:
names = fq_name.split(".")
for module_value in module_context.inference_state.import_module(names):
yield module_value.as_context()
class FixtureFilter(ParserTreeFilter):
def _filter(self, names):
for name in super()._filter(names):

View File

@@ -11,6 +11,7 @@ compiled module that returns the types for C-builtins.
"""
import parso
import os
from inspect import Parameter
from jedi import debug
from jedi.inference.utils import safe_property
@@ -24,20 +25,15 @@ from jedi.inference.value.instance import \
from jedi.inference.base_value import ContextualizedNode, \
NO_VALUES, ValueSet, ValueWrapper, LazyValueWrapper
from jedi.inference.value import ClassValue, ModuleValue
from jedi.inference.value.decorator import Decoratee
from jedi.inference.value.klass import (
DataclassWrapper,
DataclassDecorator,
DataclassTransformer,
)
from jedi.inference.value.klass import ClassMixin
from jedi.inference.value.function import FunctionMixin
from jedi.inference.value import iterable
from jedi.inference.lazy_value import LazyTreeValue, LazyKnownValue, \
LazyKnownValues
from jedi.inference.names import ValueName
from jedi.inference.names import ValueName, BaseTreeParamName
from jedi.inference.filters import AttributeOverwrite, publish_method, \
ParserTreeFilter, DictFilter
from jedi.inference.signature import SignatureWrapper
from jedi.inference.signature import AbstractSignature, SignatureWrapper
# Copied from Python 3.6's stdlib.
@@ -595,103 +591,65 @@ def _random_choice(sequences):
def _dataclass(value, arguments, callback):
"""
Decorator entry points for dataclass.
1. dataclass decorator declaration with parameters
2. dataclass semantics on a class from a dataclass(-like) decorator
"""
for c in _follow_param(value.inference_state, arguments, 0):
if c.is_class():
# Declare dataclass semantics on a class from a dataclass decorator
should_generate_init = (
# Customized decorator, init may be disabled
value.init_param_mode
if isinstance(value, DataclassDecorator)
# Bare dataclass decorator, always with init mode
else True
)
return ValueSet([DataclassWrapper(c, should_generate_init)])
return ValueSet([DataclassWrapper(c)])
else:
# @dataclass(init=False)
# dataclass decorator customization
return ValueSet(
[
DataclassDecorator(
value,
arguments=arguments,
default_init=True,
)
]
)
return NO_VALUES
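The ``init`` handling in ``_dataclass`` mirrors the runtime behavior of ``dataclass(init=False)``, which suppresses the synthesized constructor. A runnable illustration:

```python
import dataclasses
import inspect

@dataclasses.dataclass(init=False)
class A:
    x: int = 0

# With init=False no __init__ is synthesized, so the field does not
# appear as a constructor parameter (object.__init__ is inherited).
print("x" in inspect.signature(A.__init__).parameters)  # False
```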
def _dataclass_transform(value, arguments, callback):
"""
Decorator entry points for dataclass_transform.
1. dataclass-like decorator instantiation from a dataclass_transform decorator
2. dataclass_transform decorator declaration with parameters
3. dataclass-like decorator declaration with parameters
4. dataclass-like semantics on a class from a dataclass-like decorator
"""
for c in _follow_param(value.inference_state, arguments, 0):
if c.is_class():
is_dataclass_transform = (
value.name.string_name == "dataclass_transform"
# The decorator function from dataclass_transform acting as the
# dataclass decorator.
and not isinstance(value, Decoratee)
# The decorator function from dataclass_transform acting as the
# dataclass decorator with customized parameters
and not isinstance(value, DataclassDecorator)
)
if is_dataclass_transform:
# Declare base class
return ValueSet([DataclassTransformer(c)])
else:
# Declare dataclass-like semantics on a class from a
# dataclass-like decorator
should_generate_init = value.init_param_mode
return ValueSet([DataclassWrapper(c, should_generate_init)])
elif c.is_function():
# dataclass-like decorator instantiation:
# @dataclass_transform
# def create_model()
return ValueSet(
[
DataclassDecorator(
value,
arguments=arguments,
default_init=True,
)
]
)
elif (
# @dataclass_transform
# def create_model(): pass
# @create_model(init=...)
isinstance(value, Decoratee)
):
# dataclass (or like) decorator customization
return ValueSet(
[
DataclassDecorator(
value,
arguments=arguments,
default_init=value._wrapped_value.init_param_mode,
)
]
)
else:
# dataclass_transform decorator with parameters; nothing impactful
return ValueSet([value])
return NO_VALUES
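At runtime ``dataclass_transform`` is only a marker for static analyzers such as Jedi; the decorated decorator must implement the dataclass behavior itself. A sketch of entry point 1 above (``typing.dataclass_transform`` assumes Python 3.11+, so a no-op fallback is included for older interpreters):

```python
import dataclasses

try:
    from typing import dataclass_transform  # Python 3.11+
except ImportError:
    def dataclass_transform(**_kwargs):  # no-op fallback for older Pythons
        def decorator(obj):
            return obj
        return decorator

@dataclass_transform()
def create_model(cls):
    # The marker only informs tools; the runtime behavior still has to be
    # provided, here by delegating to dataclasses.dataclass.
    return dataclasses.dataclass(cls)

@create_model
class Model:
    name: str

print(Model("jedi").name)  # jedi
```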
class DataclassWrapper(ValueWrapper, ClassMixin):
def get_signatures(self):
param_names = []
for cls in reversed(list(self.py__mro__())):
if isinstance(cls, DataclassWrapper):
filter_ = cls.as_context().get_global_filter()
# .values ordering is not guaranteed, at least not in
# Python < 3.6, when dicts were not ordered, which is an
# implementation detail anyway.
for name in sorted(filter_.values(), key=lambda name: name.start_pos):
d = name.tree_name.get_definition()
annassign = d.children[1]
if d.type == 'expr_stmt' and annassign.type == 'annassign':
if len(annassign.children) < 4:
default = None
else:
default = annassign.children[3]
param_names.append(DataclassParamName(
parent_context=cls.parent_context,
tree_name=name.tree_name,
annotation_node=annassign.children[1],
default_node=default,
))
return [DataclassSignature(cls, param_names)]
class DataclassSignature(AbstractSignature):
def __init__(self, value, param_names):
super().__init__(value)
self._param_names = param_names
def get_param_names(self, resolve_stars=False):
return self._param_names
class DataclassParamName(BaseTreeParamName):
def __init__(self, parent_context, tree_name, annotation_node, default_node):
super().__init__(parent_context, tree_name)
self.annotation_node = annotation_node
self.default_node = default_node
def get_kind(self):
return Parameter.POSITIONAL_OR_KEYWORD
def infer(self):
if self.annotation_node is None:
return NO_VALUES
else:
return self.parent_context.infer_node(self.annotation_node)
class ItemGetterCallable(ValueWrapper):
def __init__(self, instance, args_value_set):
super().__init__(instance)
@@ -840,17 +798,22 @@ _implemented = {
# runtime_checkable doesn't really change anything and is just
# adding logs for inferring stuff, so we can safely ignore it.
'runtime_checkable': lambda value, arguments, callback: NO_VALUES,
# Python 3.11+
'dataclass_transform': _dataclass_transform,
},
'typing_extensions': {
# Python <3.11
'dataclass_transform': _dataclass_transform,
},
'dataclasses': {
# For now this works at least better than Jedi trying to understand it.
'dataclass': _dataclass
},
# attrs exposes declaration interface roughly compatible with dataclasses
# via attrs.define, attrs.frozen and attrs.mutable
# https://www.attrs.org/en/stable/names.html
'attr': {
'define': _dataclass,
'frozen': _dataclass,
},
'attrs': {
'define': _dataclass,
'frozen': _dataclass,
},
'os.path': {
'dirname': _create_string_input_function(os.path.dirname),
'abspath': _create_string_input_function(os.path.abspath),

View File

@@ -21,13 +21,7 @@ per-file-ignores =
jedi/__init__.py:F401
jedi/inference/compiled/__init__.py:F401
jedi/inference/value/__init__.py:F401
exclude =
.tox/*
jedi/third_party/*
test/completion/*
test/examples/*
test/refactor/*
test/static_analysis/*
exclude = jedi/third_party/* .tox/*
[pycodestyle]
max-line-length = 100

View File

@@ -1,5 +1,4 @@
#!/usr/bin/env python
from typing import cast
from setuptools import setup, find_packages
from setuptools.depends import get_module_constant
@@ -10,7 +9,7 @@ __AUTHOR__ = 'David Halter'
__AUTHOR_EMAIL__ = 'davidhalter88@gmail.com'
# Get the version from within jedi. It's defined in exactly one place now.
version = cast(str, get_module_constant("jedi", "__version__"))
version = get_module_constant("jedi", "__version__")
readme = open('README.rst').read() + '\n\n' + open('CHANGELOG.rst').read()
@@ -35,27 +34,26 @@ setup(name='jedi',
keywords='python completion refactoring vim',
long_description=readme,
packages=find_packages(exclude=['test', 'test.*']),
python_requires='>=3.8',
# Python 3.13 grammars are added to parso in 0.8.4
install_requires=['parso>=0.8.4,<0.9.0'],
python_requires='>=3.6',
# Python 3.11 & 3.12 grammars are added to parso in 0.8.3
install_requires=['parso>=0.8.3,<0.9.0'],
extras_require={
'testing': [
'pytest<9.0.0',
'pytest<7.0.0',
# docopt for sith doctests
'docopt',
# coloroma for colored debug output
'colorama',
'Django',
'attrs',
'typing_extensions',
],
'qa': [
# latest version on 2025-06-16
'flake8==7.2.0',
# latest version supporting Python 3.6
'mypy==1.16',
'flake8==5.0.4',
# latest version supporting Python 3.6
'mypy==0.971',
# Arbitrary pins, latest at the time of pinning
'types-setuptools==80.9.0.20250529',
'types-setuptools==67.2.0.1',
],
'docs': [
# Just pin all of these.
@@ -96,12 +94,13 @@ setup(name='jedi',
'License :: OSI Approved :: MIT License',
'Operating System :: OS Independent',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
'Programming Language :: Python :: 3.10',
'Programming Language :: Python :: 3.11',
'Programming Language :: Python :: 3.12',
'Programming Language :: Python :: 3.13',
'Topic :: Software Development :: Libraries :: Python Modules',
'Topic :: Text Editors :: Integrated Development Environments (IDE)',
'Topic :: Utilities',

View File

@@ -44,7 +44,7 @@ Options:
--pudb Launch pudb when error is raised.
"""
from docopt import docopt # type: ignore[import, unused-ignore]
from docopt import docopt # type: ignore[import]
import json
import os

View File

@@ -527,11 +527,3 @@ lc = [x for a, *x in [(1, '', 1.0)]]
lc[0][0]
#?
lc[0][1]
xy = (1,)
x, y = *xy, None
# whatever it is should not crash
#?
x

View File

@@ -424,13 +424,3 @@ with open("a"), open("b") as bfile:
some_array = ['', '']
#! ['def upper']
some_array[some_not_defined_index].upper
# -----------------
# operator
# -----------------
#? bool()
res = 'f' in 'foo'; res
#? bool()
res = not {}; res

View File

@@ -27,9 +27,3 @@ def capsysbinary(capsysbinary):
#? ['close']
capsysbinary.clos
return capsysbinary
# used when fixtures are defined in multiple files
pytest_plugins = [
"completion.fixture_module",
]

View File

@@ -21,9 +21,11 @@ class Y(X):
#? []
def __doc__
#? []
def __class__
# This might or might not be what we wanted, currently properties are also
# used like this. IMO this is not wanted ~dave.
#? ['__class__']
def __class__
#? []
__class__

View File

@@ -1,6 +0,0 @@
# Exists only for completion/pytest.py
import pytest
@pytest.fixture
def my_module_fixture():
return 1.0

View File

@@ -180,11 +180,6 @@ def argskwargs(*args: int, **kwargs: float):
#? float()
kwargs['']
class Test:
str: str = 'abc'
#? ['upper']
Test.str.upp
class NotCalledClass:
def __init__(self, x):

View File

@@ -96,9 +96,6 @@ def test_x(my_con
#? 18 ['my_conftest_fixture']
def test_x(my_conftest_fixture):
return
#? ['my_module_fixture']
def test_x(my_modu
return
#? []
def lala(my_con

View File

@@ -134,7 +134,7 @@ TEST_GOTO = 2
TEST_REFERENCES = 3
grammar313 = parso.load_grammar(version='3.13')
grammar36 = parso.load_grammar(version='3.6')
class BaseTestCase(object):
@@ -238,7 +238,7 @@ class IntegrationTestCase(BaseTestCase):
should_be = set()
for match in re.finditer('(?:[^ ]+)', correct):
string = match.group(0)
parser = grammar313.parse(string, start_symbol='eval_input', error_recovery=False)
parser = grammar36.parse(string, start_symbol='eval_input', error_recovery=False)
parser_utils.move(parser.get_root_node(), self.line_nr)
node = parser.get_root_node()
module_context = script._get_module_context()
@@ -504,7 +504,7 @@ if __name__ == '__main__':
if arguments['--env']:
environment = get_system_environment(arguments['--env'])
else:
# Will be 3.13.
# Will be 3.6.
environment = get_default_environment()
import traceback

View File

@@ -321,19 +321,10 @@ def test_docstrings_for_completions(Script):
assert isinstance(c.docstring(), str)
def test_completions_order_most_resemblance_on_top(Script):
"""Test that the completion which resembles the in-typing the most will come first."""
code = "from pathlib import Path\npath = Path('hello.txt')\n\npat"
script = Script(code)
# User is typing "pat" and "path" is closer to it than "Path".
assert ['path', 'Path'] == [comp.name for comp in script.complete()]
def test_fuzzy_completion(Script):
script = Script('string = "hello"\nstring.upper')
# 'isupper' is included because it is fuzzily matched.
assert ['upper',
'isupper'] == [comp.name for comp in script.complete(fuzzy=True)]
assert ['isupper',
'upper'] == [comp.name for comp in script.complete(fuzzy=True)]
def test_math_fuzzy_completion(Script, environment):

View File

@@ -348,8 +348,8 @@ def test_parent_on_comprehension(Script):
def test_type(Script):
for c in Script('a = [str()]; a[0].').complete():
if c.name == '__class__':
assert c.type == 'property'
if c.name == '__class__' and False: # TODO fix.
assert c.type == 'class'
else:
assert c.type in ('function', 'statement')

View File

@@ -17,7 +17,7 @@ def func1(x, y):
def func2():
what ?
i = 3
def func3():
1'''
cls_code = '''\

View File

@@ -26,7 +26,7 @@ def test_find_system_environments():
@pytest.mark.parametrize(
'version',
['3.8', '3.9', '3.10', '3.11', '3.12', '3.13']
['3.6', '3.7', '3.8', '3.9']
)
def test_versions(version):
try:

View File

@@ -26,7 +26,7 @@ def get_completion(source, namespace):
def test_builtin_details():
import keyword # noqa: F401
import keyword
class EmptyClass:
pass
@@ -53,9 +53,9 @@ def test_numpy_like_non_zero():
"""
class NumpyNonZero:
def __zero__(self):
raise ValueError('Numpy arrays would raise and tell you to use .any() or all()')
def __bool__(self):
raise ValueError('Numpy arrays would raise and tell you to use .any() or all()')
@@ -113,17 +113,17 @@ def _assert_interpreter_complete(source, namespace, completions, *, check_type=F
def test_complete_raw_function():
from os.path import join # noqa: F401
from os.path import join
_assert_interpreter_complete('join("").up', locals(), ['upper'])
def test_complete_raw_function_different_name():
from os.path import join as pjoin # noqa: F401
from os.path import join as pjoin
_assert_interpreter_complete('pjoin("").up', locals(), ['upper'])
def test_complete_raw_module():
import os # noqa: F401
import os
_assert_interpreter_complete('os.path.join("a").up', locals(), ['upper'])
@@ -281,7 +281,7 @@ def test_param_completion():
def foo(bar):
pass
lambd = lambda xyz: 3 # noqa: E731
lambd = lambda xyz: 3
_assert_interpreter_complete('foo(bar', locals(), ['bar='])
_assert_interpreter_complete('lambd(xyz', locals(), ['xyz='])
@@ -295,7 +295,7 @@ def test_endless_yield():
def test_completion_params():
foo = lambda a, b=3: None # noqa: E731
foo = lambda a, b=3: None
script = jedi.Interpreter('foo', [locals()])
c, = script.complete()
@@ -310,9 +310,8 @@ def test_completion_param_annotations():
# Need to define this function not directly in Python. Otherwise Jedi is too
# clever and uses the Python code instead of the signature object.
code = 'def foo(a: 1, b: str, c: int = 1.0) -> bytes: pass'
exec_locals = {}
exec(code, exec_locals)
script = jedi.Interpreter('foo', [exec_locals])
exec(code, locals())
script = jedi.Interpreter('foo', [locals()])
c, = script.complete()
sig, = c.get_signatures()
a, b, c = sig.params
@@ -324,7 +323,7 @@ def test_completion_param_annotations():
assert b.description == 'param b: str'
assert c.description == 'param c: int=1.0'
d, = jedi.Interpreter('foo()', [exec_locals]).infer()
d, = jedi.Interpreter('foo()', [locals()]).infer()
assert d.name == 'bytes'
@@ -410,7 +409,7 @@ def test_dir_magic_method(allow_unsafe_getattr):
def test_name_not_findable():
class X():
if 0:
NOT_FINDABLE # noqa: F821
NOT_FINDABLE
def hidden(self):
return
@@ -423,7 +422,7 @@ def test_name_not_findable():
def test_stubs_working():
from multiprocessing import cpu_count # noqa: F401
from multiprocessing import cpu_count
defs = jedi.Interpreter("cpu_count()", [locals()]).infer()
assert [d.name for d in defs] == ['int']
@@ -526,17 +525,14 @@ def test_partial_signatures(code, expected, index):
c = functools.partial(func, 1, c=2)
sig, = jedi.Interpreter(code, [locals()]).get_signatures()
assert sig.name == 'partial'
assert [p.name for p in sig.params] == expected
assert index == sig.index
if sys.version_info < (3, 13):
# Python 3.13.0b3 makes functools.partial be a descriptor, which breaks
# Jedi's `py__name__` detection; see https://github.com/davidhalter/jedi/issues/2012
assert sig.name == 'partial'
def test_type_var():
"""This was an issue before, see Github #1369"""
import typing
x = typing.TypeVar('myvar')
def_, = jedi.Interpreter('x', [locals()]).infer()
assert def_.name == 'TypeVar'
@@ -614,7 +610,7 @@ def test_dict_getitem(code, types):
('next(DunderCls())', 'float'),
('next(dunder)', 'float'),
('for x in DunderCls(): x', 'str'),
# ('for x in dunder: x', 'str'),
#('for x in dunder: x', 'str'),
]
)
def test_dunders(class_is_findable, code, expected, allow_unsafe_getattr):
@@ -691,7 +687,7 @@ def bar():
]
)
def test_string_annotation(annotations, result, code):
x = lambda foo: 1 # noqa: E731
x = lambda foo: 1
x.__annotations__ = annotations
defs = jedi.Interpreter(code or 'x()', [locals()]).infer()
assert [d.name for d in defs] == result
@@ -724,8 +720,8 @@ def test_negate():
def test_complete_not_findable_class_source():
class TestClass():
ta = 1
ta1 = 2
ta=1
ta1=2
# Simulate the environment where the class is defined in
# an interactive session and therefore inspect module
@@ -760,7 +756,7 @@ def test_param_infer_default():
],
)
def test_keyword_param_completion(code, expected):
import random # noqa: F401
import random
completions = jedi.Interpreter(code, [locals()]).complete()
assert expected == [c.name for c in completions if c.name.endswith('=')]

View File

@@ -144,17 +144,13 @@ def test_load_save_project(tmpdir):
]
)
def test_search(string, full_names, kwargs):
some_search_test_var = 1.0 # noqa: F841
some_search_test_var = 1.0
project = Project(test_dir)
if kwargs.pop('complete', False) is True:
defs = project.complete_search(string, **kwargs)
else:
defs = project.search(string, **kwargs)
actual = sorted([
('stub:' if d.is_stub() else '') + (d.full_name or d.name)
for d in defs
])
assert actual == full_names
assert sorted([('stub:' if d.is_stub() else '') + (d.full_name or d.name) for d in defs]) == full_names
@pytest.mark.parametrize(
@@ -173,8 +169,8 @@ def test_complete_search(Script, string, completions, all_scopes):
@pytest.mark.parametrize(
'path,expected', [
(Path(__file__).parents[2], True), # The path of the project
(Path(__file__).parents[1], False), # The path of the tests, not a project
(Path(__file__).parents[2], True), # The path of the project
(Path(__file__).parents[1], False), # The path of the tests, not a project
(Path.home(), None)
]
)

View File

@@ -113,7 +113,7 @@ def test_diff_without_ending_newline(Script):
b
-a
+c
''') # noqa: W291
''')
def test_diff_path_outside_of_project(Script):

View File

@@ -1,5 +1,5 @@
import os
import sys # noqa: F401
import sys
import pytest

View File

@@ -1,7 +1,6 @@
import jedi
from jedi import debug
def test_simple():
jedi.set_debug_function()
debug.speed('foo')

View File

@@ -6,7 +6,6 @@ from datetime import datetime
import pytest
import jedi
from jedi.inference import compiled
from jedi.inference.compiled.access import DirectObjectAccess
from jedi.inference.gradual.conversion import _stub_to_python_value_set
@@ -82,9 +81,9 @@ def test_method_completion(Script, environment):
assert [c.name for c in Script(code).complete()] == ['__func__']
def test_time_docstring():
def test_time_docstring(Script):
import time
comp, = jedi.Script('import time\ntime.sleep').complete()
comp, = Script('import time\ntime.sleep').complete()
assert comp.docstring(raw=True) == time.sleep.__doc__
expected = 'sleep(secs: float) -> None\n\n' + time.sleep.__doc__
assert comp.docstring() == expected

View File

@@ -206,7 +206,6 @@ def test_numpydoc_parameters_set_of_values():
assert 'capitalize' in names
assert 'numerator' in names
@pytest.mark.skipif(numpydoc_unavailable,
reason='numpydoc module is unavailable')
def test_numpydoc_parameters_set_single_value():
@@ -391,8 +390,7 @@ def test_numpydoc_yields():
@pytest.mark.skipif(numpydoc_unavailable or numpy_unavailable,
reason='numpydoc or numpy module is unavailable')
def test_numpy_returns():
s = dedent(
'''
s = dedent('''
import numpy
x = numpy.asarray([])
x.d'''
@@ -404,8 +402,7 @@ def test_numpy_returns():
@pytest.mark.skipif(numpydoc_unavailable or numpy_unavailable,
reason='numpydoc or numpy module is unavailable')
def test_numpy_comp_returns():
s = dedent(
'''
s = dedent('''
import numpy
x = numpy.array([])
x.d'''

View File

@@ -33,9 +33,7 @@ def test_get_signatures_stdlib(Script):
# Check only on linux 64 bit platform and Python3.8.
@pytest.mark.parametrize('load_unsafe_extensions', [False, True])
@pytest.mark.skipif(
'sys.platform != "linux" or sys.maxsize <= 2**32 or sys.version_info[:2] != (3, 8)',
)
@pytest.mark.skipif('sys.platform != "linux" or sys.maxsize <= 2**32 or sys.version_info[:2] != (3, 8)')
def test_init_extension_module(Script, load_unsafe_extensions):
"""
``__init__`` extension modules are also packages and Jedi should understand

View File

@@ -222,7 +222,7 @@ def test_goto_stubs_on_itself(Script, code, type_):
def test_module_exists_only_as_stub(Script):
try:
import redis # noqa: F401
import redis
except ImportError:
pass
else:
@@ -234,7 +234,7 @@ def test_module_exists_only_as_stub(Script):
def test_django_exists_only_as_stub(Script):
try:
import django # noqa: F401
import django
except ImportError:
pass
else:

View File

@@ -307,33 +307,6 @@ def test_os_issues(Script):
assert 'path' in import_names(s, column=len(s) - 3)
def test_duplicated_import(Script):
def import_names(*args, **kwargs):
return [d.name for d in Script(*args).complete(**kwargs)]
s = 'import os, o'
assert 'os' not in import_names(s)
assert 'os' in import_names(s, column=len(s) - 3)
s = 'from os import path, p'
assert 'path' not in import_names(s)
assert 'path' in import_names(s, column=len(s) - 3)
assert 'path' in import_names("from os import path")
assert 'path' in import_names("from os import chdir, path")
s = 'import math as mm, m'
assert 'math' not in import_names(s)
s = 'import math as os, o'
assert 'os' in import_names(s)
s = 'from os import path as pp, p'
assert 'path' not in import_names(s)
s = 'from os import chdir as path, p'
assert 'path' in import_names(s)
def test_path_issues(Script):
"""
See pull request #684 for details.

View File

@@ -1,3 +1,4 @@
import pytest
from jedi.inference.value import TreeInstance

View File

@@ -16,13 +16,13 @@ def test_on_code():
assert i.infer()
def test_generics_without_definition() -> None:
def test_generics_without_definition():
# Used to raise a recursion error
T = TypeVar('T')
class Stack(Generic[T]):
def __init__(self) -> None:
self.items: List[T] = []
def __init__(self):
self.items = [] # type: List[T]
def push(self, item):
self.items.append(item)

View File

@@ -1,5 +1,5 @@
from textwrap import dedent
from operator import eq, ge, lt
from operator import ge, lt
import re
import os
@@ -14,8 +14,7 @@ from ..helpers import get_example_dir
('import math; math.cos', 'cos(x, /)', ['x'], ge, (3, 6)),
('next', 'next(iterator, default=None, /)', ['iterator', 'default'], lt, (3, 12)),
('next', 'next()', [], eq, (3, 12)),
('next', 'next(iterator, default=None, /)', ['iterator', 'default'], ge, (3, 13)),
('next', 'next()', [], ge, (3, 12)),
('str', "str(object='', /) -> str", ['object'], ge, (3, 6)),
@@ -318,511 +317,40 @@ def test_wraps_signature(Script, code, signature):
@pytest.mark.parametrize(
"start, start_params, include_params",
[
["@dataclass\nclass X:", [], True],
["@dataclass(eq=True)\nclass X:", [], True],
[
dedent(
"""
'start, start_params', [
['@dataclass\nclass X:', []],
['@dataclass(eq=True)\nclass X:', []],
[dedent('''
class Y():
y: int
@dataclass
class X(Y):"""
),
[],
True,
],
[
dedent(
"""
class X(Y):'''), []],
[dedent('''
@dataclass
class Y():
y: int
z = 5
@dataclass
class X(Y):"""
),
["y"],
True,
],
[
dedent(
"""
@dataclass
class Y():
y: int
class Z(Y): # Not included
z = 5
@dataclass
class X(Z):"""
),
["y"],
True,
],
# init=False
[
dedent(
"""
@dataclass(init=False)
class X:"""
),
[],
False,
],
[
dedent(
"""
@dataclass(eq=True, init=False)
class X:"""
),
[],
False,
],
# custom init
[
dedent(
"""
@dataclass()
class X:
def __init__(self, toto: str):
pass
"""
),
["toto"],
False,
],
],
ids=[
"direct_transformed",
"transformed_with_params",
"subclass_transformed",
"both_transformed",
"intermediate_not_transformed",
"init_false",
"init_false_multiple",
"custom_init",
],
class X(Y):'''), ['y']],
]
)
def test_dataclass_signature(
Script, skip_pre_python37, start, start_params, include_params, environment
):
if environment.version_info < (3, 8):
# Final is not yet supported
price_type = "float"
price_type_infer = "float"
else:
price_type = "Final[float]"
price_type_infer = "object"
code = dedent(
f"""
name: str
foo = 3
blob: ClassVar[str]
price: {price_type}
quantity: int = 0.0
X("""
)
code = (
"from dataclasses import dataclass\n"
+ "from typing import ClassVar, Final\n"
+ start
+ code
)
sig, = Script(code).get_signatures()
expected_params = (
[*start_params, "name", "price", "quantity"]
if include_params
else [*start_params]
)
assert [p.name for p in sig.params] == expected_params
if include_params:
quantity, = sig.params[-1].infer()
assert quantity.name == 'int'
price, = sig.params[-2].infer()
assert price.name == price_type_infer
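The expected parameter lists asserted above follow directly from how `dataclasses` selects fields at runtime: only annotated, non-`ClassVar` class attributes become `__init__` parameters. A minimal illustration reusing the fixture's attribute names (the names themselves are arbitrary):

```python
import inspect
from dataclasses import dataclass, fields
from typing import ClassVar

@dataclass
class X:
    name: str
    foo = 3                    # no annotation: plain class attribute, not a field
    blob: ClassVar[str] = ""   # ClassVar annotations are excluded from __init__
    price: float = 0.0
    quantity: int = 0

assert [f.name for f in fields(X)] == ['name', 'price', 'quantity']
assert list(inspect.signature(X).parameters) == ['name', 'price', 'quantity']
```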
dataclass_transform_cases = [
# Attributes on the decorated class and its base classes
# are not considered to be fields.
# 1/ Declare dataclass transformer
# Base Class
['@dataclass_transform\nclass X:', [], False],
# Base Class with params
['@dataclass_transform(eq_default=True)\nclass X:', [], False],
# Subclass
[dedent('''
class Y():
y: int
@dataclass_transform
class X(Y):'''), [], False],
# 2/ Declare dataclass transformed
# Class based
[dedent('''
@dataclass_transform
class Y():
y: int
z = 5
class X(Y):'''), [], True],
# Class based with params
[dedent('''
@dataclass_transform(eq_default=True)
class Y():
y: int
z = 5
class X(Y):'''), [], True],
# Decorator based
[dedent('''
@dataclass_transform
def create_model():
pass
@create_model
class X:'''), [], True],
[dedent('''
@dataclass_transform
def create_model():
pass
class Y:
y: int
@create_model
class X(Y):'''), [], True],
[dedent('''
@dataclass_transform
def create_model():
pass
@create_model
class Y:
y: int
@create_model
class X(Y):'''), ["y"], True],
[dedent('''
@dataclass_transform
def create_model():
pass
@create_model
class Y:
y: int
class Z(Y):
z: int
@create_model
class X(Z):'''), ["y"], True],
# Metaclass based
[dedent('''
@dataclass_transform
class ModelMeta():
y: int
z = 5
class ModelBase(metaclass=ModelMeta):
t: int
p = 5
class X(ModelBase):'''), [], True],
# 3/ Init custom init
[dedent('''
@dataclass_transform()
class Y():
y: int
z = 5
class X(Y):
def __init__(self, toto: str):
pass
'''), ["toto"], False],
# 4/ init=false
# Class based
# WARNING: Unsupported
# [dedent('''
# @dataclass_transform
# class Y():
# y: int
# z = 5
# def __init_subclass__(
# cls,
# *,
# init: bool = False,
# )
# class X(Y):'''), [], False],
[dedent('''
@dataclass_transform
class Y():
y: int
z = 5
def __init_subclass__(
cls,
*,
init: bool = False,
)
class X(Y, init=True):'''), [], True],
[dedent('''
@dataclass_transform
class Y():
y: int
z = 5
def __init_subclass__(
cls,
*,
init: bool = False,
)
class X(Y, init=False):'''), [], False],
[dedent('''
@dataclass_transform
class Y():
y: int
z = 5
class X(Y, init=False):'''), [], False],
# Decorator based
[dedent('''
@dataclass_transform
def create_model(init=False):
pass
@create_model()
class X:'''), [], False],
[dedent('''
@dataclass_transform
def create_model(init=False):
pass
@create_model(init=True)
class X:'''), [], True],
[dedent('''
@dataclass_transform
def create_model(init=False):
pass
@create_model(init=False)
class X:'''), [], False],
[dedent('''
@dataclass_transform
def create_model():
pass
@create_model(init=False)
class X:'''), [], False],
# Metaclass based
[dedent('''
@dataclass_transform
class ModelMeta():
y: int
z = 5
def __new__(
cls,
name,
bases,
namespace,
*,
init: bool = False,
):
...
class ModelBase(metaclass=ModelMeta):
t: int
p = 5
class X(ModelBase):'''), [], False],
[dedent('''
@dataclass_transform
class ModelMeta():
y: int
z = 5
def __new__(
cls,
name,
bases,
namespace,
*,
init: bool = False,
):
...
class ModelBase(metaclass=ModelMeta):
t: int
p = 5
class X(ModelBase, init=True):'''), [], True],
[dedent('''
@dataclass_transform
class ModelMeta():
y: int
z = 5
def __new__(
cls,
name,
bases,
namespace,
*,
init: bool = False,
):
...
class ModelBase(metaclass=ModelMeta):
t: int
p = 5
class X(ModelBase, init=False):'''), [], False],
[dedent('''
@dataclass_transform
class ModelMeta():
y: int
z = 5
class ModelBase(metaclass=ModelMeta):
t: int
p = 5
class X(ModelBase, init=False):'''), [], False],
# 4/ Other parameters
# Class based
[dedent('''
@dataclass_transform
class Y():
y: int
z = 5
class X(Y, eq=True):'''), [], True],
# Decorator based
[dedent('''
@dataclass_transform
def create_model():
pass
@create_model(eq=True)
class X:'''), [], True],
# Metaclass based
[dedent('''
@dataclass_transform
class ModelMeta():
y: int
z = 5
class ModelBase(metaclass=ModelMeta):
t: int
p = 5
class X(ModelBase, eq=True):'''), [], True],
]
ids = [
"direct_transformer",
"transformer_with_params",
"subclass_transformer",
"base_transformed",
"base_transformed_with_params",
"decorator_transformed_direct",
"decorator_transformed_subclass",
"decorator_transformed_both",
"decorator_transformed_intermediate_not",
"metaclass_transformed",
"custom_init",
# "base_transformed_init_false_dataclass_init_default",
"base_transformed_init_false_dataclass_init_true",
"base_transformed_init_false_dataclass_init_false",
"base_transformed_init_default_dataclass_init_false",
"decorator_transformed_init_false_dataclass_init_default",
"decorator_transformed_init_false_dataclass_init_true",
"decorator_transformed_init_false_dataclass_init_false",
"decorator_transformed_init_default_dataclass_init_false",
"metaclass_transformed_init_false_dataclass_init_default",
"metaclass_transformed_init_false_dataclass_init_true",
"metaclass_transformed_init_false_dataclass_init_false",
"metaclass_transformed_init_default_dataclass_init_false",
"base_transformed_other_parameters",
"decorator_transformed_other_parameters",
"metaclass_transformed_other_parameters",
]
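The cases above exercise PEP 681's `dataclass_transform`, which is purely a marker for static tools (type checkers, jedi) and changes nothing at runtime. A minimal runtime sketch — the `create_model` decorator is a hypothetical stand-in for a real framework, and the fallback covers interpreters older than 3.11 where `typing.dataclass_transform` does not exist:

```python
try:
    from typing import dataclass_transform  # Python 3.11+
except ImportError:
    def dataclass_transform(**_kwargs):
        # Runtime fallback sketch: the marker attaches no behaviour.
        def mark(obj):
            return obj
        return mark

@dataclass_transform()
def create_model(cls):
    # A real framework would synthesize __init__ from the annotations here;
    # this sketch leaves the class untouched. The decorator only tells
    # static tools to treat decorated classes like dataclasses.
    return cls

@create_model
class Model:
    y: int
    z = 5

assert Model.__annotations__ == {'y': int}
assert Model.z == 5
```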
@pytest.mark.parametrize(
'start, start_params, include_params', dataclass_transform_cases, ids=ids
)
def test_extensions_dataclass_transform_signature(
Script, skip_pre_python37, start, start_params, include_params, environment
):
has_typing_ext = bool(Script('import typing_extensions').infer())
if not has_typing_ext:
raise pytest.skip("typing_extensions needed in target environment to run this test")
if environment.version_info < (3, 8):
# Final is not yet supported
price_type = "float"
price_type_infer = "float"
else:
price_type = "Final[float]"
price_type_infer = "object"
code = dedent(
f"""
name: str
foo = 3
blob: ClassVar[str]
price: {price_type}
quantity: int = 0.0
X("""
)
code = (
"from typing_extensions import dataclass_transform\n"
+ "from typing import ClassVar, Final\n"
+ start
+ code
)
(sig,) = Script(code).get_signatures()
expected_params = (
[*start_params, "name", "price", "quantity"]
if include_params
else [*start_params]
)
assert [p.name for p in sig.params] == expected_params
if include_params:
quantity, = sig.params[-1].infer()
assert quantity.name == 'int'
price, = sig.params[-2].infer()
assert price.name == price_type_infer
def test_dataclass_transform_complete(Script):
script = Script('''\
@dataclass_transform
class Y():
y: int
z = 5
class X(Y):
name: str
foo = 3
def f(x: X):
x.na''')
completion, = script.complete()
assert completion.description == 'name: str'
@pytest.mark.parametrize(
"start, start_params, include_params", dataclass_transform_cases, ids=ids
)
def test_dataclass_transform_signature(
Script, skip_pre_python311, start, start_params, include_params
):
def test_dataclass_signature(Script, skip_pre_python37, start, start_params):
code = dedent('''
name: str
foo = 3
blob: ClassVar[str]
price: Final[float]
price: float
quantity: int = 0.0
X(''')
code = (
"from typing import dataclass_transform\n"
+ "from typing import ClassVar, Final\n"
+ start
+ code
)
code = 'from dataclasses import dataclass\n' + start + code
sig, = Script(code).get_signatures()
expected_params = (
[*start_params, "name", "price", "quantity"]
if include_params
else [*start_params]
)
assert [p.name for p in sig.params] == expected_params
if include_params:
quantity, = sig.params[-1].infer()
assert quantity.name == 'int'
price, = sig.params[-2].infer()
assert price.name == 'object'
assert [p.name for p in sig.params] == start_params + ['name', 'price', 'quantity']
quantity, = sig.params[-1].infer()
assert quantity.name == 'int'
price, = sig.params[-2].infer()
assert price.name == 'float'
@pytest.mark.parametrize(
@@ -842,8 +370,7 @@ def test_dataclass_transform_signature(
z = 5
@define
class X(Y):'''), ['y']],
],
ids=["define", "frozen", "define_customized", "define_subclass", "define_both"]
]
)
def test_attrs_signature(Script, skip_pre_python37, start, start_params):
has_attrs = bool(Script('import attrs').infer())

View File

@@ -48,8 +48,8 @@ def test_venv_and_pths(venv_path, environment):
ETALON = [
# For now disable egg-links. I have no idea how they work... ~ dave
# pjoin('/path', 'from', 'egg-link'),
# pjoin(site_pkg_path, '.', 'relative', 'egg-link', 'path'),
#pjoin('/path', 'from', 'egg-link'),
#pjoin(site_pkg_path, '.', 'relative', 'egg-link', 'path'),
site_pkg_path,
pjoin(site_pkg_path, 'dir-from-foo-pth'),
'/foo/smth.py:module',

View File

@@ -1,5 +1,6 @@
from textwrap import dedent
import pytest
from parso import parse

View File

@@ -12,8 +12,8 @@ class TestSetupReadline(unittest.TestCase):
class NameSpace(object):
pass
def setUp(self, *args, **kwargs):
super().setUp(*args, **kwargs)
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.namespace = self.NameSpace()
utils.setup_readline(self.namespace)
@@ -73,25 +73,19 @@ class TestSetupReadline(unittest.TestCase):
import os
s = 'from os import '
goal = {s + el for el in dir(os)}
# There are minor differences, e.g. the dir doesn't include deleted
# items as well as items that are not only available on linux.
difference = set(self.complete(s)).symmetric_difference(goal)
ACCEPTED_DIFFERENCE_PREFIXES = [
'_', 'O_', 'EX_', 'EFD_', 'MFD_', 'TFD_',
'SF_', 'ST_', 'CLD_', 'POSIX_SPAWN_', 'P_',
'RWF_', 'CLONE_', 'SCHED_', 'SPLICE_',
]
difference = {
x for x in difference
if not any(
x.startswith('from os import ' + prefix)
for prefix in ACCEPTED_DIFFERENCE_PREFIXES
)
if all(not x.startswith('from os import ' + s)
for s in ['_', 'O_', 'EX_', 'MFD_', 'SF_', 'ST_',
'CLD_', 'POSIX_SPAWN_', 'P_', 'RWF_',
'CLONE_', 'SCHED_'])
}
# There are quite a few differences, because both Windows and Linux
# (posix and nt) libraries are included.
assert len(difference) < 40
assert len(difference) < 30
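The completion check in this hunk boils down to a symmetric difference filtered through a prefix allow-list; a toy illustration with hypothetical completion sets standing in for `dir(os)` and the readline completions:

```python
goal = {'from os import ' + n for n in ('path', 'getcwd', 'O_RDONLY')}
completed = {'from os import ' + n for n in ('path', 'getcwd', 'EX_OK')}

# Items present in exactly one of the two sets.
difference = completed.symmetric_difference(goal)

# Platform-specific names are tolerated via accepted prefixes.
ACCEPTED = ('O_', 'EX_')
unexplained = {
    x for x in difference
    if not any(x.startswith('from os import ' + p) for p in ACCEPTED)
}
assert unexplained == set()
```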
def test_local_import(self):
s = 'import test.test_utils'