forked from VimPlug/jedi

Compare commits

67 Commits

Author SHA1 Message Date
Dave Halter
c7481b3319 Fix a linter issue 2026-01-20 02:05:51 +01:00
Dave Halter
3ac1632a5c Avoid the need to import search_ancestor 2026-01-19 20:30:03 +01:00
Dave Halter
4a7b5f4879 Increase the required vesion of parso 2026-01-19 17:08:11 +01:00
Dave Halter
d4fb9c4531 Move Mypy config to pyproject.toml 2026-01-19 14:47:06 +01:00
Dave Halter
2b37bc3518 Change the Zuban link to GitHub 2026-01-19 13:26:43 +01:00
Dave Halter
ade9131d04 Merge pull request #2083 from diekhans/issue-2082-None-file-attr
Handle object with __file__ attribute having a None value (issue #2082)
2025-12-21 01:47:23 +00:00
Mark Diekhans
a89757a966 Handle object with __file__ attribute having a None value (issue #2082) 2025-12-20 17:24:25 -08:00
Dave Halter
b80c0b8992 Merge pull request #2079 from Hoblovski/fix/star-unpack
Fix unhandled '*' operator in infer_node
2025-11-13 15:53:08 +00:00
Hoblovski
1b33f0d77c fix: move test to arrays.py 2025-11-13 21:12:15 +08:00
Hoblovski
3454ebb1de fix: let star_expr infer to NO_VALUES instead of crashing 2025-11-13 20:41:58 +08:00
Hoblovski
3d2ce2e01f tests: add bad case 2025-11-13 20:33:00 +08:00
Dave Halter
88d3da4ef6 Merge pull request #2049 from Morikko/support-dataclass-transform
Support dataclass transform
2025-09-03 13:15:39 +00:00
Eric Masseran
15a7513fd0 Improve code comment 2025-08-29 18:54:30 +02:00
Eric Masseran
0f35a1b18b Split dataclass and dataclass_transform logic 2025-08-29 18:54:14 +02:00
Eric Masseran
4ea7981680 Add complete test 2025-08-29 18:37:51 +02:00
Eric Masseran
3a436df7ac Remove property usage 2025-08-29 18:37:37 +02:00
Eric Masseran
c1e9aee15b Clean code comments 2025-08-29 18:37:23 +02:00
Eric Masseran
6e5f201f6c Use future annotations 2025-08-29 18:36:54 +02:00
Eric Masseran
356923e40d Merge remote-tracking branch 'origin' into support-dataclass-transform
* origin:
  Fix pip install -e in docs
  Upgrade Mypy
  Fix a few flake8 issues
  Upgrade flake8
  Upgrade other test runners
  Remove 3.6/3.7 references and change tests slightly
  Upgrade OS's that it is tested on
  Try to add something to the README
2025-08-28 10:33:17 +02:00
Dave Halter
86c3a02c8c Fix pip install -e in docs 2025-06-24 12:28:18 +02:00
Dave Halter
f4ca099afb Merge pull request #2066 from davidhalter/ci
Upgrade test runners, Mypy and flake8
2025-06-16 15:50:37 +00:00
Dave Halter
d411290dff Upgrade Mypy 2025-06-16 16:49:46 +02:00
Dave Halter
7c27da8d68 Fix a few flake8 issues 2025-06-16 16:41:36 +02:00
Dave Halter
13063221f2 Upgrade flake8 2025-06-16 16:31:14 +02:00
Dave Halter
e83228478e Upgrade other test runners 2025-06-16 16:21:50 +02:00
Dave Halter
e5a72695a8 Remove 3.6/3.7 references and change tests slightly 2025-06-16 16:18:15 +02:00
Dave Halter
4238198eea Upgrade OS's that it is tested on 2025-06-16 16:07:20 +02:00
Dave Halter
a10b158bcc Try to add something to the README 2025-06-16 15:13:01 +02:00
Eric Masseran
503c88d987 Merge remote-tracking branch 'origin' into support-dataclass-transform
* origin:
  Don't remove `sys.path[0]`.
  perf: improve performance by replacing list to set
  Explicit sphinx config path
2025-05-05 02:03:07 +02:00
Eric Masseran
d53a8ef81c Support init customization on dataclass_transform source 2025-05-05 02:02:17 +02:00
Eric Masseran
eb80dc08f3 Add decorator tests - sandwich mode 2025-05-05 00:37:38 +02:00
Eric Masseran
5f4afa27e5 Documentation and better naming 2025-05-04 23:34:58 +02:00
Eric Masseran
e49032ed6b Dataclass transform typing extension without Final support 2025-03-18 00:59:27 +01:00
Eric Masseran
e20c3c955f Dataclass 3.7 mode without Final 2025-03-18 00:52:01 +01:00
Eric Masseran
a3fd90d734 Fix dataclass decorator other parameters 2025-03-18 00:42:58 +01:00
Eric Masseran
999332ef77 Dataclass transform change init False 2025-03-18 00:30:50 +01:00
Eric Masseran
e140523211 Fix attrs + remove dataclass_transform init=false tests 2025-03-17 23:51:53 +01:00
Eric Masseran
bd1edfce78 Fix test 2025-03-17 19:48:42 +01:00
Eric Masseran
7dcb944b05 Fix decorator transformed case 2025-03-15 16:42:16 +01:00
Eric Masseran
50778c390f Fix init=false for transform and exclude fields on base transform 2025-03-15 16:23:32 +01:00
Eric Masseran
e0797be681 Check final+classvar support for dataclass transform 2025-03-15 16:02:23 +01:00
Eric Masseran
8912a35502 Support init=False for dataclass_transform 2025-03-15 16:00:51 +01:00
Eric Masseran
77cf382a1b Support init=False for dataclass 2025-03-15 15:53:51 +01:00
Eric Masseran
70efe2134c Check final support for dataclass 2025-03-15 13:17:18 +01:00
Eric Masseran
472ee75e3c Add ClassVar support for dataclass 2025-03-15 13:15:19 +01:00
Eric Masseran
68c7bf35ce Add init cases for dataclass 2025-03-15 13:07:35 +01:00
Eric Masseran
efc7248175 Fix mypy 2025-03-15 12:05:05 +01:00
Dave Halter
c4f0538930 Merge pull request #2055 from zuckerruebe/dont-remove-sys-path-0
Don't remove `sys.path[0]`.
2025-03-14 17:20:41 +00:00
Damian Birchler
35a12fab7a Don't remove sys.path[0].
Fixes https://github.com/davidhalter/jedi/issues/2053.
2025-03-13 09:52:49 +01:00
Dave Halter
a856a93bd1 Merge pull request #2051 from allrob23/perf-list-to-set
perf: optimize transitions and path handling using sets
2025-03-06 09:04:07 +00:00
Robin
60f0894f66 Merge branch 'davidhalter:master' into perf-list-to-set 2025-03-05 09:53:36 -03:00
Robin
699c930bd4 perf: improve performance by replacing list to set 2025-03-05 12:53:08 +00:00
Peter Law
9dd76c7ce5 Explicit sphinx config path
This is now required by ReadTheDocs; see
https://about.readthedocs.com/blog/2024/12/deprecate-config-files-without-sphinx-or-mkdocs-config/
2025-03-04 22:07:56 +00:00
Eric Masseran
74b46f3ee3 Add doc 2025-02-15 20:27:08 +01:00
Eric Masseran
027e29ec50 Support base class and metaclass mode 2025-02-15 20:12:53 +01:00
Eric Masseran
f9beef0f6b Add fixture to skip pre 3.11 2025-02-15 20:09:11 +01:00
Eric Masseran
d866ec0f80 Add support for dataclass_transform decorator 2025-02-14 17:05:28 +01:00
Dave Halter
6aee460b1d Merge pull request #2042 from bluthej/docs/fix-inheritance-diagram
Fix inheritance diagram
2024-12-29 15:00:40 +00:00
bluthej
0315e6ee8f Add graphviz to installed APT packages 2024-12-28 16:43:05 +01:00
Dave Halter
ce109a8cdf Fix a small fail in test_duplicated_import 2024-11-25 09:49:44 +01:00
Dave Halter
ecb922c6ff Fix a few issues around duplicated import paths, fixes #2033 2024-11-25 00:53:09 +01:00
Dave Halter
41e9e957e7 Increase Jedi version 2024-11-11 02:39:18 +01:00
Dave Halter
b225678a42 Add a release for Python 3.13 2024-11-10 23:04:28 +01:00
Dave Halter
30adf43a89 Merge pull request #2027 from WutingjiaX/feat/filterImported
Filter duplicate imports when completing
2024-10-17 21:10:55 +00:00
wutingjia
be6df62434 filter imported names during completion 2024-10-17 19:20:39 +08:00
Dave Halter
e53359ad88 Fix a test that had issues with a minor upgrade of Python 3.12 2024-10-16 12:56:10 +02:00
Dave Halter
6e5d5b779c Enable workflow_dispatch in CI 2024-10-16 12:39:33 +02:00
44 changed files with 1083 additions and 212 deletions

View File

@@ -1,14 +1,14 @@
 name: ci
-on: [push, pull_request]
+on: [push, pull_request, workflow_dispatch]
 jobs:
   tests:
     runs-on: ${{ matrix.os }}
     strategy:
       matrix:
-        os: [ubuntu-20.04, windows-2019]
+        os: [ubuntu-24.04, windows-2022]
-        python-version: ["3.13", "3.12", "3.11", "3.10", "3.9", "3.8", "3.7", "3.6"]
+        python-version: ["3.13", "3.12", "3.11", "3.10", "3.9", "3.8"]
-        environment: ['3.8', '3.13', '3.12', '3.11', '3.10', '3.9', '3.7', '3.6', 'interpreter']
+        environment: ['3.8', '3.13', '3.12', '3.11', '3.10', '3.9', 'interpreter']
     steps:
       - name: Checkout code
         uses: actions/checkout@v4
@@ -35,7 +35,7 @@ jobs:
       JEDI_TEST_ENVIRONMENT: ${{ matrix.environment }}
   code-quality:
-    runs-on: ubuntu-20.04
+    runs-on: ubuntu-24.04
     steps:
       - name: Checkout code
         uses: actions/checkout@v4
@@ -51,7 +51,7 @@ jobs:
         python -m mypy jedi sith.py setup.py
   coverage:
-    runs-on: ubuntu-20.04
+    runs-on: ubuntu-24.04
     steps:
       - name: Checkout code

View File

@@ -10,7 +10,12 @@ python:
 submodules:
   include: all
+sphinx:
+  configuration: docs/conf.py
 build:
   os: ubuntu-22.04
   tools:
     python: "3.11"
+  apt_packages:
+    - graphviz

View File

@@ -6,6 +6,9 @@ Changelog
 Unreleased
 ++++++++++
+0.19.2 (2024-11-10)
++++++++++++++++++++
+
 - Python 3.13 support
 0.19.1 (2023-10-02)

View File

@@ -2,6 +2,9 @@
 Jedi - an awesome autocompletion, static analysis and refactoring library for Python
 ####################################################################################
+**I released the successor to Jedi: A
+Mypy-Compatible Python Language Server Built in Rust** - `Zuban <https://github.com/zubanls/zuban>`_
 .. image:: http://isitmaintained.com/badge/open/davidhalter/jedi.svg
    :target: https://github.com/davidhalter/jedi/issues
    :alt: The percentage of open issues and pull requests
@@ -10,7 +13,7 @@ Jedi - an awesome autocompletion, static analysis and refactoring library for Py
    :target: https://github.com/davidhalter/jedi/issues
    :alt: The resolution time is the median time an issue or pull request stays open.
-.. image:: https://github.com/davidhalter/jedi/workflows/ci/badge.svg?branch=master
+.. image:: https://github.com/davidhalter/jedi/actions/workflows/ci.yml/badge.svg?branch=master
    :target: https://github.com/davidhalter/jedi/actions
    :alt: Tests
@@ -99,7 +102,7 @@ Features and Limitations
 Jedi's features are listed here:
 `Features <https://jedi.readthedocs.org/en/latest/docs/features.html>`_.
-You can run Jedi on Python 3.6+ but it should also
+You can run Jedi on Python 3.8+ but it should also
 understand code that is older than those versions. Additionally you should be
 able to use `Virtualenvs <https://jedi.readthedocs.org/en/latest/docs/api.html#environments>`_
 very well.

View File

@@ -156,6 +156,14 @@ def jedi_path():
     return os.path.dirname(__file__)
+@pytest.fixture()
+def skip_pre_python311(environment):
+    if environment.version_info < (3, 11):
+        # This if is just needed to avoid that tests ever skip way more than
+        # they should for all Python versions.
+        pytest.skip()
 @pytest.fixture()
 def skip_pre_python38(environment):
     if environment.version_info < (3, 8):

View File

@@ -16,7 +16,7 @@ Jedi's main API calls and features are:
 Basic Features
 --------------
-- Python 3.6+ support
+- Python 3.8+ support
 - Ignores syntax errors and wrong indentation
 - Can deal with complex module / function / class structures
 - Great ``virtualenv``/``venv`` support

View File

@@ -38,7 +38,7 @@ using pip::
 If you want to install the current development version (master branch)::
-    sudo pip install -e git://github.com/davidhalter/jedi.git#egg=jedi
+    sudo pip install -e git+https://github.com/davidhalter/jedi.git#egg=jedi
 System-wide installation via a package manager

View File

@@ -27,7 +27,7 @@ ad
 load
 """
-__version__ = '0.19.1'
+__version__ = '0.19.2'
 from jedi.api import Script, Interpreter, set_debug_function, preload_module
 from jedi import settings

View File

@@ -483,7 +483,7 @@ class Script:
         module_context = self._get_module_context()
-        n = tree.search_ancestor(leaf, 'funcdef', 'classdef')
+        n = leaf.search_ancestor('funcdef', 'classdef')
         if n is not None and n.start_pos < pos <= n.children[-1].start_pos:
             # This is a bit of a special case. The context of a function/class
             # name/param/keyword is always it's parent context, not the
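Many hunks in this comparison make the same mechanical change: the deprecated module-level `parso.tree.search_ancestor(node, ...)` call is replaced by the `search_ancestor(...)` method on the node itself. A minimal sketch of what that method does, using a hypothetical stand-in `Node` class (parso's real tree nodes carry far more state):

```python
# Hypothetical stand-in for a parso tree node; only `type` and `parent`
# matter for the ancestor search.
class Node:
    def __init__(self, type, parent=None):
        self.type = type
        self.parent = parent

    def search_ancestor(self, *node_types):
        # Walk up the parent chain and return the first ancestor whose
        # type matches; None if no ancestor matches.
        node = self.parent
        while node is not None:
            if node.type in node_types:
                return node
            node = node.parent
        return None

module = Node('file_input')
func = Node('funcdef', parent=module)
leaf = Node('name', parent=func)
assert leaf.search_ancestor('funcdef', 'classdef') is func
assert leaf.search_ancestor('classdef') is None
```

The method form reads more naturally and removes the need to import `search_ancestor` in every module, which is exactly what the "Avoid the need to import search_ancestor" commit above does across the codebase.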

View File

@@ -17,8 +17,6 @@ import re
 from pathlib import Path
 from typing import Optional
-from parso.tree import search_ancestor
-
 from jedi import settings
 from jedi import debug
 from jedi.inference.utils import unite
@@ -509,7 +507,7 @@ class BaseName:
             # - param: The parent_context of a param is not its function but
             #   e.g. the outer class or module.
             cls_or_func_node = self._name.tree_name.get_definition()
-            parent = search_ancestor(cls_or_func_node, 'funcdef', 'classdef', 'file_input')
+            parent = cls_or_func_node.search_ancestor('funcdef', 'classdef', 'file_input')
             context = self._get_module_context().create_value(parent).as_context()
         else:
             context = self._name.parent_context

View File

@@ -4,7 +4,7 @@ from inspect import Parameter
 from parso.python.token import PythonTokenTypes
 from parso.python import tree
-from parso.tree import search_ancestor, Leaf
+from parso.tree import Leaf
 from parso import split_lines
 from jedi import debug
@@ -65,12 +65,15 @@ def _must_be_kwarg(signatures, positional_count, used_kwargs):
     return must_be_kwarg
-def filter_names(inference_state, completion_names, stack, like_name, fuzzy, cached_name):
+def filter_names(inference_state, completion_names, stack, like_name, fuzzy,
+                 imported_names, cached_name):
     comp_dct = set()
     if settings.case_insensitive_completion:
         like_name = like_name.lower()
     for name in completion_names:
         string = name.string_name
+        if string in imported_names and string != like_name:
+            continue
         if settings.case_insensitive_completion:
             string = string.lower()
         if helpers.match(string, like_name, fuzzy=fuzzy):
@@ -174,9 +177,13 @@ class Completion:
             cached_name, completion_names = self._complete_python(leaf)
+        imported_names = []
+        if leaf.parent is not None and leaf.parent.type in ['import_as_names', 'dotted_as_names']:
+            imported_names.extend(extract_imported_names(leaf.parent))
         completions = list(filter_names(self._inference_state, completion_names,
                                         self.stack, self._like_name,
-                                        self._fuzzy, cached_name=cached_name))
+                                        self._fuzzy, imported_names, cached_name=cached_name))
         return (
             # Removing duplicates mostly to remove False/True/None duplicates.
@@ -237,8 +244,8 @@ class Completion:
         if previous_leaf is not None:
             stmt = previous_leaf
             while True:
-                stmt = search_ancestor(
-                    stmt, 'if_stmt', 'for_stmt', 'while_stmt', 'try_stmt',
+                stmt = stmt.search_ancestor(
+                    'if_stmt', 'for_stmt', 'while_stmt', 'try_stmt',
                     'error_node',
                 )
                 if stmt is None:
@@ -349,7 +356,7 @@ class Completion:
         stack_node = self.stack[-3]
         if stack_node.nonterminal == 'funcdef':
             context = get_user_context(self._module_context, self._position)
-            node = search_ancestor(leaf, 'error_node', 'funcdef')
+            node = leaf.search_ancestor('error_node', 'funcdef')
             if node is not None:
                 if node.type == 'error_node':
                     n = node.children[0]
@@ -419,7 +426,7 @@ class Completion:
         Autocomplete inherited methods when overriding in child class.
         """
         leaf = self._module_node.get_leaf_for_position(self._position, include_prefixes=True)
-        cls = tree.search_ancestor(leaf, 'classdef')
+        cls = leaf.search_ancestor('classdef')
         if cls is None:
             return
@@ -448,6 +455,7 @@ class Completion:
         - Having some doctest code that starts with `>>>`
         - Having backticks that doesn't have whitespace inside it
         """
+
         def iter_relevant_lines(lines):
             include_next_line = False
             for l in code_lines:
@@ -670,3 +678,19 @@ def search_in_module(inference_state, module_context, names, wanted_names,
                 def_ = classes.Name(inference_state, n2)
                 if not wanted_type or wanted_type == def_.type:
                     yield def_
+
+
+def extract_imported_names(node):
+    imported_names = []
+    if node.type in ['import_as_names', 'dotted_as_names', 'dotted_as_name', 'import_as_name']:
+        for index, child in enumerate(node.children):
+            if child.type == 'name':
+                if (index > 1 and node.children[index - 1].type == "keyword"
+                        and node.children[index - 1].value == "as"):
+                    continue
+                imported_names.append(child.value)
+            elif child.type in ('import_as_name', 'dotted_as_name'):
+                imported_names.extend(extract_imported_names(child))
+    return imported_names
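The intent of the duplicate-import filter above (PR #2027) can be shown with a stripped-down sketch. The real `filter_names` works on jedi name objects, the parser stack, and fuzzy matching; this hypothetical version keeps only the new rule — skip candidates that are already imported on the current line unless they exactly match what the user typed:

```python
# Hypothetical mini-version of jedi's duplicate-import filter: names already
# present in the import statement are not offered again, except when the
# candidate is exactly the partial word the user is typing.
def filter_names(completion_names, like_name, imported_names):
    for string in completion_names:
        if string in imported_names and string != like_name:
            continue  # already imported on this line; don't suggest it again
        if string.startswith(like_name):
            yield string

# Completing `from os.path import join, jo<cursor>`:
# 'join' is already imported, so only 'jobs' (an invented candidate) remains.
assert list(filter_names(['join', 'jobs'], 'jo', {'join'})) == ['jobs']
```

The `string != like_name` escape hatch matters: without it, finishing an exact name you are re-typing would suppress its own completion.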

View File

@@ -22,7 +22,7 @@ if TYPE_CHECKING:
 _VersionInfo = namedtuple('VersionInfo', 'major minor micro')  # type: ignore[name-match]
-_SUPPORTED_PYTHONS = ['3.13', '3.12', '3.11', '3.10', '3.9', '3.8', '3.7', '3.6']
+_SUPPORTED_PYTHONS = ['3.13', '3.12', '3.11', '3.10', '3.9', '3.8']
 _SAFE_PATHS = ['/usr/bin', '/usr/local/bin']
 _CONDA_VAR = 'CONDA_PREFIX'
 _CURRENT_VERSION = '%s.%s' % (sys.version_info.major, sys.version_info.minor)

View File

@@ -28,7 +28,7 @@ def clear_time_caches(delete_all: bool = False) -> None:
     :param delete_all: Deletes also the cache that is normally not deleted,
         like parser cache, which is important for faster parsing.
     """
-    global _time_caches
+    global _time_caches  # noqa: F824
     if delete_all:
         for cache in _time_caches.values():

View File

@@ -21,7 +21,7 @@ try:
         raise ImportError
     else:
         # Use colorama for nicer console output.
-        from colorama import Fore, init  # type: ignore[import]
+        from colorama import Fore, init  # type: ignore[import, unused-ignore]
         from colorama import initialise
 def _lazy_colorama_init():  # noqa: F811

View File

@@ -122,14 +122,14 @@ class InferenceState:
         return value_set
     # mypy doesn't suppport decorated propeties (https://github.com/python/mypy/issues/1362)
-    @property  # type: ignore[misc]
+    @property
     @inference_state_function_cache()
     def builtins_module(self):
         module_name = 'builtins'
         builtins_module, = self.import_module((module_name,), sys_path=[])
         return builtins_module
-    @property  # type: ignore[misc]
+    @property
     @inference_state_function_cache()
     def typing_module(self):
         typing_module, = self.import_module(('typing',))

View File

@@ -184,7 +184,7 @@ class DirectObjectAccess:
     def py__file__(self) -> Optional[Path]:
         try:
             return Path(self._obj.__file__)
-        except AttributeError:
+        except (AttributeError, TypeError):
             return None
     def py__doc__(self):
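The hunk above is the fix for issue #2082: some objects (for example certain namespace packages or C extensions) carry a `__file__` attribute that is set to `None`, and `Path(None)` raises `TypeError`, not `AttributeError`. A self-contained sketch of the failure mode and the widened `except` clause (the `FakeModule` class here is an invented stand-in):

```python
from pathlib import Path
from typing import Optional

class FakeModule:
    # Invented example: an object whose __file__ exists but is None,
    # as some namespace packages report.
    __file__ = None

def py__file__(obj) -> Optional[Path]:
    try:
        return Path(obj.__file__)
    except (AttributeError, TypeError):
        # AttributeError: the object has no __file__ at all.
        # TypeError: __file__ exists but is None, so Path() rejects it.
        return None

assert py__file__(FakeModule()) is None   # __file__ is None -> TypeError path
assert py__file__(object()) is None       # no __file__ -> AttributeError path
```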

View File

@@ -3,10 +3,6 @@ import sys
 from importlib.abc import MetaPathFinder
 from importlib.machinery import PathFinder
-# Remove the first entry, because it's simply a directory entry that equals
-# this directory.
-del sys.path[0]
-
 def _get_paths():
     # Get the path to jedi.

View File

@@ -3,7 +3,6 @@ from contextlib import contextmanager
 from pathlib import Path
 from typing import Optional
-from parso.tree import search_ancestor
 from parso.python.tree import Name
 from jedi.inference.filters import ParserTreeFilter, MergedFilter, \
@@ -290,7 +289,7 @@ class TreeContextMixin:
     def create_name(self, tree_name):
         definition = tree_name.get_definition()
         if definition and definition.type == 'param' and definition.name == tree_name:
-            funcdef = search_ancestor(definition, 'funcdef', 'lambdef')
+            funcdef = definition.search_ancestor('funcdef', 'lambdef')
             func = self.create_value(funcdef)
             return AnonymousParamName(func, tree_name)
         else:
@@ -416,13 +415,13 @@ def _get_global_filters_for_name(context, name_or_none, position):
         # function and get inferred in the value before the function. So
         # make sure to exclude the function/class name.
         if name_or_none is not None:
-            ancestor = search_ancestor(name_or_none, 'funcdef', 'classdef', 'lambdef')
+            ancestor = name_or_none.search_ancestor('funcdef', 'classdef', 'lambdef')
             lambdef = None
             if ancestor == 'lambdef':
                 # For lambdas it's even more complicated since parts will
                 # be inferred later.
                 lambdef = ancestor
-                ancestor = search_ancestor(name_or_none, 'funcdef', 'classdef')
+                ancestor = name_or_none.search_ancestor('funcdef', 'classdef')
             if ancestor is not None:
                 colon = ancestor.children[-2]
                 if position is not None and position < colon.start_pos:

View File

@@ -48,7 +48,7 @@ def _get_numpy_doc_string_cls():
     global _numpy_doc_string_cache
     if isinstance(_numpy_doc_string_cache, (ImportError, SyntaxError)):
         raise _numpy_doc_string_cache
-    from numpydoc.docscrape import NumpyDocString  # type: ignore[import]
+    from numpydoc.docscrape import NumpyDocString  # type: ignore[import, unused-ignore]
     _numpy_doc_string_cache = NumpyDocString
     return _numpy_doc_string_cache
@@ -109,7 +109,7 @@ def _expand_typestr(type_str):
         yield type_str.split('of')[0]
     # Check if type has is a set of valid literal values eg: {'C', 'F', 'A'}
     elif type_str.startswith('{'):
-        node = parse(type_str, version='3.7').children[0]
+        node = parse(type_str, version='3.13').children[0]
         if node.type == 'atom':
             for leaf in getattr(node.children[1], "children", []):
                 if leaf.type == 'number':
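The `_expand_typestr` hunk above parses a NumPy-style literal-choice string such as `{'C', 'F', 'A'}` into its individual values (jedi uses parso for this; the only change here is bumping the grammar version from 3.7 to 3.13). The same extraction can be sketched with the stdlib `ast` module instead of parso, which keeps this example dependency-free — note this is an illustration of the idea, not jedi's actual code path:

```python
import ast

def expand_literal_set(type_str):
    # Parse something like "{'C', 'F', 'A'}" and return the literal members.
    # Uses stdlib ast as a stand-in for parso's parse(); invented helper name.
    node = ast.parse(type_str, mode='eval').body
    if isinstance(node, ast.Set):
        return [el.value for el in node.elts if isinstance(el, ast.Constant)]
    return []

assert expand_literal_set("{'C', 'F', 'A'}") == ['C', 'F', 'A']
assert expand_literal_set("{1, 2}") == [1, 2]
```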

View File

@@ -6,7 +6,6 @@ from abc import abstractmethod
 from typing import List, MutableMapping, Type
 import weakref
-from parso.tree import search_ancestor
 from parso.python.tree import Name, UsedNamesMapping
 from jedi.inference import flow_analysis
@@ -181,7 +180,7 @@ class _FunctionExecutionFilter(ParserTreeFilter):
     @to_list
     def _convert_names(self, names):
         for name in names:
-            param = search_ancestor(name, 'param')
+            param = name.search_ancestor('param')
             # Here we don't need to check if the param is a default/annotation,
             # because those are not definitions and never make it to this
             # point.

View File

@@ -15,7 +15,6 @@ Unfortunately every other thing is being ignored (e.g. a == '' would be easy to
 check for -> a is a string). There's big potential in these checks.
 """
-from parso.tree import search_ancestor
 from parso.python.tree import Name
 from jedi import settings
@@ -76,7 +75,7 @@ def check_flow_information(value, flow, search_name, pos):
     ])
     for name in names:
-        ass = search_ancestor(name, 'assert_stmt')
+        ass = name.search_ancestor('assert_stmt')
         if ass is not None:
             result = _check_isinstance_type(value, ass.assertion, search_name)
             if result is not None:

View File

@@ -12,7 +12,6 @@ import os
 from pathlib import Path
 from parso.python import tree
-from parso.tree import search_ancestor
 from jedi import debug
 from jedi import settings
@@ -95,7 +94,7 @@ def goto_import(context, tree_name):
 def _prepare_infer_import(module_context, tree_name):
-    import_node = search_ancestor(tree_name, 'import_name', 'import_from')
+    import_node = tree_name.search_ancestor('import_name', 'import_from')
     import_path = import_node.get_path_for_name(tree_name)
     from_import_name = None
     try:
@@ -480,7 +479,7 @@ def _load_builtin_module(inference_state, import_names=None, sys_path=None):
     if sys_path is None:
         sys_path = inference_state.get_sys_path()
     if not project._load_unsafe_extensions:
-        safe_paths = project._get_base_sys_path(inference_state)
+        safe_paths = set(project._get_base_sys_path(inference_state))
         sys_path = [p for p in sys_path if p in safe_paths]
     dotted_name = '.'.join(import_names)
@@ -549,7 +548,7 @@ def load_namespace_from_path(inference_state, folder_io):
 def follow_error_node_imports_if_possible(context, name):
-    error_node = tree.search_ancestor(name, 'error_node')
+    error_node = name.search_ancestor('error_node')
     if error_node is not None:
         # Get the first command start of a started simple_stmt. The error
         # node is sometimes a small_stmt and sometimes a simple_stmt. Check
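The `set(...)` wrapper in the `_load_builtin_module` hunk above is the performance change from PR #2051. The reason it helps: `p in safe_paths` on a list is a linear scan, so filtering `sys_path` against it is O(n·m); converting to a set makes each lookup O(1) on average without changing the result. A small sketch with invented paths:

```python
# Illustrative values only; the real safe_paths comes from
# project._get_base_sys_path(inference_state).
safe_paths = set(['/usr/bin', '/usr/local/bin'])  # set membership: O(1) avg
sys_path = ['/usr/bin', '/home/me/untrusted', '/usr/local/bin']

# Same comprehension as in the diff; only the container type changed.
filtered = [p for p in sys_path if p in safe_paths]
assert filtered == ['/usr/bin', '/usr/local/bin']
```

Order of the filtered result still follows `sys_path`, so the behavior is identical; only lookup cost changes.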

View File

@@ -2,8 +2,6 @@ from abc import abstractmethod
 from inspect import Parameter
 from typing import Optional, Tuple
-from parso.tree import search_ancestor
-
 from jedi.parser_utils import find_statement_documentation, clean_scope_docstring
 from jedi.inference.utils import unite
 from jedi.inference.base_value import ValueSet, NO_VALUES
@@ -112,7 +110,7 @@ class AbstractTreeName(AbstractNameDefinition):
         self.tree_name = tree_name
     def get_qualified_names(self, include_module_names=False):
-        import_node = search_ancestor(self.tree_name, 'import_name', 'import_from')
+        import_node = self.tree_name.search_ancestor('import_name', 'import_from')
         # For import nodes we cannot just have names, because it's very unclear
         # how they would look like. For now we just ignore them in most cases.
         # In case of level == 1, it works always, because it's like a submodule
@@ -205,15 +203,13 @@ class AbstractTreeName(AbstractNameDefinition):
             values = infer_call_of_leaf(context, name, cut_own_trailer=True)
             return values.goto(name, name_context=context)
         else:
-            stmt = search_ancestor(
-                name, 'expr_stmt', 'lambdef'
-            ) or name
+            stmt = name.search_ancestor('expr_stmt', 'lambdef') or name
             if stmt.type == 'lambdef':
                 stmt = name
             return context.goto(name, position=stmt.start_pos)
     def is_import(self):
-        imp = search_ancestor(self.tree_name, 'import_from', 'import_name')
+        imp = self.tree_name.search_ancestor('import_from', 'import_name')
         return imp is not None
     @property
@@ -451,7 +447,7 @@ class _ActualTreeParamName(BaseTreeParamName):
         self.function_value = function_value
     def _get_param_node(self):
-        return search_ancestor(self.tree_name, 'param')
+        return self.tree_name.search_ancestor('param')
     @property
     def annotation_node(self):


@@ -12,15 +12,12 @@ The signature here for bar should be `bar(b, c)` instead of bar(*args).
 """
 from inspect import Parameter

-from parso import tree
-
 from jedi.inference.utils import to_list
 from jedi.inference.names import ParamNameWrapper
 from jedi.inference.helpers import is_big_annoying_library


 def _iter_nodes_for_param(param_name):
-    from parso.python.tree import search_ancestor
     from jedi.inference.arguments import TreeArguments

     execution_context = param_name.parent_context
@@ -28,7 +25,7 @@ def _iter_nodes_for_param(param_name):
     # tree rather than going via the execution context so that we're agnostic of
     # the specific scope we're evaluating within (i.e: module or function,
     # etc.).
-    function_node = tree.search_ancestor(param_name.tree_name, 'funcdef', 'lambdef')
+    function_node = param_name.tree_name.search_ancestor('funcdef', 'lambdef')
     module_node = function_node.get_root_node()
     start = function_node.children[-1].start_pos
     end = function_node.children[-1].end_pos
@@ -38,7 +35,7 @@ def _iter_nodes_for_param(param_name):
         argument = name.parent
         if argument.type == 'argument' \
                 and argument.children[0] == '*' * param_name.star_count:
-            trailer = search_ancestor(argument, 'trailer')
+            trailer = argument.search_ancestor('trailer')
             if trailer is not None:  # Make sure we're in a function
                 context = execution_context.create_context(trailer)
                 if _goes_to_param_name(param_name, context, name):


@@ -251,6 +251,8 @@ def _infer_node(context, element):
         return NO_VALUES
     elif typ == 'namedexpr_test':
         return context.infer_node(element.children[2])
+    elif typ == 'star_expr':
+        return NO_VALUES
     else:
         return infer_or_test(context, element)

@@ -288,7 +290,7 @@ def infer_atom(context, atom):
     state = context.inference_state
     if atom.type == 'name':
         # This is the first global lookup.
-        stmt = tree.search_ancestor(atom, 'expr_stmt', 'lambdef', 'if_stmt') or atom
+        stmt = atom.search_ancestor('expr_stmt', 'lambdef', 'if_stmt') or atom
         if stmt.type == 'if_stmt':
             if not any(n.start_pos <= atom.start_pos < n.end_pos for n in stmt.get_test_nodes()):
                 stmt = atom
@@ -434,7 +436,7 @@ def _infer_expr_stmt(context, stmt, seek_name=None):
         else:
             operator = copy.copy(first_operator)
             operator.value = operator.value[:-1]
-        for_stmt = tree.search_ancestor(stmt, 'for_stmt')
+        for_stmt = stmt.search_ancestor('for_stmt')
         if for_stmt is not None and for_stmt.type == 'for_stmt' and value_set \
                 and parser_utils.for_stmt_defines_one_name(for_stmt):
             # Iterate through result and add the values, that's possible
@@ -547,7 +549,7 @@ def _infer_comparison(context, left_values, operator, right_values):


 def _is_annotation_name(name):
-    ancestor = tree.search_ancestor(name, 'param', 'funcdef', 'expr_stmt')
+    ancestor = name.search_ancestor('param', 'funcdef', 'expr_stmt')
     if ancestor is None:
         return False
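The new `star_expr` branch covers right-hand-side star unpacking, which previously made `_infer_node` crash; inference now simply yields `NO_VALUES` for it. A plain-Python illustration of the construct in question:

```python
# Right-hand-side star unpacking parses as a 'star_expr' node.
xy = (1,)
x, y = *xy, None  # equivalent to: x, y = (1, None)
print(x, y)  # → 1 None
```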


@@ -1,5 +1,3 @@
-from parso.python import tree
-
 from jedi import debug
 from jedi.inference.cache import inference_state_method_cache, CachedMetaClass
 from jedi.inference import compiled
@@ -262,7 +260,7 @@ class BaseFunctionExecutionContext(ValueContext, TreeContextMixin):
     @recursion.execution_recursion_decorator(default=iter([]))
     def get_yield_lazy_values(self, is_async=False):
         # TODO: if is_async, wrap yield statements in Awaitable/async_generator_asend
-        for_parents = [(y, tree.search_ancestor(y, 'for_stmt', 'funcdef',
-                                                'while_stmt', 'if_stmt'))
+        for_parents = [(y, y.search_ancestor('for_stmt', 'funcdef',
+                                             'while_stmt', 'if_stmt'))
                        for y in get_yield_exprs(self.inference_state, self.tree_node)]


@@ -1,7 +1,5 @@
 from abc import abstractproperty

-from parso.tree import search_ancestor
-
 from jedi import debug
 from jedi import settings
 from jedi.inference import compiled
@@ -229,7 +227,7 @@ class _BaseTreeInstance(AbstractInstanceValue):
         new = node
         while True:
             func_node = new
-            new = search_ancestor(new, 'funcdef', 'classdef')
+            new = new.search_ancestor('funcdef', 'classdef')
             if class_context.tree_node is new:
                 func = FunctionValue.from_context(class_context, func_node)
                 bound_method = BoundMethod(self, class_context, func)
@@ -498,7 +496,7 @@ class SelfName(TreeNameDefinition):
         return self._instance

     def infer(self):
-        stmt = search_ancestor(self.tree_name, 'expr_stmt')
+        stmt = self.tree_name.search_ancestor('expr_stmt')
         if stmt is not None:
             if stmt.children[1].type == "annassign":
                 from jedi.inference.gradual.annotation import infer_annotation


@@ -36,6 +36,10 @@ py__doc__() Returns the docstring for a value.
 ====================================== ========================================

 """
+from __future__ import annotations
+
+from typing import List, Optional, Tuple
+
 from jedi import debug
 from jedi.parser_utils import get_cached_parent_scope, expr_is_dotted, \
     function_is_property
@@ -47,11 +51,15 @@ from jedi.inference.filters import ParserTreeFilter
 from jedi.inference.names import TreeNameDefinition, ValueName
 from jedi.inference.arguments import unpack_arglist, ValuesArguments
 from jedi.inference.base_value import ValueSet, iterator_to_value_set, \
-    NO_VALUES
+    NO_VALUES, ValueWrapper
 from jedi.inference.context import ClassContext
-from jedi.inference.value.function import FunctionAndClassBase
+from jedi.inference.value.function import FunctionAndClassBase, FunctionMixin
+from jedi.inference.value.decorator import Decoratee
 from jedi.inference.gradual.generics import LazyGenericManager, TupleGenericManager
 from jedi.plugins import plugin_manager
+from inspect import Parameter
+from jedi.inference.names import BaseTreeParamName
+from jedi.inference.signature import AbstractSignature


 class ClassName(TreeNameDefinition):
@@ -129,6 +137,65 @@ class ClassFilter(ParserTreeFilter):
         return [name for name in names if self._access_possible(name)]


+def init_param_value(arg_nodes) -> Optional[bool]:
+    """
+    Returns:
+        - ``True`` if ``@dataclass(init=True)``
+        - ``False`` if ``@dataclass(init=False)``
+        - ``None`` if not specified ``@dataclass()``
+    """
+    for arg_node in arg_nodes:
+        if (
+            arg_node.type == "argument"
+            and arg_node.children[0].value == "init"
+        ):
+            if arg_node.children[2].value == "False":
+                return False
+            elif arg_node.children[2].value == "True":
+                return True
+    return None
+
+
+def get_dataclass_param_names(cls) -> List[DataclassParamName]:
+    """
+    ``cls`` is a :class:`ClassMixin`. The type is only documented as mypy would
+    complain that some fields are missing.
+
+    .. code:: python
+
+        @dataclass
+        class A:
+            a: int
+            b: str = "toto"
+
+    For the previous example, the param names would be ``a`` and ``b``.
+    """
+    param_names = []
+    filter_ = cls.as_context().get_global_filter()
+    for name in sorted(filter_.values(), key=lambda name: name.start_pos):
+        d = name.tree_name.get_definition()
+        annassign = d.children[1]
+        if d.type == 'expr_stmt' and annassign.type == 'annassign':
+            node = annassign.children[1]
+            if node.type == "atom_expr" and node.children[0].value == "ClassVar":
+                continue
+            if len(annassign.children) < 4:
+                default = None
+            else:
+                default = annassign.children[3]
+            param_names.append(DataclassParamName(
+                parent_context=cls.parent_context,
+                tree_name=name.tree_name,
+                annotation_node=annassign.children[1],
+                default_node=default,
+            ))
+    return param_names
+
+
 class ClassMixin:
     def is_class(self):
         return True
@@ -221,6 +288,73 @@ class ClassMixin:
                 assert x is not None
                 yield x

+    def _has_dataclass_transform_metaclasses(self) -> Tuple[bool, Optional[bool]]:
+        for meta in self.get_metaclasses():  # type: ignore[attr-defined]
+            if (
+                isinstance(meta, Decoratee)
+                # Internal leakage :|
+                and isinstance(meta._wrapped_value, DataclassTransformer)
+            ):
+                return True, meta._wrapped_value.init_mode_from_new()
+        return False, None
+
+    def _get_dataclass_transform_signatures(self) -> List[DataclassSignature]:
+        """
+        Returns: A non-empty list if the class has dataclass semantics else an
+        empty list.
+
+        The dataclass-like semantics will be assumed for any class that directly
+        or indirectly derives from the decorated class or uses the decorated
+        class as a metaclass.
+        """
+        param_names = []
+        is_dataclass_transform = False
+        default_init_mode: Optional[bool] = None
+        for cls in reversed(list(self.py__mro__())):
+            if not is_dataclass_transform:
+                # If dataclass_transform is applied to a class, dataclass-like semantics
+                # will be assumed for any class that directly or indirectly derives from
+                # the decorated class or uses the decorated class as a metaclass.
+                if (
+                    isinstance(cls, DataclassTransformer)
+                    and cls.init_mode_from_init_subclass
+                ):
+                    is_dataclass_transform = True
+                    default_init_mode = cls.init_mode_from_init_subclass
+                elif (
+                    # Some object like CompiledValues would not be compatible
+                    isinstance(cls, ClassMixin)
+                ):
+                    is_dataclass_transform, default_init_mode = (
+                        cls._has_dataclass_transform_metaclasses()
+                    )
+                # Attributes on the decorated class and its base classes are not
+                # considered to be fields.
+                if is_dataclass_transform:
+                    continue
+            # All inherited classes behave like dataclass semantics
+            if (
+                is_dataclass_transform
+                and isinstance(cls, ClassValue)
+                and (
+                    cls.init_param_mode()
+                    or (cls.init_param_mode() is None and default_init_mode)
+                )
+            ):
+                param_names.extend(
+                    get_dataclass_param_names(cls)
+                )
+        if is_dataclass_transform:
+            return [DataclassSignature(cls, param_names)]
+        else:
+            return []
+
     def get_signatures(self):
         # Since calling staticmethod without a function is illegal, the Jedi
         # plugin doesn't return anything. Therefore call directly and get what
@@ -232,6 +366,11 @@ class ClassMixin:
             return sigs
         args = ValuesArguments([])
         init_funcs = self.py__call__(args).py__getattribute__('__init__')
-        return [sig.bind(self) for sig in init_funcs.get_signatures()]
+        dataclass_sigs = self._get_dataclass_transform_signatures()
+        if dataclass_sigs:
+            return dataclass_sigs
+        else:
+            return [sig.bind(self) for sig in init_funcs.get_signatures()]

     def _as_context(self):
@@ -319,6 +458,158 @@ class ClassMixin:
         return ValueSet({self})


+class DataclassParamName(BaseTreeParamName):
+    """
+    Represent a field declaration on a class with dataclass semantics.
+    """
+    def __init__(self, parent_context, tree_name, annotation_node, default_node):
+        super().__init__(parent_context, tree_name)
+        self.annotation_node = annotation_node
+        self.default_node = default_node
+
+    def get_kind(self):
+        return Parameter.POSITIONAL_OR_KEYWORD
+
+    def infer(self):
+        if self.annotation_node is None:
+            return NO_VALUES
+        else:
+            return self.parent_context.infer_node(self.annotation_node)
+
+
+class DataclassSignature(AbstractSignature):
+    """
+    It represents the ``__init__`` signature of a class with dataclass semantics.
+    """
+    def __init__(self, value, param_names):
+        super().__init__(value)
+        self._param_names = param_names
+
+    def get_param_names(self, resolve_stars=False):
+        return self._param_names
+
+
+class DataclassDecorator(ValueWrapper, FunctionMixin):
+    """
+    A dataclass(-like) decorator with custom parameters.
+
+    .. code:: python
+
+        @dataclass(init=True)  # this
+        class A: ...
+
+        @dataclass_transform
+        def create_model(*, init=False): pass
+
+        @create_model(init=False)  # or this
+        class B: ...
+    """
+    def __init__(self, function, arguments, default_init: bool = True):
+        """
+        Args:
+            function: Decoratee | function
+            arguments: The parameters to the dataclass function decorator
+            default_init: Boolean to indicate the default init value
+        """
+        super().__init__(function)
+        argument_init = self._init_param_value(arguments)
+        self.init_param_mode = (
+            argument_init if argument_init is not None else default_init
+        )
+
+    def _init_param_value(self, arguments) -> Optional[bool]:
+        if not arguments.argument_node:
+            return None
+        arg_nodes = (
+            arguments.argument_node.children
+            if arguments.argument_node.type == "arglist"
+            else [arguments.argument_node]
+        )
+        return init_param_value(arg_nodes)
+
+
+class DataclassTransformer(ValueWrapper, ClassMixin):
+    """
+    A class decorated with the ``dataclass_transform`` decorator. dataclass-like
+    semantics will be assumed for any class that directly or indirectly derives
+    from the decorated class or uses the decorated class as a metaclass.
+
+    Attributes on the decorated class and its base classes are not considered to
+    be fields.
+    """
+    def __init__(self, wrapped_value):
+        super().__init__(wrapped_value)
+
+    def init_mode_from_new(self) -> bool:
+        """Default value if missing is ``True``"""
+        new_methods = self._wrapped_value.py__getattribute__("__new__")
+        if not new_methods:
+            return True
+        new_method = list(new_methods)[0]
+        for param in new_method.get_param_names():
+            if (
+                param.string_name == "init"
+                and param.default_node
+                and param.default_node.type == "keyword"
+            ):
+                if param.default_node.value == "False":
+                    return False
+                elif param.default_node.value == "True":
+                    return True
+        return True
+
+    @property
+    def init_mode_from_init_subclass(self) -> Optional[bool]:
+        # def __init_subclass__(cls) -> None: ... is hardcoded in the typeshed
+        # so the extra parameters can not be inferred.
+        return True
+
+
+class DataclassWrapper(ValueWrapper, ClassMixin):
+    """
+    A class with dataclass semantics from a decorator. The init parameters are
+    only from the current class and parent classes decorated where the ``init``
+    parameter was ``True``.
+
+    .. code:: python
+
+        @dataclass
+        class A: ...  # this
+
+        @dataclass_transform
+        def create_model(): pass
+
+        @create_model()
+        class B: ...  # or this
+    """
+    def __init__(
+        self, wrapped_value, should_generate_init: bool
+    ):
+        super().__init__(wrapped_value)
+        self.should_generate_init = should_generate_init
+
+    def get_signatures(self):
+        param_names = []
+        for cls in reversed(list(self.py__mro__())):
+            if (
+                isinstance(cls, DataclassWrapper)
+                and cls.should_generate_init
+            ):
+                param_names.extend(get_dataclass_param_names(cls))
+        return [DataclassSignature(cls, param_names)]
+
+
 class ClassValue(ClassMixin, FunctionAndClassBase, metaclass=CachedMetaClass):
     api_type = 'class'
@@ -385,6 +676,19 @@ class ClassValue(ClassMixin, FunctionAndClassBase, metaclass=CachedMetaClass):
                 return values
         return NO_VALUES

+    def init_param_mode(self) -> Optional[bool]:
+        """
+        It returns ``True`` if ``class X(init=False):`` else ``False``.
+        """
+        bases_arguments = self._get_bases_arguments()
+        if bases_arguments.argument_node.type != "arglist":
+            # If it is not inheriting from the base model and having
+            # extra parameters, then init behavior is not changed.
+            return None
+        return init_param_value(bases_arguments.argument_node.children)
+
     @plugin_manager.decorate()
     def get_metaclass_signatures(self, metaclasses):
         return []
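The feature being modeled here is PEP 681's `dataclass_transform`. A minimal sketch of the decorator pattern that `DataclassTransformer` and `DataclassDecorator` are meant to recognize (the `create_model` name is illustrative, and the `ImportError` fallback is only a stand-in so the snippet runs on Pythons older than 3.11):

```python
try:
    from typing import dataclass_transform  # Python 3.11+
except ImportError:
    def dataclass_transform(**_kwargs):  # minimal stand-in for older Pythons
        def decorator(obj):
            return obj
        return decorator

@dataclass_transform()
def create_model(*, init=True):
    # A real implementation would synthesize __init__ from the annotations;
    # for static tools the marker alone is what matters.
    def wrap(cls):
        return cls
    return wrap

@create_model()
class Customer:
    id: int
    name: str = "unknown"

# Tools honoring dataclass_transform infer:
#     Customer(id: int, name: str = "unknown")
```

With this series, completing `Customer(` in jedi should surface the field-based signature instead of a bare `__init__(self)`.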


@@ -80,7 +80,7 @@ class ModuleMixin(SubModuleDictMixin):
     def is_stub(self):
         return False

-    @property  # type: ignore[misc]
+    @property
     @inference_state_method_cache()
     def name(self):
         return self._module_name_class(self, self.string_names[-1])
@@ -138,7 +138,7 @@ class ModuleValue(ModuleMixin, TreeValue):
     api_type = 'module'

     def __init__(self, inference_state, module_node, code_lines, file_io=None,
-                 string_names=None, is_package=False):
+                 string_names=None, is_package=False) -> None:
         super().__init__(
             inference_state,
             parent_context=None,
@@ -149,7 +149,7 @@ class ModuleValue(ModuleMixin, TreeValue):
             self._path: Optional[Path] = None
         else:
             self._path = file_io.path
-        self.string_names = string_names  # Optional[Tuple[str, ...]]
+        self.string_names: Optional[tuple[str, ...]] = string_names
         self.code_lines = code_lines
         self._is_package = is_package


@@ -38,7 +38,7 @@ class ImplicitNamespaceValue(Value, SubModuleDictMixin):
     def get_qualified_names(self):
         return ()

-    @property  # type: ignore[misc]
+    @property
     @inference_state_method_cache()
     def name(self):
         string_name = self.py__package__()[-1]


@@ -2,7 +2,6 @@ import sys
 from typing import List
 from pathlib import Path

-from parso.tree import search_ancestor
 from jedi.inference.cache import inference_state_method_cache
 from jedi.inference.imports import goto_import, load_module_from_path
 from jedi.inference.filters import ParserTreeFilter
@@ -120,7 +119,7 @@ def _is_a_pytest_param_and_inherited(param_name):
     This is a heuristic and will work in most cases.
     """
-    funcdef = search_ancestor(param_name.tree_name, 'funcdef')
+    funcdef = param_name.tree_name.search_ancestor('funcdef')
     if funcdef is None:  # A lambda
         return False, False
     decorators = funcdef.get_decorators()


@@ -11,7 +11,6 @@ compiled module that returns the types for C-builtins.
 """
 import parso
 import os
-from inspect import Parameter

 from jedi import debug
 from jedi.inference.utils import safe_property
@@ -25,15 +24,20 @@ from jedi.inference.value.instance import \
 from jedi.inference.base_value import ContextualizedNode, \
     NO_VALUES, ValueSet, ValueWrapper, LazyValueWrapper
 from jedi.inference.value import ClassValue, ModuleValue
-from jedi.inference.value.klass import ClassMixin
+from jedi.inference.value.decorator import Decoratee
+from jedi.inference.value.klass import (
+    DataclassWrapper,
+    DataclassDecorator,
+    DataclassTransformer,
+)
 from jedi.inference.value.function import FunctionMixin
 from jedi.inference.value import iterable
 from jedi.inference.lazy_value import LazyTreeValue, LazyKnownValue, \
     LazyKnownValues
-from jedi.inference.names import ValueName, BaseTreeParamName
+from jedi.inference.names import ValueName
 from jedi.inference.filters import AttributeOverwrite, publish_method, \
     ParserTreeFilter, DictFilter
-from jedi.inference.signature import AbstractSignature, SignatureWrapper
+from jedi.inference.signature import SignatureWrapper


 # Copied from Python 3.6's stdlib.
@@ -591,65 +595,103 @@ def _random_choice(sequences):

 def _dataclass(value, arguments, callback):
+    """
+    Decorator entry points for dataclass.
+
+    1. dataclass decorator declaration with parameters
+    2. dataclass semantics on a class from a dataclass(-like) decorator
+    """
     for c in _follow_param(value.inference_state, arguments, 0):
         if c.is_class():
-            return ValueSet([DataclassWrapper(c)])
+            # Declare dataclass semantics on a class from a dataclass decorator
+            should_generate_init = (
+                # Customized decorator, init may be disabled
+                value.init_param_mode
+                if isinstance(value, DataclassDecorator)
+                # Bare dataclass decorator, always with init mode
+                else True
+            )
+            return ValueSet([DataclassWrapper(c, should_generate_init)])
         else:
+            # @dataclass(init=False)
+            # dataclass decorator customization
+            return ValueSet(
+                [
+                    DataclassDecorator(
+                        value,
+                        arguments=arguments,
+                        default_init=True,
+                    )
+                ]
+            )
+    return NO_VALUES
+
+
+def _dataclass_transform(value, arguments, callback):
+    """
+    Decorator entry points for dataclass_transform.
+
+    1. dataclass-like decorator instantiation from a dataclass_transform decorator
+    2. dataclass_transform decorator declaration with parameters
+    3. dataclass-like decorator declaration with parameters
+    4. dataclass-like semantics on a class from a dataclass-like decorator
+    """
+    for c in _follow_param(value.inference_state, arguments, 0):
+        if c.is_class():
+            is_dataclass_transform = (
+                value.name.string_name == "dataclass_transform"
+                # The decorator function from dataclass_transform acting as the
+                # dataclass decorator.
+                and not isinstance(value, Decoratee)
+                # The decorator function from dataclass_transform acting as the
+                # dataclass decorator with customized parameters
+                and not isinstance(value, DataclassDecorator)
+            )
+            if is_dataclass_transform:
+                # Declare base class
+                return ValueSet([DataclassTransformer(c)])
+            else:
+                # Declare dataclass-like semantics on a class from a
+                # dataclass-like decorator
+                should_generate_init = value.init_param_mode
+                return ValueSet([DataclassWrapper(c, should_generate_init)])
+        elif c.is_function():
+            # dataclass-like decorator instantiation:
+            # @dataclass_transform
+            # def create_model()
+            return ValueSet(
+                [
+                    DataclassDecorator(
+                        value,
+                        arguments=arguments,
+                        default_init=True,
+                    )
+                ]
+            )
+        elif (
+            # @dataclass_transform
+            # def create_model(): pass
+            # @create_model(init=...)
+            isinstance(value, Decoratee)
+        ):
+            # dataclass (or like) decorator customization
+            return ValueSet(
+                [
+                    DataclassDecorator(
+                        value,
+                        arguments=arguments,
+                        default_init=value._wrapped_value.init_param_mode,
+                    )
+                ]
+            )
+        else:
+            # dataclass_transform decorator with parameters; nothing impactful
             return ValueSet([value])
     return NO_VALUES


-class DataclassWrapper(ValueWrapper, ClassMixin):
-    def get_signatures(self):
-        param_names = []
-        for cls in reversed(list(self.py__mro__())):
-            if isinstance(cls, DataclassWrapper):
-                filter_ = cls.as_context().get_global_filter()
-                # .values ordering is not guaranteed, at least not in
-                # Python < 3.6, when dicts where not ordered, which is an
-                # implementation detail anyway.
-                for name in sorted(filter_.values(), key=lambda name: name.start_pos):
-                    d = name.tree_name.get_definition()
-                    annassign = d.children[1]
-                    if d.type == 'expr_stmt' and annassign.type == 'annassign':
-                        if len(annassign.children) < 4:
-                            default = None
-                        else:
-                            default = annassign.children[3]
-                        param_names.append(DataclassParamName(
-                            parent_context=cls.parent_context,
-                            tree_name=name.tree_name,
-                            annotation_node=annassign.children[1],
-                            default_node=default,
-                        ))
-        return [DataclassSignature(cls, param_names)]
-
-
-class DataclassSignature(AbstractSignature):
-    def __init__(self, value, param_names):
-        super().__init__(value)
-        self._param_names = param_names
-
-    def get_param_names(self, resolve_stars=False):
-        return self._param_names
-
-
-class DataclassParamName(BaseTreeParamName):
-    def __init__(self, parent_context, tree_name, annotation_node, default_node):
-        super().__init__(parent_context, tree_name)
-        self.annotation_node = annotation_node
-        self.default_node = default_node
-
-    def get_kind(self):
-        return Parameter.POSITIONAL_OR_KEYWORD
-
-    def infer(self):
-        if self.annotation_node is None:
-            return NO_VALUES
-        else:
-            return self.parent_context.infer_node(self.annotation_node)
-
-
 class ItemGetterCallable(ValueWrapper):
     def __init__(self, instance, args_value_set):
         super().__init__(instance)
@@ -798,22 +840,17 @@ _implemented = {
         # runtime_checkable doesn't really change anything and is just
         # adding logs for infering stuff, so we can safely ignore it.
         'runtime_checkable': lambda value, arguments, callback: NO_VALUES,
+        # Python 3.11+
+        'dataclass_transform': _dataclass_transform,
+    },
+    'typing_extensions': {
+        # Python <3.11
+        'dataclass_transform': _dataclass_transform,
     },
     'dataclasses': {
         # For now this works at least better than Jedi trying to understand it.
         'dataclass': _dataclass
     },
-    # attrs exposes declaration interface roughly compatible with dataclasses
-    # via attrs.define, attrs.frozen and attrs.mutable
-    # https://www.attrs.org/en/stable/names.html
-    'attr': {
-        'define': _dataclass,
-        'frozen': _dataclass,
-    },
-    'attrs': {
-        'define': _dataclass,
-        'frozen': _dataclass,
-    },
     'os.path': {
         'dirname': _create_string_input_function(os.path.dirname),
         'abspath': _create_string_input_function(os.path.abspath),
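The `init=` handling that `_dataclass` now routes through `DataclassDecorator` mirrors the runtime behavior of the stdlib decorator. A plain-Python illustration of the `init=False` case being modeled:

```python
from dataclasses import dataclass, fields

@dataclass(init=False)
class Point:
    x: int = 0
    y: int = 0

p = Point()  # no generated __init__(x, y); defaults come from the class body
print([f.name for f in fields(p)])  # → ['x', 'y']
```

With `init=False`, `Point(1, 2)` would raise `TypeError` at runtime, so jedi correspondingly stops offering the field-based `__init__` signature for such classes.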

pyproject.toml (new file, 32 lines added)

@@ -0,0 +1,32 @@
+[tool.mypy]
+# Exclude our copies of external stubs
+exclude = "^jedi/third_party"
+
+show_error_codes = true
+enable_error_code = "ignore-without-code"
+
+# Ensure generics are explicit about what they are (e.g: `List[str]` rather than
+# just `List`)
+disallow_any_generics = true
+disallow_subclassing_any = true
+
+# Avoid creating future gotchas emerging from bad typing
+warn_redundant_casts = true
+warn_unused_ignores = true
+warn_return_any = true
+warn_unused_configs = true
+warn_unreachable = true
+
+# Require values to be explicitly re-exported; this makes things easier for
+# Flake8 too and avoids accidentally importing thing from the "wrong" place
+# (which helps avoid circular imports)
+implicit_reexport = false
+strict_equality = true
+
+[[tool.mypy.overrides]]
+# Various __init__.py files which contain re-exports we want to implicitly make.
+module = ["jedi", "jedi.inference.compiled", "jedi.inference.value", "parso"]
+implicit_reexport = true


@@ -31,36 +31,3 @@ exclude =

 [pycodestyle]
 max-line-length = 100
-
-[mypy]
-# Exclude our copies of external stubs
-exclude = ^jedi/third_party
-
-show_error_codes = true
-enable_error_code = ignore-without-code
-
-# Ensure generics are explicit about what they are (e.g: `List[str]` rather than
-# just `List`)
-disallow_any_generics = True
-disallow_subclassing_any = True
-
-# Avoid creating future gotchas emerging from bad typing
-warn_redundant_casts = True
-warn_unused_ignores = True
-warn_return_any = True
-warn_unused_configs = True
-warn_unreachable = True
-
-# Require values to be explicitly re-exported; this makes things easier for
-# Flake8 too and avoids accidentally importing thing from the "wrong" place
-# (which helps avoid circular imports)
-implicit_reexport = False
-strict_equality = True
-
-[mypy-jedi,jedi.inference.compiled,jedi.inference.value,parso]
-# Various __init__.py files which contain re-exports we want to implicitly make.
-implicit_reexport = True


@@ -1,4 +1,5 @@
#!/usr/bin/env python #!/usr/bin/env python
from typing import cast
from setuptools import setup, find_packages from setuptools import setup, find_packages
from setuptools.depends import get_module_constant from setuptools.depends import get_module_constant
@@ -9,7 +10,7 @@ __AUTHOR__ = 'David Halter'
__AUTHOR_EMAIL__ = 'davidhalter88@gmail.com' __AUTHOR_EMAIL__ = 'davidhalter88@gmail.com'
# Get the version from within jedi. It's defined in exactly one place now. # Get the version from within jedi. It's defined in exactly one place now.
version = get_module_constant("jedi", "__version__") version = cast(str, get_module_constant("jedi", "__version__"))
readme = open('README.rst').read() + '\n\n' + open('CHANGELOG.rst').read() readme = open('README.rst').read() + '\n\n' + open('CHANGELOG.rst').read()
@@ -34,9 +35,9 @@ setup(name='jedi',
       keywords='python completion refactoring vim',
       long_description=readme,
       packages=find_packages(exclude=['test', 'test.*']),
-      python_requires='>=3.6',
+      python_requires='>=3.8',
       # Python 3.13 grammars are added to parso in 0.8.4
-      install_requires=['parso>=0.8.4,<0.9.0'],
+      install_requires=['parso>=0.8.5,<0.9.0'],
       extras_require={
           'testing': [
@@ -46,14 +47,15 @@ setup(name='jedi',
               'colorama',
               'Django',
               'attrs',
+              'typing_extensions',
           ],
           'qa': [
-              # latest version supporting Python 3.6
-              'flake8==5.0.4',
-              # latest version supporting Python 3.6
-              'mypy==0.971',
+              # latest version on 2025-06-16
+              'flake8==7.2.0',
+              'mypy==1.16',
               # Arbitrary pins, latest at the time of pinning
-              'types-setuptools==67.2.0.1',
+              'types-setuptools==80.9.0.20250529',
           ],
           'docs': [
               # Just pin all of these.
@@ -94,8 +96,6 @@ setup(name='jedi',
'License :: OSI Approved :: MIT License', 'License :: OSI Approved :: MIT License',
'Operating System :: OS Independent', 'Operating System :: OS Independent',
'Programming Language :: Python :: 3', 'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8', 'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9', 'Programming Language :: Python :: 3.9',
'Programming Language :: Python :: 3.10', 'Programming Language :: Python :: 3.10',

View File

@@ -44,7 +44,7 @@ Options:
    --pudb       Launch pudb when error is raised.
"""
-from docopt import docopt  # type: ignore[import]
+from docopt import docopt  # type: ignore[import, unused-ignore]
import json
import os

View File

@@ -527,3 +527,11 @@ lc = [x for a, *x in [(1, '', 1.0)]]
lc[0][0]
#?
lc[0][1]
+
+xy = (1,)
+x, y = *xy, None
+# whatever it is should not crash
+#?
+x
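The new regression case exercises star-unpacking on the right-hand side of an assignment, which previously made `infer_node` crash on the `star_expr` node. As plain Python the construct itself is well defined:

```python
# A starred expression on the right-hand side expands in place,
# so `*xy, None` builds the tuple (1, None) before unpacking it
# into the targets x and y.
xy = (1,)
x, y = *xy, None
```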

View File

@@ -134,7 +134,7 @@ TEST_GOTO = 2
TEST_REFERENCES = 3
-grammar36 = parso.load_grammar(version='3.6')
+grammar313 = parso.load_grammar(version='3.13')
class BaseTestCase(object):
@@ -238,7 +238,7 @@ class IntegrationTestCase(BaseTestCase):
        should_be = set()
        for match in re.finditer('(?:[^ ]+)', correct):
            string = match.group(0)
-            parser = grammar36.parse(string, start_symbol='eval_input', error_recovery=False)
+            parser = grammar313.parse(string, start_symbol='eval_input', error_recovery=False)
            parser_utils.move(parser.get_root_node(), self.line_nr)
            node = parser.get_root_node()
            module_context = script._get_module_context()
@@ -504,7 +504,7 @@ if __name__ == '__main__':
    if arguments['--env']:
        environment = get_system_environment(arguments['--env'])
    else:
-        # Will be 3.6.
+        # Will be 3.13.
        environment = get_default_environment()
import traceback

View File

@@ -26,7 +26,7 @@ def test_find_system_environments():
@pytest.mark.parametrize(
    'version',
-    ['3.6', '3.7', '3.8', '3.9']
+    ['3.8', '3.9', '3.10', '3.11', '3.12', '3.13']
)
def test_versions(version):
    try:

View File

@@ -6,6 +6,7 @@ from datetime import datetime
import pytest
+import jedi
from jedi.inference import compiled
from jedi.inference.compiled.access import DirectObjectAccess
from jedi.inference.gradual.conversion import _stub_to_python_value_set
@@ -81,9 +82,9 @@ def test_method_completion(Script, environment):
    assert [c.name for c in Script(code).complete()] == ['__func__']
-def test_time_docstring(Script):
+def test_time_docstring():
    import time
-    comp, = Script('import time\ntime.sleep').complete()
+    comp, = jedi.Script('import time\ntime.sleep').complete()
    assert comp.docstring(raw=True) == time.sleep.__doc__
    expected = 'sleep(secs: float) -> None\n\n' + time.sleep.__doc__
    assert comp.docstring() == expected

View File

@@ -307,6 +307,33 @@ def test_os_issues(Script):
    assert 'path' in import_names(s, column=len(s) - 3)
+
+def test_duplicated_import(Script):
+    def import_names(*args, **kwargs):
+        return [d.name for d in Script(*args).complete(**kwargs)]
+
+    s = 'import os, o'
+    assert 'os' not in import_names(s)
+    assert 'os' in import_names(s, column=len(s) - 3)
+
+    s = 'from os import path, p'
+    assert 'path' not in import_names(s)
+    assert 'path' in import_names(s, column=len(s) - 3)
+    assert 'path' in import_names("from os import path")
+    assert 'path' in import_names("from os import chdir, path")
+
+    s = 'import math as mm, m'
+    assert 'math' not in import_names(s)
+    s = 'import math as os, o'
+    assert 'os' in import_names(s)
+
+    s = 'from os import path as pp, p'
+    assert 'path' not in import_names(s)
+    s = 'from os import chdir as path, p'
+    assert 'path' in import_names(s)
+
def test_path_issues(Script):
    """
    See pull request #684 for details.
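The behaviour the duplicated-import test checks is that names already bound by the current import statement are not offered again as completions. A minimal stand-in for that filtering step (`filter_completions` is a hypothetical helper for illustration, not jedi's actual implementation):

```python
def filter_completions(already_imported, candidates):
    # Drop candidates that the current import statement has already bound,
    # mirroring how `import os, o<tab>` should not offer `os` again.
    taken = set(already_imported)
    return [name for name in candidates if name not in taken]


# 'os' is already bound on the line `import os, o`, so it is filtered out.
remaining = filter_completions(['os'], ['os', 'ossaudiodev'])
```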

View File

@@ -16,13 +16,13 @@ def test_on_code():
    assert i.infer()
-def test_generics_without_definition():
+def test_generics_without_definition() -> None:
    # Used to raise a recursion error
    T = TypeVar('T')
    class Stack(Generic[T]):
-        def __init__(self):
-            self.items = []  # type: List[T]
+        def __init__(self) -> None:
+            self.items: List[T] = []
        def push(self, item):
            self.items.append(item)
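The diff swaps the old `# type:` comment for a PEP 526 variable annotation; both spell the same generic container type. A runnable sketch of the pattern the test feeds to jedi:

```python
from typing import Generic, List, TypeVar

T = TypeVar('T')


class Stack(Generic[T]):
    def __init__(self) -> None:
        # PEP 526 variable annotation, replacing the older form
        # `self.items = []  # type: List[T]`
        self.items: List[T] = []

    def push(self, item: T) -> None:
        self.items.append(item)


s: Stack[int] = Stack()
s.push(1)
```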

View File

@@ -318,40 +318,511 @@ def test_wraps_signature(Script, code, signature):
@pytest.mark.parametrize(
-    'start, start_params', [
-        ['@dataclass\nclass X:', []],
-        ['@dataclass(eq=True)\nclass X:', []],
-        [dedent('''
+    "start, start_params, include_params",
+    [
+        ["@dataclass\nclass X:", [], True],
+        ["@dataclass(eq=True)\nclass X:", [], True],
[
dedent(
"""
class Y():
    y: int
@dataclass
-class X(Y):'''), []],
-[dedent('''
+class X(Y):"""
+),
[],
True,
],
[
dedent(
"""
@dataclass
class Y():
    y: int
    z = 5
@dataclass
-class X(Y):'''), ['y']],
-    ]
+class X(Y):"""
+),
["y"],
True,
],
[
dedent(
"""
@dataclass
class Y():
y: int
class Z(Y): # Not included
z = 5
@dataclass
class X(Z):"""
),
["y"],
True,
],
# init=False
[
dedent(
"""
@dataclass(init=False)
class X:"""
),
[],
False,
],
[
dedent(
"""
@dataclass(eq=True, init=False)
class X:"""
),
[],
False,
],
# custom init
[
dedent(
"""
@dataclass()
class X:
def __init__(self, toto: str):
pass
"""
),
["toto"],
False,
],
],
ids=[
"direct_transformed",
"transformed_with_params",
"subclass_transformed",
"both_transformed",
"intermediate_not_transformed",
"init_false",
"init_false_multiple",
"custom_init",
],
)
-def test_dataclass_signature(Script, skip_pre_python37, start, start_params):
+def test_dataclass_signature(
Script, skip_pre_python37, start, start_params, include_params, environment
):
if environment.version_info < (3, 8):
# Final is not yet supported
price_type = "float"
price_type_infer = "float"
else:
price_type = "Final[float]"
price_type_infer = "object"
code = dedent(
f"""
name: str
foo = 3
blob: ClassVar[str]
price: {price_type}
quantity: int = 0.0
X("""
)
code = (
"from dataclasses import dataclass\n"
+ "from typing import ClassVar, Final\n"
+ start
+ code
)
sig, = Script(code).get_signatures()
expected_params = (
[*start_params, "name", "price", "quantity"]
if include_params
else [*start_params]
)
assert [p.name for p in sig.params] == expected_params
if include_params:
quantity, = sig.params[-1].infer()
assert quantity.name == 'int'
price, = sig.params[-2].infer()
assert price.name == price_type_infer
dataclass_transform_cases = [
# Attributes on the decorated class and its base classes
# are not considered to be fields.
# 1/ Declare dataclass transformer
# Base Class
['@dataclass_transform\nclass X:', [], False],
# Base Class with params
['@dataclass_transform(eq_default=True)\nclass X:', [], False],
# Subclass
[dedent('''
class Y():
y: int
@dataclass_transform
class X(Y):'''), [], False],
# 2/ Declare dataclass transformed
# Class based
[dedent('''
@dataclass_transform
class Y():
y: int
z = 5
class X(Y):'''), [], True],
# Class based with params
[dedent('''
@dataclass_transform(eq_default=True)
class Y():
y: int
z = 5
class X(Y):'''), [], True],
# Decorator based
[dedent('''
@dataclass_transform
def create_model():
pass
@create_model
class X:'''), [], True],
[dedent('''
@dataclass_transform
def create_model():
pass
class Y:
y: int
@create_model
class X(Y):'''), [], True],
[dedent('''
@dataclass_transform
def create_model():
pass
@create_model
class Y:
y: int
@create_model
class X(Y):'''), ["y"], True],
[dedent('''
@dataclass_transform
def create_model():
pass
@create_model
class Y:
y: int
class Z(Y):
z: int
@create_model
class X(Z):'''), ["y"], True],
# Metaclass based
[dedent('''
@dataclass_transform
class ModelMeta():
y: int
z = 5
class ModelBase(metaclass=ModelMeta):
t: int
p = 5
class X(ModelBase):'''), [], True],
# 3/ Init custom init
[dedent('''
@dataclass_transform()
class Y():
y: int
z = 5
class X(Y):
def __init__(self, toto: str):
pass
'''), ["toto"], False],
# 4/ init=false
# Class based
# WARNING: Unsupported
# [dedent('''
# @dataclass_transform
# class Y():
# y: int
# z = 5
# def __init_subclass__(
# cls,
# *,
# init: bool = False,
# )
# class X(Y):'''), [], False],
[dedent('''
@dataclass_transform
class Y():
y: int
z = 5
def __init_subclass__(
cls,
*,
init: bool = False,
)
class X(Y, init=True):'''), [], True],
[dedent('''
@dataclass_transform
class Y():
y: int
z = 5
def __init_subclass__(
cls,
*,
init: bool = False,
)
class X(Y, init=False):'''), [], False],
[dedent('''
@dataclass_transform
class Y():
y: int
z = 5
class X(Y, init=False):'''), [], False],
# Decorator based
[dedent('''
@dataclass_transform
def create_model(init=False):
pass
@create_model()
class X:'''), [], False],
[dedent('''
@dataclass_transform
def create_model(init=False):
pass
@create_model(init=True)
class X:'''), [], True],
[dedent('''
@dataclass_transform
def create_model(init=False):
pass
@create_model(init=False)
class X:'''), [], False],
[dedent('''
@dataclass_transform
def create_model():
pass
@create_model(init=False)
class X:'''), [], False],
# Metaclass based
[dedent('''
@dataclass_transform
class ModelMeta():
y: int
z = 5
def __new__(
cls,
name,
bases,
namespace,
*,
init: bool = False,
):
...
class ModelBase(metaclass=ModelMeta):
t: int
p = 5
class X(ModelBase):'''), [], False],
[dedent('''
@dataclass_transform
class ModelMeta():
y: int
z = 5
def __new__(
cls,
name,
bases,
namespace,
*,
init: bool = False,
):
...
class ModelBase(metaclass=ModelMeta):
t: int
p = 5
class X(ModelBase, init=True):'''), [], True],
[dedent('''
@dataclass_transform
class ModelMeta():
y: int
z = 5
def __new__(
cls,
name,
bases,
namespace,
*,
init: bool = False,
):
...
class ModelBase(metaclass=ModelMeta):
t: int
p = 5
class X(ModelBase, init=False):'''), [], False],
[dedent('''
@dataclass_transform
class ModelMeta():
y: int
z = 5
class ModelBase(metaclass=ModelMeta):
t: int
p = 5
class X(ModelBase, init=False):'''), [], False],
# 4/ Other parameters
# Class based
[dedent('''
@dataclass_transform
class Y():
y: int
z = 5
class X(Y, eq=True):'''), [], True],
# Decorator based
[dedent('''
@dataclass_transform
def create_model():
pass
@create_model(eq=True)
class X:'''), [], True],
# Metaclass based
[dedent('''
@dataclass_transform
class ModelMeta():
y: int
z = 5
class ModelBase(metaclass=ModelMeta):
t: int
p = 5
class X(ModelBase, eq=True):'''), [], True],
]
ids = [
"direct_transformer",
"transformer_with_params",
"subclass_transformer",
"base_transformed",
"base_transformed_with_params",
"decorator_transformed_direct",
"decorator_transformed_subclass",
"decorator_transformed_both",
"decorator_transformed_intermediate_not",
"metaclass_transformed",
"custom_init",
# "base_transformed_init_false_dataclass_init_default",
"base_transformed_init_false_dataclass_init_true",
"base_transformed_init_false_dataclass_init_false",
"base_transformed_init_default_dataclass_init_false",
"decorator_transformed_init_false_dataclass_init_default",
"decorator_transformed_init_false_dataclass_init_true",
"decorator_transformed_init_false_dataclass_init_false",
"decorator_transformed_init_default_dataclass_init_false",
"metaclass_transformed_init_false_dataclass_init_default",
"metaclass_transformed_init_false_dataclass_init_true",
"metaclass_transformed_init_false_dataclass_init_false",
"metaclass_transformed_init_default_dataclass_init_false",
"base_transformed_other_parameters",
"decorator_transformed_other_parameters",
"metaclass_transformed_other_parameters",
]
@pytest.mark.parametrize(
'start, start_params, include_params', dataclass_transform_cases, ids=ids
)
def test_extensions_dataclass_transform_signature(
Script, skip_pre_python37, start, start_params, include_params, environment
):
has_typing_ext = bool(Script('import typing_extensions').infer())
if not has_typing_ext:
raise pytest.skip("typing_extensions needed in target environment to run this test")
if environment.version_info < (3, 8):
# Final is not yet supported
price_type = "float"
price_type_infer = "float"
else:
price_type = "Final[float]"
price_type_infer = "object"
code = dedent(
f"""
name: str
foo = 3
blob: ClassVar[str]
price: {price_type}
quantity: int = 0.0
X("""
)
code = (
"from typing_extensions import dataclass_transform\n"
+ "from typing import ClassVar, Final\n"
+ start
+ code
)
(sig,) = Script(code).get_signatures()
expected_params = (
[*start_params, "name", "price", "quantity"]
if include_params
else [*start_params]
)
assert [p.name for p in sig.params] == expected_params
if include_params:
quantity, = sig.params[-1].infer()
assert quantity.name == 'int'
price, = sig.params[-2].infer()
assert price.name == price_type_infer
def test_dataclass_transform_complete(Script):
script = Script('''\
@dataclass_transform
class Y():
y: int
z = 5
class X(Y):
name: str
foo = 3
def f(x: X):
x.na''')
completion, = script.complete()
assert completion.description == 'name: str'
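The cases above follow PEP 681: a decorator, base class, or metaclass marked with `dataclass_transform` tells tools such as jedi that the classes it transforms get an `__init__` synthesized from their annotated fields. A hedged runtime sketch of that idea (`create_model` is a hypothetical toy transformer; real ones like attrs do far more, and type checkers never run this code):

```python
try:
    # typing.dataclass_transform exists on Python 3.11+;
    # earlier versions get it from typing_extensions.
    from typing import dataclass_transform
except ImportError:  # fallback stub so the sketch stays self-contained
    def dataclass_transform(**_kwargs):
        def decorator(obj):
            return obj
        return decorator


@dataclass_transform()
def create_model(cls):
    # Toy stand-in for the synthesized __init__ that a type checker
    # assumes: bind positional args to the annotated fields in order.
    fields = list(cls.__annotations__)

    def __init__(self, *args):
        for name, value in zip(fields, args):
            setattr(self, name, value)

    cls.__init__ = __init__
    return cls


@create_model
class Point:
    x: int
    y: int


p = Point(1, 2)
```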
@pytest.mark.parametrize(
"start, start_params, include_params", dataclass_transform_cases, ids=ids
)
def test_dataclass_transform_signature(
Script, skip_pre_python311, start, start_params, include_params
):
    code = dedent('''
        name: str
        foo = 3
-        price: float
+        blob: ClassVar[str]
+        price: Final[float]
        quantity: int = 0.0
        X(''')
-    code = 'from dataclasses import dataclass\n' + start + code
+    code = (
+        "from typing import dataclass_transform\n"
+        + "from typing import ClassVar, Final\n"
+        + start
+        + code
+    )
    sig, = Script(code).get_signatures()
-    assert [p.name for p in sig.params] == start_params + ['name', 'price', 'quantity']
+    expected_params = (
+        [*start_params, "name", "price", "quantity"]
+        if include_params
+        else [*start_params]
+    )
+    assert [p.name for p in sig.params] == expected_params
+    if include_params:
        quantity, = sig.params[-1].infer()
        assert quantity.name == 'int'
        price, = sig.params[-2].infer()
-        assert price.name == 'float'
+        assert price.name == 'object'
@pytest.mark.parametrize(
@@ -371,7 +842,8 @@ def test_dataclass_signature(Script, skip_pre_python37, start, start_params):
            z = 5
            @define
            class X(Y):'''), ['y']],
-    ]
+    ],
+    ids=["define", "frozen", "define_customized", "define_subclass", "define_both"]
)
def test_attrs_signature(Script, skip_pre_python37, start, start_params):
    has_attrs = bool(Script('import attrs').infer())

View File

@@ -91,7 +91,7 @@ class TestSetupReadline(unittest.TestCase):
        }
        # There are quite a few differences, because both Windows and Linux
        # (posix and nt) libraries are included.
-        assert len(difference) < 30
+        assert len(difference) < 40
    def test_local_import(self):
        s = 'import test.test_utils'