Compare commits

...

39 Commits

Author SHA1 Message Date
Dave Halter
a79a1fbef5 Merge branch 'parso' 2018-06-30 14:27:30 +02:00
Dave Halter
58141f1e1e Don't use requirements for now, and use the git version instead in tox 2018-06-30 14:14:52 +02:00
Dave Halter
e0e2be3027 Add a better comment about why people need to upgrade parso 2018-06-29 18:17:29 +02:00
Dave Halter
1e7662c3e1 Prepare release of 0.12.1 2018-06-29 18:10:41 +02:00
Dave Halter
68974aee58 Don't use internal parso APIs if possible 2018-06-29 10:04:03 +02:00
Dave Halter
c208d37ac4 Remove code that is no longer used, because parso was refactored. 2018-06-29 09:56:56 +02:00
Dave Halter
38474061cf Make jedi work with the next parso release 2018-06-29 09:54:57 +02:00
micbou
95f835a014 Force unicode when listing module names
pkgutil.iter_modules may return the module name as str instead of unicode on
Python 2.
2018-06-24 22:41:14 +02:00
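
The sketch below is not part of the changeset; it just illustrates the idea behind this commit with a simplified stand-in for jedi's force_unicode helper: on Python 2, pkgutil.iter_modules() can yield byte strings, so every name is normalised to unicode before use.

    # Hypothetical, simplified sketch of the normalisation this commit applies.
    import pkgutil


    def force_unicode(obj):
        # Simplified stand-in for jedi._compatibility.force_unicode.
        if isinstance(obj, bytes):
            return obj.decode('utf-8')
        return obj


    def list_module_names(search_path):
        return [force_unicode(name)
                for module_loader, name, is_pkg in pkgutil.iter_modules(search_path)]
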
micbou
282c6a2ba1 Use highest possible pickle protocol 2018-06-23 14:45:34 +02:00
Daniel Hahler
ea71dedaa1 Include stderr with "subprocess has crashed" exception (#1124)
* Include stderr with "subprocess has crashed" exception

This does not add it to the other similar exception raised from `kill`,
since this should be something like "was killed already" anyway.

* fixup! Include stderr with "subprocess has crashed" exception
2018-06-23 11:37:43 +02:00
micbou
106b11f1af Set stdout and stdin to binary mode on Python 2 and Windows 2018-06-22 00:08:53 +02:00
micbou
f9e90e863b Use system default buffering on Python 2 2018-06-21 19:50:51 +02:00
micbou
197aa22f29 Use cPickle on Python 2 if available
Attempt to load the C version of pickle on Python 2 as it is way faster.
2018-06-21 19:39:08 +02:00
Tarcisio Eduardo Moreira Crocomo
e96ebbe88f Add tests for DefaultDict support. 2018-06-17 11:28:12 +02:00
Tarcisio Eduardo Moreira Crocomo
55941e506b Add support for DefaultDict on jedi_typing.py. 2018-06-17 11:28:12 +02:00
Carl George
ff4a77391a Parse correct AST attribute for version
Earlier development versions of Python 3.7 added the docstring field to
AST nodes.  This was later reverted in Python 3.7.0b5.

https://bugs.python.org/issue29463
https://github.com/python/cpython/pull/7121
2018-06-16 14:43:17 +02:00
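
For context, a hedged sketch of an index-independent way to read the version string. This is not the fix applied in the setup.py diff below (which simply keeps using tree.body[1]); searching for the assignment just shows why a hard-coded body index was fragile under the 3.7 docstring-node experiment. The .s attribute matches the Python versions targeted at the time, and the file path assumes the jedi repository root.

    import ast

    with open('jedi/__init__.py') as f:  # path assumes the jedi repository root
        tree = ast.parse(f.read())

    version = next(
        node.value.s
        for node in tree.body
        if isinstance(node, ast.Assign)
        and any(getattr(target, 'id', None) == '__version__' for target in node.targets)
    )
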
micbou
70c2fce9c2 Replace distutils.spawn.find_executable with shutil.which
The distutils.spawn.find_executable function is not available on stock system
Python 3 in recent Debian-based distributions. Since shutil.which is a better
alternative but not available on Python 2.7, we include a copy of that function
and use it in place of find_executable.
2018-06-07 21:07:22 +02:00
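
A short usage sketch (assuming a jedi checkout that already contains this change): the bundled which behaves like shutil.which, returning the full path of an executable found on PATH, or None.

    from jedi._compatibility import which

    exe = which('python3.6')
    if exe is not None:
        print('Found interpreter at %s' % exe)
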
Dave Halter
5dab97a303 Add an error message, see also #1139. 2018-06-07 21:01:41 +02:00
Dave Halter
e2cd228aad Dict comprehension items call should now work, fixes #1129 2018-06-07 21:00:23 +02:00
micbou
c1014e00ca Fix flow analysis test
There is no seekable method for file objects on Python 2. Use flush instead.
2018-06-07 01:01:18 +02:00
Dave Halter
62a3f99594 Fix a wrong branch check, fixes #1128 2018-06-01 08:59:16 +02:00
Dave Halter
6ebe3f87a3 Drop 3.3 tests from travis
They are causing only problems now that Python3.3 is deprecated. See e.g. https://travis-ci.org/davidhalter/jedi/jobs/381881020.
Also as a solution approach: https://github.com/davidhalter/jedi/pull/1125.
2018-05-23 11:24:39 +02:00
Dave Halter
50812b5836 A simple yield should not cause an error, fixes #1117 2018-05-23 11:12:19 +02:00
Daniel Hahler
d10eff5625 Travis: report coverage also to codecov.io 2018-05-21 23:40:42 +02:00
Daniel Hahler
6748faa071 Fix _get_numpy_doc_string_cls: use cache
I've noticed that Jedi tries to import numpydoc a lot when using
jedi-vim's goto method in jedi_vim.py itself (via printing in Neovim's
VimPathFinder.find_spec).

This patch uses the cache before trying the import again and again.
2018-05-06 10:54:49 +02:00
Maxim Novikov
fc14aad8f2 Fix namespace autocompletion error 2018-05-03 09:12:17 +02:00
Daniel Hahler
3c909a9849 Travis: remove TOXENV=cov from allowed failures 2018-05-02 20:04:46 +02:00
Daniel Hahler
b94b45cfa1 Environment._get_version: add msgs with exceptions 2018-05-02 00:09:40 +02:00
Dave Halter
a95274d66f None/False/True are atom non-terminals in the syntax tree, fixes #1103 2018-05-01 23:43:49 +02:00
Dave Halter
8d48e7453a When searching submodules, use all of __path__, fixes #1105 2018-05-01 23:17:42 +02:00
Dave Halter
91499565a9 Specially crafted docstrings sometimes lead to errors, fixes #1103 2018-04-25 21:04:05 +02:00
Dave Halter
ba96c21f83 Follow up from the last async issue, fixes more related things about #1092. 2018-04-24 01:02:31 +02:00
Dave Halter
8494164b22 Fix an async funcdef issue, fixes 1092. 2018-04-24 00:41:18 +02:00
Dave Halter
4075c384e6 In some very rare cases it was possible to get an interpreter crash because of this bug. Fixes #1087 2018-04-23 21:26:51 +02:00
Dave Halter
0bcd1701f0 Start using our own monkeypatch function for some things 2018-04-23 21:26:51 +02:00
Dima Gerasimov
ceb5509170 Include function return type annotation in docstring if it is present 2018-04-23 21:20:21 +02:00
Dave Halter
88243d2408 Don't catch IndexError where we don't have to 2018-04-20 01:46:32 +02:00
micbou
5f37d08761 Extend create_environment to accept an executable path
Assume environments specified by the user are safe.
2018-04-19 21:36:44 +02:00
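
Usage sketch for the extended API, mirroring the new test further down: besides a virtualenv directory, create_environment now also accepts the path of a Python executable.

    import sys

    from jedi.api.environment import create_environment

    env = create_environment(sys.executable)
    assert env.executable == sys.executable
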
Daniel Hahler
aa6857d22d check_fs: handle FileNotFoundError
Ref: https://github.com/davidhalter/jedi-vim/pull/801
2018-04-17 23:40:25 +02:00
44 changed files with 515 additions and 224 deletions

View File

@@ -2,7 +2,6 @@ language: python
 sudo: true
 python:
   - 2.7
-  - 3.3
   - 3.4
   - 3.5
   - 3.6
@@ -24,9 +23,6 @@ matrix:
   allow_failures:
     - python: pypy
     - env: TOXENV=sith
-    - env:
-        - TOXENV=cov
-        - JEDI_TEST_ENVIRONMENT=36
    - python: 3.7-dev
  include:
    - python: 3.6
@@ -53,7 +49,11 @@ install:
 script:
     - tox
 after_script:
-    - if [ $TOXENV == "cov" ]; then
-        pip install --quiet coveralls;
-        coveralls;
+    - |
+      if [ $TOXENV == "cov" ]; then
+        pip install --quiet codecov coveralls
+        coverage xml
+        coverage report -m
+        coveralls
+        bash <(curl -s https://codecov.io/bash) -X gcov -X coveragepy -X search -X fix -X xcode -f coverage.xml
      fi

View File

@@ -50,5 +50,6 @@ Anton Zub (@zabulazza)
 Maksim Novikov (@m-novikov) <mnovikov.work@gmail.com>
 Tobias Rzepka (@TobiasRzepka)
 micbou (@micbou)
+Dima Gerasimov (@karlicoss) <karlicoss@gmail.com>

 Note: (@user) means a github user name.

View File

@@ -3,6 +3,14 @@
 Changelog
 ---------

+0.12.1 (2018-06-30)
++++++++++++++++++++
+
+- This release forces you to upgrade parso. If you don't, nothing will work
+  anymore. Otherwise changes should be limited to bug fixes. Unfortunately Jedi
+  still uses a few internals of parso that make it hard to keep compatibility
+  over multiple releases. Parso >=0.3.0 is going to be needed.
+
 0.12.0 (2018-04-15)
 +++++++++++++++++++

View File

@@ -36,7 +36,7 @@ As you see Jedi is pretty simple and allows you to concentrate on writing a
 good text editor, while still having very good IDE features for Python.
 """

-__version__ = '0.12.0'
+__version__ = '0.12.1'

 from jedi.api import Script, Interpreter, set_debug_function, \
     preload_module, names

View File

@@ -2,7 +2,6 @@
 To ensure compatibility from Python ``2.7`` - ``3.x``, a module has been
 created. Clearly there is huge need to use conforming syntax.
 """
-import binascii
 import errno
 import sys
 import os
@@ -381,8 +380,12 @@ if is_py3:
 else:
     import Queue as queue

-    import pickle
+    try:
+        # Attempt to load the C implementation of pickle on Python 2 as it is
+        # way faster.
+        import cPickle as pickle
+    except ImportError:
+        import pickle

 if sys.version_info[:2] == (3, 3):
     """
     Monkeypatch the unpickler in Python 3.3. This is needed, because the
@@ -443,53 +446,45 @@ if sys.version_info[:2] == (3, 3):
pickle.loads = loads pickle.loads = loads
_PICKLE_PROTOCOL = 2
is_windows = sys.platform == 'win32'
# The Windows shell on Python 2 consumes all control characters (below 32) and expand on
# all Python versions \n to \r\n.
# pickle starting from protocol version 1 uses binary data, which could not be escaped by
# any normal unicode encoder. Therefore, the only bytes encoder which doesn't produce
# control characters is binascii.hexlify.
def pickle_load(file): def pickle_load(file):
if is_windows: try:
try:
data = file.readline()
data = binascii.unhexlify(data.strip())
if is_py3:
return pickle.loads(data, encoding='bytes')
else:
return pickle.loads(data)
# Python on Windows don't throw EOF errors for pipes. So reraise them with
# the correct type, which is cought upwards.
except OSError:
raise EOFError()
else:
if is_py3: if is_py3:
return pickle.load(file, encoding='bytes') return pickle.load(file, encoding='bytes')
else: return pickle.load(file)
return pickle.load(file) # Python on Windows don't throw EOF errors for pipes. So reraise them with
# the correct type, which is caught upwards.
except OSError:
if sys.platform == 'win32':
raise EOFError()
raise
def pickle_dump(data, file): def pickle_dump(data, file, protocol):
if is_windows: try:
try: pickle.dump(data, file, protocol)
data = pickle.dumps(data, protocol=_PICKLE_PROTOCOL) # On Python 3.3 flush throws sometimes an error even though the writing
data = binascii.hexlify(data) # operation should be completed.
file.write(data)
file.write(b'\n')
# On Python 3.3 flush throws sometimes an error even if the two file writes
# should done it already before. This could be also computer / speed depending.
file.flush()
# Python on Windows don't throw EPIPE errors for pipes. So reraise them with
# the correct type and error number.
except OSError:
raise IOError(errno.EPIPE, "Broken pipe")
else:
pickle.dump(data, file, protocol=_PICKLE_PROTOCOL)
file.flush() file.flush()
# Python on Windows don't throw EPIPE errors for pipes. So reraise them with
# the correct type and error number.
except OSError:
if sys.platform == 'win32':
raise IOError(errno.EPIPE, "Broken pipe")
raise
# Determine the highest protocol version compatible for a given list of Python
# versions.
def highest_pickle_protocol(python_versions):
protocol = 4
for version in python_versions:
if version[0] == 2:
# The minimum protocol version for the versions of Python that we
# support (2.7 and 3.3+) is 2.
return 2
if version[1] < 4:
protocol = 3
return protocol
try: try:
@@ -513,3 +508,67 @@ class GeneralizedPopen(subprocess.Popen):
             CREATE_NO_WINDOW = 0x08000000
             kwargs['creationflags'] = CREATE_NO_WINDOW
         super(GeneralizedPopen, self).__init__(*args, **kwargs)
+
+
+# shutil.which is not available on Python 2.7.
+def which(cmd, mode=os.F_OK | os.X_OK, path=None):
+    """Given a command, mode, and a PATH string, return the path which
+    conforms to the given mode on the PATH, or None if there is no such
+    file.
+
+    `mode` defaults to os.F_OK | os.X_OK. `path` defaults to the result
+    of os.environ.get("PATH"), or can be overridden with a custom search
+    path.
+
+    """
+    # Check that a given file can be accessed with the correct mode.
+    # Additionally check that `file` is not a directory, as on Windows
+    # directories pass the os.access check.
+    def _access_check(fn, mode):
+        return (os.path.exists(fn) and os.access(fn, mode)
+                and not os.path.isdir(fn))
+
+    # If we're given a path with a directory part, look it up directly rather
+    # than referring to PATH directories. This includes checking relative to the
+    # current directory, e.g. ./script
+    if os.path.dirname(cmd):
+        if _access_check(cmd, mode):
+            return cmd
+        return None
+
+    if path is None:
+        path = os.environ.get("PATH", os.defpath)
+    if not path:
+        return None
+    path = path.split(os.pathsep)
+
+    if sys.platform == "win32":
+        # The current directory takes precedence on Windows.
+        if not os.curdir in path:
+            path.insert(0, os.curdir)
+
+        # PATHEXT is necessary to check on Windows.
+        pathext = os.environ.get("PATHEXT", "").split(os.pathsep)
+        # See if the given file matches any of the expected path extensions.
+        # This will allow us to short circuit when given "python.exe".
+        # If it does match, only test that one, otherwise we have to try
+        # others.
+        if any(cmd.lower().endswith(ext.lower()) for ext in pathext):
+            files = [cmd]
+        else:
+            files = [cmd + ext for ext in pathext]
+    else:
+        # On other platforms you don't have things like PATHEXT to tell you
+        # what file suffixes are executable, so just pass on cmd as-is.
+        files = [cmd]
+
+    seen = set()
+    for dir in path:
+        normdir = os.path.normcase(dir)
+        if not normdir in seen:
+            seen.add(normdir)
+            for thefile in files:
+                name = os.path.join(dir, thefile)
+                if _access_check(name, mode):
+                    return name
+    return None

View File

@@ -342,7 +342,7 @@ class BaseDefinition(object):
         followed = list(self._name.infer())
         if not followed or not hasattr(followed[0], 'py__call__'):
-            raise AttributeError()
+            raise AttributeError('There are no params defined on this.')
         context = followed[0]  # only check the first one.

         return [Definition(self._evaluator, n) for n in get_param_names(context)]
@@ -404,8 +404,9 @@ class Completion(BaseDefinition):
             append = '('

         if self._name.api_type == 'param' and self._stack is not None:
-            node_names = list(self._stack.get_node_names(self._evaluator.grammar._pgen_grammar))
-            if 'trailer' in node_names and 'argument' not in node_names:
+            nonterminals = [stack_node.nonterminal for stack_node in self._stack]
+            if 'trailer' in nonterminals and 'argument' not in nonterminals:
+                # TODO this doesn't work for nested calls.
                 append += '='

         name = self._name.string_name

View File

@@ -1,4 +1,4 @@
-from parso.python import token
+from parso.python.token import PythonTokenTypes
 from parso.python import tree
 from parso.tree import search_ancestor, Leaf
@@ -57,7 +57,8 @@ def get_user_scope(module_context, position):
     def scan(scope):
         for s in scope.children:
             if s.start_pos <= position <= s.end_pos:
-                if isinstance(s, (tree.Scope, tree.Flow)):
+                if isinstance(s, (tree.Scope, tree.Flow)) \
+                        or s.type in ('async_stmt', 'async_funcdef'):
                     return scan(s) or s
                 elif s.type in ('suite', 'decorated'):
                     return scan(s)
@@ -121,11 +122,11 @@ class Completion:
         grammar = self._evaluator.grammar

         try:
-            self.stack = helpers.get_stack_at_position(
+            self.stack = stack = helpers.get_stack_at_position(
                 grammar, self._code_lines, self._module_node, self._position
             )
         except helpers.OnErrorLeaf as e:
-            self.stack = None
+            self.stack = stack = None
             if e.error_leaf.value == '.':
                 # After ErrorLeaf's that are dots, we will not do any
                 # completions since this probably just confuses the user.
@@ -134,10 +135,10 @@ class Completion:
                 return self._global_completions()

-        allowed_keywords, allowed_tokens = \
-            helpers.get_possible_completion_types(grammar._pgen_grammar, self.stack)
+        allowed_transitions = \
+            list(stack._allowed_transition_names_and_token_types())

-        if 'if' in allowed_keywords:
+        if 'if' in allowed_transitions:
             leaf = self._module_node.get_leaf_for_position(self._position, include_prefixes=True)
             previous_leaf = leaf.get_previous_leaf()
@@ -163,50 +164,52 @@ class Completion:
                     # Compare indents
                     if stmt.start_pos[1] == indent:
                         if type_ == 'if_stmt':
-                            allowed_keywords += ['elif', 'else']
+                            allowed_transitions += ['elif', 'else']
                         elif type_ == 'try_stmt':
-                            allowed_keywords += ['except', 'finally', 'else']
+                            allowed_transitions += ['except', 'finally', 'else']
                         elif type_ == 'for_stmt':
-                            allowed_keywords.append('else')
+                            allowed_transitions.append('else')

-        completion_names = list(self._get_keyword_completion_names(allowed_keywords))
+        completion_names = list(self._get_keyword_completion_names(allowed_transitions))

-        if token.NAME in allowed_tokens or token.INDENT in allowed_tokens:
+        if any(t in allowed_transitions for t in (PythonTokenTypes.NAME,
+                                                  PythonTokenTypes.INDENT)):
             # This means that we actually have to do type inference.

-            symbol_names = list(self.stack.get_node_names(grammar._pgen_grammar))
+            nonterminals = [stack_node.nonterminal for stack_node in stack]

-            nodes = list(self.stack.get_nodes())
+            nodes = [node for stack_node in stack for node in stack_node.nodes]

             if nodes and nodes[-1] in ('as', 'def', 'class'):
                 # No completions for ``with x as foo`` and ``import x as foo``.
                 # Also true for defining names as a class or function.
                 return list(self._get_class_context_completions(is_function=True))
-            elif "import_stmt" in symbol_names:
-                level, names = self._parse_dotted_names(nodes, "import_from" in symbol_names)
+            elif "import_stmt" in nonterminals:
+                level, names = self._parse_dotted_names(nodes, "import_from" in nonterminals)

-                only_modules = not ("import_from" in symbol_names and 'import' in nodes)
+                only_modules = not ("import_from" in nonterminals and 'import' in nodes)
                 completion_names += self._get_importer_names(
                     names,
                     level,
                     only_modules=only_modules,
                 )
-            elif symbol_names[-1] in ('trailer', 'dotted_name') and nodes[-1] == '.':
+            elif nonterminals[-1] in ('trailer', 'dotted_name') and nodes[-1] == '.':
                 dot = self._module_node.get_leaf_for_position(self._position)
                 completion_names += self._trailer_completions(dot.get_previous_leaf())
             else:
                 completion_names += self._global_completions()
                 completion_names += self._get_class_context_completions(is_function=False)

-            if 'trailer' in symbol_names:
+            if 'trailer' in nonterminals:
                 call_signatures = self._call_signatures_method()
                 completion_names += get_call_signature_param_names(call_signatures)

         return completion_names

-    def _get_keyword_completion_names(self, keywords_):
-        for k in keywords_:
-            yield keywords.KeywordName(self._evaluator, k)
+    def _get_keyword_completion_names(self, allowed_transitions):
+        for k in allowed_transitions:
+            if isinstance(k, str) and k.isalpha():
+                yield keywords.KeywordName(self._evaluator, k)

     def _global_completions(self):
         context = get_user_scope(self._module_context, self._position)

View File

@@ -9,11 +9,8 @@ import hashlib
 import filecmp
 from subprocess import PIPE
 from collections import namedtuple
-# When dropping Python 2.7 support we should consider switching to
-# `shutil.which`.
-from distutils.spawn import find_executable

-from jedi._compatibility import GeneralizedPopen
+from jedi._compatibility import GeneralizedPopen, which
 from jedi.cache import memoize_method, time_cache
 from jedi.evaluate.compiled.subprocess import get_subprocess, \
     EvaluatorSameProcess, EvaluatorSubprocess
@@ -77,9 +74,12 @@ class Environment(_BaseEnvironment):
             stdout, stderr = process.communicate()
             retcode = process.poll()
             if retcode:
-                raise InvalidPythonEnvironment()
-        except OSError:
-            raise InvalidPythonEnvironment()
+                raise InvalidPythonEnvironment(
+                    "Exited with %d (stdout=%r, stderr=%r)" % (
+                        retcode, stdout, stderr))
+        except OSError as exc:
+            raise InvalidPythonEnvironment(
+                "Could not get version information: %r" % exc)

         # Until Python 3.4 wthe version string is part of stderr, after that
         # stdout.
@@ -98,7 +98,7 @@ class Environment(_BaseEnvironment):
         return EvaluatorSubprocess(evaluator, self._get_subprocess())

     def _get_subprocess(self):
-        return get_subprocess(self.executable)
+        return get_subprocess(self.executable, self.version_info)

     @memoize_method
     def get_sys_path(self):
@@ -273,7 +273,7 @@ def get_system_environment(version):
     :raises: :exc:`.InvalidPythonEnvironment`
     :returns: :class:`Environment`
     """
-    exe = find_executable('python' + version)
+    exe = which('python' + version)
     if exe:
         if exe == sys.executable:
             return SameEnvironment()
@@ -287,11 +287,15 @@ def get_system_environment(version):
 def create_environment(path, safe=True):
     """
-    Make it possible to create an environment by hand.
+    Make it possible to manually create an environment by specifying a
+    Virtualenv path or an executable path.

     :raises: :exc:`.InvalidPythonEnvironment`
     :returns: :class:`Environment`
     """
+    if os.path.isfile(path):
+        _assert_safe(path, safe)
+        return Environment(_get_python_prefix(path), path)
     return Environment(path, _get_executable_path(path, safe=safe))
@@ -307,8 +311,7 @@ def _get_executable_path(path, safe=True):
     if not os.path.exists(python):
         raise InvalidPythonEnvironment("%s seems to be missing." % python)

-    if safe and not _is_safe(python):
-        raise InvalidPythonEnvironment("The python binary is potentially unsafe.")
+    _assert_safe(python, safe)
     return python
@@ -339,6 +342,12 @@ def _get_executables_from_windows_registry(version):
         pass


+def _assert_safe(executable_path, safe):
+    if safe and not _is_safe(executable_path):
+        raise InvalidPythonEnvironment(
+            "The python binary is potentially unsafe.")
+
+
 def _is_safe(executable_path):
     # Resolve sym links. A venv typically is a symlink to a known Python
     # binary. Only virtualenvs copy symlinks around.

View File

@@ -12,7 +12,6 @@ from jedi._compatibility import u
 from jedi.evaluate.syntax_tree import eval_atom
 from jedi.evaluate.helpers import evaluate_call_of_leaf
 from jedi.evaluate.compiled import get_string_context_set
-from jedi.evaluate.base_context import ContextSet
 from jedi.cache import call_signature_time_cache
@@ -127,61 +126,10 @@ def get_stack_at_position(grammar, code_lines, module_node, pos):
     try:
         p.parse(tokens=tokenize_without_endmarker(code))
     except EndMarkerReached:
-        return Stack(p.pgen_parser.stack)
+        return p.stack
     raise SystemError("This really shouldn't happen. There's a bug in Jedi.")


-class Stack(list):
-    def get_node_names(self, grammar):
-        for dfa, state, (node_number, nodes) in self:
-            yield grammar.number2symbol[node_number]
-
-    def get_nodes(self):
-        for dfa, state, (node_number, nodes) in self:
-            for node in nodes:
-                yield node
-
-
-def get_possible_completion_types(pgen_grammar, stack):
-    def add_results(label_index):
-        try:
-            grammar_labels.append(inversed_tokens[label_index])
-        except KeyError:
-            try:
-                keywords.append(inversed_keywords[label_index])
-            except KeyError:
-                t, v = pgen_grammar.labels[label_index]
-                assert t >= 256
-                # See if it's a symbol and if we're in its first set
-                inversed_keywords
-                itsdfa = pgen_grammar.dfas[t]
-                itsstates, itsfirst = itsdfa
-                for first_label_index in itsfirst.keys():
-                    add_results(first_label_index)
-
-    inversed_keywords = dict((v, k) for k, v in pgen_grammar.keywords.items())
-    inversed_tokens = dict((v, k) for k, v in pgen_grammar.tokens.items())
-
-    keywords = []
-    grammar_labels = []
-
-    def scan_stack(index):
-        dfa, state, node = stack[index]
-        states, first = dfa
-        arcs = states[state]
-
-        for label_index, new_state in arcs:
-            if label_index == 0:
-                # An accepting state, check the stack below.
-                scan_stack(index - 1)
-            else:
-                add_results(label_index)
-
-    scan_stack(-1)
-
-    return keywords, grammar_labels
-
-
 def evaluate_goto_definition(evaluator, context, leaf):
     if leaf.type == 'name':
         # In case of a name we can just use goto_definition which does all the

View File

@@ -1,4 +1,5 @@
 import os
+from contextlib import contextmanager


 def traverse_parents(path, include_current=False):
@@ -10,3 +11,16 @@ def traverse_parents(path, include_current=False):
         yield path
         previous = path
         path = os.path.dirname(path)
+
+
+@contextmanager
+def monkeypatch(obj, attribute_name, new_value):
+    """
+    Like pytest's monkeypatch, but as a context manager.
+    """
+    old_value = getattr(obj, attribute_name)
+    try:
+        setattr(obj, attribute_name, new_value)
+        yield
+    finally:
+        setattr(obj, attribute_name, old_value)
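
A short usage sketch of the helper added above (the patched setting is only an example): the previous value is restored when the block exits, even if an exception was raised.

    from jedi import settings
    from jedi.common.utils import monkeypatch

    with monkeypatch(settings, 'case_insensitive_completion', False):
        pass  # code in here sees the temporary value
    # here the original value of the setting is back
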

View File

@@ -12,6 +12,8 @@ from jedi import debug
 from jedi._compatibility import Python3Method, zip_longest, unicode
 from jedi.parser_utils import clean_scope_docstring, get_doc_with_call_signature
 from jedi.common import BaseContextSet, BaseContext
+from jedi.evaluate.helpers import EvaluatorIndexError, EvaluatorTypeError, \
+    EvaluatorKeyError


 class Context(BaseContext):
@@ -128,11 +130,15 @@ class Context(BaseContext):
             else:
                 try:
                     result |= getitem(index)
-                except IndexError:
+                except EvaluatorIndexError:
                     result |= iterate_contexts(ContextSet(self))
-                except KeyError:
+                except EvaluatorKeyError:
                     # Must be a dict. Lists don't raise KeyErrors.
                     result |= self.dict_values()
+                except EvaluatorTypeError:
+                    # The type is wrong and therefore it makes no sense to do
+                    # anything anymore.
+                    result = NO_CONTEXTS
         return result

     def eval_node(self, node):

View File

@@ -13,6 +13,7 @@ from jedi.evaluate.base_context import Context, ContextSet
 from jedi.evaluate.lazy_context import LazyKnownContext
 from jedi.evaluate.compiled.access import _sentinel
 from jedi.evaluate.cache import evaluator_function_cache
+from jedi.evaluate.helpers import reraise_as_evaluator
 from . import fake
@@ -145,7 +146,8 @@ class CompiledObject(Context):

     @CheckAttribute
     def py__getitem__(self, index):
-        access = self.access_handle.py__getitem__(index)
+        with reraise_as_evaluator(IndexError, KeyError, TypeError):
+            access = self.access_handle.py__getitem__(index)
         if access is None:
             return ContextSet()

View File

@@ -166,11 +166,10 @@ def _find_syntax_node_name(evaluator, access_handle):
             return None  # It's too hard to find lambdas.

     # Doesn't always work (e.g. os.stat_result)
-    try:
-        names = module_node.get_used_names()[name_str]
-    except KeyError:
-        return None
+    names = module_node.get_used_names().get(name_str, [])
     names = [n for n in names if n.is_definition()]
+    if not names:
+        return None

     try:
         code = python_object.__code__

View File

@@ -17,7 +17,7 @@ import traceback
 from functools import partial

 from jedi._compatibility import queue, is_py3, force_unicode, \
-    pickle_dump, pickle_load, GeneralizedPopen
+    pickle_dump, pickle_load, highest_pickle_protocol, GeneralizedPopen
 from jedi.cache import memoize_method
 from jedi.evaluate.compiled.subprocess import functions
 from jedi.evaluate.compiled.access import DirectObjectAccess, AccessPath, \
@@ -29,11 +29,12 @@ _subprocesses = {}
 _MAIN_PATH = os.path.join(os.path.dirname(__file__), '__main__.py')


-def get_subprocess(executable):
+def get_subprocess(executable, version):
     try:
         return _subprocesses[executable]
     except KeyError:
-        sub = _subprocesses[executable] = _CompiledSubprocess(executable)
+        sub = _subprocesses[executable] = _CompiledSubprocess(executable,
+                                                              version)
         return sub
@@ -125,9 +126,11 @@ class EvaluatorSubprocess(_EvaluatorProcess):
 class _CompiledSubprocess(object):
     _crashed = False

-    def __init__(self, executable):
+    def __init__(self, executable, version):
         self._executable = executable
         self._evaluator_deletion_queue = queue.deque()
+        self._pickle_protocol = highest_pickle_protocol([sys.version_info,
+                                                         version])

     @property
     @memoize_method
@@ -136,12 +139,17 @@ class _CompiledSubprocess(object):
         args = (
             self._executable,
             _MAIN_PATH,
-            os.path.dirname(os.path.dirname(parso_path))
+            os.path.dirname(os.path.dirname(parso_path)),
+            str(self._pickle_protocol)
         )
         return GeneralizedPopen(
             args,
             stdin=subprocess.PIPE,
             stdout=subprocess.PIPE,
+            stderr=subprocess.PIPE,
+            # Use system default buffering on Python 2 to improve performance
+            # (this is already the case on Python 3).
+            bufsize=-1
         )

     def run(self, evaluator, function, args=(), kwargs={}):
@@ -186,7 +194,7 @@ class _CompiledSubprocess(object):
         data = evaluator_id, function, args, kwargs
         try:
-            pickle_dump(data, self._process.stdin)
+            pickle_dump(data, self._process.stdin, self._pickle_protocol)
         except (socket.error, IOError) as e:
             # Once Python2 will be removed we can just use `BrokenPipeError`.
             # Also, somehow in windows it returns EINVAL instead of EPIPE if
@@ -200,9 +208,18 @@ class _CompiledSubprocess(object):
         try:
             is_exception, traceback, result = pickle_load(self._process.stdout)
-        except EOFError:
+        except EOFError as eof_error:
+            try:
+                stderr = self._process.stderr.read()
+            except Exception as exc:
+                stderr = '<empty/not available (%r)>' % exc
             self.kill()
-            raise InternalError("The subprocess %s has crashed." % self._executable)
+            raise InternalError(
+                "The subprocess %s has crashed (%r, stderr=%s)." % (
+                    self._executable,
+                    eof_error,
+                    stderr,
+                ))

         if is_exception:
             # Replace the attribute error message with a the traceback. It's
@@ -223,11 +240,12 @@ class _CompiledSubprocess(object):
 class Listener(object):
-    def __init__(self):
+    def __init__(self, pickle_protocol):
         self._evaluators = {}
         # TODO refactor so we don't need to process anymore just handle
         # controlling.
         self._process = _EvaluatorProcess(Listener)
+        self._pickle_protocol = pickle_protocol

     def _get_evaluator(self, function, evaluator_id):
         from jedi.evaluate import Evaluator
@@ -275,6 +293,12 @@ class Listener(object):
         if sys.version_info[0] > 2:
             stdout = stdout.buffer
             stdin = stdin.buffer
+        # Python 2 opens streams in text mode on Windows. Set stdout and stdin
+        # to binary mode.
+        elif sys.platform == 'win32':
+            import msvcrt
+            msvcrt.setmode(stdout.fileno(), os.O_BINARY)
+            msvcrt.setmode(stdin.fileno(), os.O_BINARY)

         while True:
             try:
@@ -288,7 +312,7 @@ class Listener(object):
             except Exception as e:
                 result = True, traceback.format_exc(), e

-            pickle_dump(result, file=stdout)
+            pickle_dump(result, stdout, self._pickle_protocol)


 class AccessHandle(object):

View File

@@ -45,5 +45,7 @@ else:
     load('jedi')
     from jedi.evaluate.compiled import subprocess  # NOQA

+    # Retrieve the pickle protocol.
+    pickle_protocol = int(sys.argv[2])
     # And finally start the client.
-    subprocess.Listener().listen()
+    subprocess.Listener(pickle_protocol).listen()

View File

@@ -69,7 +69,7 @@ def get_module_info(evaluator, sys_path=None, full_name=None, **kwargs):

 def list_module_names(evaluator, search_path):
     return [
-        name
+        force_unicode(name)
         for module_loader, name, is_pkg in iter_modules(search_path)
     ]

View File

@@ -30,7 +30,8 @@ from jedi.evaluate import recursion
 from jedi.evaluate.lazy_context import LazyKnownContext, LazyKnownContexts, \
     LazyTreeContext
 from jedi.evaluate.helpers import get_int_or_none, is_string, \
-    predefine_names, evaluate_call_of_leaf
+    predefine_names, evaluate_call_of_leaf, reraise_as_evaluator, \
+    EvaluatorKeyError
 from jedi.evaluate.utils import safe_property
 from jedi.evaluate.utils import to_list
 from jedi.evaluate.cache import evaluator_method_cache
@@ -219,7 +220,9 @@ class ListComprehension(ComprehensionMixin, Sequence):
             return ContextSet(self)

         all_types = list(self.py__iter__())
-        return all_types[index].infer()
+        with reraise_as_evaluator(IndexError, TypeError):
+            lazy_context = all_types[index]
+        return lazy_context.infer()


 class SetComprehension(ComprehensionMixin, Sequence):
@@ -254,14 +257,19 @@ class DictComprehension(ComprehensionMixin, Sequence):
     @publish_method('items')
     def _imitate_items(self):
-        items = ContextSet.from_iterable(
-            FakeSequence(
-                self.evaluator, u'tuple'
-                (LazyKnownContexts(keys), LazyKnownContexts(values))
-            ) for keys, values in self._iterate()
-        )
+        lazy_contexts = [
+            LazyKnownContext(
+                FakeSequence(
+                    self.evaluator,
+                    u'tuple',
+                    [LazyKnownContexts(key),
+                     LazyKnownContexts(value)]
+                )
+            )
+            for key, value in self._iterate()
+        ]

-        return create_evaluated_sequence_set(self.evaluator, items, sequence_type=u'list')
+        return ContextSet(FakeSequence(self.evaluator, u'list', lazy_contexts))


 class GeneratorComprehension(ComprehensionMixin, GeneratorBase):
@@ -293,13 +301,15 @@ class SequenceLiteralContext(Sequence):
                 if isinstance(k, compiled.CompiledObject) \
                         and k.execute_operation(compiled_obj_index, u'==').get_safe_value():
                     return self._defining_context.eval_node(value)
-            raise KeyError('No key found in dictionary %s.' % self)
+            raise EvaluatorKeyError('No key found in dictionary %s.' % self)

         # Can raise an IndexError
         if isinstance(index, slice):
             return ContextSet(self)
         else:
-            return self._defining_context.eval_node(self._items()[index])
+            with reraise_as_evaluator(TypeError, KeyError, IndexError):
+                node = self._items()[index]
+            return self._defining_context.eval_node(node)

     def py__iter__(self):
         """
@@ -413,7 +423,9 @@ class FakeSequence(_FakeArray):
         self._lazy_context_list = lazy_context_list

     def py__getitem__(self, index):
-        return self._lazy_context_list[index].infer()
+        with reraise_as_evaluator(IndexError, TypeError):
+            lazy_context = self._lazy_context_list[index]
+        return lazy_context.infer()

     def py__iter__(self):
         return self._lazy_context_list
@@ -450,7 +462,9 @@ class FakeDict(_FakeArray):
             except KeyError:
                 pass

-        return self._dct[index].infer()
+        with reraise_as_evaluator(KeyError):
+            lazy_context = self._dct[index]
+        return lazy_context.infer()

     @publish_method('values')
     def _values(self):

View File

@@ -186,13 +186,17 @@ class ModuleContext(TreeContext):
         Lists modules in the directory of this module (if this module is a
         package).
         """
-        path = self._path
         names = {}
-        if path is not None and path.endswith(os.path.sep + '__init__.py'):
-            mods = iter_modules([os.path.dirname(path)])
-            for module_loader, name, is_pkg in mods:
-                # It's obviously a relative import to the current module.
-                names[name] = SubModuleName(self, name)
+        try:
+            method = self.py__path__
+        except AttributeError:
+            pass
+        else:
+            for path in method():
+                mods = iter_modules([path])
+                for module_loader, name, is_pkg in mods:
+                    # It's obviously a relative import to the current module.
+                    names[name] = SubModuleName(self, name)

         # TODO add something like this in the future, its cleaner than the
         # import hacks.

View File

@@ -3,25 +3,19 @@ from itertools import chain
 from jedi.evaluate.cache import evaluator_method_cache
 from jedi.evaluate import imports
-from jedi.evaluate.filters import DictFilter, AbstractNameDefinition
+from jedi.evaluate.filters import DictFilter, AbstractNameDefinition, ContextNameMixin
 from jedi.evaluate.base_context import TreeContext, ContextSet


-class ImplicitNSName(AbstractNameDefinition):
+class ImplicitNSName(ContextNameMixin, AbstractNameDefinition):
     """
     Accessing names for implicit namespace packages should infer to nothing.
     This object will prevent Jedi from raising exceptions
     """
     def __init__(self, implicit_ns_context, string_name):
-        self.parent_context = implicit_ns_context
+        self._context = implicit_ns_context
         self.string_name = string_name

-    def infer(self):
-        return ContextSet(self.parent_context)
-
-    def get_root_context(self):
-        return self.parent_context
-

 class ImplicitNamespaceContext(TreeContext):
     """
@@ -56,9 +50,11 @@ class ImplicitNamespaceContext(TreeContext):
         """
         return self._fullname

-    @property
     def py__path__(self):
-        return lambda: [self.paths]
+        return [self.paths]
+
+    def py__name__(self):
+        return self._fullname

     @evaluator_method_cache()
     def _sub_modules_dict(self):

View File

@@ -47,13 +47,14 @@ _numpy_doc_string_cache = None

 def _get_numpy_doc_string_cls():
     global _numpy_doc_string_cache
+    if isinstance(_numpy_doc_string_cache, ImportError):
+        raise _numpy_doc_string_cache
     try:
         from numpydoc.docscrape import NumpyDocString
         _numpy_doc_string_cache = NumpyDocString
     except ImportError as e:
         _numpy_doc_string_cache = e
-    if isinstance(_numpy_doc_string_cache, ImportError):
-        raise _numpy_doc_string_cache
+        raise
     return _numpy_doc_string_cache
@@ -214,6 +215,9 @@ def _evaluate_for_statement_string(module_context, string):
     except (AttributeError, IndexError):
         return []

+    if stmt.type not in ('name', 'atom', 'atom_expr'):
+        return []
+
     from jedi.evaluate.context import FunctionContext
     function_context = FunctionContext(
         module_context.evaluator,

View File

@@ -60,7 +60,8 @@ def reachability_check(context, context_scope, node, origin_scope=None):
             if not branch_matches and origin_keyword == 'else' \
                     and node_keyword == 'except':
                 return UNREACHABLE
-            break
+            if branch_matches:
+                break

         # Direct parents get resolved, we filter scopes that are separate
         # branches. This makes sense for autocompletion and static analysis.

View File

@@ -9,7 +9,6 @@ from parso.python import tree

 from jedi._compatibility import unicode
 from jedi.parser_utils import get_parent_scope
-from jedi.evaluate.compiled import CompiledObject


 def is_stdlib_path(path):
@@ -184,6 +183,7 @@ def predefine_names(context, flow_scope, dct):

 def is_compiled(context):
+    from jedi.evaluate.compiled import CompiledObject
     return isinstance(context, CompiledObject)
@@ -212,3 +212,24 @@ def get_int_or_none(context):

 def is_number(context):
     return _get_safe_value_or_none(context, (int, float)) is not None
+
+
+class EvaluatorTypeError(Exception):
+    pass
+
+
+class EvaluatorIndexError(Exception):
+    pass
+
+
+class EvaluatorKeyError(Exception):
+    pass
+
+
+@contextmanager
+def reraise_as_evaluator(*exception_classes):
+    try:
+        yield
+    except exception_classes as e:
+        new_exc_cls = globals()['Evaluator' + e.__class__.__name__]
+        raise new_exc_cls(e)

View File

@@ -17,7 +17,8 @@ from parso.python import tree
 from parso.tree import search_ancestor
 from parso import python_bytes_to_unicode

-from jedi._compatibility import unicode, ImplicitNSInfo, force_unicode
+from jedi._compatibility import (FileNotFoundError, ImplicitNSInfo,
+                                 force_unicode, unicode)
 from jedi import debug
 from jedi import settings
 from jedi.parser_utils import get_cached_code_lines
@@ -533,7 +534,11 @@ def get_modules_containing_name(evaluator, modules, name):
             yield path

     def check_fs(path):
-        with open(path, 'rb') as f:
+        try:
+            f = open(path, 'rb')
+        except FileNotFoundError:
+            return
+        with f:
             code = python_bytes_to_unicode(f.read(), errors='replace')
             if name in code:
                 e_sys_path = evaluator.get_sys_path()

View File

@@ -81,6 +81,9 @@ def factory(typing_name, indextypes):
         class Dict(MutableMapping, dict):
             pass

+        class DefaultDict(MutableMapping, dict):
+            pass
+
         dct = {
             "Sequence": Sequence,
             "MutableSequence": MutableSequence,
@@ -96,5 +99,6 @@ def factory(typing_name, indextypes):
             "ItemsView": ItemsView,
             "ValuesView": ValuesView,
             "Dict": Dict,
+            "DefaultDict": DefaultDict,
         }
         return dct[typing_name]

View File

@@ -1,4 +1,5 @@
 from jedi.evaluate.base_context import ContextSet, NO_CONTEXTS
+from jedi.common.utils import monkeypatch


 class AbstractLazyContext(object):
     def __init__(self, data):
@@ -40,12 +41,8 @@ class LazyTreeContext(AbstractLazyContext):
         self._predefined_names = dict(context.predefined_names)

     def infer(self):
-        old, self._context.predefined_names = \
-            self._context.predefined_names, self._predefined_names
-        try:
+        with monkeypatch(self._context, 'predefined_names', self._predefined_names):
             return self._context.eval_node(self.data)
-        finally:
-            self._context.predefined_names = old


 def get_merged_lazy_context(lazy_contexts):

View File

@@ -68,14 +68,8 @@ def eval_node(context, element):
     debug.dbg('eval_node %s@%s', element, element.start_pos)

     evaluator = context.evaluator
     typ = element.type
-    if typ in ('name', 'number', 'string', 'atom', 'strings'):
+    if typ in ('name', 'number', 'string', 'atom', 'strings', 'keyword'):
         return eval_atom(context, element)
-    elif typ == 'keyword':
-        # For False/True/None
-        if element.value in ('False', 'True', 'None'):
-            return ContextSet(compiled.builtin_from_name(evaluator, element.value))
-        # else: print e.g. could be evaluated like this in Python 2.7
-        return NO_CONTEXTS
     elif typ == 'lambdef':
         return ContextSet(FunctionContext(evaluator, context, element))
     elif typ == 'expr_stmt':
@@ -207,6 +201,18 @@ def eval_atom(context, atom):
             position=stmt.start_pos,
             search_global=True
         )
+    elif atom.type == 'keyword':
+        # For False/True/None
+        if atom.value in ('False', 'True', 'None'):
+            return ContextSet(compiled.builtin_from_name(context.evaluator, atom.value))
+        elif atom.value == 'print':
+            # print e.g. could be evaluated like this in Python 2.7
+            return NO_CONTEXTS
+        elif atom.value == 'yield':
+            # Contrary to yield from, yield can just appear alone to return a
+            # value when used with `.send()`.
+            return NO_CONTEXTS
+        assert False, 'Cannot evaluate the keyword %s' % atom
+
     elif isinstance(atom, tree.Literal):
         string = context.evaluator.compiled_subprocess.safe_literal_eval(atom.value)

View File

@@ -88,10 +88,12 @@ def get_flow_branch_keyword(flow_node, node):
             keyword = first_leaf
     return 0


 def get_statement_of_position(node, pos):
     for c in node.children:
         if c.start_pos <= pos <= c.end_pos:
-            if c.type not in ('decorated', 'simple_stmt', 'suite') \
+            if c.type not in ('decorated', 'simple_stmt', 'suite',
+                              'async_stmt', 'async_funcdef') \
                     and not isinstance(c, (tree.Flow, tree.ClassOrFunc)):
                 return c
             else:
@@ -156,7 +158,11 @@ def get_call_signature(funcdef, width=72, call_string=None):
         p = '(' + ''.join(param.get_code() for param in funcdef.get_params()).strip() + ')'
     else:
         p = funcdef.children[2].get_code()
-    code = call_string + p
+    if funcdef.annotation:
+        rtype = " ->" + funcdef.annotation.get_code()
+    else:
+        rtype = ""
+    code = call_string + p + rtype

     return '\n'.join(textwrap.wrap(code, width))

View File

@@ -1 +1 @@
-parso>=0.2.0
+parso>=0.3.0

View File

@@ -3,7 +3,6 @@
 from setuptools import setup, find_packages

 import ast
-import sys

 __AUTHOR__ = 'David Halter'
 __AUTHOR_EMAIL__ = 'davidhalter88@gmail.com'
@@ -11,10 +10,7 @@ __AUTHOR_EMAIL__ = 'davidhalter88@gmail.com'
 # Get the version from within jedi. It's defined in exactly one place now.
 with open('jedi/__init__.py') as f:
     tree = ast.parse(f.read())
-    if sys.version_info > (3, 7):
-        version = tree.body[0].value.s
-    else:
-        version = tree.body[1].value.s
+    version = tree.body[1].value.s

 readme = open('README.rst').read() + '\n\n' + open('CHANGELOG.rst').read()
 with open('requirements.txt') as f:

View File

@@ -29,6 +29,9 @@ b = [6,7]
 #? int()
 b[8-7]

+# Something unreasonable:
+#?
+b['']
+
 # -----------------
 # Slices

View File

@@ -73,3 +73,12 @@ async def wrapper():
     asgen().__ane
     #? []
     asgen().mro
+
+# Normal completion (#1092)
+normal_var1 = 42
+
+async def foo():
+    normal_var2 = False
+    #? ['normal_var1', 'normal_var2']
+    normal_var

View File

@@ -22,6 +22,9 @@ a(0):.
 0x0
 #? ['and', 'or', 'if', 'is', 'in', 'not']
 1j
+x = None()
+#?
+x

 # -----------------
 # if/else/elif

View File

@@ -30,13 +30,17 @@ def sphinxy(a, b, c, d, x):
 sphinxy()

 # wrong declarations
-def sphinxy2(a, b, x):
+def sphinxy2(a, b, x, y, z):
     """
     :param a: Forgot type declaration
     :type a:
     :param b: Just something
     :type b: ``
     :param x: Just something without type
+    :param y: A function
+    :type y: def l(): pass
+    :param z: A keyword
+    :type z: return
     :rtype:
     """
@@ -45,6 +49,10 @@ def sphinxy2(a, b, x):
     b
     #?
     x
+    #?
+    y
+    #?
+    z
     #?
 sphinxy2()

View File

@@ -29,6 +29,10 @@ finally:
     x
 x = tuple

+if False:
+    with open("") as defined_in_false:
+        #? ['flush']
+        defined_in_false.flu
+
 # -----------------
 # Return checks

View File

@@ -208,6 +208,14 @@ def x():
 #? int()
 next(x())

+# -----------------
+# statements
+# -----------------
+def x():
+    foo = yield
+    #?
+    foo
+
 # -----------------
 # yield from
 # -----------------

View File

@@ -129,11 +129,12 @@ class Key:
 class Value:
     pass

-def mapping(p, q, d, r, s, t):
+def mapping(p, q, d, dd, r, s, t):
     """
     :type p: typing.Mapping[Key, Value]
     :type q: typing.MutableMapping[Key, Value]
     :type d: typing.Dict[Key, Value]
+    :type dd: typing.DefaultDict[Key, Value]
     :type r: typing.KeysView[Key]
     :type s: typing.ValuesView[Value]
     :type t: typing.ItemsView[Key, Value]
@@ -144,6 +145,8 @@ def mapping(p, q, d, r, s, t):
     q.setd
     #? ["setdefault"]
     d.setd
+    #? ["setdefault"]
+    dd.setd
     #? Value()
     p[1]
     for key in p:
@@ -241,6 +244,36 @@ for value in x.values():
     #? int()
     value

 # python >= 3.2
+class TestDefaultDict(typing.DefaultDict[str, int]):
+    def setdud(self):
+        pass
+
+def testdict(x):
+    """
+    :type x: TestDefaultDict
+    """
+    #? ["setdud", "setdefault"]
+    x.setd
+    for key in x.keys():
+        #? str()
+        key
+    for value in x.values():
+        #? int()
+        value
+
+x = TestDefaultDict()
+#? ["setdud", "setdefault"]
+x.setd
+for key in x.keys():
+    #? str()
+    key
+for value in x.values():
+    #? int()
+    value
+
+# python >= 3.2
 """
 docstrings have some auto-import, annotations can use all of Python's
 import logic

View File

@@ -91,6 +91,12 @@ d.items()[0][0]
 #? int()
 d.items()[0][1]

+(a, b), = {a:1 for a in [1.0]}.items()
+#? float()
+a
+#? int()
+b
+
 # -----------------
 # tuples
 # -----------------
# ----------------- # -----------------

View File

@@ -2,6 +2,8 @@ import os
 import sys
 from textwrap import dedent

+import pytest
+

 def test_in_whitespace(Script):
     code = dedent('''
@@ -118,3 +120,24 @@ def test_generator(Script):
 def test_in_comment(Script):
     assert Script(" # Comment").completions()
     assert Script("max_attr_value = int(2) # Cast to int for spe").completions()
+
+
+def test_async(Script, environment):
+    if environment.version_info < (3, 5):
+        pytest.skip()
+
+    code = dedent('''
+    foo = 3
+    async def x():
+        hey = 3
+        ho'''
+    )
+    print(code)
+    comps = Script(code, column=4).completions()
+    names = [c.name for c in comps]
+    assert 'foo' in names
+    assert 'hey' in names
+
+
+def test_with_stmt_error_recovery(Script):
+    assert Script('with open('') as foo: foo.\na', line=1).completions()

View File

@@ -1,4 +1,5 @@
 import os
+import sys
 from contextlib import contextmanager

 import pytest
@@ -6,7 +7,8 @@ import pytest
 import jedi
 from jedi._compatibility import py_version
 from jedi.api.environment import get_default_environment, find_virtualenvs, \
-    InvalidPythonEnvironment, find_system_environments, get_system_environment
+    InvalidPythonEnvironment, find_system_environments, \
+    get_system_environment, create_environment


 def test_sys_path():
@@ -111,3 +113,13 @@ def test_working_venv(venv_path):
 def test_scanning_venvs(venv_path):
     parent_dir = os.path.dirname(venv_path)
     assert any(venv.path == venv_path for venv in find_virtualenvs([parent_dir]))
+
+
+def test_create_environment_venv_path(venv_path):
+    environment = create_environment(venv_path)
+    assert environment.path == venv_path
+
+
+def test_create_environment_executable():
+    environment = create_environment(sys.executable)
+    assert environment.executable == sys.executable

View File

@@ -341,3 +341,18 @@ def test_dir_magic_method():
     foo = [c for c in completions if c.name == 'foo'][0]
     assert foo._goto_definitions() == []
+
+
+def test_name_not_findable():
+    class X():
+        if 0:
+            NOT_FINDABLE
+
+        def hidden(self):
+            return
+
+        hidden.__name__ = 'NOT_FINDABLE'
+
+    setattr(X, 'NOT_FINDABLE', X.hidden)
+
+    assert jedi.Interpreter("X.NOT_FINDA", [locals()]).completions()

View File

@@ -0,0 +1,26 @@
+from collections import namedtuple
+
+from jedi._compatibility import highest_pickle_protocol
+
+
+def test_highest_pickle_protocol():
+    v = namedtuple('version', 'major, minor')
+    assert highest_pickle_protocol([v(2, 7), v(2, 7)]) == 2
+    assert highest_pickle_protocol([v(2, 7), v(3, 3)]) == 2
+    assert highest_pickle_protocol([v(2, 7), v(3, 4)]) == 2
+    assert highest_pickle_protocol([v(2, 7), v(3, 5)]) == 2
+    assert highest_pickle_protocol([v(2, 7), v(3, 6)]) == 2
+    assert highest_pickle_protocol([v(3, 3), v(2, 7)]) == 2
+    assert highest_pickle_protocol([v(3, 3), v(3, 3)]) == 3
+    assert highest_pickle_protocol([v(3, 3), v(3, 4)]) == 3
+    assert highest_pickle_protocol([v(3, 3), v(3, 5)]) == 3
+    assert highest_pickle_protocol([v(3, 3), v(3, 6)]) == 3
+    assert highest_pickle_protocol([v(3, 4), v(2, 7)]) == 2
+    assert highest_pickle_protocol([v(3, 4), v(3, 3)]) == 3
+    assert highest_pickle_protocol([v(3, 4), v(3, 4)]) == 4
+    assert highest_pickle_protocol([v(3, 4), v(3, 5)]) == 4
+    assert highest_pickle_protocol([v(3, 4), v(3, 6)]) == 4
+    assert highest_pickle_protocol([v(3, 6), v(2, 7)]) == 2
+    assert highest_pickle_protocol([v(3, 6), v(3, 3)]) == 3
+    assert highest_pickle_protocol([v(3, 6), v(3, 4)]) == 4
+    assert highest_pickle_protocol([v(3, 6), v(3, 5)]) == 4
+    assert highest_pickle_protocol([v(3, 6), v(3, 6)]) == 4
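
Quick usage sketch (assuming the jedi._compatibility from this changeset): the host interpreter and the target environment agree on the highest pickle protocol both sides understand, e.g. a Python 3 host driving a Python 2.7 subprocess.

    import sys

    from jedi._compatibility import highest_pickle_protocol

    protocol = highest_pickle_protocol([sys.version_info, (2, 7)])
    assert protocol == 2
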

View File

@@ -93,3 +93,13 @@ def test_namespace_package_in_multiple_directories_goto_definition(Script):
     script = Script(sys_path=sys_path, source=CODE)
     result = script.goto_definitions()
     assert len(result) == 1
+
+
+def test_namespace_name_autocompletion_full_name(Script):
+    CODE = 'from pk'
+    sys_path = [join(dirname(__file__), d)
+                for d in ['implicit_namespace_package/ns1', 'implicit_namespace_package/ns2']]
+
+    script = Script(sys_path=sys_path, source=CODE)
+    compl = script.completions()
+    assert set(c.full_name for c in compl) == set(['pkg'])

View File

@@ -75,7 +75,8 @@ def test_hex_values_in_docstring():

 @pytest.mark.parametrize(
     'code,call_signature', [
-        ('def my_function(x, y, z) -> str:\n    return', 'my_function(x, y, z)'),
+        ('def my_function(x, typed: Type, z):\n    return', 'my_function(x, typed: Type, z)'),
+        ('def my_function(x, y, z) -> str:\n    return', 'my_function(x, y, z) -> str'),
         ('lambda x, y, z: x + y * z\n', '<lambda>(x, y, z)')
     ])
 def test_get_call_signature(code, call_signature):

View File

@@ -8,7 +8,9 @@ deps =
     docopt
     # coloroma for colored debug output
     colorama
-    -rrequirements.txt
+    # Overwrite the parso version (only used sometimes).
+    git+https://github.com/davidhalter/parso.git
+    # -rrequirements.txt
 passenv = JEDI_TEST_ENVIRONMENT
 setenv =
     # https://github.com/tomchristie/django-rest-framework/issues/1957
@@ -23,8 +25,6 @@ setenv =
     env36: JEDI_TEST_ENVIRONMENT=36
     env37: JEDI_TEST_ENVIRONMENT=37
 commands =
-    # Overwrite the parso version (only used sometimes).
-    # pip install git+https://github.com/davidhalter/parso.git
     py.test {posargs:jedi test}
 [testenv:py27]
 deps =