forked from VimPlug/jedi

Merge pull request #281 from davidhalter/dev

New release: 0.7.0
This commit is contained in:
David Halter
2013-08-17 11:45:38 -07:00
94 changed files with 3700 additions and 1703 deletions

1
.gitignore vendored

@@ -10,3 +10,4 @@
/docs/_build/
/dist/
jedi.egg-info/
record.json


@@ -5,9 +5,11 @@ env:
- TOXENV=py32
- TOXENV=py33
- TOXENV=cov
- TOXENV=sith
matrix:
allow_failures:
- env: TOXENV=cov
- env: TOXENV=sith
install:
- pip install --quiet --use-mirrors tox
script:


@@ -1,14 +1,25 @@
Main Authors
============
David Halter (@davidhalter)
Takafumi Arakaki (@tkf)
David Halter (@davidhalter) <davidhalter88@gmail.com>
Takafumi Arakaki (@tkf) <aka.tkf@gmail.com>
Code Contributors
=================
Danilo Bargen (@dbrgn)
Danilo Bargen (@dbrgn) <gezuru@gmail.com>
Laurens Van Houtven (@lvh) <_@lvh.cc>
Aldo Stracquadanio (@Astrac) <aldo.strac@gmail.com>
Jean-Louis Fuchs (@ganwell) <ganwell@fangorn.ch>
tek (@tek)
Yasha Borevich (@jjay)
Aaron Griffin
Yasha Borevich (@jjay) <j.borevich@gmail.com>
Aaron Griffin <aaronmgriffin@gmail.com>
andviro (@andviro)
Mike Gilbert (@floppym) <floppym@gentoo.org>
Aaron Meurer (@asmeurer) <asmeurer@gmail.com>
Lubos Trilety <ltrilety@redhat.com>
Akinori Hattori (@hattya)
srusskih (@srusskih)
Note: (@user) means a GitHub user name.


@@ -3,6 +3,13 @@
Changelog
---------
0.7.0 (2013-08-09)
++++++++++++++++++
* switched from LGPL to MIT license
* added an Interpreter class to the API to make autocompletion in REPL possible.
* added autocompletion support for namespace packages
* added sith.py, a new random testing method
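The new ``Interpreter`` class completes on live objects in a given namespace. The stdlib's ``rlcompleter`` illustrates the same idea of namespace-driven completion (an analogy only, not jedi's API):

```python
import rlcompleter

# Complete on a live object in a namespace, the way a REPL completer must:
# given the text "greeting.up", propose attributes of the actual string object.
namespace = {'greeting': 'hello'}
completer = rlcompleter.Completer(namespace)

matches = []
state = 0
while True:
    match = completer.complete('greeting.up', state)
    if match is None:
        break
    matches.append(match)
    state += 1
# str.upper is callable, so rlcompleter suggests it (with a trailing "(")
```

jedi's ``Interpreter`` goes beyond this runtime inspection by combining it with static analysis of the surrounding source.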
0.6.0 (2013-05-14)
++++++++++++++++++


@@ -10,4 +10,19 @@ My **master** branch is a 100% stable (should be). I only push to it after I am
certain that things are working out. Many people are using Jedi directly from
the github master branch.
Please use Pep8 to style your code.
**Please use PEP8 to style your code.**
Changing Issues to Pull Requests (Github)
-----------------------------------------
If you have previously filed a GitHub issue and want to contribute code
that addresses that issue, we prefer it if you use
[hub](https://github.com/github/hub) to convert your existing issue to a pull
request. To do that, first push the changes to a separate branch in your fork
and then issue the following command:
hub pull-request -b davidhalter:dev -i <issue-number> -h <your-github-username>:<your-branch-name>
This is not a strict requirement, though; if you don't have hub installed or
prefer to use the web interface, feel free to post a traditional pull request.


@@ -1,170 +1,21 @@
Licensed under the GNU LGPL v3 or later.
Copyright (C) 2012 David Halter <davidhalter88@gmail.com>.
The MIT License (MIT)
===============================================================================
Copyright (c) <2013> <David Halter and others, see AUTHORS.txt>
GNU LESSER GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
This version of the GNU Lesser General Public License incorporates
the terms and conditions of version 3 of the GNU General Public
License, supplemented by the additional permissions listed below.
0. Additional Definitions.
As used herein, "this License" refers to version 3 of the GNU Lesser
General Public License, and the "GNU GPL" refers to version 3 of the GNU
General Public License.
"The Library" refers to a covered work governed by this License,
other than an Application or a Combined Work as defined below.
An "Application" is any work that makes use of an interface provided
by the Library, but which is not otherwise based on the Library.
Defining a subclass of a class defined by the Library is deemed a mode
of using an interface provided by the Library.
A "Combined Work" is a work produced by combining or linking an
Application with the Library. The particular version of the Library
with which the Combined Work was made is also called the "Linked
Version".
The "Minimal Corresponding Source" for a Combined Work means the
Corresponding Source for the Combined Work, excluding any source code
for portions of the Combined Work that, considered in isolation, are
based on the Application, and not on the Linked Version.
The "Corresponding Application Code" for a Combined Work means the
object code and/or source code for the Application, including any data
and utility programs needed for reproducing the Combined Work from the
Application, but excluding the System Libraries of the Combined Work.
1. Exception to Section 3 of the GNU GPL.
You may convey a covered work under sections 3 and 4 of this License
without being bound by section 3 of the GNU GPL.
2. Conveying Modified Versions.
If you modify a copy of the Library, and, in your modifications, a
facility refers to a function or data to be supplied by an Application
that uses the facility (other than as an argument passed when the
facility is invoked), then you may convey a copy of the modified
version:
a) under this License, provided that you make a good faith effort to
ensure that, in the event an Application does not supply the
function or data, the facility still operates, and performs
whatever part of its purpose remains meaningful, or
b) under the GNU GPL, with none of the additional permissions of
this License applicable to that copy.
3. Object Code Incorporating Material from Library Header Files.
The object code form of an Application may incorporate material from
a header file that is part of the Library. You may convey such object
code under terms of your choice, provided that, if the incorporated
material is not limited to numerical parameters, data structure
layouts and accessors, or small macros, inline functions and templates
(ten or fewer lines in length), you do both of the following:
a) Give prominent notice with each copy of the object code that the
Library is used in it and that the Library and its use are
covered by this License.
b) Accompany the object code with a copy of the GNU GPL and this license
document.
4. Combined Works.
You may convey a Combined Work under terms of your choice that,
taken together, effectively do not restrict modification of the
portions of the Library contained in the Combined Work and reverse
engineering for debugging such modifications, if you also do each of
the following:
a) Give prominent notice with each copy of the Combined Work that
the Library is used in it and that the Library and its use are
covered by this License.
b) Accompany the Combined Work with a copy of the GNU GPL and this license
document.
c) For a Combined Work that displays copyright notices during
execution, include the copyright notice for the Library among
these notices, as well as a reference directing the user to the
copies of the GNU GPL and this license document.
d) Do one of the following:
0) Convey the Minimal Corresponding Source under the terms of this
License, and the Corresponding Application Code in a form
suitable for, and under terms that permit, the user to
recombine or relink the Application with a modified version of
the Linked Version to produce a modified Combined Work, in the
manner specified by section 6 of the GNU GPL for conveying
Corresponding Source.
1) Use a suitable shared library mechanism for linking with the
Library. A suitable mechanism is one that (a) uses at run time
a copy of the Library already present on the user's computer
system, and (b) will operate properly with a modified version
of the Library that is interface-compatible with the Linked
Version.
e) Provide Installation Information, but only if you would otherwise
be required to provide such information under section 6 of the
GNU GPL, and only to the extent that such information is
necessary to install and execute a modified version of the
Combined Work produced by recombining or relinking the
Application with a modified version of the Linked Version. (If
you use option 4d0, the Installation Information must accompany
the Minimal Corresponding Source and Corresponding Application
Code. If you use option 4d1, you must provide the Installation
Information in the manner specified by section 6 of the GNU GPL
for conveying Corresponding Source.)
5. Combined Libraries.
You may place library facilities that are a work based on the
Library side by side in a single library together with other library
facilities that are not Applications and are not covered by this
License, and convey such a combined library under terms of your
choice, if you do both of the following:
a) Accompany the combined library with a copy of the same work based
on the Library, uncombined with any other library facilities,
conveyed under the terms of this License.
b) Give prominent notice with the combined library that part of it
is a work based on the Library, and explaining where to find the
accompanying uncombined form of the same work.
6. Revised Versions of the GNU Lesser General Public License.
The Free Software Foundation may publish revised and/or new versions
of the GNU Lesser General Public License from time to time. Such new
versions will be similar in spirit to the present version, but may
differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the
Library as you received it specifies that a certain numbered version
of the GNU Lesser General Public License "or any later version"
applies to it, you have the option of following the terms and
conditions either of that published version or of any later version
published by the Free Software Foundation. If the Library as you
received it does not specify a version number of the GNU Lesser
General Public License, you may choose any version of the GNU Lesser
General Public License ever published by the Free Software Foundation.
If the Library as you received it specifies that a proxy can decide
whether future versions of the GNU Lesser General Public License shall
apply, that proxy's public statement of acceptance of any version is
permanent authorization for you to choose that version for the
Library.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.


@@ -2,5 +2,11 @@ include README.rst
include CHANGELOG.rst
include LICENSE.txt
include AUTHORS.txt
include .coveragerc
include sith.py
include conftest.py
include pytest.ini
include tox.ini
include jedi/mixin/*.pym
recursive-include test *
recursive-exclude * *.pyc


@@ -10,6 +10,8 @@ Jedi - an awesome autocompletion library for Python
:target: https://coveralls.io/r/davidhalter/jedi
:alt: Coverage Status
.. image:: https://pypip.in/d/jedi/badge.png
:target: https://crate.io/packages/jedi/
Jedi is an autocompletion tool for Python that can be used in IDEs/editors.
Jedi works. Jedi is fast. It understands all of the basic Python syntax
@@ -24,12 +26,15 @@ which uses Jedi's autocompletion. I encourage you to use Jedi in your IDEs.
It's really easy. If there are any problems (also with licensing), just contact
me.
Jedi can be used with the following plugins/software:
Jedi can be used with the following editors:
- `VIM-Plugin <https://github.com/davidhalter/jedi-vim>`_
- `Emacs-Plugin <https://github.com/tkf/emacs-jedi>`_
- `Sublime-Plugin <https://github.com/srusskih/SublimeJEDI>`_
- `wdb (web debugger) <https://github.com/Kozea/wdb>`_
- Vim (jedi-vim_, YouCompleteMe_)
- Emacs (Jedi.el_)
- Sublime Text (SublimeJEDI_ [ST2 + ST3], anaconda_ [only ST3])
And it powers the following projects:
- wdb_
Here are some pictures:
@@ -121,3 +126,11 @@ Tests are also run automatically on `Travis CI
For more detailed information visit the `testing documentation
<https://jedi.readthedocs.org/en/latest/docs/testing.html>`_
.. _jedi-vim: https://github.com/davidhalter/jedi-vim
.. _youcompleteme: http://valloric.github.io/YouCompleteMe/
.. _Jedi.el: https://github.com/tkf/emacs-jedi
.. _sublimejedi: https://github.com/srusskih/SublimeJEDI
.. _anaconda: https://github.com/DamnWidget/anaconda
.. _wdb: https://github.com/Kozea/wdb


@@ -28,7 +28,7 @@ sys.path.append(os.path.abspath('_themes'))
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.viewcode', 'sphinx.ext.todo',
'sphinx.ext.inheritance_diagram']
'sphinx.ext.intersphinx', 'sphinx.ext.inheritance_diagram']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
@@ -266,6 +266,14 @@ todo_include_todos = False
# -- Options for autodoc module ------------------------------------------------
autoclass_content = 'both'
autodoc_member_order = 'bysource'
autodoc_default_flags = []
#autodoc_default_flags = ['members', 'undoc-members']
# -- Options for intersphinx module --------------------------------------------
intersphinx_mapping = {
'http://docs.python.org/': None,
}


@@ -64,6 +64,8 @@ Parser Representation (parser_representation.py)
.. automodule:: parsing_representation
Class inheritance diagram:
.. inheritance-diagram::
SubModule
Class


@@ -40,6 +40,7 @@ Supported Python Features
case, that doesn't work with |jedi|)
- simple/usual ``sys.path`` modifications
- ``isinstance`` checks for if/while/assert
- namespace packages (includes ``pkgutil`` and ``pkg_resources`` namespaces)
Unsupported Features
@@ -64,8 +65,8 @@ Caveats
**Malformed Syntax**
Syntax errors and other strange stuff may lead to undefined behaviour of the
completion. |jedi| is **NOT** a Python compiler, that tries to correct you. It is
a tool that wants to help you. But **YOU** have to know Python, not |jedi|.
completion. |jedi| is **NOT** a Python compiler, that tries to correct you. It
is a tool that wants to help you. But **YOU** have to know Python, not |jedi|.
**Legacy Python 2 Features**
@@ -75,23 +76,23 @@ older Python 2 features have been left out:
- Classes: Always Python 3 like, therefore all classes inherit from ``object``.
- Generators: No ``next()`` method. The ``__next__()`` method is used instead.
- Exceptions are only looked at in the form of ``Exception as e``, no comma!
**Slow Performance**
Importing ``numpy`` can be quite slow sometimes, as well as loading the builtins
the first time. If you want to speed things up, you could write import hooks in
|jedi|, which preload stuff. However, once loaded, this is not a problem anymore.
The same is true for huge modules like ``PySide``, ``wx``, etc.
Importing ``numpy`` can be quite slow sometimes, as well as loading the
builtins the first time. If you want to speed things up, you could write import
hooks in |jedi|, which preload stuff. However, once loaded, this is not a
problem anymore. The same is true for huge modules like ``PySide``, ``wx``,
etc.
**Security**
Security is an important issue for |jedi|. Therefore no Python code is executed.
As long as you write pure python, everything is evaluated statically. But: If
you use builtin modules (``c_builtin``) there is no other option than to execute
those modules. However: Execute isn't that critical (as e.g. in pythoncomplete,
which used to execute *every* import!), because it means one import and no more.
So basically the only dangerous thing is using the import itself. If your
``c_builtin`` uses some strange initializations, it might be dangerous. But if
it does you're screwed anyway, because eventually you're going to execute your
code, which executes the import.
Security is an important issue for |jedi|. Therefore no Python code is
executed. As long as you write pure python, everything is evaluated
statically. But: If you use builtin modules (``c_builtin``) there is no other
option than to execute those modules. However: Execute isn't that critical (as
e.g. in pythoncomplete, which used to execute *every* import!), because it
means one import and no more. So basically the only dangerous thing is using
the import itself. If your ``c_builtin`` uses some strange initializations, it
might be dangerous. But if it does you're screwed anyway, because eventually
you're going to execute your code, which executes the import.


@@ -30,8 +30,15 @@ System-wide installation via a package manager
Arch Linux
~~~~~~~~~~
You can install jedi directly from AUR: `python-jedi at AUR
<https://aur.archlinux.org/packages/python-jedi/>`__.
You can install |jedi| directly from official AUR packages:
- `python-jedi <https://aur.archlinux.org/packages/python-jedi/>`__ (Python 3)
- `python2-jedi <https://aur.archlinux.org/packages/python2-jedi/>`__ (Python 2)
The specified Python version just refers to the *runtime environment* for
|jedi|. Use the Python 2 version if you're running vim (or whatever editor you
use) under Python 2. Otherwise, use the Python 3 version. But whatever version
you choose, both are able to complete both Python 2 and 3 *code*.
(There is also a packaged version of the vim plugin available: `vim-jedi at AUR
<https://aur.archlinux.org/packages/vim-jedi/>`__.)
@@ -45,7 +52,7 @@ Debian packages are available as `experimental packages
Others
~~~~~~
We are in the discussion of adding Jedi to the Fedora repositories.
We are in the discussion of adding |jedi| to the Fedora repositories.
Manual installation from a downloaded package
@@ -53,7 +60,7 @@ Manual installation from a downloaded package
If you prefer not to use an automated package installer, you can `download
<https://github.com/davidhalter/jedi/archive/master.zip>`__ a current copy of
*Jedi* and install it manually.
|jedi| and install it manually.
To install it, navigate to the directory containing `setup.py` on your console
and type::

18
docs/docs/repl.rst Normal file

@@ -0,0 +1,18 @@
.. include:: ../global.rst
Tab completion in the Python Shell
==================================
There are two ways to use Jedi autocompletion in your Python interpreter:
via a custom ``$HOME/.pythonrc.py`` file, or via the ``PYTHONSTARTUP``
environment variable.
Using ``PYTHONSTARTUP``
-----------------------
.. automodule:: jedi.replstartup
Using a custom ``$HOME/.pythonrc.py``
-------------------------------------
.. autofunction:: jedi.utils.setup_readline
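A minimal ``$HOME/.pythonrc.py`` along these lines would call ``setup_readline``; this is a sketch, and the fallback to the stdlib completer is our addition, not part of jedi's docs:

```python
# ~/.pythonrc.py -- point PYTHONSTARTUP at this file to enable completion
try:
    from jedi.utils import setup_readline
    setup_readline()          # let jedi drive readline's tab completion
    completer = 'jedi'
except ImportError:
    # jedi unavailable: fall back to the stdlib completer
    import readline
    import rlcompleter        # noqa: F401 -- registers the default completer
    readline.parse_and_bind('tab: complete')
    completer = 'stdlib'
```

Either branch leaves tab completion active in every interactive session started with ``PYTHONSTARTUP`` pointing at this file.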


@@ -22,6 +22,7 @@ Docs
docs/installation
docs/features
docs/repl
docs/recipes
docs/plugin-api
docs/history
@@ -44,9 +45,19 @@ Resources
Editor Plugins
--------------
- `Vim <http://github.com/davidhalter/jedi-vim>`_
- `Emacs <https://github.com/tkf/emacs-jedi>`_
- `Sublime Text 2 <https://github.com/srusskih/SublimeJEDI>`_
Vim:
- `jedi-vim <http://github.com/davidhalter/jedi-vim>`_
- `YouCompleteMe <http://valloric.github.io/YouCompleteMe/>`_
Emacs:
- `Jedi.el <https://github.com/tkf/emacs-jedi>`_
Sublime Text 2/3:
- `SublimeJEDI <https://github.com/srusskih/SublimeJEDI>`_ (ST2 & ST3)
- `anaconda <https://github.com/DamnWidget/anaconda>`_ (only ST3)
.. _other-software:


@@ -34,7 +34,7 @@ As you see Jedi is pretty simple and allows you to concentrate on writing a
good text editor, while still having very good IDE features for Python.
"""
__version__ = 0, 6, 0
__version__ = 0, 7, 0
import sys
@@ -42,8 +42,8 @@ import sys
# imports and circular imports... Just avoid it:
sys.path.insert(0, __path__[0])
from .api import Script, NotFoundError, set_debug_function, _quick_complete, \
preload_module
from .api import Script, Interpreter, NotFoundError, set_debug_function
from .api import preload_module, defined_names
from . import settings
sys.path.pop(0)

7
jedi/__main__.py Normal file

@@ -0,0 +1,7 @@
from sys import argv
if len(argv) == 2 and argv[1] == 'repl':
# don't want to use __main__ only for repl yet, maybe we want to use it for
# something else. So just use the keyword ``repl`` for now.
from os import path
print(path.join(path.dirname(path.abspath(__file__)), 'replstartup.py'))
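Given the ``repl`` keyword above, a shell session can wire the printed path into ``PYTHONSTARTUP`` (a usage sketch; it assumes jedi is importable by the ``python`` on your ``PATH``):

```shell
# `python -m jedi repl` prints the path to jedi's replstartup.py;
# exporting it makes every interactive session load jedi completion.
export PYTHONSTARTUP="$(python -m jedi repl)"
python   # tab completion in this interpreter now comes from jedi
```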


@@ -16,58 +16,45 @@ is_py33 = sys.hexversion >= 0x03030000
def find_module_py33(string, path=None):
mod_info = (None, None, None)
loader = None
if path is not None:
# Check for the module in the specified path
loader = importlib.machinery.PathFinder.find_module(string, path)
else:
# Check for the module in sys.path
loader = importlib.machinery.PathFinder.find_module(string, sys.path)
if loader is None:
# Fallback to find builtins
loader = importlib.find_loader(string)
loader = importlib.machinery.PathFinder.find_module(string, path)
if loader is None and path is None: # Fallback to find builtins
loader = importlib.find_loader(string)
if loader is None:
raise ImportError
raise ImportError("Couldn't find a loader for {0}".format(string))
try:
if (loader.is_package(string)):
mod_info = (None, os.path.dirname(loader.path), True)
is_package = loader.is_package(string)
if is_package:
module_path = os.path.dirname(loader.path)
module_file = None
else:
filename = loader.get_filename(string)
if filename and os.path.exists(filename):
mod_info = (open(filename, 'U'), filename, False)
else:
mod_info = (None, filename, False)
module_path = loader.get_filename(string)
module_file = open(module_path)
except AttributeError:
mod_info = (None, loader.load_module(string).__name__, False)
module_path = loader.load_module(string).__name__
module_file = None
return mod_info
return module_file, module_path, is_package
def find_module_pre_py33(string, path=None):
mod_info = None
if path is None:
mod_info = imp.find_module(string)
else:
mod_info = imp.find_module(string, path)
return (mod_info[0], mod_info[1], mod_info[2][2] == imp.PKG_DIRECTORY)
module_file, module_path, description = imp.find_module(string, path)
module_type = description[2]
return module_file, module_path, module_type is imp.PKG_DIRECTORY
def find_module(string, path=None):
"""Provides information about a module.
find_module = find_module_py33 if is_py33 else find_module_pre_py33
find_module.__doc__ = """
Provides information about a module.
This function isolates the differences in importing libraries introduced with
python 3.3 on; it gets a module name and optionally a path. It will return a
tuple containing an open file for the module (if not builtin), the filename
or the name of the module if it is a builtin one and a boolean indicating
if the module is contained in a package."""
if is_py33:
return find_module_py33(string, path)
else:
return find_module_pre_py33(string, path)
This function isolates the differences in importing libraries introduced with
python 3.3 on; it gets a module name and optionally a path. It will return a
tuple containing an open file for the module (if not builtin), the filename
or the name of the module if it is a builtin one and a boolean indicating
if the module is contained in a package.
"""
# next was defined in python 2.6, in python 3 obj.next won't be possible
# anymore
@@ -157,13 +144,6 @@ class Python3Method(object):
else:
return lambda *args, **kwargs: self.func(obj, *args, **kwargs)
try:
# the python3 way
from functools import reduce
except ImportError:
reduce = reduce
def use_metaclass(meta, *bases):
""" Create a class with a metaclass. """
if not bases:
@@ -181,3 +161,20 @@ try:
encoding = 'utf-8'
except AttributeError:
encoding = 'ascii'
def u(string):
"""Cast to unicode DAMMIT!
Written because Python2 repr always implicitly casts to a string, so we
have to cast back to a unicode (and we know that we always deal with valid
unicode, because we check that in the beginning).
"""
if is_py3k:
return str(string)
elif not isinstance(string, unicode):
return unicode(str(string), 'UTF-8')
return string
try:
import builtins # module name in python 3
except ImportError:
import __builtin__ as builtins


@@ -11,6 +11,7 @@ from __future__ import with_statement
import re
import os
import warnings
from itertools import chain
from jedi import parsing
from jedi import parsing_representation as pr
@@ -20,9 +21,10 @@ from jedi import helpers
from jedi import common
from jedi import cache
from jedi import modules
from jedi._compatibility import next, unicode
import evaluate
from jedi import interpret
from jedi._compatibility import next, unicode, builtins
import keywords
import evaluate
import api_classes
import evaluate_representation as er
import dynamic
@@ -32,7 +34,6 @@ import builtin
class NotFoundError(Exception):
"""A custom error to avoid catching the wrong exceptions."""
pass
class Script(object):
@@ -46,26 +47,46 @@ class Script(object):
:type line: int
:param col: The column of the cursor (starting with 0).
:type col: int
:param source_path: The path of the file in the file system, or ``''`` if
:param path: The path of the file in the file system, or ``''`` if
it hasn't been saved yet.
:type source_path: str or None
:type path: str or None
:param source_encoding: The encoding of ``source``, if it is not a
``unicode`` object (default ``'utf-8'``).
:type source_encoding: str
"""
def __init__(self, source, line, column, source_path,
source_encoding='utf-8'):
def __init__(self, source, line=None, column=None, path=None,
source_encoding='utf-8', source_path=None):
if source_path is not None:
warnings.warn("Use path instead of source_path.", DeprecationWarning)
path = source_path
lines = source.splitlines()
if source and source[-1] == '\n':
lines.append('')
self._line = max(len(lines), 1) if line is None else line
self._column = len(lines[-1]) if column is None else column
api_classes._clear_caches()
debug.reset_time()
self.source = modules.source_to_unicode(source, source_encoding)
self.pos = line, column
self._module = modules.ModuleWithCursor(source_path,
source=self.source, position=self.pos)
self._source_path = source_path
self.source_path = None if source_path is None \
else os.path.abspath(source_path)
self.pos = self._line, self._column
self._module = modules.ModuleWithCursor(
path, source=self.source, position=self.pos)
self._source_path = path
self.path = None if path is None else os.path.abspath(path)
debug.speed('init')
@property
def source_path(self):
"""
.. deprecated:: 0.7.0
Use :attr:`.path` instead.
.. todo:: Remove!
"""
warnings.warn("Use path instead of source_path.", DeprecationWarning)
return self.path
def __repr__(self):
return '<%s: %s>' % (self.__class__.__name__, repr(self._source_path))
@@ -83,43 +104,24 @@ class Script(object):
:return: Completion objects, sorted by name and __ comes last.
:rtype: list of :class:`api_classes.Completion`
"""
def get_completions(user_stmt, bs):
if isinstance(user_stmt, pr.Import):
context = self._module.get_context()
next(context) # skip the path
if next(context) == 'from':
# completion is just "import" if before stands from ..
return ((k, bs) for k in keywords.keyword_names('import'))
return self._simple_complete(path, like)
debug.speed('completions start')
path = self._module.get_path_until_cursor()
if re.search('^\.|\.\.$', path):
return []
path, dot, like = self._get_completion_parts(path)
completion_line = self._module.get_line(self.pos[0])[:self.pos[1]]
path, dot, like = self._get_completion_parts()
try:
scopes = list(self._prepare_goto(path, True))
except NotFoundError:
scopes = []
scope_generator = evaluate.get_names_of_scope(
self._parser.user_scope, self.pos)
completions = []
for scope, name_list in scope_generator:
for c in name_list:
completions.append((c, scope))
else:
completions = []
debug.dbg('possible scopes', scopes)
for s in scopes:
if s.isinstance(er.Function):
names = s.get_magic_method_names()
else:
if isinstance(s, imports.ImportPath):
if like == 'import':
if not completion_line.endswith('import import'):
continue
a = s.import_stmt.alias
if a and a.start_pos <= self.pos <= a.end_pos:
continue
names = s.get_defined_names(on_import_stmt=True)
else:
names = s.get_defined_names()
for c in names:
completions.append((c, s))
user_stmt = self._user_stmt(True)
bs = builtin.Builtin.scope
completions = get_completions(user_stmt, bs)
if not dot: # named params have no dots
for call_def in self.call_signatures():
@@ -127,19 +129,10 @@ class Script(object):
for p in call_def.params:
completions.append((p.get_name(), p))
# Do the completion if there is no path before and no import stmt.
u = self._parser.user_stmt
bs = builtin.Builtin.scope
if isinstance(u, pr.Import):
if (u.relative_count > 0 or u.from_ns) and not re.search(
r'(,|from)\s*$|import\s+$', completion_line):
completions += ((k, bs) for k
in keywords.get_keywords('import'))
if not path and not isinstance(u, pr.Import):
if not path and not isinstance(user_stmt, pr.Import):
# add keywords
completions += ((k, bs) for k in keywords.get_keywords(
all=True))
completions += ((k, bs) for k in keywords.keyword_names(
all=True))
needs_dot = not dot and path
@@ -151,9 +144,8 @@ class Script(object):
and n.lower().startswith(like.lower()) \
or n.startswith(like):
if not evaluate.filter_private_variable(s,
self._parser.user_stmt, n):
new = api_classes.Completion(c, needs_dot,
len(like), s)
user_stmt or self._parser.user_scope, n):
new = api_classes.Completion(c, needs_dot, len(like), s)
k = (new.name, new.complete) # key
if k in comp_dct and settings.no_completion_duplicates:
comp_dct[k]._same_name_completions.append(new)
@@ -167,15 +159,62 @@ class Script(object):
x.name.startswith('_'),
x.name.lower()))
def _prepare_goto(self, goto_path, is_like_search=False):
def _simple_complete(self, path, like):
try:
scopes = list(self._prepare_goto(path, True))
except NotFoundError:
scopes = []
scope_generator = evaluate.get_names_of_scope(
self._parser.user_scope, self.pos)
completions = []
for scope, name_list in scope_generator:
for c in name_list:
completions.append((c, scope))
else:
completions = []
debug.dbg('possible scopes', scopes)
for s in scopes:
if s.isinstance(er.Function):
names = s.get_magic_method_names()
else:
if isinstance(s, imports.ImportPath):
under = like + self._module.get_path_after_cursor()
if under == 'import':
current_line = self._module.get_position_line()
if not current_line.endswith('import import'):
continue
a = s.import_stmt.alias
if a and a.start_pos <= self.pos <= a.end_pos:
continue
names = s.get_defined_names(on_import_stmt=True)
else:
names = s.get_defined_names()
for c in names:
completions.append((c, s))
return completions
def _user_stmt(self, is_completion=False):
user_stmt = self._parser.user_stmt
debug.speed('parsed')
if is_completion and not user_stmt:
# for statements like `from x import ` (cursor not in statement)
pos = next(self._module.get_context(yield_positions=True))
last_stmt = pos and self._parser.module.get_statement_for_position(
pos, include_imports=True)
if isinstance(last_stmt, pr.Import):
user_stmt = last_stmt
return user_stmt
def _prepare_goto(self, goto_path, is_completion=False):
"""
Base for completions/goto. Basically it returns the resolved scopes
under cursor.
"""
debug.dbg('start: %s in %s' % (goto_path, self._parser.user_scope))
user_stmt = self._parser.user_stmt
debug.speed('parsed')
user_stmt = self._user_stmt(is_completion)
if not user_stmt and len(goto_path.split('\n')) > 1:
# If the user_stmt is not defined and the goto_path is multi line,
# something's strange. Most probably the backwards tokenizer
@@ -183,7 +222,7 @@ class Script(object):
return []
if isinstance(user_stmt, pr.Import):
scopes = [self._get_on_import_stmt(is_like_search)[0]]
scopes = [self._get_on_import_stmt(user_stmt, is_completion)[0]]
else:
# just parse one statement, take it and evaluate it
stmt = self._get_under_cursor_stmt(goto_path)
@@ -321,10 +360,10 @@ class Script(object):
scopes = resolve_import_paths(scopes)
# add keywords
scopes |= keywords.get_keywords(string=goto_path, pos=self.pos)
scopes |= keywords.keywords(string=goto_path, pos=self.pos)
d = set([api_classes.Definition(s) for s in scopes
if not isinstance(s, imports.ImportPath._GlobalNamespace)])
if not isinstance(s, imports.ImportPath._GlobalNamespace)])
return self._sorted_defs(d)
@api_classes._clear_caches_after_call
@@ -337,14 +376,16 @@ class Script(object):
:rtype: list of :class:`api_classes.Definition`
"""
d = [api_classes.Definition(d) for d in set(self._goto()[0])]
results, _ = self._goto()
d = [api_classes.Definition(d) for d in set(results)
if not isinstance(d, imports.ImportPath._GlobalNamespace)]
return self._sorted_defs(d)
def _goto(self, add_import_name=False):
"""
Used for goto_assignments and usages.
:param add_import_name: TODO add description
:param add_import_name: Add the name (if import) to the result.
"""
def follow_inexistent_imports(defs):
""" Imports can be generated, e.g. following
@@ -354,7 +395,7 @@ class Script(object):
definitions = set(defs)
for d in defs:
if isinstance(d.parent, pr.Import) \
and d.start_pos == (0, 0):
and d.start_pos == (0, 0):
i = imports.ImportPath(d.parent).follow(is_goto=True)
definitions.remove(d)
definitions |= follow_inexistent_imports(i)
@@ -362,13 +403,13 @@ class Script(object):
goto_path = self._module.get_path_under_cursor()
context = self._module.get_context()
user_stmt = self._parser.user_stmt
user_stmt = self._user_stmt()
if next(context) in ('class', 'def'):
user_scope = self._parser.user_scope
definitions = set([user_scope.name])
search_name = unicode(user_scope.name)
elif isinstance(user_stmt, pr.Import):
s, name_part = self._get_on_import_stmt()
s, name_part = self._get_on_import_stmt(user_stmt)
try:
definitions = [s.follow(is_goto=True)[0]]
except IndexError:
@@ -378,14 +419,17 @@ class Script(object):
if add_import_name:
import_name = user_stmt.get_defined_names()
# imports have only one name
if name_part == import_name[0].names[-1]:
if not user_stmt.star \
and name_part == import_name[0].names[-1]:
definitions.append(import_name[0])
else:
stmt = self._get_under_cursor_stmt(goto_path)
defs, search_name = evaluate.goto(stmt)
definitions = follow_inexistent_imports(defs)
if isinstance(user_stmt, pr.Statement):
if user_stmt.get_commands()[0].start_pos > self.pos:
c = user_stmt.get_commands()
if c and not isinstance(c[0], (str, unicode)) and \
c[0].start_pos > self.pos:
# The cursor must be after the start, otherwise the
# statement is just an assignee.
definitions = [user_stmt]
@@ -403,17 +447,20 @@ class Script(object):
:rtype: list of :class:`api_classes.Usage`
"""
user_stmt = self._parser.user_stmt
temp, settings.dynamic_flow_information = \
settings.dynamic_flow_information, False
user_stmt = self._user_stmt()
definitions, search_name = self._goto(add_import_name=True)
if isinstance(user_stmt, pr.Statement) \
and self.pos < user_stmt.get_commands()[0].start_pos:
# the search_name might be before `=`
definitions = [v for v in user_stmt.set_vars
if unicode(v.names[-1]) == search_name]
if isinstance(user_stmt, pr.Statement):
c = user_stmt.get_commands()[0]
if not isinstance(c, unicode) and self.pos < c.start_pos:
# the search_name might be before `=`
definitions = [v for v in user_stmt.set_vars
if unicode(v.names[-1]) == search_name]
if not isinstance(user_stmt, pr.Import):
# import case is looked at with add_import_name option
definitions = dynamic.usages_add_import_modules(definitions,
search_name)
search_name)
module = set([d.get_parent_until() for d in definitions])
module.add(self._parser.module)
@@ -422,9 +469,14 @@ class Script(object):
for d in set(definitions):
if isinstance(d, pr.Module):
names.append(api_classes.Usage(d, d))
elif isinstance(d, er.Instance):
# Instances can be ignored, because they are being created by
# ``__getattr__``.
pass
else:
names.append(api_classes.Usage(d.names[-1], d))
settings.dynamic_flow_information = temp
return self._sorted_defs(set(names))
@api_classes._clear_caches_after_call
@@ -449,29 +501,29 @@ class Script(object):
if call is None:
return []
user_stmt = self._parser.user_stmt
user_stmt = self._user_stmt()
with common.scale_speed_settings(settings.scale_function_definition):
_callable = lambda: evaluate.follow_call(call)
origins = cache.cache_function_definition(_callable, user_stmt)
debug.speed('func_call followed')
return [api_classes.CallDef(o, index, call) for o in origins]
return [api_classes.CallDef(o, index, call) for o in origins
if o.isinstance(er.Function, er.Instance, er.Class)]
def _func_call_and_param_index(self):
debug.speed('func_call start')
call, index = None, 0
if call is None:
user_stmt = self._parser.user_stmt
user_stmt = self._user_stmt()
if user_stmt is not None and isinstance(user_stmt, pr.Statement):
call, index, _ = helpers.search_function_definition(
user_stmt, self.pos)
user_stmt, self.pos)
debug.speed('func_call parsed')
return call, index
def _get_on_import_stmt(self, is_like_search=False):
def _get_on_import_stmt(self, user_stmt, is_like_search=False):
""" Resolve the user statement, if it is an import. Only resolve the
parts until the user position. """
user_stmt = self._parser.user_stmt
import_names = user_stmt.get_all_import_names()
kill_count = -1
cur_name_part = None
@@ -484,15 +536,21 @@ class Script(object):
cur_name_part = name_part
kill_count += 1
context = self._module.get_context()
just_from = next(context) == 'from'
i = imports.ImportPath(user_stmt, is_like_search,
kill_count=kill_count, direct_resolve=True)
kill_count=kill_count, direct_resolve=True,
is_just_from=just_from)
return i, cur_name_part
def _get_completion_parts(self, path):
def _get_completion_parts(self):
"""
Returns the parts for the completion
:return: tuple - (path, dot, like)
"""
path = self._module.get_path_until_cursor()
match = re.match(r'^(.*?)(\.|)(\w?[\w\d]*)$', path, flags=re.S)
return match.groups()
@@ -500,10 +558,93 @@ class Script(object):
def _sorted_defs(d):
# Note: `or ''` below is required because `module_path` could be
# None and you can't compare None and str in Python 3.
return sorted(d, key=lambda x: (x.module_path or '', x.start_pos))
return sorted(d, key=lambda x: (x.module_path or '', x.line, x.column))
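The `or ''` in the sort key matters on Python 3, where `None` (the `module_path` of builtins) cannot be ordered against `str`. A stand-alone sketch of the same key, with an illustrative `Def` stand-in for the real definition objects:

```python
# None-safe sort key as used by _sorted_defs: coerce a None module_path
# to '' so builtins sort deterministically before any real file path.
def sorted_defs(definitions):
    return sorted(definitions,
                  key=lambda x: (x.module_path or '', x.line, x.column))


class Def(object):
    """Minimal stand-in for an api_classes definition (illustrative only)."""
    def __init__(self, module_path, line, column):
        self.module_path = module_path
        self.line = line
        self.column = column


defs = [Def('b.py', 1, 0), Def(None, 3, 4), Def('a.py', 2, 1)]
ordered = sorted_defs(defs)
# builtins (module_path=None) sort first, because '' < 'a.py' < 'b.py'
print([d.module_path for d in ordered])
```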
def defined_names(source, source_path=None, source_encoding='utf-8'):
class Interpreter(Script):
"""
Jedi API for Python REPLs.
In addition to completion of simple attribute access, Jedi
supports code completion based on static code analysis.
Jedi can complete attributes of objects that are not yet
initialized.
>>> from os.path import join
>>> namespace = locals()
>>> script = Interpreter('join().up', [namespace])
>>> print(script.completions()[0].name)
upper
"""
def __init__(self, source, namespaces=[], **kwds):
"""
Parse `source` and mix in interpreted Python objects from `namespaces`.
:type source: str
:arg source: Code to parse.
:type namespaces: list of dict
:arg namespaces: a list of namespace dictionaries such as the one
returned by :func:`locals`.
Other optional arguments are the same as the ones for :class:`Script`.
If `line` and `column` are None, they are assumed to be at the end of
`source`.
"""
super(Interpreter, self).__init__(source, **kwds)
self.namespaces = namespaces
# Here we add the namespaces to the current parser.
importer = interpret.ObjectImporter(self._parser.user_scope)
for ns in namespaces:
importer.import_raw_namespace(ns)
def _simple_complete(self, path, like):
user_stmt = self._user_stmt(True)
is_simple_path = not path or re.search('^[\w][\w\d.]*$', path)
if isinstance(user_stmt, pr.Import) or not is_simple_path:
return super(type(self), self)._simple_complete(path, like)
else:
class NamespaceModule:
def __getattr__(_, name):
for n in self.namespaces:
try:
return n[name]
except KeyError:
pass
raise AttributeError()
def __dir__(_):
return list(set(chain.from_iterable(n.keys()
for n in self.namespaces)))
paths = path.split('.') if path else []
namespaces = (NamespaceModule(), builtins)
for p in paths:
old, namespaces = namespaces, []
for n in old:
try:
namespaces.append(getattr(n, p))
except AttributeError:
pass
completions = []
for n in namespaces:
for name in dir(n):
if name.lower().startswith(like.lower()):
scope = self._parser.module
n = pr.Name(self._parser.module, [(name, (0, 0))],
(0, 0), (0, 0), scope)
completions.append((n, scope))
return completions
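The inner `NamespaceModule` above is the core trick of REPL completion: a proxy whose attribute lookup falls through a list of namespace dicts, and whose `dir()` is the union of their keys. A stand-alone sketch (the diff closes over `self` via the `_` argument; here an explicit constructor is used instead):

```python
from itertools import chain


class NamespaceModule(object):
    """Proxy a list of namespace dicts, e.g. [locals(), globals()]."""

    def __init__(self, namespaces):
        self._namespaces = namespaces

    def __getattr__(self, name):
        # First namespace in the list wins, mirroring the lookup order
        # in Interpreter._simple_complete.
        for ns in self._namespaces:
            try:
                return ns[name]
            except KeyError:
                pass
        raise AttributeError(name)

    def __dir__(self):
        # Completion candidates: the union of all namespace keys.
        return list(set(chain.from_iterable(ns.keys()
                                            for ns in self._namespaces)))


nm = NamespaceModule([{'a': 1}, {'a': 2, 'b': 3}])
print(nm.a, nm.b)  # 'a' resolves from the first namespace
```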
def defined_names(source, path=None, source_encoding='utf-8'):
"""
Get all definitions in `source` sorted by their position.
@@ -517,9 +658,9 @@ def defined_names(source, source_path=None, source_encoding='utf-8'):
"""
parser = parsing.Parser(
modules.source_to_unicode(source, source_encoding),
module_path=source_path,
module_path=path,
)
return api_classes._defined_names(parser.scope)
return api_classes._defined_names(parser.module)
def preload_module(*modules):
@@ -531,11 +672,11 @@ def preload_module(*modules):
"""
for m in modules:
s = "import %s as x; x." % m
Script(s, 1, len(s), None).complete()
Script(s, 1, len(s), None).completions()
def set_debug_function(func_cb=debug.print_to_stdout, warnings=True,
notices=True, speed=True):
notices=True, speed=True):
"""
Define a callback debug function to get all the debug messages.
@@ -545,25 +686,3 @@ def set_debug_function(func_cb=debug.print_to_stdout, warnings=True,
debug.enable_warning = warnings
debug.enable_notice = notices
debug.enable_speed = speed
def _quick_complete(source):
"""
Convenience function to complete a source string at the end.
Example:
>>> _quick_complete('''
... import datetime
... datetime.da''') #doctest: +ELLIPSIS
[<Completion: date>, <Completion: datetime>, ...]
:param source: The source code to be completed.
:type source: string
:return: Completion objects as returned by :meth:`complete`.
:rtype: list of :class:`api_classes.Completion`
"""
lines = re.sub(r'[\n\r\s]*$', '', source).splitlines()
pos = len(lines), len(lines[-1])
script = Script(source, pos[0], pos[1], '')
return script.completions()
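`_quick_complete` derives the cursor position from the source itself: strip trailing whitespace, then take (number of lines, length of the last line) — jedi lines are 1-based, columns 0-based. The position computation in isolation:

```python
import re


def end_position(source):
    # Same computation as _quick_complete: drop trailing whitespace,
    # then point at the end of the last line.
    # (Assumes non-empty source, as the original does.)
    lines = re.sub(r'[\n\r\s]*$', '', source).splitlines()
    return len(lines), len(lines[-1])


print(end_position('import datetime\ndatetime.da'))
```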

View File

@@ -70,7 +70,7 @@ class BaseDefinition(object):
}.items())
def __init__(self, definition, start_pos):
self.start_pos = start_pos
self._start_pos = start_pos
self._definition = definition
"""
An instance of :class:`jedi.parsing_representation.Base` subclass.
@@ -81,6 +81,16 @@ class BaseDefinition(object):
self._module = definition.get_parent_until()
self.module_path = self._module.path
@property
def start_pos(self):
"""
.. deprecated:: 0.7.0
Use :attr:`.line` and :attr:`.column` instead.
.. todo:: Remove!
"""
warnings.warn("Use line/column instead.", DeprecationWarning)
return self._start_pos
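The deprecation pattern used here — move the data to a private attribute and turn the old public name into a warning property — can be shown in isolation. The class below is a sketch mirroring the attribute names from the diff, not the real `BaseDefinition`:

```python
import warnings


class Definition(object):
    def __init__(self, line, column):
        self._start_pos = (line, column)

    @property
    def start_pos(self):
        # Old API: warn, then forward to the private storage.
        warnings.warn("Use line/column instead.", DeprecationWarning)
        return self._start_pos

    @property
    def line(self):
        return self._start_pos[0]

    @property
    def column(self):
        return self._start_pos[1]


d = Definition(3, 7)
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    pos = d.start_pos  # still works, but records a DeprecationWarning
print(pos, caught[0].category.__name__)
```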
@property
def type(self):
"""
@@ -88,12 +98,12 @@ class BaseDefinition(object):
Here is an example of the value of this attribute. Let's consider
the following source. As what is in ``variable`` is unambiguous
to Jedi, :meth:`api.Script.definition` should return a list of
to Jedi, :meth:`api.Script.goto_definitions` should return a list of
definition for ``sys``, ``f``, ``C`` and ``x``.
>>> from jedi import Script
>>> source = '''
... import sys
... import keyword
...
... class C:
... pass
@@ -106,16 +116,16 @@ class BaseDefinition(object):
... def f():
... pass
...
... variable = sys or f or C or x'''
... variable = keyword or f or C or x'''
>>> script = Script(source, len(source.splitlines()), 3, 'example.py')
>>> defs = script.definition()
>>> defs = script.goto_definitions()
Before showing what is in ``defs``, let's sort it by :attr:`line`
so that it is easy to relate the result to the source code.
>>> defs = sorted(defs, key=lambda d: d.line)
>>> defs # doctest: +NORMALIZE_WHITESPACE
[<Definition module sys>, <Definition class C>,
[<Definition module keyword>, <Definition class C>,
<Definition class D>, <Definition def f>]
Finally, here is what you can get from :attr:`type`:
@@ -142,9 +152,19 @@ class BaseDefinition(object):
def path(self):
"""The module path."""
path = []
def insert_nonnone(x):
if x:
path.insert(0, x)
if not isinstance(self._definition, keywords.Keyword):
par = self._definition
while par is not None:
if isinstance(par, pr.Import):
insert_nonnone(par.namespace)
insert_nonnone(par.from_ns)
if par.relative_count == 0:
break
with common.ignored(AttributeError):
path.insert(0, par.name)
par = par.parent
@@ -158,7 +178,7 @@ class BaseDefinition(object):
>>> from jedi import Script
>>> source = 'import datetime'
>>> script = Script(source, 1, len(source), 'example.py')
>>> d = script.definition()[0]
>>> d = script.goto_definitions()[0]
>>> print(d.module_name) # doctest: +ELLIPSIS
datetime
"""
@@ -182,12 +202,16 @@ class BaseDefinition(object):
@property
def line(self):
"""The line where the definition occurs (starting with 1)."""
return self.start_pos[0]
if self.in_builtin_module():
return None
return self._start_pos[0]
@property
def column(self):
"""The column where the definition occurs (starting with 0)."""
return self.start_pos[1]
if self.in_builtin_module():
return None
return self._start_pos[1]
@property
def doc(self):
@@ -202,7 +226,7 @@ class BaseDefinition(object):
... "Document for function f."
... '''
>>> script = Script(source, 1, len('def f'), 'example.py')
>>> d = script.definition()[0]
>>> d = script.goto_definitions()[0]
>>> print(d.doc)
f(a, b = 1)
<BLANKLINE>
@@ -235,31 +259,7 @@ class BaseDefinition(object):
@property
def description(self):
"""
A textual description of the object.
Example:
>>> from jedi import Script
>>> source = '''
... def f():
... pass
...
... class C:
... pass
...
... variable = f or C'''
>>> script = Script(source, len(source.splitlines()), 3, 'example.py')
>>> defs = script.definition() # doctest: +SKIP
>>> defs = sorted(defs, key=lambda d: d.line) # doctest: +SKIP
>>> defs # doctest: +SKIP
[<Definition def f>, <Definition class C>]
>>> defs[0].description # doctest: +SKIP
'def f'
>>> defs[1].description # doctest: +SKIP
'class C'
"""
"""A textual description of the object."""
return unicode(self._definition)
@property
@@ -278,7 +278,7 @@ class BaseDefinition(object):
... import os
... os.path.join'''
>>> script = Script(source, 3, len('os.path.join'), 'example.py')
>>> print(script.definition()[0].full_name)
>>> print(script.goto_definitions()[0].full_name)
os.path.join
Notice that it correctly returns ``'os.path.join'`` instead of
@@ -321,6 +321,24 @@ class Completion(BaseDefinition):
self._followed_definitions = None
def _complete(self, like_name):
dot = '.' if self._needs_dot else ''
append = ''
if settings.add_bracket_after_function \
and self.type == 'Function':
append = '('
if settings.add_dot_after_module:
if isinstance(self._base, pr.Module):
append += '.'
if isinstance(self._base, pr.Param):
append += '='
name = self._name.names[-1]
if like_name:
name = name[self._like_name_length:]
return dot + name + append
@property
def complete(self):
"""
@@ -331,18 +349,7 @@ class Completion(BaseDefinition):
would return the string 'ce'. It also adds additional stuff, depending
on your `settings.py`.
"""
dot = '.' if self._needs_dot else ''
append = ''
if settings.add_bracket_after_function \
and self.type == 'Function':
append = '('
if settings.add_dot_after_module:
if isinstance(self._base, pr.Module):
append += '.'
if isinstance(self._base, pr.Param):
append += '='
return dot + self._name.names[-1][self._like_name_length:] + append
return self._complete(True)
@property
def name(self):
@@ -352,10 +359,22 @@ class Completion(BaseDefinition):
isinstan
would return 'isinstance'.
would return `isinstance`.
"""
return unicode(self._name.names[-1])
@property
def name_with_symbols(self):
"""
Similar to :meth:`Completion.name`, but also returns the
symbols, for example::
list()
would return ``.append`` and others (which means it adds a dot).
"""
return self._complete(False)
@property
def word(self):
"""
@@ -366,19 +385,14 @@ class Completion(BaseDefinition):
warnings.warn("Use name instead.", DeprecationWarning)
return self.name
@property
def description(self):
"""
Provide a description of the completion object.
.. todo:: return value is just __repr__ of some objects, improve!
"""
"""Provide a description of the completion object."""
parent = self._name.parent
if parent is None:
return ''
t = self.type
if t == 'Statement' or t == 'Import':
if t == 'statement' or t == 'import':
desc = self._definition.get_code(False)
else:
desc = '.'.join(unicode(p) for p in self.path)
@@ -404,7 +418,7 @@ class Completion(BaseDefinition):
return [self]
self._followed_definitions = \
[BaseDefinition(d, d.start_pos) for d in defs]
[BaseDefinition(d, d.start_pos) for d in defs]
_clear_caches()
return self._followed_definitions
@@ -460,6 +474,28 @@ class Definition(BaseDefinition):
"""
A description of the :class:`.Definition` object, which is heavily used
in testing. e.g. for ``isinstance`` it returns ``def isinstance``.
Example:
>>> from jedi import Script
>>> source = '''
... def f():
... pass
...
... class C:
... pass
...
... variable = f or C'''
>>> script = Script(source, column=3) # line is maximum by default
>>> defs = script.goto_definitions()
>>> defs = sorted(defs, key=lambda d: d.line)
>>> defs
[<Definition def f>, <Definition class C>]
>>> str(defs[0].description) # strip literals in python2
'def f'
>>> str(defs[1].description)
'class C'
"""
d = self._definition
if isinstance(d, er.InstanceElement):
@@ -479,7 +515,9 @@ class Definition(BaseDefinition):
elif self.is_keyword:
d = 'keyword %s' % d.name
else:
d = d.get_code().replace('\n', '')
code = d.get_code().replace('\n', '')
max_len = 20
d = (code[:max_len] + '...') if len(code) > max_len + 3 else code
return d
@property
@@ -494,7 +532,7 @@ class Definition(BaseDefinition):
`module.class.function` path.
"""
if self.module_path.endswith('.py') \
and not isinstance(self._definition, pr.Module):
and not isinstance(self._definition, pr.Module):
position = '@%s' % (self.line)
else:
# is a builtin or module
@@ -537,14 +575,14 @@ class Usage(BaseDefinition):
@property
def description(self):
return "%s@%s,%s" % (self.text, self.start_pos[0], self.start_pos[1])
return "%s@%s,%s" % (self.text, self.line, self.column)
def __eq__(self, other):
return self.start_pos == other.start_pos \
return self._start_pos == other._start_pos \
and self.module_path == other.module_path
def __hash__(self):
return hash((self.start_pos, self.module_path))
return hash((self._start_pos, self.module_path))
class CallDef(object):
@@ -591,4 +629,4 @@ class CallDef(object):
def __repr__(self):
return '<%s: %s index %s>' % (type(self).__name__, self._executable,
self.index)
self.index)

View File

@@ -233,7 +233,7 @@ def _generate_code(scope, mixin_funcs={}, depth=0):
if is_in_base_classes(scope, n, exe):
continue
if inspect.isbuiltin(exe) or inspect.ismethod(exe) \
or inspect.ismethoddescriptor(exe):
or inspect.ismethoddescriptor(exe):
funcs[n] = exe
elif inspect.isclass(exe) or inspect.ismodule(exe):
classes[n] = exe
@@ -254,15 +254,15 @@ def _generate_code(scope, mixin_funcs={}, depth=0):
code += get_doc(scope)
names = set(dir(scope)) - set(['__file__', '__name__', '__doc__',
'__path__', '__package__']) \
| set(['mro'])
'__path__', '__package__']) \
| set(['mro'])
classes, funcs, stmts, members = get_scope_objects(names)
# classes
for name, cl in classes.items():
bases = (c.__name__ for c in cl.__bases__) if inspect.isclass(cl) \
else []
else []
code += 'class %s(%s):\n' % (name, ','.join(bases))
if depth == 0:
try:
@@ -321,7 +321,7 @@ def _generate_code(scope, mixin_funcs={}, depth=0):
file_type = io.TextIOWrapper
else:
file_type = types.FileType
if type(value) == file_type:
if isinstance(value, file_type):
value = 'open()'
elif name == 'None':
value = ''
@@ -336,13 +336,6 @@ def _generate_code(scope, mixin_funcs={}, depth=0):
value = '%s.%s' % (mod, value)
code += '%s = %s\n' % (name, value)
if depth == 0:
#with open('writeout.py', 'w') as f:
# f.write(code)
#import sys
#sys.stdout.write(code)
#exit()
pass
return code
@@ -378,7 +371,7 @@ def _parse_function_doc(func):
return ','.join(args)
while True:
param_str, changes = re.subn(r' ?\[([^\[\]]+)\]',
change_options, param_str)
change_options, param_str)
if changes == 0:
break
except (ValueError, AttributeError):
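The `_generate_code` hunk above bins each name in a scope using `inspect` predicates. A stand-alone sketch of that classification (the helper name and the chosen scope are illustrative):

```python
import inspect


def classify(scope, names):
    # Bin members the way _generate_code does: callables, class-likes,
    # and everything else.
    funcs, classes, members = {}, {}, {}
    for n in names:
        exe = getattr(scope, n)
        if inspect.isbuiltin(exe) or inspect.ismethod(exe) \
                or inspect.ismethoddescriptor(exe):
            funcs[n] = exe
        elif inspect.isclass(exe) or inspect.ismodule(exe):
            classes[n] = exe
        else:
            members[n] = exe
    return funcs, classes, members


import os
funcs, classes, members = classify(os, ['getcwd', 'path', 'sep'])
print(sorted(funcs), sorted(classes), sorted(members))
```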

View File

@@ -282,7 +282,7 @@ class _ModulePickling(object):
try:
with open(self._get_path('index.json')) as f:
data = json.load(f)
except IOError:
except (IOError, ValueError):
self.__index = {}
else:
# 0 means version is not defined (= always delete cache):
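The widened `except (IOError, ValueError)` matters because `json.load` raises `ValueError` (of which `json.JSONDecodeError` is a subclass on Python 3) for a corrupt file, while `IOError` only covers an unreadable one. A sketch using an in-memory file object in place of the real `index.json`:

```python
import io
import json


def load_index(f):
    try:
        return json.load(f)
    except (IOError, ValueError):
        # Unreadable or corrupt cache index: fall back to an empty one.
        return {}


print(load_index(io.StringIO('{"version": 4}')))
print(load_index(io.StringIO('not json at all')))
```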

View File

@@ -147,8 +147,8 @@ class NoErrorTokenizer(object):
and self.previous[0] in (tokenize.INDENT, tokenize.NL, None,
tokenize.NEWLINE, tokenize.DEDENT) \
and c[0] not in (tokenize.COMMENT, tokenize.INDENT,
tokenize.NL, tokenize.NEWLINE, tokenize.DEDENT):
#print c, tokenize.tok_name[c[0]]
tokenize.NL, tokenize.NEWLINE, tokenize.DEDENT):
# print c, tokenize.tok_name[c[0]]
tok = c[1]
indent = c[2][1]

View File

@@ -1,3 +1,4 @@
from _compatibility import u, encoding, is_py3k
import inspect
import time
@@ -36,17 +37,16 @@ def dbg(*args):
frm = inspect.stack()[1]
mod = inspect.getmodule(frm[0])
if not (mod.__name__ in ignored_modules):
debug_function(NOTICE, 'dbg: ' + ', '.join(str(a) for a in args))
debug_function(NOTICE, 'dbg: ' + ', '.join(u(a) for a in args))
def warning(*args):
if debug_function and enable_warning:
debug_function(WARNING, 'warning: ' + ', '.join(str(a) for a in args))
debug_function(WARNING, 'warning: ' + ', '.join(u(a) for a in args))
def speed(name):
if debug_function and enable_speed:
global start_time
now = time.time()
debug_function(SPEED, 'speed: ' + '%s %s' % (name, now - start_time))
@@ -59,7 +59,9 @@ def print_to_stdout(level, str_out):
col = Fore.RED
else:
col = Fore.YELLOW
if not is_py3k:
str_out = str_out.encode(encoding, 'replace')
print(col + str_out + Fore.RESET)
#debug_function = print_to_stdout
# debug_function = print_to_stdout

View File

@@ -27,8 +27,8 @@ DOCSTRING_PARAM_PATTERNS = [
]
DOCSTRING_RETURN_PATTERNS = [
re.compile(r'\s*:rtype:\s*([^\n]+)', re.M), # Sphinx
re.compile(r'\s*@rtype:\s*([^\n]+)', re.M), # Epydoc
re.compile(r'\s*:rtype:\s*([^\n]+)', re.M), # Sphinx
re.compile(r'\s*@rtype:\s*([^\n]+)', re.M), # Epydoc
]
REST_ROLE_PATTERN = re.compile(r':[^`]+:`([^`]+)`')
@@ -37,7 +37,7 @@ REST_ROLE_PATTERN = re.compile(r':[^`]+:`([^`]+)`')
@cache.memoize_default()
def follow_param(param):
func = param.parent_function
#print func, param, param.parent_function
# print func, param, param.parent_function
param_str = _search_param_in_docstr(func.docstr, str(param.get_name()))
user_position = (1, 0)
@@ -52,7 +52,9 @@ def follow_param(param):
user_position = (2, 0)
p = parsing.Parser(param_str, None, user_position,
no_docstr=True)
no_docstr=True)
if p.user_stmt is None:
return []
return evaluate.follow_statement(p.user_stmt)
return []
@@ -123,5 +125,7 @@ def find_return_types(func):
return []
p = parsing.Parser(type_str, None, (1, 0), no_docstr=True)
if p.user_stmt is None:
return []
p.user_stmt.parent = func
return list(evaluate.follow_statement(p.user_stmt))
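The two return-type patterns from the diff can be exercised on a tiny docstring; `re.M` lets `\s*:rtype:` match at the start of any line. The `find_return_type` helper wrapping them is illustrative:

```python
import re

# Patterns exactly as in jedi's docstrings module.
DOCSTRING_RETURN_PATTERNS = [
    re.compile(r'\s*:rtype:\s*([^\n]+)', re.M),  # Sphinx
    re.compile(r'\s*@rtype:\s*([^\n]+)', re.M),  # Epydoc
]


def find_return_type(docstr):
    # First pattern that matches wins; None if the docstring names no type.
    for pattern in DOCSTRING_RETURN_PATTERNS:
        match = pattern.search(docstr)
        if match:
            return match.group(1).strip()
    return None


doc = """Return the fully qualified name.

:rtype: str
"""
print(find_return_type(doc))
```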

View File

@@ -109,7 +109,8 @@ def get_directory_modules_for_name(mods, name):
if entry.endswith('.py'):
paths.add(d + os.path.sep + entry)
for p in paths:
for p in sorted(paths):
# make testing easier, sort it - same results on every interpreter
c = check_python_file(p)
if c is not None and c not in mods:
yield c
@@ -171,14 +172,46 @@ def search_params(param):
return []
for stmt in possible_stmts:
if not isinstance(stmt, pr.Import):
calls = _scan_statement(stmt, func_name)
for c in calls:
# no execution means that params cannot be set
call_path = c.generate_call_path()
pos = c.start_pos
scope = stmt.parent
evaluate.follow_call_path(call_path, scope, pos)
if isinstance(stmt, pr.Import):
continue
calls = _scan_statement(stmt, func_name)
for c in calls:
# no execution means that params cannot be set
call_path = list(c.generate_call_path())
pos = c.start_pos
scope = stmt.parent
# All of this is just to avoid executing certain parts
# (a speed improvement); basically we could just call
# ``follow_call_path`` on the call_path and it would
# also work.
def listRightIndex(lst, value):
return len(lst) - lst[-1::-1].index(value) - 1
# Need to take right index, because there could be a
# func usage before.
i = listRightIndex(call_path, func_name)
first, last = call_path[:i], call_path[i+1:]
if not last and not call_path.index(func_name) != i:
continue
scopes = [scope]
if first:
scopes = evaluate.follow_call_path(iter(first), scope, pos)
pos = None
for scope in scopes:
s = evaluate.find_name(scope, func_name, position=pos,
search_global=not first,
resolve_decorator=False)
c = [getattr(escope, 'base_func', None) or escope.base
for escope in s
if escope.isinstance(er.Function, er.Class)
]
if compare in c:
# only if we have the correct function we execute
# it, otherwise just ignore it.
evaluate.follow_paths(iter(last), s, scope)
return listener.param_possibilities
result = []
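The `listRightIndex` helper above exists because `list.index` returns the leftmost match, while the code needs the rightmost occurrence of the function name in the call path. Reversing the list flips leftmost into rightmost:

```python
def list_right_index(lst, value):
    # index() in the reversed list finds the last occurrence;
    # translate that back into an index in the original list.
    return len(lst) - lst[-1::-1].index(value) - 1


path = ['a', 'f', 'b', 'f', 'c']
print(path.index('f'), list_right_index(path, 'f'))
```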
@@ -191,8 +224,10 @@ def search_params(param):
func = param.get_parent_until(pr.Function)
current_module = param.get_parent_until()
func_name = str(func.name)
compare = func
if func_name == '__init__' and isinstance(func.parent, pr.Class):
func_name = str(func.parent.name)
compare = func.parent
# get the param name
if param.assignment_details:
@@ -334,7 +369,7 @@ def _check_array_additions(compare_array, module, is_list):
settings.dynamic_params_for_other_modules = False
search_names = ['append', 'extend', 'insert'] if is_list else \
['add', 'update']
['add', 'update']
comp_arr_parent = get_execution_parent(compare_array, er.Execution)
possible_stmts = []
@@ -351,7 +386,7 @@ def _check_array_additions(compare_array, module, is_list):
# literally copy the contents of a function.
if isinstance(comp_arr_parent, er.Execution):
stmt = comp_arr_parent. \
get_statement_for_position(stmt.start_pos)
get_statement_for_position(stmt.start_pos)
if stmt is None:
continue
# InstanceElements are special, because they don't get copied,
@@ -403,7 +438,9 @@ class ArrayInstance(pr.Base):
if self.var_args.start_pos != array.var_args.start_pos:
items += array.iter_content()
else:
debug.warning('ArrayInstance recursion', self.var_args)
debug.warning(
'ArrayInstance recursion',
self.var_args)
continue
items += evaluate.get_iterator_types([typ])
@@ -472,7 +509,7 @@ def usages(definitions, search_name, mods):
for used_count, name_part in imps:
i = imports.ImportPath(stmt, kill_count=count - used_count,
direct_resolve=True)
direct_resolve=True)
f = i.follow(is_goto=True)
if set(f) & set(definitions):
names.append(api_classes.Usage(name_part, stmt))
@@ -503,28 +540,30 @@ def check_flow_information(flow, search_name, pos):
ensures that `k` is a string.
"""
if not settings.dynamic_flow_information:
return None
result = []
if isinstance(flow, (pr.Scope, fast_parser.Module)) and not result:
for ass in reversed(flow.asserts):
if pos is None or ass.start_pos > pos:
continue
result = check_statement_information(ass, search_name)
result = _check_isinstance_type(ass, search_name)
if result:
break
if isinstance(flow, pr.Flow) and not result:
if flow.command in ['if', 'while'] and len(flow.inputs) == 1:
result = check_statement_information(flow.inputs[0], search_name)
result = _check_isinstance_type(flow.inputs[0], search_name)
return result
def check_statement_information(stmt, search_name):
def _check_isinstance_type(stmt, search_name):
try:
commands = stmt.get_commands()
# this might be removed if we analyze and, etc
assert len(commands) == 1
call = commands[0]
assert type(call) == pr.Call and str(call.name) == 'isinstance'
assert type(call) is pr.Call and str(call.name) == 'isinstance'
assert bool(call.execution)
# isinstance check
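`_check_isinstance_type` pattern-matches a statement with a chain of `assert`s inside `try`, so a single `except AssertionError` turns any shape mismatch into "no match". The same idiom on a fake token list (the real function walks jedi's `pr.Call` objects; this stand-in is purely illustrative):

```python
def parse_isinstance_call(tokens):
    """Return (name, type) if tokens look like isinstance(<name>, <type>)."""
    try:
        # Each assert validates one aspect of the expected shape.
        assert len(tokens) == 4
        assert tokens[0] == 'isinstance'
        assert tokens[1] == '(' and tokens[3] == ')'
        name, _, typ = tokens[2].partition(',')
        assert typ
        return name.strip(), typ.strip()
    except AssertionError:
        # Any failed check simply means "this is not an isinstance call".
        return None


print(parse_isinstance_call(['isinstance', '(', 'k, str', ')']))
print(parse_isinstance_call(['len', '(', 'k', ')']))
```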

View File

@@ -73,7 +73,7 @@ from __future__ import with_statement
import sys
import itertools
from jedi._compatibility import next, hasattr, is_py3k, unicode, reraise
from jedi._compatibility import next, hasattr, is_py3k, unicode, reraise, u
from jedi import common
from jedi import cache
from jedi import parsing_representation as pr
@@ -105,8 +105,8 @@ def get_defined_names_for_position(scope, position=None, start_scope=None):
# Instances have special rules, always return all the possible completions,
# because class variables are always valid and the `self.` variables, too.
if (not position or isinstance(scope, (er.Array, er.Instance))
or start_scope != scope
and isinstance(start_scope, (pr.Function, er.Execution))):
or start_scope != scope
and isinstance(start_scope, (pr.Function, er.Execution))):
return names
names_new = []
for n in names:
@@ -116,7 +116,7 @@ def get_defined_names_for_position(scope, position=None, start_scope=None):
def get_names_of_scope(scope, position=None, star_search=True,
include_builtin=True):
include_builtin=True):
"""
Get all completions (names) possible for the current scope.
The star search option is only here to provide an optimization. Otherwise
@@ -133,20 +133,20 @@ def get_names_of_scope(scope, position=None, star_search=True,
... ''')
>>> scope = parser.module.subscopes[0]
>>> scope
<Function: func@3-5>
<Function: func@3-4>
`get_names_of_scope` is a generator. First it yields names from
most inner scope.
>>> pairs = list(get_names_of_scope(scope))
>>> pairs[0]
(<Function: func@3-5>, [<Name: y@4,4>])
(<Function: func@3-4>, [<Name: y@4,4>])
Then it yields the names from the next outer scope. For this
example, that is the outermost scope.
>>> pairs[1]
(<SubModule: None@1-5>, [<Name: x@2,0>, <Name: func@3,4>])
(<SubModule: None@1-4>, [<Name: x@2,0>, <Name: func@3,4>])
Finally, it yields names from builtin, if `include_builtin` is
true (default).
@@ -168,17 +168,16 @@ def get_names_of_scope(scope, position=None, star_search=True,
# Ignore the Flows, because the classes and functions care for that.
# InstanceElement of Class is ignored, if it is not the start scope.
if not (scope != non_flow and scope.isinstance(pr.Class)
or scope.isinstance(pr.Flow)
or scope.isinstance(er.Instance)
and non_flow.isinstance(er.Function)
):
or scope.isinstance(pr.Flow)
or scope.isinstance(er.Instance)
and non_flow.isinstance(er.Function)):
try:
if isinstance(scope, er.Instance):
for g in scope.scope_generator():
yield g
else:
yield scope, get_defined_names_for_position(scope,
position, in_func_scope)
position, in_func_scope)
except StopIteration:
reraise(common.MultiLevelStopIteration, sys.exc_info()[2])
if scope.isinstance(pr.ForFlow) and scope.is_list_comp:
@@ -204,7 +203,7 @@ def get_names_of_scope(scope, position=None, star_search=True,
def find_name(scope, name_str, position=None, search_global=False,
is_goto=False):
is_goto=False, resolve_decorator=True):
"""
This is the search function. The most important part to debug.
`remove_statements` and `filter_statements` really are the core part of
@@ -259,13 +258,22 @@ def find_name(scope, name_str, position=None, search_global=False,
if not r.is_generated:
res_new += dynamic.search_params(r)
if not res_new:
c = r.get_commands()[0]
if c in ('*', '**'):
t = 'tuple' if c == '*' else 'dict'
res_new = [er.Instance(
find_name(builtin.Builtin.scope, t)[0])
]
if not r.assignment_details:
# this means that there are no default params,
# so just ignore it.
continue
if r.docstr:
res_new.append(r)
# Remove the statement docstr stuff for now, that has to be
# implemented with the evaluator class.
#if r.docstr:
#res_new.append(r)
scopes = follow_statement(r, seek_name=name_str)
add += remove_statements(scopes)
@@ -273,19 +281,16 @@ def find_name(scope, name_str, position=None, search_global=False,
if check_instance is not None:
# class renames
add = [er.InstanceElement(check_instance, a, True)
if isinstance(a, (er.Function, pr.Function))
else a for a in add]
if isinstance(a, (er.Function, pr.Function))
else a for a in add]
res_new += add
else:
if isinstance(r, pr.Class):
r = er.Class(r)
elif isinstance(r, pr.Function):
r = er.Function(r)
if r.isinstance(er.Function):
try:
r = r.get_decorated_func()
except er.DecoratorNotFound:
continue
if r.isinstance(er.Function) and resolve_decorator:
r = r.get_decorated_func()
res_new.append(r)
debug.dbg('sfn remove, new: %s, old: %s' % (res_new, result))
return res_new
@@ -317,8 +322,11 @@ def find_name(scope, name_str, position=None, search_global=False,
par = name.parent
exc = pr.Class, pr.Function
until = lambda: par.parent.parent.get_parent_until(exc)
is_array_assignment = False
if par.isinstance(pr.Flow):
if par is None:
pass
elif par.isinstance(pr.Flow):
if par.command == 'for':
result += handle_for_loops(par)
else:
@@ -340,6 +348,8 @@ def find_name(scope, name_str, position=None, search_global=False,
elif par.isinstance(pr.Statement):
def is_execution(calls):
for c in calls:
if isinstance(c, (unicode, str)):
continue
if c.isinstance(pr.Array):
if is_execution(c):
return True
@@ -347,7 +357,7 @@ def find_name(scope, name_str, position=None, search_global=False,
# Compare start_pos, because names may be different
# because of executions.
if c.name.start_pos == name.start_pos \
and c.execution:
and c.execution:
return True
return False
@@ -358,7 +368,7 @@ def find_name(scope, name_str, position=None, search_global=False,
if is_exe:
# filter array[3] = ...
# TODO check executions for dict contents
pass
is_array_assignment = True
else:
details = par.assignment_details
if details and details[0][1] != '=':
@@ -366,13 +376,16 @@ def find_name(scope, name_str, position=None, search_global=False,
# TODO this makes self variables non-breakable. wanted?
if isinstance(name, er.InstanceElement) \
and not name.is_class_var:
and not name.is_class_var:
no_break_scope = True
result.append(par)
else:
# TODO multi-level import non-breakable
if isinstance(par, pr.Import) and len(par.namespace) > 1:
no_break_scope = True
result.append(par)
return result, no_break_scope
return result, no_break_scope, is_array_assignment
flow_scope = scope
result = []
@@ -385,14 +398,12 @@ def find_name(scope, name_str, position=None, search_global=False,
for name in sorted(name_list, key=comparison_func, reverse=True):
p = name.parent.parent if name.parent else None
if isinstance(p, er.InstanceElement) \
and isinstance(p.var, pr.Class):
and isinstance(p.var, pr.Class):
p = p.var
if name_str == name.get_code() and p not in break_scopes:
r, no_break_scope = process(name)
r, no_break_scope, is_array_assignment = process(name)
if is_goto:
if r:
# Directly assign the name, but there has to be a
# result.
if not is_array_assignment: # shouldn't goto arr[1] =
result.append(name)
else:
result += r
@@ -408,7 +419,7 @@ def find_name(scope, name_str, position=None, search_global=False,
while flow_scope:
# TODO check if result is in scope -> no evaluation necessary
n = dynamic.check_flow_information(flow_scope, name_str,
position)
position)
if n:
result = n
break
@@ -426,7 +437,7 @@ def find_name(scope, name_str, position=None, search_global=False,
# __getattr__ / __getattribute__
result += check_getattr(nscope, name_str)
debug.dbg('sfn filter "%s" in (%s-%s): %s@%s' % (name_str, scope,
nscope, result, position))
nscope, u(result), position))
return result
def descriptor_check(result):
@@ -434,7 +445,7 @@ def find_name(scope, name_str, position=None, search_global=False,
res_new = []
for r in result:
if isinstance(scope, (er.Instance, er.Class)) \
and hasattr(r, 'get_descriptor_return'):
and hasattr(r, 'get_descriptor_return'):
# handle descriptors
with common.ignored(KeyError):
res_new += r.get_descriptor_return(scope)
@@ -466,7 +477,7 @@ def check_getattr(inst, name_str):
result = []
# str is important to lose the NamePart!
module = builtin.Builtin.scope
name = pr.Call(module, str(name_str), pr.Call.STRING, (0, 0), inst)
name = pr.Call(module, str(name_str), pr.Call.STRING, (0, 0), (0, 0), inst)
with common.ignored(KeyError):
result = inst.execute_subscope_by_name('__getattr__', [name])
if not result:
@@ -535,7 +546,7 @@ def assign_tuples(tup, results, seek_name):
func = r.get_exact_index_types
except AttributeError:
debug.warning("invalid tuple lookup %s of result %s in %s"
% (tup, results, seek_name))
% (tup, results, seek_name))
else:
with common.ignored(IndexError):
types += func(index)
@@ -554,16 +565,33 @@ def assign_tuples(tup, results, seek_name):
else:
r = eval_results(i)
# are there still tuples or is it just a Call.
if isinstance(command, pr.Array):
# These are "sub"-tuples.
result += assign_tuples(command, r, seek_name)
else:
if command.name.names[-1] == seek_name:
result += r
# LHS of tuples can be nested, so resolve it recursively
result += find_assignments(command, r, seek_name)
return result
def find_assignments(lhs, results, seek_name):
"""
Check if `seek_name` is in the left hand side `lhs` of assignment.
`lhs` can simply be a variable (`pr.Call`) or a tuple/list (`pr.Array`)
representing the following cases::
a = 1 # lhs is pr.Call
(a, b) = 2 # lhs is pr.Array
:type lhs: pr.Call
:type results: list
:type seek_name: str
"""
if isinstance(lhs, pr.Array):
return assign_tuples(lhs, results, seek_name)
elif lhs.name.names[-1] == seek_name:
return results
else:
return []
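The new `find_assignments` helper makes nested tuple targets resolve correctly by recursing through the left-hand side. A standalone sketch of the same recursion, using plain tuples and strings instead of Jedi's `pr.Array`/`pr.Call` (purely illustrative, not Jedi's API):

```python
# Recursive LHS walk: a tuple target is recursed into element by
# element; a plain name is matched directly against ``seek_name``.
def find_in_lhs(lhs, value, seek_name):
    """Return the values assigned to ``seek_name`` inside ``lhs``.

    ``lhs`` mirrors the pr.Call / pr.Array split: a str stands in for a
    plain name, a tuple for a (possibly nested) tuple/list target, and
    ``value`` is unpacked positionally alongside it.
    """
    if isinstance(lhs, tuple):          # the pr.Array case
        result = []
        for sub_lhs, sub_value in zip(lhs, value):
            result += find_in_lhs(sub_lhs, sub_value, seek_name)
        return result
    elif lhs == seek_name:              # the pr.Call case
        return [value]
    return []

# ``(a, (b, c)) = 1, (2, 3)`` -- the nesting is handled transparently:
print(find_in_lhs(('a', ('b', 'c')), (1, (2, 3)), 'c'))  # [3]
```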
@recursion.RecursionDecorator
@cache.memoize_default(default=())
def follow_statement(stmt, seek_name=None):
@@ -587,7 +615,7 @@ def follow_statement(stmt, seek_name=None):
if len(stmt.get_set_vars()) > 1 and seek_name and stmt.assignment_details:
new_result = []
for ass_commands, op in stmt.assignment_details:
new_result += assign_tuples(ass_commands[0], result, seek_name)
new_result += find_assignments(ass_commands[0], result, seek_name)
result = new_result
return set(result)
@@ -624,7 +652,7 @@ def follow_call_list(call_list, follow_array=False):
call_path = call.generate_call_path()
next(call_path, None) # the first one has been used already
result += follow_paths(call_path, r, call.parent,
position=call.start_pos)
position=call.start_pos)
elif isinstance(call, pr.ListComprehension):
loop = evaluate_list_comprehension(call)
# Caveat: parents are being changed, but this doesn't matter,
@@ -635,8 +663,8 @@ def follow_call_list(call_list, follow_array=False):
if isinstance(call, pr.Lambda):
result.append(er.Function(call))
# With things like params, these can also be functions...
elif isinstance(call, (er.Function, er.Class, er.Instance,
dynamic.ArrayInstance)):
elif isinstance(call, pr.Base) and call.isinstance(er.Function,
er.Class, er.Instance, dynamic.ArrayInstance):
result.append(call)
# The string tokens are just operations (+, -, etc.)
elif not isinstance(call, (str, unicode)):
@@ -654,8 +682,8 @@ def follow_call_list(call_list, follow_array=False):
result += follow_call(call)
elif call == '*':
if [r for r in result if isinstance(r, er.Array)
or isinstance(r, er.Instance)
and str(r.name) == 'str']:
or isinstance(r, er.Instance)
and str(r.name) == 'str']:
# if it is an iterable, ignore * operations
next(calls_iterator)
return set(result)
@@ -682,7 +710,7 @@ def follow_call_path(path, scope, position):
if isinstance(current, pr.NamePart):
# This is the first global lookup.
scopes = find_name(scope, current, position=position,
search_global=True)
search_global=True)
else:
if current.type in (pr.Call.STRING, pr.Call.NUMBER):
t = type(current.name).__name__
@@ -756,14 +784,14 @@ def follow_path(path, scope, call_scope, position=None):
if filter_private_variable(scope, call_scope, current):
return []
result = imports.strip_imports(find_name(scope, current,
position=position))
position=position))
return follow_paths(path, set(result), call_scope, position=position)
def filter_private_variable(scope, call_scope, var_name):
"""private variables begin with a double underline `__`"""
if isinstance(var_name, (str, unicode)) \
and var_name.startswith('__') and isinstance(scope, er.Instance):
if isinstance(var_name, (str, unicode)) and isinstance(scope, er.Instance)\
and var_name.startswith('__') and not var_name.endswith('__'):
s = call_scope.get_parent_until((pr.Class, er.Instance))
if s != scope and s != scope.base.base:
return True
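The tightened condition above can be read as: filter only name-mangled privates, never dunders, so methods like `__init__` stay visible in completions. A minimal sketch of just the name test (the scope comparison is left out):

```python
# Only ``__x``-style names are private; ``__x__`` dunders and single
# underscore names pass through unfiltered.
def is_private_name(var_name):
    return (isinstance(var_name, str)
            and var_name.startswith('__')
            and not var_name.endswith('__'))

print(is_private_name('__secret'))    # True  -> filtered
print(is_private_name('__init__'))    # False -> dunders stay visible
print(is_private_name('_internal'))   # False -> single underscore passes
```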
@@ -773,9 +801,15 @@ def filter_private_variable(scope, call_scope, var_name):
def goto(stmt, call_path=None):
if call_path is None:
commands = stmt.get_commands()
assert len(commands) == 1
if len(commands) == 0:
return [], ''
# Only the first command is important, the rest should basically not
# happen except in broken code (e.g. docstrings that aren't code).
call = commands[0]
call_path = list(call.generate_call_path())
if isinstance(call, (str, unicode)):
call_path = [call]
else:
call_path = list(call.generate_call_path())
scope = stmt.get_parent_until(pr.IsScope)
pos = stmt.start_pos
@@ -792,5 +826,5 @@ def goto(stmt, call_path=None):
follow_res = []
for s in scopes:
follow_res += find_name(s, search, pos,
search_global=search_global, is_goto=True)
search_global=search_global, is_goto=True)
return follow_res, search
View File
@@ -14,7 +14,7 @@ from __future__ import with_statement
import copy
import itertools
from jedi._compatibility import use_metaclass, next, hasattr
from jedi._compatibility import use_metaclass, next, hasattr, unicode
from jedi import parsing_representation as pr
from jedi import cache
from jedi import helpers
@@ -28,13 +28,6 @@ import builtin
import dynamic
class DecoratorNotFound(LookupError):
"""
Decorators are sometimes not found, if that happens, that error is raised.
"""
pass
class Executable(pr.IsScope):
"""
An instance is also an executable - because __init__ is called
@@ -45,19 +38,28 @@ class Executable(pr.IsScope):
self.var_args = var_args
def get_parent_until(self, *args, **kwargs):
return self.base.get_parent_until(*args, **kwargs)
return self.decorated.get_parent_until(*args, **kwargs)
@property
def parent(self):
return self.base.parent
return self.decorated.parent
@property
def decorated(self):
"""
Instance doesn't care about decorators and Execution overrides this
"""
return self.base
class Instance(use_metaclass(cache.CachedMetaClass, Executable)):
""" This class is used to evaluate instances. """
"""
This class is used to evaluate instances.
"""
def __init__(self, base, var_args=()):
super(Instance, self).__init__(base, var_args)
if str(base.name) in ['list', 'set'] \
and builtin.Builtin.scope == base.get_parent_until():
and builtin.Builtin.scope == base.get_parent_until():
# compare the module path with the builtin name.
self.var_args = dynamic.check_array_instances(self)
else:
@@ -70,22 +72,27 @@ class Instance(use_metaclass(cache.CachedMetaClass, Executable)):
self.is_generated = False
@cache.memoize_default()
def get_init_execution(self, func):
def _get_method_execution(self, func):
func = InstanceElement(self, func, True)
return Execution(func, self.var_args)
def get_func_self_name(self, func):
def _get_func_self_name(self, func):
"""
Returns the name of the first param in a class method (which is
normally self
normally self.
"""
try:
return func.params[0].used_vars[0].names[0]
return str(func.params[0].used_vars[0])
except IndexError:
return None
def get_self_properties(self):
@cache.memoize_default([])
def _get_self_attributes(self):
def add_self_dot_name(name):
"""
Need to copy and rewrite the name, because names are now
``instance_usage.variable`` instead of ``self.variable``.
"""
n = copy.copy(name)
n.names = n.names[1:]
names.append(InstanceElement(self, n))
@@ -97,23 +104,27 @@ class Instance(use_metaclass(cache.CachedMetaClass, Executable)):
if isinstance(sub, pr.Class):
continue
# Get the self name, if there's one.
self_name = self.get_func_self_name(sub)
if self_name:
# Check the __init__ function.
if sub.name.get_code() == '__init__':
sub = self.get_init_execution(sub)
for n in sub.get_set_vars():
# Only names with the selfname are being added.
# It is also important, that they have a len() of 2,
# because otherwise, they are just something else
if n.names[0] == self_name and len(n.names) == 2:
add_self_dot_name(n)
self_name = self._get_func_self_name(sub)
if not self_name:
continue
if sub.name.get_code() == '__init__':
# ``__init__`` is special because the params need to be injected
# this way. Therefore an execution is necessary.
if not sub.decorators:
# __init__ decorators should generally just be ignored,
# because to follow them and their self variables is too
# complicated.
sub = self._get_method_execution(sub)
for n in sub.get_set_vars():
# Only names with the selfname are being added.
# It is also important, that they have a len() of 2,
# because otherwise, they are just something else
if n.names[0] == self_name and len(n.names) == 2:
add_self_dot_name(n)
for s in self.base.get_super_classes():
if s == self.base:
# I don't know how this could happen... But saw it once.
continue
names += Instance(s).get_self_properties()
names += Instance(s)._get_self_attributes()
return names
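`_get_self_attributes` collects the names bound via `self.<name> = ...` in a class and its bases. A toy illustration of the same idea using the stdlib `ast` module (which Jedi itself does not use; it has its own parser):

```python
import ast

SOURCE = '''
class Base:
    def __init__(self):
        self.base_attr = 1

class Child(Base):
    def __init__(self):
        self.child_attr = 2
'''

def self_attributes(tree):
    # Collect every name bound via ``self.<name> = ...``: an Attribute
    # node in Store context whose value is the bare name ``self``.
    names = set()
    for node in ast.walk(tree):
        if (isinstance(node, ast.Attribute)
                and isinstance(node.ctx, ast.Store)
                and isinstance(node.value, ast.Name)
                and node.value.id == 'self'):
            names.add(node.attr)
    return names

print(sorted(self_attributes(ast.parse(SOURCE))))  # ['base_attr', 'child_attr']
```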
@@ -138,7 +149,7 @@ class Instance(use_metaclass(cache.CachedMetaClass, Executable)):
Get the instance vars of a class. This includes the vars of all
classes
"""
names = self.get_self_properties()
names = self._get_self_attributes()
class_names = self.base.get_defined_names()
for var in class_names:
@@ -150,7 +161,7 @@ class Instance(use_metaclass(cache.CachedMetaClass, Executable)):
An Instance has two scopes: The scope with self names and the class
scope. Instance variables have priority over the class scope.
"""
yield self, self.get_self_properties()
yield self, self._get_self_attributes()
names = []
class_names = self.base.get_defined_names()
@@ -168,17 +179,17 @@ class Instance(use_metaclass(cache.CachedMetaClass, Executable)):
def __getattr__(self, name):
if name not in ['start_pos', 'end_pos', 'name', 'get_imports',
'doc', 'docstr', 'asserts']:
'doc', 'docstr', 'asserts']:
raise AttributeError("Instance %s: Don't touch this (%s)!"
% (self, name))
% (self, name))
return getattr(self.base, name)
def __repr__(self):
return "<e%s of %s (var_args: %s)>" % \
(type(self).__name__, self.base, len(self.var_args or []))
(type(self).__name__, self.base, len(self.var_args or []))
class InstanceElement(use_metaclass(cache.CachedMetaClass)):
class InstanceElement(use_metaclass(cache.CachedMetaClass, pr.Base)):
"""
InstanceElement is a wrapper for any object, that is used as an instance
variable (e.g. self.variable or class methods).
@@ -197,8 +208,8 @@ class InstanceElement(use_metaclass(cache.CachedMetaClass)):
def parent(self):
par = self.var.parent
if isinstance(par, Class) and par == self.instance.base \
or isinstance(par, pr.Class) \
and par == self.instance.base.base:
or isinstance(par, pr.Class) \
and par == self.instance.base.base:
par = self.instance
elif not isinstance(par, pr.Module):
par = InstanceElement(self.instance, par, self.is_class_var)
@@ -209,7 +220,7 @@ class InstanceElement(use_metaclass(cache.CachedMetaClass)):
def get_decorated_func(self):
""" Needed because the InstanceElement should not be stripped """
func = self.var.get_decorated_func()
func = self.var.get_decorated_func(self.instance)
if func == self.var:
return self
return func
@@ -217,8 +228,13 @@ class InstanceElement(use_metaclass(cache.CachedMetaClass)):
def get_commands(self):
# Copy and modify the array.
return [InstanceElement(self.instance, command, self.is_class_var)
if not isinstance(command, unicode) else command
for command in self.var.get_commands()]
def __iter__(self):
for el in self.var.__iter__():
yield InstanceElement(self.instance, el, self.is_class_var)
def __getattr__(self, name):
return getattr(self.var, name)
@@ -287,8 +303,8 @@ class Class(use_metaclass(cache.CachedMetaClass, pr.IsScope)):
def __getattr__(self, name):
if name not in ['start_pos', 'end_pos', 'parent', 'asserts', 'docstr',
'doc', 'get_imports', 'get_parent_until', 'get_code',
'subscopes']:
'doc', 'get_imports', 'get_parent_until', 'get_code',
'subscopes']:
raise AttributeError("Don't touch this: %s of %s !" % (name, self))
return getattr(self.base, name)
@@ -305,9 +321,8 @@ class Function(use_metaclass(cache.CachedMetaClass, pr.IsScope)):
self.base_func = func
self.is_decorated = is_decorated
@property
@cache.memoize_default()
def _decorated_func(self):
def _decorated_func(self, instance=None):
"""
Returns the function, that is to be executed in the end.
This is also the places where the decorators are processed.
@@ -318,17 +333,20 @@ class Function(use_metaclass(cache.CachedMetaClass, pr.IsScope)):
if not self.is_decorated:
for dec in reversed(self.base_func.decorators):
debug.dbg('decorator:', dec, f)
dec_results = evaluate.follow_statement(dec)
dec_results = set(evaluate.follow_statement(dec))
if not len(dec_results):
debug.warning('decorator func not found: %s in stmt %s' %
(self.base_func, dec))
debug.warning('decorator not found: %s on %s' %
(dec, self.base_func))
return None
if len(dec_results) > 1:
debug.warning('multiple decorators found', self.base_func,
dec_results)
decorator = dec_results.pop()
if dec_results:
debug.warning('multiple decorators found', self.base_func,
dec_results)
# Create param array.
old_func = Function(f, is_decorated=True)
if instance is not None and decorator.isinstance(Function):
old_func = InstanceElement(instance, old_func)
instance = None
wrappers = Execution(decorator, (old_func,)).get_return_types()
if not len(wrappers):
@@ -336,7 +354,7 @@ class Function(use_metaclass(cache.CachedMetaClass, pr.IsScope)):
return None
if len(wrappers) > 1:
debug.warning('multiple wrappers found', self.base_func,
wrappers)
wrappers)
# This is here, that the wrapper gets executed.
f = wrappers[0]
@@ -345,12 +363,16 @@ class Function(use_metaclass(cache.CachedMetaClass, pr.IsScope)):
f = Function(f)
return f
def get_decorated_func(self):
if self._decorated_func is None:
raise DecoratorNotFound()
if self._decorated_func == self.base_func:
def get_decorated_func(self, instance=None):
decorated_func = self._decorated_func(instance)
if decorated_func == self.base_func:
return self
return self._decorated_func
if decorated_func is None:
# If the decorator func is not found, just ignore the decorator
# function, because sometimes decorators are just really
# complicated.
return Function(self.base_func, True)
return decorated_func
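The change above replaces the old `DecoratorNotFound` exception with a fallback: an unresolvable decorator is simply ignored so completion on the function still works. A sketch of that policy (`evaluate_decorator` is a stand-in for Jedi's real decorator evaluation, not an actual API):

```python
# If evaluating a decorator fails (returns None here), behave as if
# the function were undecorated instead of raising.
def resolve_decorated(func, evaluate_decorator):
    wrapped = evaluate_decorator(func)
    if wrapped is None:
        # Decorator not found or too complicated: ignore it.
        return func
    return wrapped

def my_func():
    return 42

def simple_wrapper(f):
    return f

# A resolvable decorator is followed; an unresolvable one falls back.
assert resolve_decorated(my_func, simple_wrapper) is my_func
assert resolve_decorated(my_func, lambda f: None) is my_func
```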
def get_magic_method_names(self):
return builtin.Builtin.magic_function_scope.get_defined_names()
@@ -363,8 +385,8 @@ class Function(use_metaclass(cache.CachedMetaClass, pr.IsScope)):
def __repr__(self):
dec = ''
if self._decorated_func != self.base_func:
dec = " is " + repr(self._decorated_func)
if self._decorated_func() != self.base_func:
dec = " is " + repr(self._decorated_func())
return "<e%s of %s%s>" % (type(self).__name__, self.base_func, dec)
@@ -388,36 +410,43 @@ class Execution(Executable):
else:
return [stmt] # just some arbitrary object
@property
@cache.memoize_default()
def decorated(self):
"""Get the decorated version of the input"""
base = self.base
if self.base.isinstance(Function):
base = base.get_decorated_func()
return base
@cache.memoize_default(default=())
@recursion.ExecutionRecursionDecorator
def get_return_types(self, evaluate_generator=False):
""" Get the return types of a function. """
base = self.decorated
stmts = []
if self.base.parent == builtin.Builtin.scope \
and not isinstance(self.base, (Generator, Array)):
func_name = str(self.base.name)
if base.parent == builtin.Builtin.scope \
and not isinstance(base, (Generator, Array)):
func_name = str(base.name)
# some implementations of builtins:
if func_name == 'getattr':
# follow the first param
try:
objects = self.follow_var_arg(0)
names = self.follow_var_arg(1)
except IndexError:
debug.warning('getattr() called with too few args.')
return []
objects = self.follow_var_arg(0)
names = self.follow_var_arg(1)
for obj in objects:
if not isinstance(obj, (Instance, Class, pr.Module)):
debug.warning('getattr called without instance')
continue
for arr_name in names:
if not isinstance(arr_name, Instance):
debug.warning('getattr called without str')
continue
if len(arr_name.var_args) != 1:
debug.warning('jedi getattr is too simple')
key = arr_name.var_args[0]
stmts += evaluate.follow_path(iter([key]), obj,
self.base)
stmts += evaluate.follow_path(iter([key]), obj, base)
return stmts
elif func_name == 'type':
# otherwise it would be a metaclass
@@ -430,7 +459,7 @@ class Execution(Executable):
func = self.var_args.get_parent_until(accept)
if func.isinstance(*accept):
cls = func.get_parent_until(accept + (pr.Class,),
include_current=False)
include_current=False)
if isinstance(cls, pr.Class):
cls = Class(cls)
su = cls.get_super_classes()
@@ -438,38 +467,35 @@ class Execution(Executable):
return [Instance(su[0])]
return []
if self.base.isinstance(Class):
if base.isinstance(Class):
# There maybe executions of executions.
stmts = [Instance(self.base, self.var_args)]
elif isinstance(self.base, Generator):
return self.base.iter_content()
stmts = [Instance(base, self.var_args)]
elif isinstance(base, Generator):
return base.iter_content()
else:
# Don't do this with exceptions, as usual, because some deeper
# exceptions could be caught - and I wouldn't know what happened.
try:
self.base.returns
except (AttributeError, DecoratorNotFound):
if hasattr(self.base, 'execute_subscope_by_name'):
base.returns # Test if it is a function
except AttributeError:
if hasattr(base, 'execute_subscope_by_name'):
try:
stmts = self.base.execute_subscope_by_name('__call__',
self.var_args)
stmts = base.execute_subscope_by_name('__call__',
self.var_args)
except KeyError:
debug.warning("no __call__ func available", self.base)
debug.warning("no __call__ func available", base)
else:
debug.warning("no execution possible", self.base)
debug.warning("no execution possible", base)
else:
stmts = self._get_function_returns(evaluate_generator)
stmts = self._get_function_returns(base, evaluate_generator)
debug.dbg('exec result: %s in %s' % (stmts, self))
return imports.strip_imports(stmts)
def _get_function_returns(self, evaluate_generator):
def _get_function_returns(self, func, evaluate_generator):
""" A normal Function execution """
# Feed the listeners, with the params.
for listener in self.base.listeners:
for listener in func.listeners:
listener.execute(self.get_params())
func = self.base.get_decorated_func()
if func.is_generator and not evaluate_generator:
return [Generator(func, self.var_args)]
else:
@@ -495,7 +521,7 @@ class Execution(Executable):
parent = self.var_args.parent
start_pos = self.var_args.start_pos
else:
parent = self.base
parent = self.decorated
start_pos = 0, 0
new_param = copy.copy(param)
@@ -523,15 +549,15 @@ class Execution(Executable):
result = []
start_offset = 0
if isinstance(self.base, InstanceElement):
if isinstance(self.decorated, InstanceElement):
# Care for self -> just exclude it and add the instance
start_offset = 1
self_name = copy.copy(self.base.params[0].get_name())
self_name.parent = self.base.instance
self_name = copy.copy(self.decorated.params[0].get_name())
self_name.parent = self.decorated.instance
result.append(self_name)
param_dict = {}
for param in self.base.params:
for param in self.decorated.params:
param_dict[str(param.get_name())] = param
# There may be calls, which don't fit all the params, this just ignores
# it.
@@ -540,7 +566,7 @@ class Execution(Executable):
non_matching_keys = []
keys_used = set()
keys_only = False
for param in self.base.params[start_offset:]:
for param in self.decorated.params[start_offset:]:
# The value and key can both be null. There, the defaults apply.
# args / kwargs will just be empty arrays / dicts, respectively.
# Wrong value count is just ignored. If you try to test cases that
@@ -556,7 +582,7 @@ class Execution(Executable):
else:
keys_used.add(str(key))
result.append(gen_param_name_copy(key_param,
values=[value]))
values=[value]))
key, value = next(var_arg_iterator, (None, None))
commands = param.get_commands()
@@ -601,7 +627,7 @@ class Execution(Executable):
if not ignore_creation and (not keys_only or commands[0] == '**'):
keys_used.add(str(key))
result.append(gen_param_name_copy(param, keys=keys,
values=values, array_type=array_type))
values=values, array_type=array_type))
if keys_only:
# sometimes param arguments are not completely written (which would
@@ -628,23 +654,32 @@ class Execution(Executable):
stmt._commands = [old]
# *args
if stmt.get_commands()[0] == '*':
arrays = evaluate.follow_call_list(stmt.get_commands()[1:])
commands = stmt.get_commands()
if not len(commands):
continue
if commands[0] == '*':
arrays = evaluate.follow_call_list(commands[1:])
# *args must be some sort of an array, otherwise -> ignore
for array in arrays:
for field_stmt in array: # yield from plz!
yield None, field_stmt
if isinstance(array, Array):
for field_stmt in array: # yield from plz!
yield None, field_stmt
elif isinstance(array, Generator):
for field_stmt in array.iter_content():
yield None, helpers.FakeStatement(field_stmt)
# **kwargs
elif stmt.get_commands()[0] == '**':
arrays = evaluate.follow_call_list(stmt.get_commands()[1:])
elif commands[0] == '**':
arrays = evaluate.follow_call_list(commands[1:])
for array in arrays:
for key_stmt, value_stmt in array.items():
# first index, is the key if syntactically correct
call = key_stmt.get_commands()[0]
if isinstance(call, pr.Name):
yield call, value_stmt
elif type(call) == pr.Call:
yield call.name, value_stmt
if isinstance(array, Array):
for key_stmt, value_stmt in array.items():
# first index, is the key if syntactically correct
call = key_stmt.get_commands()[0]
if isinstance(call, pr.Name):
yield call, value_stmt
elif type(call) is pr.Call:
yield call.name, value_stmt
# Normal arguments (including key arguments).
else:
if stmt.assignment_details:
@@ -657,9 +692,6 @@ class Execution(Executable):
return iter(common.PushBackIterator(iterate()))
def get_set_vars(self):
return self.get_defined_names()
def get_defined_names(self):
"""
Call the default method with the own instance (self implements all
@@ -667,6 +699,8 @@ class Execution(Executable):
"""
return self.get_params() + pr.Scope.get_set_vars(self)
get_set_vars = get_defined_names
@common.rethrow_uncaught
def copy_properties(self, prop):
"""
@@ -676,7 +710,7 @@ class Execution(Executable):
execution.
"""
# Copy all these lists into this local function.
attr = getattr(self.base, prop)
attr = getattr(self.decorated, prop)
objects = []
for element in attr:
if element is None:
@@ -692,7 +726,7 @@ class Execution(Executable):
def __getattr__(self, name):
if name not in ['start_pos', 'end_pos', 'imports', '_sub_module']:
raise AttributeError('Tried to access %s: %s. Why?' % (name, self))
return getattr(self.base, name)
return getattr(self.decorated, name)
@cache.memoize_default()
@common.rethrow_uncaught
@@ -734,7 +768,7 @@ class Execution(Executable):
def __repr__(self):
return "<%s of %s>" % \
(type(self).__name__, self.base)
(type(self).__name__, self.decorated)
class Generator(use_metaclass(cache.CachedMetaClass, pr.Base)):
@@ -754,9 +788,11 @@ class Generator(use_metaclass(cache.CachedMetaClass, pr.Base)):
executes_generator = ('__next__', 'send')
for n in ('close', 'throw') + executes_generator:
name = pr.Name(builtin.Builtin.scope, [(n, none_pos)],
none_pos, none_pos)
none_pos, none_pos)
if n in executes_generator:
name.parent = self
else:
name.parent = builtin.Builtin.scope
names.append(name)
debug.dbg('generator names', names)
return names
@@ -771,10 +807,10 @@ class Generator(use_metaclass(cache.CachedMetaClass, pr.Base)):
def __getattr__(self, name):
if name not in ['start_pos', 'end_pos', 'parent', 'get_imports',
'asserts', 'doc', 'docstr', 'get_parent_until', 'get_code',
'subscopes']:
'asserts', 'doc', 'docstr', 'get_parent_until', 'get_code',
'subscopes']:
raise AttributeError("Accessing %s of %s is not allowed."
% (self, name))
% (self, name))
return getattr(self.func, name)
def __repr__(self):
@@ -802,9 +838,12 @@ class Array(use_metaclass(cache.CachedMetaClass, pr.Base)):
# otherwise it just ignores the index (e.g. [1+1]).
index = index_possibilities[0]
if isinstance(index, Instance) \
and str(index.name) in ['int', 'str'] \
and len(index.var_args) == 1:
with common.ignored(KeyError, IndexError):
and str(index.name) in ['int', 'str'] \
and len(index.var_args) == 1:
# TODO this is just very hackish and a lot of use cases are
# being ignored
with common.ignored(KeyError, IndexError,
UnboundLocalError, TypeError):
return self.get_exact_index_types(index.var_args[0])
result = list(self._follow_values(self._array.values))
View File
@@ -55,7 +55,7 @@ class Module(pr.Simple, pr.Module):
def __repr__(self):
return "<%s: %s@%s-%s>" % (type(self).__name__, self.name,
self.start_pos[0], self.end_pos[0])
self.start_pos[0], self.end_pos[0])
class CachedFastParser(type):
@@ -67,7 +67,7 @@ class CachedFastParser(type):
pi = cache.parser_cache.get(module_path, None)
if pi is None or isinstance(pi.parser, parsing.Parser):
p = super(CachedFastParser, self).__call__(source, module_path,
user_position)
user_position)
else:
p = pi.parser # pi is a `cache.ParserCacheItem`
p.update(source, user_position)
@@ -192,7 +192,12 @@ class FastParser(use_metaclass(CachedFastParser)):
self.module = Module(self.parsers)
self.reset_caches()
self._parse(code)
try:
self._parse(code)
except:
# FastParser is cached, be careful with exceptions
self.parsers[:] = []
raise
@property
def user_scope(self):
@@ -204,7 +209,7 @@ class FastParser(use_metaclass(CachedFastParser)):
self._user_scope = p.user_scope
if isinstance(self._user_scope, pr.SubModule) \
or self._user_scope is None:
or self._user_scope is None:
self._user_scope = self.module
return self._user_scope
@@ -221,7 +226,13 @@ class FastParser(use_metaclass(CachedFastParser)):
self.user_position = user_position
self.reset_caches()
self._parse(code)
try:
self._parse(code)
except:
# FastParser is cached, be careful with exceptions
self.parsers[:] = []
raise
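The try/except added to `FastParser` guards a general pitfall: a cached object that mutates its own state during a fallible operation must wipe the partial state before re-raising, or later cache hits see a half-built parser list. A generic sketch of the pattern:

```python
# Mimics the FastParser guard: on failure, clear the partially built
# state because the instance itself stays cached and will be reused.
class CachedParserLike:
    def __init__(self):
        self.parsers = []

    def update(self, chunks):
        try:
            self._parse(chunks)
        except Exception:
            # Be careful with exceptions: the object stays cached.
            self.parsers[:] = []
            raise

    def _parse(self, chunks):
        for c in chunks:
            if c is None:
                raise ValueError('unparseable chunk')
            self.parsers.append(c)

p = CachedParserLike()
try:
    p.update(['a', 'b', None])
except ValueError:
    pass
print(p.parsers)  # [] -> no stale partial state survives the failure
```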
def _scan_user_scope(self, sub_module):
""" Scan with self.user_position. """
@@ -324,11 +335,11 @@ class FastParser(use_metaclass(CachedFastParser)):
if self.current_node is not None:
self.current_node = \
self.current_node.parent_until_indent(indent)
self.current_node.parent_until_indent(indent)
nodes += self.current_node.old_children
# check if code_part has already been parsed
#print '#'*45,line_offset, p and p.end_pos, '\n', code_part
# print '#'*45,line_offset, p and p.end_pos, '\n', code_part
p, node = self._get_parser(code_part, code[start:],
line_offset, nodes, not is_first)
@@ -351,12 +362,12 @@ class FastParser(use_metaclass(CachedFastParser)):
else:
if node is None:
self.current_node = \
self.current_node.add_parser(p, code_part)
self.current_node.add_parser(p, code_part)
else:
self.current_node = self.current_node.add_node(node)
if self.current_node.parent and (isinstance(p.user_scope,
pr.SubModule) or p.user_scope is None) \
pr.SubModule) or p.user_scope is None) \
and self.user_position \
and p.start_pos <= self.user_position < p.end_pos:
p.user_scope = self.current_node.parent.content_scope
@@ -365,7 +376,7 @@ class FastParser(use_metaclass(CachedFastParser)):
is_first = False
else:
#print '#'*45, line_offset, p.end_pos, 'theheck\n', code_part
# print '#'*45, line_offset, p.end_pos, 'theheck\n', code_part
pass
line_offset += lines
@@ -378,7 +389,7 @@ class FastParser(use_metaclass(CachedFastParser)):
self.module.end_pos = self.parsers[-1].end_pos
#print(self.parsers[0].module.get_code())
# print(self.parsers[0].module.get_code())
del code
def _get_parser(self, code, parser_code, line_offset, nodes, no_docstr):
@@ -408,7 +419,7 @@ class FastParser(use_metaclass(CachedFastParser)):
# It's important to take care of the whole user
# positioning stuff, if no reparsing is being done.
p.user_stmt = m.get_statement_for_position(
self.user_position, include_imports=True)
self.user_position, include_imports=True)
if p.user_stmt:
p.user_scope = p.user_stmt.parent
else:
View File
@@ -123,5 +123,15 @@ def search_function_definition(stmt, pos):
while isinstance(call.parent, pr.Call):
call = call.parent
arr.parent.execution = None
return call, index, False
return call if call.type == pr.Call.NAME else None, index, False
return None, 0, False
class FakeStatement(pr.Statement):
class SubModule():
line_offset = 0
def __init__(self, content):
cls = type(self)
p = 0, 0
super(cls, self).__init__(cls.SubModule, [], [], [content], p, p)


@@ -56,10 +56,12 @@ class ImportPath(pr.Base):
GlobalNamespace = _GlobalNamespace()
def __init__(self, import_stmt, is_like_search=False, kill_count=0,
direct_resolve=False):
direct_resolve=False, is_just_from=False):
self.import_stmt = import_stmt
self.is_like_search = is_like_search
self.direct_resolve = direct_resolve
self.is_just_from = is_just_from
self.is_partial_import = bool(max(0, kill_count))
path = import_stmt.get_parent_until().path
self.file_path = os.path.dirname(path) if path is not None else None
@@ -69,7 +71,7 @@ class ImportPath(pr.Base):
if import_stmt.from_ns:
self.import_path += import_stmt.from_ns.names
if import_stmt.namespace:
if self.is_nested_import() and not direct_resolve:
if self._is_nested_import() and not direct_resolve:
self.import_path.append(import_stmt.namespace.names[0])
else:
self.import_path += import_stmt.namespace.names
@@ -80,7 +82,7 @@ class ImportPath(pr.Base):
def __repr__(self):
return '<%s: %s>' % (type(self).__name__, self.import_stmt)
def is_nested_import(self):
def _is_nested_import(self):
"""
This checks for the special case of nested imports, without aliases and
from statement::
@@ -88,12 +90,12 @@ class ImportPath(pr.Base):
import foo.bar
"""
return not self.import_stmt.alias and not self.import_stmt.from_ns \
and len(self.import_stmt.namespace.names) > 1 \
and not self.direct_resolve
and len(self.import_stmt.namespace.names) > 1 \
and not self.direct_resolve
def get_nested_import(self, parent):
def _get_nested_import(self, parent):
"""
See documentation of `self.is_nested_import`.
See documentation of `self._is_nested_import`.
Generates an Import statement, that can be used to fake nested imports.
"""
i = self.import_stmt
@@ -111,30 +113,41 @@ class ImportPath(pr.Base):
names = []
for scope in self.follow():
if scope is ImportPath.GlobalNamespace:
if self.import_stmt.relative_count == 0:
names += self.get_module_names()
if self._is_relative_import() == 0:
names += self._get_module_names()
if self.file_path is not None:
path = os.path.abspath(self.file_path)
for i in range(self.import_stmt.relative_count - 1):
path = os.path.dirname(path)
names += self.get_module_names([path])
names += self._get_module_names([path])
if self.import_stmt.relative_count:
rel_path = self.get_relative_path() + '/__init__.py'
if self._is_relative_import():
rel_path = self._get_relative_path() + '/__init__.py'
with common.ignored(IOError):
m = modules.Module(rel_path)
names += m.parser.module.get_defined_names()
else:
if on_import_stmt and isinstance(scope, pr.Module) \
and scope.path.endswith('__init__.py'):
and scope.path.endswith('__init__.py'):
pkg_path = os.path.dirname(scope.path)
names += self.get_module_names([pkg_path])
paths = self._namespace_packages(pkg_path, self.import_path)
names += self._get_module_names([pkg_path] + paths)
if self.is_just_from:
# In the case of an import like `from x.` we don't need to
# add all the variables.
if ['os'] == self.import_path and not self._is_relative_import():
# os.path is a hardcoded exception, because it's a
# ``sys.modules`` modification.
p = (0, 0)
names.append(pr.Name(self.GlobalNamespace, [('path', p)],
p, p, self.import_stmt))
continue
for s, scope_names in evaluate.get_names_of_scope(scope,
include_builtin=False):
include_builtin=False):
for n in scope_names:
if self.import_stmt.from_ns is None \
or self.is_partial_import:
or self.is_partial_import:
# from_ns must be defined to access module
# values plus a partial import means that there
# is something after the import, which
@@ -144,27 +157,34 @@ class ImportPath(pr.Base):
names.append(n)
return names
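The hardcoded ``os.path`` exception above exists because ``os`` is not a real package: at import time it picks the platform path module and registers it as ``sys.modules['os.path']`` itself, so ``os.path`` behaves like a submodule even though no ``os/path.py`` file exists. This can be verified directly:

```python
import os
import sys

# `os` has no os/path.py submodule; importing it assigns the
# platform module (posixpath or ntpath) into sys.modules itself.
assert 'os.path' in sys.modules
assert sys.modules['os.path'] is os.path
assert os.path.__name__ in ('posixpath', 'ntpath')
```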
def get_module_names(self, search_path=None):
def _get_module_names(self, search_path=None):
"""
Get the names of all modules in the search_path. This means file names
and not names defined in the files.
"""
if not search_path:
search_path = self.sys_path_with_modifications()
def generate_name(name):
return pr.Name(self.GlobalNamespace, [(name, inf_pos)],
inf_pos, inf_pos, self.import_stmt)
names = []
inf_pos = float('inf'), float('inf')
# add builtin module names
if search_path is None:
names += [generate_name(name) for name in sys.builtin_module_names]
if search_path is None:
search_path = self._sys_path_with_modifications()
for module_loader, name, is_pkg in pkgutil.iter_modules(search_path):
inf_pos = (float('inf'), float('inf'))
names.append(pr.Name(self.GlobalNamespace, [(name, inf_pos)],
inf_pos, inf_pos, self.import_stmt))
names.append(generate_name(name))
return names
def sys_path_with_modifications(self):
def _sys_path_with_modifications(self):
# If you edit e.g. gunicorn, there will be imports like this:
# `from gunicorn import something`. But gunicorn is not in the
# sys.path. Therefore check whether gunicorn is a parent directory, #56.
parts = self.file_path.split(os.path.sep)
in_path = []
if self.import_path:
parts = self.file_path.split(os.path.sep)
for i, p in enumerate(parts):
if p == self.import_path[0]:
new = os.path.sep.join(parts[:i])
@@ -195,19 +215,26 @@ class ImportPath(pr.Base):
# follow the rest of the import (not FS -> classes, functions)
if len(rest) > 1 or rest and self.is_like_search:
scopes = []
if ['os', 'path'] == self.import_path[:2] \
and not self._is_relative_import():
# This is a huge exception, we follow a nested import
# ``os.path``, because it's a very important one in Python
# that is being achieved by messing with ``sys.modules`` in
# ``os``.
scopes = evaluate.follow_path(iter(rest), scope, scope)
elif rest:
if is_goto:
scopes = itertools.chain.from_iterable(
evaluate.find_name(s, rest[0], is_goto=True)
for s in scopes)
evaluate.find_name(s, rest[0], is_goto=True)
for s in scopes)
else:
scopes = itertools.chain.from_iterable(
evaluate.follow_path(iter(rest), s, s)
for s in scopes)
evaluate.follow_path(iter(rest), s, s)
for s in scopes)
scopes = list(scopes)
if self.is_nested_import():
scopes.append(self.get_nested_import(scope))
if self._is_nested_import():
scopes.append(self._get_nested_import(scope))
else:
scopes = [ImportPath.GlobalNamespace]
debug.dbg('after import', scopes)
@@ -215,12 +242,44 @@ class ImportPath(pr.Base):
evaluate.follow_statement.pop_stmt()
return scopes
def get_relative_path(self):
def _is_relative_import(self):
return bool(self.import_stmt.relative_count)
def _get_relative_path(self):
path = self.file_path
for i in range(self.import_stmt.relative_count - 1):
path = os.path.dirname(path)
return path
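``_get_relative_path`` computes the directory a relative import resolves against: one ``os.path.dirname`` step per leading dot beyond the first. A standalone sketch of the same arithmetic (the name ``relative_base`` is hypothetical, not part of jedi):

```python
import os.path

def relative_base(file_dir, relative_count):
    """Directory a relative import resolves against:
    `from . import x`  -> the file's own package directory,
    `from .. import x` -> one directory up, and so on."""
    path = file_dir
    for _ in range(relative_count - 1):
        path = os.path.dirname(path)
    return path
```

So for a file in ``/proj/pkg/sub``, ``from .. import x`` (``relative_count == 2``) is looked up in ``/proj/pkg``.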
def _namespace_packages(self, found_path, import_path):
"""
Returns a list of paths of possible ``pkgutil``/``pkg_resources``
namespaces. If the package is not a "namespace package", an empty list is
returned.
"""
def follow_path(directories, paths):
try:
directory = next(directories)
except StopIteration:
return paths
else:
deeper_paths = []
for p in paths:
new = os.path.join(p, directory)
if os.path.isdir(new) and new != found_path:
deeper_paths.append(new)
return follow_path(directories, deeper_paths)
with open(os.path.join(found_path, '__init__.py')) as f:
content = f.read()
# these are strings that need to be used for namespace packages,
# the first one is ``pkg_resources``, the second ``pkgutil``.
options = 'declare_namespace(__name__)', 'extend_path(__path__'
if options[0] in content or options[1] in content:
# It is a namespace, now try to find the rest of the modules.
return follow_path(iter(import_path), sys.path)
return []
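The check above scans ``__init__.py`` for the two idioms that classic (pre-PEP 420) namespace packages use to declare themselves. A minimal standalone version of that test (``is_namespace_init`` is an illustrative name, not part of jedi):

```python
def is_namespace_init(content):
    """True if an __init__.py body declares a classic namespace
    package via pkg_resources or pkgutil."""
    markers = (
        'declare_namespace(__name__)',  # pkg_resources idiom
        'extend_path(__path__',         # pkgutil idiom
    )
    return any(marker in content for marker in markers)
```

Only when this returns ``True`` does ``_namespace_packages`` go on to collect matching sibling directories from ``sys.path``.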
def _follow_file_system(self):
"""
Find a module with a path (of the module, like usb.backend.libusb10).
@@ -230,12 +289,11 @@ class ImportPath(pr.Base):
path = None
if ns_path:
path = ns_path
elif self.import_stmt.relative_count:
path = self.get_relative_path()
elif self._is_relative_import():
path = self._get_relative_path()
global imports_processed
imports_processed += 1
importing = None
if path is not None:
importing = find_module(string, [path])
else:
@@ -245,16 +303,18 @@ class ImportPath(pr.Base):
sys.path, temp = sys_path_mod, sys.path
try:
importing = find_module(string)
except ImportError:
finally:
sys.path = temp
raise
sys.path = temp
return importing
if self.file_path:
sys_path_mod = list(self.sys_path_with_modifications())
sys_path_mod.insert(0, self.file_path)
sys_path_mod = list(self._sys_path_with_modifications())
module = self.import_stmt.get_parent_until()
if not module.has_explicit_absolute_import:
# If the module explicitly asks for absolute imports,
# there's probably a bogus local one.
sys_path_mod.insert(0, self.file_path)
else:
sys_path_mod = list(modules.get_sys_path())
@@ -268,16 +328,29 @@ class ImportPath(pr.Base):
try:
current_namespace = follow_str(current_namespace[1], s)
except ImportError:
if self.import_stmt.relative_count \
and len(self.import_path) == 1:
_continue = False
if self._is_relative_import() and len(self.import_path) == 1:
# follow `from . import some_variable`
rel_path = self.get_relative_path()
rel_path = self._get_relative_path()
with common.ignored(ImportError):
current_namespace = follow_str(rel_path, '__init__')
if current_namespace[1]:
rest = self.import_path[i:]
else:
module_not_found()
elif current_namespace[2]: # is a package
for n in self._namespace_packages(current_namespace[1],
self.import_path[:i]):
try:
current_namespace = follow_str(n, s)
if current_namespace[1]:
_continue = True
break
except ImportError:
pass
if not _continue:
if current_namespace[1]:
rest = self.import_path[i:]
break
else:
module_not_found()
if current_namespace == (None, None, False):
module_not_found()

171
jedi/interpret.py Normal file

@@ -0,0 +1,171 @@
"""
Module to handle interpreted Python objects.
"""
import itertools
import tokenize
from jedi import parsing_representation as pr
class ObjectImporter(object):
"""
Import objects in "raw" namespace such as :func:`locals`.
"""
def __init__(self, scope):
self.scope = scope
count = itertools.count()
self._genname = lambda: '*jedi-%s*' % next(count)
"""
Generate unique variable names to avoid name collision.
To avoid colliding with already defined names, the generated
names are not valid Python identifiers.
"""
def import_raw_namespace(self, raw_namespace):
"""
Import interpreted Python objects in a namespace.
Three kinds of objects are treated here.
1. Functions and classes. The objects imported like this::
from os.path import join
2. Modules. The objects imported like this::
import os
3. Instances. The objects created like this::
from datetime import datetime
dt = datetime(2013, 1, 1)
:type raw_namespace: dict
:arg raw_namespace: e.g., the dict given by `locals`
"""
scope = self.scope
for (variable, obj) in raw_namespace.items():
objname = getattr(obj, '__name__', None)
# Import functions and classes
module = getattr(obj, '__module__', None)
if module and objname:
fakeimport = self.make_fakeimport(module, objname, variable)
scope.add_import(fakeimport)
continue
# Import modules
if getattr(obj, '__file__', None) and objname:
fakeimport = self.make_fakeimport(objname)
scope.add_import(fakeimport)
continue
# Import instances
objclass = getattr(obj, '__class__', None)
module = getattr(objclass, '__module__', None)
if objclass and module:
alias = self._genname()
fakeimport = self.make_fakeimport(module, objclass.__name__,
alias)
fakestmt = self.make_fakestatement(variable, alias, call=True)
scope.add_import(fakeimport)
scope.add_statement(fakestmt)
continue
def make_fakeimport(self, module, variable=None, alias=None):
"""
Make a fake import object.
The following statements are created depending on what parameters
are given:
- only `module`: ``import <module>``
- `module` and `variable`: ``from <module> import <variable>``
- all: ``from <module> import <variable> as <alias>``
:type module: str
:arg module: ``<module>`` part in ``from <module> import ...``
:type variable: str
:arg variable: ``<variable>`` part in ``from ... import <variable>``
:type alias: str
:arg alias: ``<alias>`` part in ``... import ... as <alias>``.
:rtype: :class:`parsing_representation.Import`
"""
submodule = self.scope._sub_module
if variable:
varname = pr.Name(
module=submodule,
names=[(variable, (-1, 0))],
start_pos=(-1, 0),
end_pos=(None, None))
else:
varname = None
modname = pr.Name(
module=submodule,
names=[(module, (-1, 0))],
start_pos=(-1, 0),
end_pos=(None, None))
if alias:
aliasname = pr.Name(
module=submodule,
names=[(alias, (-1, 0))],
start_pos=(-1, 0),
end_pos=(None, None))
else:
aliasname = None
if varname:
fakeimport = pr.Import(
module=submodule,
namespace=varname,
from_ns=modname,
alias=aliasname,
start_pos=(-1, 0),
end_pos=(None, None))
else:
fakeimport = pr.Import(
module=submodule,
namespace=modname,
alias=aliasname,
start_pos=(-1, 0),
end_pos=(None, None))
return fakeimport
def make_fakestatement(self, lhs, rhs, call=False):
"""
Make a fake statement object that represents ``lhs = rhs``.
:type call: bool
:arg call: When `call` is true, make a fake statement that represents
``lhs = rhs()``.
:rtype: :class:`parsing_representation.Statement`
"""
submodule = self.scope._sub_module
lhsname = pr.Name(
module=submodule,
names=[(lhs, (0, 0))],
start_pos=(0, 0),
end_pos=(None, None))
rhsname = pr.Name(
module=submodule,
names=[(rhs, (0, 0))],
start_pos=(0, 0),
end_pos=(None, None))
token_list = [lhsname, (tokenize.OP, '=', (0, 0)), rhsname]
if call:
token_list.extend([
(tokenize.OP, '(', (0, 0)),
(tokenize.OP, ')', (0, 0)),
])
return pr.Statement(
module=submodule,
set_vars=[lhsname],
used_vars=[rhsname],
token_list=token_list,
start_pos=(0, 0),
end_pos=(None, None))
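Taken together, ``import_raw_namespace`` maps each live object back to source it could have come from. A hypothetical helper rendering the same three-way decision as plain source lines (the function and its output format are illustrative only; jedi actually builds a hidden aliased import plus an assignment statement rather than a literal constructor call):

```python
def fake_source_for(variable, obj):
    """Source line that stands in for one (name, object) pair
    from a raw namespace such as locals()."""
    name = getattr(obj, '__name__', None)
    module = getattr(obj, '__module__', None)
    if module and name:                           # functions / classes
        return 'from %s import %s as %s' % (module, name, variable)
    if getattr(obj, '__file__', None) and name:   # modules
        return 'import %s' % name
    cls = getattr(obj, '__class__', None)         # instances
    if cls and getattr(cls, '__module__', None):
        return '%s = %s()' % (variable, cls.__name__)
    return None
```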


@@ -4,6 +4,7 @@ import pydoc
import keyword
from jedi._compatibility import is_py3k
from jedi import parsing_representation as pr
from jedi import common
import builtin
@@ -19,7 +20,7 @@ else:
keys = keyword.kwlist + ['None', 'False', 'True']
def get_keywords(string='', pos=(0, 0), all=False):
def keywords(string='', pos=(0, 0), all=False):
if all:
return set([Keyword(k, pos) for k in keys])
if string in keys:
@@ -27,6 +28,15 @@ def get_keywords(string='', pos=(0, 0), all=False):
return set()
def keyword_names(*args, **kwargs):
kwds = []
for k in keywords(*args, **kwargs):
start = k.start_pos
end = start[0], start[1] + len(k.name)
kwds.append(pr.Name(k.parent, [(k.name, start)], start, end, k))
return kwds
def get_operator(string, pos):
return Keyword(string, pos)


@@ -45,7 +45,7 @@ class CachedModule(object):
""" get the parser lazy """
if self._parser is None:
self._parser = cache.load_module(self.path, self.name) \
or self._load_module()
or self._load_module()
return self._parser
def _get_source(self):
@@ -95,13 +95,13 @@ class ModuleWithCursor(Module):
def __init__(self, path, source, position):
super(ModuleWithCursor, self).__init__(path, source)
self.position = position
self.source = source
self._path_until_cursor = None
# these two are only used because there is no nonlocal in Python 2
self._line_temp = None
self._relevant_temp = None
self.source = source
@property
def parser(self):
""" get the parser lazy """
@@ -113,44 +113,44 @@ class ModuleWithCursor(Module):
# Also, the position is here important (which will not be used by
# default), therefore fill the cache here.
self._parser = fast_parser.FastParser(self.source, self.path,
self.position)
self.position)
# don't pickle that module, because it's changing fast
cache.save_module(self.path, self.name, self._parser,
pickling=False)
pickling=False)
return self._parser
def get_path_until_cursor(self):
""" Get the path under the cursor. """
result = self._get_path_until_cursor()
self._start_cursor_pos = self._line_temp + 1, self._column_temp
return result
if self._path_until_cursor is None: # small caching
self._path_until_cursor, self._start_cursor_pos = \
self._get_path_until_cursor(self.position)
return self._path_until_cursor
def _get_path_until_cursor(self, start_pos=None):
def fetch_line():
line = self.get_line(self._line_temp)
if self._is_first:
self._is_first = False
self._line_length = self._column_temp
line = line[:self._column_temp]
line = self._first_line
else:
line = self.get_line(self._line_temp)
self._line_length = len(line)
line = line + '\n'
# add lines with a backslash at the end
while 1:
while True:
self._line_temp -= 1
last_line = self.get_line(self._line_temp)
#print self._line_temp, repr(last_line)
if last_line and last_line[-1] == '\\':
line = last_line[:-1] + ' ' + line
self._line_length = len(last_line)
else:
break
return line[::-1]
self._is_first = True
if start_pos is None:
self._line_temp = self.position[0]
self._column_temp = self.position[1]
else:
self._line_temp, self._column_temp = start_pos
self._line_temp, self._column_temp = start_cursor = start_pos
self._first_line = self.get_line(self._line_temp)[:self._column_temp]
open_brackets = ['(', '[', '{']
close_brackets = [')', ']', '}']
@@ -162,7 +162,7 @@ class ModuleWithCursor(Module):
last_type = None
try:
for token_type, tok, start, end, line in gen:
#print 'tok', token_type, tok, force_point
# print 'tok', token_type, tok, force_point
if last_type == token_type == tokenize.NAME:
string += ' '
@@ -187,8 +187,13 @@ class ModuleWithCursor(Module):
elif token_type == tokenize.NUMBER:
pass
else:
self._column_temp = self._line_length - end[1]
break
x = start_pos[0] - end[0] + 1
l = self.get_line(x)
l = self._first_line if x == start_pos[0] else l
start_cursor = x, len(l) - end[1]
self._column_temp = self._line_length - end[1]
string += tok
last_type = token_type
@@ -196,46 +201,65 @@ class ModuleWithCursor(Module):
debug.warning("Tokenize couldn't finish", sys.exc_info)
# string can still contain spaces at the end
return string[::-1].strip()
return string[::-1].strip(), start_cursor
def get_path_under_cursor(self):
"""
Return the path under the cursor. If part of the path continues
after the cursor, it is appended to the part before it.
"""
return self.get_path_until_cursor() + self.get_path_after_cursor()
def get_path_after_cursor(self):
line = self.get_line(self.position[0])
after = re.search("[\w\d]*", line[self.position[1]:]).group(0)
return self.get_path_until_cursor() + after
return re.search("[\w\d]*", line[self.position[1]:]).group(0)
def get_operator_under_cursor(self):
line = self.get_line(self.position[0])
after = re.match("[^\w\s]+", line[self.position[1]:])
before = re.match("[^\w\s]+", line[:self.position[1]][::-1])
return (before.group(0) if before is not None else '') \
+ (after.group(0) if after is not None else '')
+ (after.group(0) if after is not None else '')
def get_context(self):
def get_context(self, yield_positions=False):
pos = self._start_cursor_pos
while pos > (1, 0):
while True:
# remove non important white space
line = self.get_line(pos[0])
while pos[1] > 0 and line[pos[1] - 1].isspace():
pos = pos[0], pos[1] - 1
while True:
if pos[1] == 0:
line = self.get_line(pos[0] - 1)
if line and line[-1] == '\\':
pos = pos[0] - 1, len(line) - 1
continue
else:
break
if line[pos[1] - 1].isspace():
pos = pos[0], pos[1] - 1
else:
break
try:
yield self._get_path_until_cursor(start_pos=pos)
result, pos = self._get_path_until_cursor(start_pos=pos)
if yield_positions:
yield pos
else:
yield result
except StopIteration:
yield ''
pos = self._line_temp, self._column_temp
while True:
yield ''
if yield_positions:
yield None
else:
yield ''
def get_line(self, line_nr):
if not self._line_cache:
self._line_cache = self.source.splitlines()
if not self.source: # ''.splitlines() == []
self._line_cache = [self.source]
if self.source:
if self.source[-1] == '\n':
self._line_cache.append('')
else: # ''.splitlines() == []
self._line_cache = ['']
if line_nr == 0:
# This is a fix for the zeroth line. We need a newline there, for
@@ -248,6 +272,9 @@ class ModuleWithCursor(Module):
except IndexError:
raise StopIteration()
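The cache rebuild above compensates for two ``str.splitlines`` quirks noted in the comments: an empty source yields no lines at all, and a trailing newline does not produce a final empty line. The corrected behaviour as a standalone sketch (``split_lines`` is a hypothetical name):

```python
def split_lines(source):
    """Split source the way an editor counts lines:
    '' is one empty line, and a trailing newline starts a new one."""
    if not source:                 # ''.splitlines() == []
        return ['']
    lines = source.splitlines()
    if source[-1] == '\n':         # 'a\n'.splitlines() == ['a']
        lines.append('')
    return lines
```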
def get_position_line(self):
return self.get_line(self.position[0])[:self.position[1]]
def get_sys_path():
def check_virtual_env(sys_path):
@@ -327,7 +354,9 @@ def sys_path_with_modifications(module):
return sys_path
if module.path is None:
return [] # support for modules without a path is intentionally bad.
# Support for modules without a path is bad, therefore return the
# normal path.
return list(get_sys_path())
curdir = os.path.abspath(os.curdir)
with common.ignored(OSError):
@@ -373,7 +402,7 @@ def source_to_unicode(source, encoding=None):
first_two_lines = re.match(r'(?:[^\n]*\n){0,2}', str(source)).group(0)
possible_encoding = re.search(r"coding[=:]\s*([-\w.]+)",
first_two_lines)
first_two_lines)
if possible_encoding:
return possible_encoding.group(1)
else:


@@ -46,8 +46,8 @@ class Parser(object):
:param top_module: Use this module as a parent instead of `self.module`.
"""
def __init__(self, source, module_path=None, user_position=None,
no_docstr=False, offset=(0, 0), is_fast_parser=None,
top_module=None):
no_docstr=False, offset=(0, 0), is_fast_parser=None,
top_module=None):
self.user_position = user_position
self.user_scope = None
self.user_stmt = None
@@ -56,8 +56,8 @@ class Parser(object):
self.start_pos = self.end_pos = 1 + offset[0], offset[1]
# initialize global Scope
self.module = pr.SubModule(module_path, self.start_pos, top_module)
self.scope = self.module
self.current = (None, None)
self._scope = self.module
self._current = (None, None)
source = source + '\n' # end with \n, because the parser needs it
buf = StringIO(source)
@@ -78,10 +78,10 @@ class Parser(object):
# because of `self.module.used_names`.
d.parent = self.module
if self.current[0] in (tokenize.NL, tokenize.NEWLINE):
if self._current[0] in (tokenize.NL, tokenize.NEWLINE):
# we added a newline before, so we need to "remove" it again.
self.end_pos = self._gen.previous[2]
if self.current[0] == tokenize.INDENT:
elif self._current[0] == tokenize.INDENT:
self.end_pos = self._gen.last_previous[2]
self.start_pos = self.module.start_pos
@@ -140,6 +140,7 @@ class Parser(object):
append((tok, self.start_pos))
first_pos = self.start_pos
while True:
end_pos = self.end_pos
token_type, tok = self.next()
if tok != '.':
break
@@ -148,8 +149,7 @@ class Parser(object):
break
append((tok, self.start_pos))
n = pr.Name(self.module, names, first_pos, self.end_pos) if names \
else None
n = pr.Name(self.module, names, first_pos, end_pos) if names else None
return n, token_type, tok
def _parse_import_list(self):
@@ -171,16 +171,16 @@ class Parser(object):
imports = []
brackets = False
continue_kw = [",", ";", "\n", ')'] \
+ list(set(keyword.kwlist) - set(['as']))
+ list(set(keyword.kwlist) - set(['as']))
while True:
defunct = False
token_type, tok = self.next()
if brackets and tok == '\n':
self.next()
if tok == '(': # python allows only one `(` in the statement.
brackets = True
token_type, tok = self.next()
if brackets and tok == '\n':
self.next()
i, token_type, tok = self._parse_dot_name(self.current)
i, token_type, tok = self._parse_dot_name(self._current)
if not i:
defunct = True
name2 = None
@@ -207,7 +207,7 @@ class Parser(object):
breaks = [',', ':']
while tok not in [')', ':']:
param, tok = self._parse_statement(added_breaks=breaks,
stmt_class=pr.Param)
stmt_class=pr.Param)
if param and tok == ':':
# parse annotations
annotation, tok = self._parse_statement(added_breaks=breaks)
@@ -236,7 +236,7 @@ class Parser(object):
return None
fname = pr.Name(self.module, [(fname, self.start_pos)], self.start_pos,
self.end_pos)
self.end_pos)
token_type, open = self.next()
if open != '(':
@@ -260,7 +260,7 @@ class Parser(object):
# because of 2 line func param definitions
scope = pr.Function(self.module, fname, params, first_pos, annotation)
if self.user_scope and scope != self.user_scope \
and self.user_position > first_pos:
and self.user_position > first_pos:
self.user_scope = scope
return scope
@@ -276,11 +276,11 @@ class Parser(object):
token_type, cname = self.next()
if token_type != tokenize.NAME:
debug.warning("class: syntax err, token is not a name@%s (%s: %s)"
% (self.start_pos[0], tokenize.tok_name[token_type], cname))
% (self.start_pos[0], tokenize.tok_name[token_type], cname))
return None
cname = pr.Name(self.module, [(cname, self.start_pos)], self.start_pos,
self.end_pos)
self.end_pos)
super = []
token_type, _next = self.next()
@@ -295,12 +295,12 @@ class Parser(object):
# because of 2 line class initializations
scope = pr.Class(self.module, cname, super, first_pos)
if self.user_scope and scope != self.user_scope \
and self.user_position > first_pos:
and self.user_position > first_pos:
self.user_scope = scope
return scope
def _parse_statement(self, pre_used_token=None, added_breaks=None,
stmt_class=pr.Statement):
stmt_class=pr.Statement):
"""
Parses statements like::
@@ -345,15 +345,15 @@ class Parser(object):
tok_list = []
while not (tok in always_break
or tok in not_first_break and not tok_list
or tok in breaks and level <= 0):
or tok in not_first_break and not tok_list
or tok in breaks and level <= 0):
try:
#print 'parse_stmt', tok, tokenize.tok_name[token_type]
tok_list.append(self.current + (self.start_pos,))
# print 'parse_stmt', tok, tokenize.tok_name[token_type]
tok_list.append(self._current + (self.start_pos,))
if tok == 'as':
token_type, tok = self.next()
if token_type == tokenize.NAME:
n, token_type, tok = self._parse_dot_name(self.current)
n, token_type, tok = self._parse_dot_name(self._current)
if n:
set_vars.append(n)
tok_list.append(n)
@@ -363,7 +363,7 @@ class Parser(object):
if tok == 'lambda':
breaks.discard(':')
elif token_type == tokenize.NAME:
n, token_type, tok = self._parse_dot_name(self.current)
n, token_type, tok = self._parse_dot_name(self._current)
# removed last entry, because we add Name
tok_list.pop()
if n:
@@ -387,10 +387,10 @@ class Parser(object):
if not tok_list:
return None, tok
#print 'new_stat', set_vars, used_vars
# print 'new_stat', set_vars, used_vars
if self.freshscope and not self.no_docstr and len(tok_list) == 1 \
and self.last_token[0] == tokenize.STRING:
self.scope.add_docstr(self.last_token[1])
and self.last_token[0] == tokenize.STRING:
self._scope.add_docstr(self.last_token[1])
return None, tok
else:
stmt = stmt_class(self.module, set_vars, used_vars, tok_list,
@@ -408,7 +408,7 @@ class Parser(object):
and len(stmt.token_list) == 1
and first_tok[0] == tokenize.STRING):
# ... then set it as a docstring
self.scope.statements[-1].add_docstr(first_tok[1])
self._scope.statements[-1].add_docstr(first_tok[1])
if tok in always_break + not_first_break:
self._gen.push_last_back()
@@ -426,13 +426,15 @@ class Parser(object):
typ, tok, start_pos, end_pos, self.parserline = next(self._gen)
# dedents shouldn't change positions
if typ != tokenize.DEDENT:
self.start_pos, self.end_pos = start_pos, end_pos
self.start_pos = start_pos
if typ not in (tokenize.INDENT, tokenize.NEWLINE, tokenize.NL):
self.start_pos, self.end_pos = start_pos, end_pos
except (StopIteration, common.MultiLevelStopIteration):
# on finish, set end_pos correctly
s = self.scope
s = self._scope
while s is not None:
if isinstance(s, pr.Module) \
and not isinstance(s, pr.SubModule):
and not isinstance(s, pr.SubModule):
self.module.end_pos = self.end_pos
break
s.end_pos = self.end_pos
@@ -440,14 +442,14 @@ class Parser(object):
raise
if self.user_position and (self.start_pos[0] == self.user_position[0]
or self.user_scope is None
and self.start_pos[0] >= self.user_position[0]):
or self.user_scope is None
and self.start_pos[0] >= self.user_position[0]):
debug.dbg('user scope found [%s] = %s' %
(self.parserline.replace('\n', ''), repr(self.scope)))
self.user_scope = self.scope
self.last_token = self.current
self.current = (typ, tok)
return self.current
(self.parserline.replace('\n', ''), repr(self._scope)))
self.user_scope = self._scope
self.last_token = self._current
self._current = (typ, tok)
return self._current
def _parse(self):
"""
@@ -469,41 +471,41 @@ class Parser(object):
# This iterator stuff is not intentional. It grew historically.
for token_type, tok in self.iterator:
self.module.temp_used_names = []
#debug.dbg('main: tok=[%s] type=[%s] indent=[%s]'\
# debug.dbg('main: tok=[%s] type=[%s] indent=[%s]'\
# % (tok, tokenize.tok_name[token_type], start_position[0]))
while token_type == tokenize.DEDENT and self.scope != self.module:
while token_type == tokenize.DEDENT and self._scope != self.module:
token_type, tok = self.next()
if self.start_pos[1] <= self.scope.start_pos[1]:
self.scope.end_pos = self.start_pos
self.scope = self.scope.parent
if isinstance(self.scope, pr.Module) \
and not isinstance(self.scope, pr.SubModule):
self.scope = self.module
if self.start_pos[1] <= self._scope.start_pos[1]:
self._scope.end_pos = self.start_pos
self._scope = self._scope.parent
if isinstance(self._scope, pr.Module) \
and not isinstance(self._scope, pr.SubModule):
self._scope = self.module
# check again for unindented stuff. this is true for syntax
# errors. Only check for names, because that's relevant here. If
# some docstrings are not indented, I don't care.
while self.start_pos[1] <= self.scope.start_pos[1] \
while self.start_pos[1] <= self._scope.start_pos[1] \
and (token_type == tokenize.NAME or tok in ['(', '['])\
and self.scope != self.module:
self.scope.end_pos = self.start_pos
self.scope = self.scope.parent
if isinstance(self.scope, pr.Module) \
and not isinstance(self.scope, pr.SubModule):
self.scope = self.module
and self._scope != self.module:
self._scope.end_pos = self.start_pos
self._scope = self._scope.parent
if isinstance(self._scope, pr.Module) \
and not isinstance(self._scope, pr.SubModule):
self._scope = self.module
use_as_parent_scope = self.top_module if isinstance(self.scope,
pr.SubModule) else self.scope
use_as_parent_scope = self.top_module if isinstance(self._scope,
pr.SubModule) else self._scope
first_pos = self.start_pos
if tok == 'def':
func = self._parse_function()
if func is None:
debug.warning("function: syntax error@%s" %
self.start_pos[0])
self.start_pos[0])
continue
self.freshscope = True
self.scope = self.scope.add_scope(func, self._decorators)
self._scope = self._scope.add_scope(func, self._decorators)
self._decorators = []
elif tok == 'class':
cls = self._parse_class()
@@ -511,7 +513,7 @@ class Parser(object):
debug.warning("class: syntax error@%s" % self.start_pos[0])
continue
self.freshscope = True
self.scope = self.scope.add_scope(cls, self._decorators)
self._scope = self._scope.add_scope(cls, self._decorators)
self._decorators = []
# import stuff
elif tok == 'import':
@@ -520,25 +522,25 @@ class Parser(object):
e = (alias or m or self).end_pos
end_pos = self.end_pos if count + 1 == len(imports) else e
i = pr.Import(self.module, first_pos, end_pos, m,
alias, defunct=defunct)
alias, defunct=defunct)
self._check_user_stmt(i)
self.scope.add_import(i)
self._scope.add_import(i)
if not imports:
i = pr.Import(self.module, first_pos, self.end_pos, None,
defunct=True)
defunct=True)
self._check_user_stmt(i)
self.freshscope = False
elif tok == 'from':
defunct = False
# take care for relative imports
relative_count = 0
while 1:
while True:
token_type, tok = self.next()
if tok != '.':
break
relative_count += 1
# the from import
mod, token_type, tok = self._parse_dot_name(self.current)
mod, token_type, tok = self._parse_dot_name(self._current)
if str(mod) == 'import' and relative_count:
self._gen.push_last_back()
tok = 'import'
@@ -556,12 +558,12 @@ class Parser(object):
e = (alias or name or self).end_pos
end_pos = self.end_pos if count + 1 == len(names) else e
i = pr.Import(self.module, first_pos, end_pos, name,
alias, mod, star, relative_count,
defunct=defunct or defunct2)
alias, mod, star, relative_count,
defunct=defunct or defunct2)
self._check_user_stmt(i)
self.scope.add_import(i)
self._scope.add_import(i)
self.freshscope = False
#loops
# loops
elif tok == 'for':
set_stmt, tok = self._parse_statement(added_breaks=['in'])
if tok == 'in':
@@ -569,17 +571,17 @@ class Parser(object):
if tok == ':':
s = [] if statement is None else [statement]
f = pr.ForFlow(self.module, s, first_pos, set_stmt)
self.scope = self.scope.add_statement(f)
self._scope = self._scope.add_statement(f)
else:
debug.warning('syntax err, for flow started @%s',
self.start_pos[0])
self.start_pos[0])
if statement is not None:
statement.parent = use_as_parent_scope
if set_stmt is not None:
set_stmt.parent = use_as_parent_scope
else:
debug.warning('syntax err, for flow incomplete @%s',
self.start_pos[0])
self.start_pos[0])
if set_stmt is not None:
set_stmt.parent = use_as_parent_scope
@@ -592,7 +594,7 @@ class Parser(object):
inputs = []
first = True
while first or command == 'with' \
and tok not in [':', '\n']:
and tok not in [':', '\n']:
statement, tok = \
self._parse_statement(added_breaks=added_breaks)
if command == 'except' and tok in added_breaks:
@@ -600,6 +602,7 @@ class Parser(object):
# this is only true for python 2
n, token_type, tok = self._parse_dot_name()
if n:
n.parent = statement
statement.set_vars.append(n)
if statement:
inputs.append(statement)
@@ -612,24 +615,24 @@ class Parser(object):
# the flow statement, because a dedent releases the
# main scope, so just take the last statement.
try:
s = self.scope.statements[-1].set_next(f)
s = self._scope.statements[-1].set_next(f)
except (AttributeError, IndexError):
# If set_next doesn't exist, just add it.
s = self.scope.add_statement(f)
s = self._scope.add_statement(f)
else:
s = self.scope.add_statement(f)
self.scope = s
s = self._scope.add_statement(f)
self._scope = s
else:
for i in inputs:
i.parent = use_as_parent_scope
debug.warning('syntax err, flow started @%s',
self.start_pos[0])
self.start_pos[0])
# returns
elif tok in ['return', 'yield']:
s = self.start_pos
self.freshscope = False
# add returns to the scope
func = self.scope.get_parent_until(pr.Function)
func = self._scope.get_parent_until(pr.Function)
if tok == 'yield':
func.is_generator = True
@@ -644,9 +647,9 @@ class Parser(object):
debug.warning('return in non-function')
# globals
elif tok == 'global':
stmt, tok = self._parse_statement(self.current)
stmt, tok = self._parse_statement(self._current)
if stmt:
self.scope.add_statement(stmt)
self._scope.add_statement(stmt)
for name in stmt.used_vars:
# add the global to the top, because there it is
# important.
@@ -654,13 +657,15 @@ class Parser(object):
# decorator
elif tok == '@':
stmt, tok = self._parse_statement()
self._decorators.append(stmt)
if stmt is not None:
self._decorators.append(stmt)
elif tok == 'pass':
continue
elif tok == 'assert':
stmt, tok = self._parse_statement()
stmt.parent = use_as_parent_scope
self.scope.asserts.append(stmt)
if stmt is not None:
stmt.parent = use_as_parent_scope
self._scope.asserts.append(stmt)
# default
elif token_type in [tokenize.NAME, tokenize.STRING,
tokenize.NUMBER] \
@@ -668,14 +673,14 @@ class Parser(object):
# this is the main part - a name can be a function or a
# normal var, which can follow anything. but this is done
# by the statement parser.
stmt, tok = self._parse_statement(self.current)
stmt, tok = self._parse_statement(self._current)
if stmt:
self.scope.add_statement(stmt)
self._scope.add_statement(stmt)
self.freshscope = False
else:
if token_type not in [tokenize.COMMENT, tokenize.INDENT,
tokenize.NEWLINE, tokenize.NL]:
debug.warning('token not classified', tok, token_type,
self.start_pos[0])
self.start_pos[0])
continue
self.no_docstr = False


@@ -16,11 +16,11 @@ is the easiest way to write a parser. The same behaviour applies to ``Param``,
which is being used in a function definition.
The easiest way to play with this module is to use :class:`parsing.Parser`.
:attr:`parsing.Parser.scope` holds an instance of :class:`SubModule`:
:attr:`parsing.Parser.module` holds an instance of :class:`SubModule`:
>>> from jedi.parsing import Parser
>>> parser = Parser('import os', 'example.py')
>>> submodule = parser.scope
>>> submodule = parser.module
>>> submodule
<SubModule: example.py@1-1>
@@ -91,7 +91,7 @@ class Simple(Base):
@property
def start_pos(self):
return self._sub_module.line_offset + self._start_pos[0], \
self._start_pos[1]
self._start_pos[1]
@start_pos.setter
def start_pos(self, value):
@@ -102,7 +102,7 @@ class Simple(Base):
if None in self._end_pos:
return self._end_pos
return self._sub_module.line_offset + self._end_pos[0], \
self._end_pos[1]
self._end_pos[1]
@end_pos.setter
def end_pos(self, value):
@@ -110,7 +110,7 @@ class Simple(Base):
@Python3Method
def get_parent_until(self, classes=(), reverse=False,
include_current=True):
include_current=True):
""" Takes always the parent, until one class (not a Class) """
if type(classes) not in (tuple, list):
classes = (classes,)
@@ -247,19 +247,19 @@ class Scope(Simple, IsScope):
... b = y
... b.c = z
... ''')
>>> parser.scope.get_defined_names()
>>> parser.module.get_defined_names()
[<Name: a@2,0>, <Name: b@3,0>]
Note that unlike :meth:`get_set_vars`, assignment to object
attribute does not change the result because it does not change
the defined names in this scope.
>>> parser.scope.get_set_vars()
>>> parser.module.get_set_vars()
[<Name: a@2,0>, <Name: b@3,0>, <Name: b.c@4,0>]
"""
return [n for n in self.get_set_vars()
if isinstance(n, Import) or len(n) == 1]
if isinstance(n, Import) or len(n) == 1]
def is_empty(self):
"""
@@ -285,7 +285,7 @@ class Scope(Simple, IsScope):
p = s.get_statement_for_position(pos, include_imports)
if p:
return p
elif s.start_pos <= pos < s.end_pos:
elif s.start_pos <= pos <= s.end_pos:
return s
for s in self.subscopes:
@@ -304,22 +304,21 @@ class Scope(Simple, IsScope):
name = self.command
return "<%s: %s@%s-%s>" % (type(self).__name__, name,
self.start_pos[0], self.end_pos[0])
self.start_pos[0], self.end_pos[0])
class Module(IsScope):
""" For isinstance checks. fast_parser.Module also inherits from this. """
pass
"""
For isinstance checks. fast_parser.Module also inherits from this.
"""
class SubModule(Scope, Module):
"""
The top scope, which is always a module.
Depending on the underlying parser this may be a full module or just a part
of a module.
"""
def __init__(self, path, start_pos=(1, 0), top_module=None):
"""
Initialize :class:`SubModule`.
@@ -367,17 +366,33 @@ class SubModule(Scope, Module):
else:
sep = (re.escape(os.path.sep),) * 2
r = re.search(r'([^%s]*?)(%s__init__)?(\.py|\.so)?$' % sep,
self.path)
self.path)
# remove PEP 3149 names
string = re.sub('\.[a-z]+-\d{2}[mud]{0,3}$', '', r.group(1))
# positions are not real therefore choose (0, 0)
names = [(string, (0, 0))]
self._name = Name(self, names, self.start_pos, self.end_pos,
self.use_as_parent)
self._name = Name(self, names, (0, 0), (0, 0), self.use_as_parent)
return self._name
def is_builtin(self):
return not (self.path is None or self.path.endswith('.py'))
@property
def has_explicit_absolute_import(self):
"""
Checks if imports in this module are explicitly absolute, i.e. there
is a ``__future__`` import.
"""
for imp in self.imports:
if imp.from_ns is None or imp.namespace is None:
continue
namespace, feature = imp.from_ns.names[0], imp.namespace.names[0]
if namespace == "__future__" and feature == "absolute_import":
return True
return False
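
The new ``has_explicit_absolute_import`` property above walks a module's imports looking for ``from __future__ import absolute_import``. A standalone sketch of the same check, assuming a simplified data model where imports are plain ``(from_ns, namespace)`` string pairs (the real code compares ``Name`` parts, not strings):

```python
def has_explicit_absolute_import(imports):
    """Return True if any import is `from __future__ import absolute_import`.

    Either element may be None (bare or defunct imports), mirroring the
    `continue` guard in the property above.
    """
    for from_ns, namespace in imports:
        if from_ns is None or namespace is None:
            continue
        if from_ns == "__future__" and namespace == "absolute_import":
            return True
    return False
```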
class Class(Scope):
"""
@@ -517,7 +532,7 @@ class Lambda(Function):
def __repr__(self):
return "<%s @%s (%s-%s)>" % (type(self).__name__, self.start_pos[0],
self.start_pos[1], self.end_pos[1])
self.start_pos[1], self.end_pos[1])
class Flow(Scope):
@@ -626,7 +641,7 @@ class ForFlow(Flow):
def __init__(self, module, inputs, start_pos, set_stmt,
is_list_comp=False):
super(ForFlow, self).__init__(module, 'for', inputs, start_pos,
set_stmt.used_vars)
set_stmt.used_vars)
self.set_stmt = set_stmt
set_stmt.parent = self.use_as_parent
self.is_list_comp = is_list_comp
@@ -799,7 +814,7 @@ class Statement(Simple):
def get_code(self, new_line=True):
def assemble(command_list, assignment=None):
pieces = [c.get_code() if isinstance(c, Simple) else unicode(c)
for c in command_list]
for c in command_list]
if assignment is None:
return ''.join(pieces)
return '%s %s ' % (''.join(pieces), assignment)
@@ -851,7 +866,7 @@ class Statement(Simple):
"""
def is_assignment(tok):
return isinstance(tok, (str, unicode)) and tok.endswith('=') \
and not tok in ['>=', '<=', '==', '!=']
and not tok in ['>=', '<=', '==', '!=']
def parse_array(token_iterator, array_type, start_pos, add_el=None,
added_breaks=()):
@@ -886,7 +901,7 @@ class Statement(Simple):
c = token_iterator.current[1]
arr.end_pos = c.end_pos if isinstance(c, Simple) \
else (c[2][0], c[2][1] + len(c[1]))
else (c[2][0], c[2][1] + len(c[1]))
return arr, break_tok
def parse_stmt(token_iterator, maybe_dict=False, added_breaks=(),
@@ -896,7 +911,7 @@ class Statement(Simple):
level = 1
tok = None
first = True
end_pos = None
end_pos = None, None
for i, tok_temp in token_iterator:
if isinstance(tok_temp, Base):
# the token is a Name, which has already been parsed
@@ -924,7 +939,7 @@ class Statement(Simple):
token_list.append(lambd)
elif tok == 'for':
list_comp, tok = parse_list_comp(token_iterator,
token_list, start_pos, last_end_pos)
token_list, start_pos, last_end_pos)
if list_comp is not None:
token_list = [list_comp]
@@ -936,8 +951,8 @@ class Statement(Simple):
if level == 0 and tok in closing_brackets \
or tok in added_breaks \
or level == 1 and (tok == ','
or maybe_dict and tok == ':'
or is_assignment(tok) and break_on_assignment):
or maybe_dict and tok == ':'
or is_assignment(tok) and break_on_assignment):
end_pos = end_pos[0], end_pos[1] - 1
break
token_list.append(tok_temp)
@@ -946,7 +961,7 @@ class Statement(Simple):
return None, tok
statement = stmt_class(self._sub_module, [], [], token_list,
start_pos, end_pos, self.parent)
start_pos, end_pos, self.parent)
statement.used_vars = used_vars
return statement, tok
@@ -1027,8 +1042,10 @@ class Statement(Simple):
tok = tok_temp
token_type = None
start_pos = tok.start_pos
end_pos = tok.end_pos
else:
token_type, tok, start_pos = tok_temp
end_pos = start_pos[0], start_pos[1] + len(tok)
if is_assignment(tok):
# This means, there is an assignment here.
# Add assignments, which can be more than one
@@ -1044,6 +1061,8 @@ class Statement(Simple):
lambd, tok = parse_lambda(token_iterator)
if lambd is not None:
result.append(lambd)
else:
continue
is_literal = token_type in [tokenize.STRING, tokenize.NUMBER]
if isinstance(tok, Name) or is_literal:
@@ -1055,7 +1074,7 @@ class Statement(Simple):
elif token_type == tokenize.NUMBER:
c_type = Call.NUMBER
call = Call(self._sub_module, tok, c_type, start_pos, self)
call = Call(self._sub_module, tok, c_type, start_pos, end_pos, self)
if is_chain:
result[-1].set_next(call)
else:
@@ -1063,7 +1082,7 @@ class Statement(Simple):
is_chain = False
elif tok in brackets.keys():
arr, is_ass = parse_array(token_iterator, brackets[tok],
start_pos)
start_pos)
if result and isinstance(result[-1], Call):
result[-1].set_execution(arr)
else:
@@ -1083,13 +1102,13 @@ class Statement(Simple):
token_iterator.push_back((i, tok))
t = self.token_list[i - 1]
try:
end_pos = t.end_pos
e = t.end_pos
except AttributeError:
end_pos = (t[2][0], t[2][1] + len(t[1])) \
if isinstance(t, tuple) else t.start_pos
e = (t[2][0], t[2][1] + len(t[1])) \
if isinstance(t, tuple) else t.start_pos
stmt = Statement(self._sub_module, [], [], result,
start_pos, end_pos, self.parent)
start_pos, e, self.parent)
stmt._commands = result
arr, break_tok = parse_array(token_iterator, Array.TUPLE,
stmt.start_pos, stmt)
@@ -1099,7 +1118,7 @@ class Statement(Simple):
result = []
is_chain = False
else:
if tok != '\n':
if tok != '\n' and token_type != tokenize.COMMENT:
result.append(tok)
return result
@@ -1145,8 +1164,8 @@ class Call(Simple):
NUMBER = 2
STRING = 3
def __init__(self, module, name, type, start_pos, parent=None):
super(Call, self).__init__(module, start_pos)
def __init__(self, module, name, type, start_pos, end_pos, parent=None):
super(Call, self).__init__(module, start_pos, end_pos)
self.name = name
# parent is not the opposite of next. The parent of c: a = [b.c] would
# be an array.
@@ -1195,13 +1214,7 @@ class Call(Simple):
if self.type == Call.NAME:
s = self.name.get_code()
else:
if not is_py3k and isinstance(self.name, str)\
and "'" not in self.name:
# This is a very rough spot, because of repr not supporting
# unicode signs, see `test_unicode_script`.
s = "'%s'" % unicode(self.name, 'UTF-8')
else:
s = '' if self.name is None else repr(self.name)
s = '' if self.name is None else repr(self.name)
if self.execution is not None:
s += self.execution.get_code()
if self.next is not None:
@@ -1210,7 +1223,7 @@ class Call(Simple):
def __repr__(self):
return "<%s: %s>" % \
(type(self).__name__, self.name)
(type(self).__name__, self.name)
class Array(Call):
@@ -1231,10 +1244,10 @@ class Array(Call):
SET = 'set'
def __init__(self, module, start_pos, arr_type=NOARRAY, parent=None):
super(Array, self).__init__(module, None, arr_type, start_pos, parent)
super(Array, self).__init__(module, None, arr_type, start_pos, (None, None), parent)
self.end_pos = None, None
self.values = []
self.keys = []
self.end_pos = None, None
def add_statement(self, statement, is_key=False):
"""Just add a new statement"""
@@ -1251,9 +1264,11 @@ class Array(Call):
This is not only used for calls on the actual object, but for
ducktyping, to invoke this function with anything as `self`.
"""
if isinstance(instance, Array):
try:
if instance.type in types:
return True
except AttributeError:
pass
return False
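
The change above swaps an ``isinstance`` check for EAFP duck typing: anything exposing a suitable ``type`` attribute now passes. A standalone sketch of the pattern (hypothetical ``FakeArray`` class, not part of Jedi):

```python
def is_type(instance, *types):
    # EAFP: try the attribute and treat AttributeError as "not an array".
    try:
        return instance.type in types
    except AttributeError:
        return False

class FakeArray(object):
    type = 'tuple'
```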
def __len__(self):
@@ -1341,7 +1356,7 @@ class Name(Simple):
def __init__(self, module, names, start_pos, end_pos, parent=None):
super(Name, self).__init__(module, start_pos, end_pos)
self.names = tuple(n if isinstance(n, NamePart) else
NamePart(n[0], self, n[1]) for n in names)
NamePart(n[0], self, n[1]) for n in names)
if parent is not None:
self.parent = parent
@@ -1384,7 +1399,7 @@ class ListComprehension(Base):
def __repr__(self):
return "<%s: %s>" % \
(type(self).__name__, self.get_code())
(type(self).__name__, self.get_code())
def get_code(self):
statements = self.stmt, self.middle, self.input


@@ -24,7 +24,7 @@ class RecursionDecorator(object):
self.reset()
def __call__(self, stmt, *args, **kwargs):
#print stmt, len(self.node_statements())
# print stmt, len(self.node_statements())
if self.push_stmt(stmt):
return []
else:
@@ -37,7 +37,7 @@ class RecursionDecorator(object):
check = self._check_recursion()
if check: # TODO remove False!!!!
debug.warning('catched stmt recursion: %s against %s @%s'
% (stmt, check.stmt, stmt.start_pos))
% (stmt, check.stmt, stmt.start_pos))
self.pop_stmt()
return True
return False
@@ -82,7 +82,7 @@ class RecursionNode(object):
# The same's true for the builtins, because the builtins are really
# simple.
self.is_ignored = isinstance(stmt, pr.Param) \
or (self.script == builtin.Builtin.scope)
or (self.script == builtin.Builtin.scope)
def __eq__(self, other):
if not other:
@@ -90,10 +90,10 @@ class RecursionNode(object):
is_list_comp = lambda x: isinstance(x, pr.ForFlow) and x.is_list_comp
return self.script == other.script \
and self.position == other.position \
and not is_list_comp(self.stmt.parent) \
and not is_list_comp(other.parent) \
and not self.is_ignored and not other.is_ignored
and self.position == other.position \
and not is_list_comp(self.stmt.parent) \
and not is_list_comp(other.parent) \
and not self.is_ignored and not other.is_ignored
class ExecutionRecursionDecorator(object):
@@ -107,7 +107,7 @@ class ExecutionRecursionDecorator(object):
def __call__(self, execution, evaluate_generator=False):
debug.dbg('Execution recursions: %s' % execution, self.recursion_level,
self.execution_count, len(self.execution_funcs))
self.execution_count, len(self.execution_funcs))
if self.check_recursion(execution, evaluate_generator):
result = []
else:


@@ -64,8 +64,8 @@ def rename(script, new_name):
def _rename(names, replace_str):
""" For both rename and inline. """
order = sorted(names, key=lambda x: (x.module_path, x.start_pos),
reverse=True)
order = sorted(names, key=lambda x: (x.module_path, x.line, x.column),
reverse=True)
def process(path, old_lines, new_lines):
if new_lines is not None: # goto next file, save last
@@ -89,10 +89,10 @@ def _rename(names, replace_str):
new_lines = modules.source_to_unicode(source).splitlines()
old_lines = new_lines[:]
nr, indent = name.start_pos
nr, indent = name.line, name.column
line = new_lines[nr - 1]
new_lines[nr - 1] = line[:indent] + replace_str + \
line[indent + len(name.text):]
line[indent + len(name.text):]
process(current_path, old_lines, new_lines)
return dct
@@ -148,14 +148,14 @@ def extract(script, new_name):
open_brackets = ['(', '[', '{']
close_brackets = [')', ']', '}']
if '\n' in text and not (text[0] in open_brackets and text[-1] ==
close_brackets[open_brackets.index(text[0])]):
close_brackets[open_brackets.index(text[0])]):
text = '(%s)' % text
# add new line before statement
indent = user_stmt.start_pos[1]
new = "%s%s = %s" % (' ' * indent, new_name, text)
new_lines.insert(line_index, new)
dct[script.source_path] = script.source_path, old_lines, new_lines
dct[script.path] = script.path, old_lines, new_lines
return Refactoring(dct)
@@ -167,15 +167,15 @@ def inline(script):
dct = {}
definitions = script.goto()
definitions = script.goto_assignments()
with common.ignored(AssertionError):
assert len(definitions) == 1
stmt = definitions[0]._definition
usages = script.usages()
inlines = [r for r in usages
if not stmt.start_pos <= r.start_pos <= stmt.end_pos]
inlines = sorted(inlines, key=lambda x: (x.module_path, x.start_pos),
reverse=True)
if not stmt.start_pos <= (r.line, r.column) <= stmt.end_pos]
inlines = sorted(inlines, key=lambda x: (x.module_path, x.line, x.column),
reverse=True)
commands = stmt.get_commands()
# don't allow multiline refactorings for now.
assert stmt.start_pos[0] == stmt.end_pos[0]
@@ -196,7 +196,7 @@ def inline(script):
dct = _rename(inlines, replace_str)
# remove the empty line
new_lines = dct[script.source_path][2]
new_lines = dct[script.path][2]
if line.strip():
new_lines[index] = line
else:

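The refactoring change above sorts rename sites by ``(module_path, line, column)`` in reverse, so replacements are applied bottom-up and never shift the positions of sites still to be edited. A minimal sketch of that idea for a single file (hypothetical ``apply_renames`` helper; the real ``_rename`` also groups sites by file):

```python
def apply_renames(lines, sites, old, new):
    """Replace `old` at each (line, column) site in a list of source lines.

    Sites are processed in reverse order, as in `_rename` above, so an
    earlier-in-file replacement cannot invalidate a later site's position.
    """
    for line, column in sorted(sites, reverse=True):
        text = lines[line - 1]
        lines[line - 1] = text[:column] + new + text[column + len(old):]
    return lines

lines = ["x = 1", "y = x + x"]
# two usages of `x` on line 2 (columns 4 and 8), one definition on line 1
apply_renames(lines, [(1, 0), (2, 4), (2, 8)], "x", "value")
```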
jedi/replstartup.py Normal file

@@ -0,0 +1,23 @@
"""
To use Jedi completion in the Python interpreter, add the following to your
shell setup (e.g., ``.bashrc``)::
export PYTHONSTARTUP="$(python -m jedi repl)"
Then you will be able to use the Jedi completer in your Python interpreter::
$ python
Python 2.7.2+ (default, Jul 20 2012, 22:15:08)
[GCC 4.6.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> os.path.join().split().in<TAB> # doctest: +SKIP
os.path.join().split().index os.path.join().split().insert
"""
import jedi.utils
jedi.utils.setup_readline()
del jedi
# Note: try not to do many things here, as it will contaminate the global
# namespace of the interpreter.


@@ -127,8 +127,10 @@ cache_directory = os.path.expanduser(_cache_directory)
"""
The path where all the caches can be found.
On Linux, this defaults to ``~/.cache/jedi/``, on OS X to ``~/.jedi/`` and on
Windows to ``%APPDATA%\\Jedi\\Jedi\\``.
On Linux, this defaults to ``~/.cache/jedi/``, on OS X to
``~/Library/Caches/Jedi/`` and on Windows to ``%APPDATA%\\Jedi\\Jedi\\``.
On Linux, if environment variable ``$XDG_CACHE_HOME`` is set,
``$XDG_CACHE_HOME/jedi`` is used instead of the default one.
"""
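
The platform defaults documented above can be sketched as a pure function (hypothetical helper for illustration; the real logic lives in ``jedi/settings.py``):

```python
import os.path

def default_cache_directory(platform, environ):
    """Resolve the documented default cache path.

    `platform` is a sys.platform-like string and `environ` an environment
    mapping, passed in explicitly so the sketch is testable.
    """
    if platform == 'win32':
        return os.path.join(environ.get('APPDATA', ''), 'Jedi', 'Jedi')
    if platform == 'darwin':
        return os.path.join(os.path.expanduser('~'),
                            'Library', 'Caches', 'Jedi')
    # Linux and friends: honour $XDG_CACHE_HOME when it is set.
    base = environ.get('XDG_CACHE_HOME') or os.path.join(
        os.path.expanduser('~'), '.cache')
    return os.path.join(base, 'jedi')
```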
# ----------------
@@ -173,6 +175,11 @@ Additional modules in which |jedi| checks if statements are to be found. This
is practical for IDEs, that want to administrate their modules themselves.
"""
dynamic_flow_information = True
"""
Check for `isinstance` and other information to infer a type.
"""
# ----------------
# recursions
# ----------------


@@ -168,12 +168,12 @@ def generate_tokens(readline):
if endmatch:
pos = end = endmatch.end(0)
yield TokenInfo(STRING, contstr + line[:end],
strstart, (lnum, end), contline + line)
strstart, (lnum, end), contline + line)
contstr, needcont = '', 0
contline = None
elif needcont and line[-2:] != '\\\n' and line[-3:] != '\\\r\n':
yield TokenInfo(ERRORTOKEN, contstr + line,
strstart, (lnum, len(line)), contline)
strstart, (lnum, len(line)), contline)
contstr = ''
contline = None
continue
@@ -204,12 +204,13 @@ def generate_tokens(readline):
comment_token = line[pos:].rstrip('\r\n')
nl_pos = pos + len(comment_token)
yield TokenInfo(COMMENT, comment_token,
(lnum, pos), (lnum, pos + len(comment_token)), line)
(lnum, pos), (lnum, pos + len(comment_token)), line)
yield TokenInfo(NL, line[nl_pos:],
(lnum, nl_pos), (lnum, len(line)), line)
(lnum, nl_pos), (lnum, len(line)), line)
else:
yield TokenInfo((NL, COMMENT)[line[pos] == '#'], line[pos:],
(lnum, pos), (lnum, len(line)), line)
yield TokenInfo(
(NL, COMMENT)[line[pos] == '#'], line[pos:],
(lnum, pos), (lnum, len(line)), line)
continue
if column > indents[-1]: # count indents or dedents
@@ -237,7 +238,7 @@ def generate_tokens(readline):
yield TokenInfo(NUMBER, token, spos, epos, line)
elif initial in '\r\n':
yield TokenInfo(NL if parenlev > 0 else NEWLINE,
token, spos, epos, line)
token, spos, epos, line)
elif initial == '#':
assert not token.endswith("\n")
yield TokenInfo(COMMENT, token, spos, epos, line)
@@ -277,7 +278,7 @@ def generate_tokens(readline):
yield TokenInfo(OP, token, spos, epos, line)
else:
yield TokenInfo(ERRORTOKEN, line[pos],
(lnum, pos), (lnum, pos + 1), line)
(lnum, pos), (lnum, pos + 1), line)
pos += 1
for indent in indents[1:]: # pop remaining indent levels

jedi/utils.py Normal file

@@ -0,0 +1,97 @@
"""
Utilities for end-users.
"""
from __future__ import absolute_import
import __main__
from jedi import Interpreter
def setup_readline(namespace_module=__main__):
"""
Install the Jedi completer in :mod:`readline`.
This function sets up :mod:`readline` to use Jedi in the Python interactive
shell. If you want to use a custom ``PYTHONSTARTUP`` file (typically
``$HOME/.pythonrc.py``), you can add this piece of code::
try:
from jedi.utils import setup_readline
setup_readline()
except ImportError:
# Fallback to the stdlib readline completer if it is installed.
# Taken from http://docs.python.org/2/library/rlcompleter.html
print("Jedi is not installed, falling back to readline")
try:
import readline
import rlcompleter
readline.parse_and_bind("tab: complete")
except ImportError:
print("Readline is not installed either. No tab completion is enabled.")
This will fall back to the readline completer if Jedi is not installed.
The readline completer will only complete names in the global namespace,
so for example,
>>> ran<TAB> # doctest: +SKIP
will complete to ``range``
with both Jedi and readline, but
>>> range(10).cou<TAB> # doctest: +SKIP
will complete to ``range(10).count`` only with Jedi.
You'll also need to add ``export PYTHONSTARTUP=$HOME/.pythonrc.py`` to
your shell profile (usually ``.bash_profile`` or ``.profile`` if you use
bash).
"""
class JediRL():
def complete(self, text, state):
"""
This complete stuff is pretty weird, a generator would make
a lot more sense, but probably due to backwards compatibility
this is still how it works.
The only important part is stuff in the ``state == 0`` flow,
everything else has been copied from the ``rlcompleter`` std.
library module.
"""
if state == 0:
import os, sys
sys.path.insert(0, os.getcwd())
# An interactive python call has no script path on sys.path, so add the cwd.
try:
interpreter = Interpreter(text, [namespace_module.__dict__])
path, dot, like = interpreter._get_completion_parts()
before = text[:len(text) - len(like)]
completions = interpreter.completions()
finally:
sys.path.pop(0)
self.matches = [before + c.name_with_symbols for c in completions]
try:
return self.matches[state]
except IndexError:
return None
try:
import readline
except ImportError:
print("Module readline not available.")
else:
readline.set_completer(JediRL().complete)
readline.parse_and_bind("tab: complete")
# jedi itself does the case matching
readline.parse_and_bind("set completion-ignore-case on")
# because it's easier to hit the tab just once
readline.parse_and_bind("set show-all-if-unmodified")
readline.parse_and_bind("set show-all-if-ambiguous on")
# don't repeat all the things written in the readline all the time
readline.parse_and_bind("set completion-prefix-display-length 2")
# No delimiters, Jedi handles that.
readline.set_completer_delims('')
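
The ``complete(text, state)`` shape used by ``JediRL`` above is readline's completer protocol: state 0 computes the match list, later states index into it, and returning ``None`` signals "no more matches". A minimal completer following the same shape (hypothetical static word list instead of Jedi):

```python
class PrefixCompleter(object):
    """Readline-style completer: compute matches once at state == 0,
    then return matches[state] until IndexError means "no more"."""

    def __init__(self, words):
        self.words = words
        self.matches = []

    def complete(self, text, state):
        if state == 0:
            self.matches = [w for w in self.words if w.startswith(text)]
        try:
            return self.matches[state]
        except IndexError:
            return None

c = PrefixCompleter(["range", "raise", "repr"])
```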


@@ -2,7 +2,7 @@
addopts = --doctest-modules
# Ignore broken files in blackbox test directories
norecursedirs = .* docs completion refactor
norecursedirs = .* docs completion refactor absolute_import namespace_package
# Activate `clean_jedi_cache` fixture for all tests. This should be
# fine as long as we are using `clean_jedi_cache` as a session scoped


@@ -1,7 +1,11 @@
#!/usr/bin/env python
from __future__ import with_statement
from setuptools import setup
try:
from setuptools import setup
except ImportError:
# Distribute is not actually required to install
from distutils.core import setup
__AUTHOR__ = 'David Halter'
__AUTHOR_EMAIL__ = 'davidhalter88@gmail.com'
@@ -19,7 +23,7 @@ setup(name='jedi',
maintainer=__AUTHOR__,
maintainer_email=__AUTHOR_EMAIL__,
url='https://github.com/davidhalter/jedi',
license='LGPLv3',
license='MIT',
keywords='python completion refactoring vim',
long_description=readme,
packages=['jedi'],
@@ -29,7 +33,7 @@ setup(name='jedi',
'Development Status :: 4 - Beta',
'Environment :: Plugins',
'Intended Audience :: Developers',
'License :: OSI Approved :: GNU Library or Lesser General Public License (LGPL)',
'License :: OSI Approved :: MIT License',
'Operating System :: OS Independent',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.6',

sith.py Executable file

@@ -0,0 +1,221 @@
#!/usr/bin/env python
"""
Sith attacks (and helps to debug) Jedi.
Randomly searches Python files and runs Jedi on them. Exceptions and the
arguments used are recorded to ``./record.json`` (specified by --record)::
./sith.py random /path/to/sourcecode
Redo recorded exception::
./sith.py redo
Show recorded exception::
./sith.py show
Run a specific operation::
./sith.py run <operation> </path/to/source/file.py> <line> <col>
Where operation is one of completions, goto_assignments, goto_definitions,
usages, or call_signatures.
Note: Line numbers start at 1; columns start at 0 (this is consistent with
many text editors, including Emacs).
Usage:
sith.py [--pdb|--ipdb|--pudb] [-d] [-n=<nr>] [-f] [--record=<file>] random [<path>]
sith.py [--pdb|--ipdb|--pudb] [-d] [-f] [--record=<file>] redo
sith.py [--pdb|--ipdb|--pudb] [-d] [-f] run <operation> <path> <line> <column>
sith.py show [--record=<file>]
sith.py -h | --help
Options:
-h --help Show this screen.
--record=<file> Exceptions are recorded in here [default: record.json].
-f, --fs-cache By default, the file system cache is off for reproducibility.
-n, --maxtries=<nr> Maximum number of random tries [default: 100]
-d, --debug Print Jedi debug output when an error is raised.
--pdb Launch pdb when error is raised.
--ipdb Launch ipdb when error is raised.
--pudb Launch pudb when error is raised.
"""
from __future__ import print_function, division, unicode_literals
from docopt import docopt
import json
import os
import random
import sys
import traceback
import jedi
class SourceFinder(object):
_files = None
@staticmethod
def fetch(file_path):
if not os.path.isdir(file_path):
yield file_path
return
for root, dirnames, filenames in os.walk(file_path):
for name in filenames:
if name.endswith('.py'):
yield os.path.join(root, name)
@classmethod
def files(cls, file_path):
if cls._files is None:
cls._files = list(cls.fetch(file_path))
return cls._files
class TestCase(object):
def __init__(self, operation, path, line, column, traceback=None):
if operation not in self.operations:
raise ValueError("%s is not a valid operation" % operation)
# Set other attributes
self.operation = operation
self.path = path
self.line = line
self.column = column
self.traceback = traceback
@classmethod
def from_cache(cls, record):
with open(record) as f:
args = json.load(f)
return cls(*args)
operations = [
'completions', 'goto_assignments', 'goto_definitions', 'usages',
'call_signatures']
@classmethod
def generate(cls, file_path):
operation = random.choice(cls.operations)
path = random.choice(SourceFinder.files(file_path))
with open(path) as f:
source = f.read()
lines = source.splitlines()
if not lines:
lines = ['']
line = random.randint(1, len(lines))
column = random.randint(0, len(lines[line - 1]))
return cls(operation, path, line, column)
def run(self, debugger, record=None, print_result=False):
try:
with open(self.path) as f:
self.script = jedi.Script(f.read(), self.line, self.column, self.path)
self.completions = getattr(self.script, self.operation)()
if print_result:
self.show_location(self.line, self.column)
self.show_operation()
except jedi.NotFoundError:
pass
except Exception:
self.traceback = traceback.format_exc()
if record is not None:
call_args = (self.operation, self.path, self.line, self.column, self.traceback)
with open(record, 'w') as f:
json.dump(call_args, f)
self.show_errors()
if debugger:
einfo = sys.exc_info()
pdb = __import__(debugger)
if debugger == 'pudb':
pdb.post_mortem(einfo[2], einfo[0], einfo[1])
else:
pdb.post_mortem(einfo[2])
exit(1)
def show_location(self, lineno, column, show=3):
# Three lines ought to be enough
lower = lineno - show if lineno - show > 0 else 0
for i, line in enumerate(self.script.source.split('\n')[lower:lineno]):
print(lower + i + 1, line)
print(' ' * (column + len(str(lineno))), '^')
def show_operation(self):
print("%s:\n" % self.operation.capitalize())
getattr(self, 'show_' + self.operation)()
def show_completions(self):
for completion in self.completions:
print(completion.name)
# TODO: Support showing the location in other files
# TODO: Move this printing to the completion objects themselves
def show_usages(self):
for completion in self.completions:
print(completion.description)
if os.path.abspath(completion.module_path) == os.path.abspath(self.path):
self.show_location(completion.line, completion.column)
def show_call_signatures(self):
for completion in self.completions:
# This is too complicated to print. It really should be
# implemented in str() anyway.
print(completion)
# Can't print the location here because we don't have the module path
def show_goto_definitions(self):
for completion in self.completions:
print(completion.desc_with_module)
if os.path.abspath(completion.module_path) == os.path.abspath(self.path):
self.show_location(completion.line, completion.column)
show_goto_assignments = show_goto_definitions
def show_errors(self):
print(self.traceback)
print(("Error with running Script(...).{operation}() with\n"
"\tpath: {path}\n"
"\tline: {line}\n"
"\tcolumn: {column}").format(**self.__dict__))
def main(arguments):
debugger = 'pdb' if arguments['--pdb'] else \
'ipdb' if arguments['--ipdb'] else \
'pudb' if arguments['--pudb'] else None
record = arguments['--record']
jedi.settings.use_filesystem_cache = arguments['--fs-cache']
if arguments['--debug']:
jedi.set_debug_function()
if arguments['redo'] or arguments['show']:
t = TestCase.from_cache(record)
if arguments['show']:
t.show_errors()
else:
t.run(debugger)
elif arguments['run']:
TestCase(
arguments['<operation>'], arguments['<path>'],
int(arguments['<line>']), int(arguments['<column>'])
).run(debugger, print_result=True)
else:
for _ in range(int(arguments['--maxtries'])):
t = TestCase.generate(arguments['<path>'] or '.')
t.run(debugger, record)
print('.', end='')
sys.stdout.flush()
print()
if __name__ == '__main__':
arguments = docopt(__doc__)
main(arguments)
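
``TestCase.generate`` above samples a 1-based line and a 0-based column, where the column may equal the line length (cursor at end of line), matching the convention stated in the docstring. A self-contained sketch of that sampling (hypothetical helper, no file I/O):

```python
import random

def random_position(source):
    """Pick a random (line, column) in `source`: lines are 1-based,
    columns 0-based, and a column may equal the line length."""
    lines = source.splitlines() or ['']
    line = random.randint(1, len(lines))
    column = random.randint(0, len(lines[line - 1]))
    return line, column

line, column = random_position("import os\nos.path.join\n")
```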


@@ -0,0 +1,14 @@
"""
This is a module that imports the *standard library* unittest,
despite there being a local "unittest" module. It specifies that it
wants the stdlib one with the ``absolute_import`` __future__ import.
The twisted equivalent of this module is ``twisted.trial._synctest``.
"""
from __future__ import absolute_import
import unittest
class Assertions(unittest.TestCase):
pass


@@ -0,0 +1,14 @@
"""
This is a module that shadows a standard library module (intentionally).
It imports a local module, which in turn imports stdlib unittest (the
name shadowed by this module). If that is properly resolved, there's
no problem. However, if jedi doesn't understand absolute_imports, it
will get this module again, causing infinite recursion.
"""
from local_module import Assertions
class TestCase(Assertions):
def test(self):
self.assertT
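
The shadowing fixtures above hinge on ``absolute_import`` semantics. A minimal sketch of the resolution rule (on Python 3 absolute imports are the default, so the ``__future__`` line is a compatibility no-op):

```python
from __future__ import absolute_import  # no-op on Python 3

# With absolute imports, "import unittest" always resolves to the
# installed (stdlib) package, never to a sibling file named unittest.py
# in the importing module's own directory.
import unittest

assert hasattr(unittest, 'TestCase')
```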

View File

@@ -11,6 +11,8 @@
[1,""][2]
#? int() str()
[1,""][20]
#? int() str()
[1,""][str(hello)]
a = list()
#? list()
@@ -94,6 +96,28 @@ a4
b4
# -----------------
# multiple assignments
# -----------------
a = b = 1
#? int()
a
#? int()
b
(a, b) = (c, (e, f)) = ('2', (3, 4))
#? str()
a
#? tuple()
b
#? str()
c
#? int()
e
#? int()
f
# -----------------
# unnecessary braces
# -----------------
@@ -128,6 +152,12 @@ def a(): return ''
#? int()
(tuple)().index()
class C():
def __init__(self):
self.a = (str()).upper()
#? str()
C().a
# -----------------
# imbalanced sides
@@ -186,6 +216,18 @@ f()
#? 9 ['str']
{str: str}
# iteration problem (detected with sith)
d = dict({'a':''})
def y(a):
return a
#?
y(**d)
# problem with more complicated casts
dic = {str(key): ''}
#? str()
dic['']
# -----------------
# with variable as index
# -----------------

View File

@@ -209,6 +209,9 @@ def a():
"""
pass
#?
# str literals in comment """ upper
# -----------------
# magic methods
# -----------------
@@ -220,3 +223,45 @@ class B(): pass
A.__init__
#? ['__init__']
B.__init__
#? ['__init__']
int().__init__
# -----------------
# comments
# -----------------
class A():
def __init__(self):
self.hello = {} # comment shouldn't be a string
#? dict()
A().hello
# -----------------
# unicode
# -----------------
a = 'smörbröd'
#? str()
a
xyz = 'smörbröd.py'
if 1:
#? str()
xyz
# -----------------
# exceptions
# -----------------
try:
import math
except ImportError as i_a:
#? ['i_a']
i_a
#? ImportError()
i_a
try:
import math
except ImportError, i_b:
#? ['i_b']
i_b
#? ImportError()
i_b

View File

@@ -119,6 +119,13 @@ strs.second
#? ['var_class']
TestClass.var_class.var_class.var_class.var_class
# operations (+, *, etc) shouldn't be InstanceElements - #246
class A():
def __init__(self):
self.addition = 1 + 2
#? int()
A().addition
# -----------------
# inheritance
# -----------------
@@ -241,192 +248,6 @@ class A():
#? list()
A().b()
# -----------------
# descriptors
# -----------------
class RevealAccess(object):
"""
A data descriptor that sets and returns values
normally and prints a message logging their access.
"""
def __init__(self, initval=None, name='var'):
self.val = initval
self.name = name
def __get__(self, obj, objtype):
print('Retrieving', self.name)
return self.val
def __set__(self, obj, val):
print('Updating', self.name)
self.val = val
def just_a_method(self):
pass
class C(object):
x = RevealAccess(10, 'var "x"')
#? RevealAccess()
x
#? ['just_a_method']
x.just_a_method
y = 5.0
def __init__(self):
#? int()
self.x
#? []
self.just_a_method
#? []
C.just_a_method
m = C()
#? int()
m.x
#? float()
m.y
#? int()
C.x
#? []
m.just_a_method
#? []
C.just_a_method
# -----------------
# properties
# -----------------
class B():
@property
def r(self):
return 1
@r.setter
def r(self, value):
return ''
def t(self):
return ''
p = property(t)
#? []
B().r()
#? int()
B().r
#? str()
B().p
#? []
B().p()
class PropClass():
def __init__(self, a):
self.a = a
@property
def ret(self):
return self.a
@ret.setter
def ret(self, value):
return 1.0
def ret2(self):
return self.a
ret2 = property(ret2)
@property
def nested(self):
""" causes recusions in properties, should work """
return self.ret
@property
def nested2(self):
""" causes recusions in properties, should not work """
return self.nested2
@property
def join1(self):
""" mutual recusion """
return self.join2
@property
def join2(self):
""" mutual recusion """
return self.join1
#? str()
PropClass("").ret
#? []
PropClass().ret.
#? str()
PropClass("").ret2
#?
PropClass().ret2
#? int()
PropClass(1).nested
#? []
PropClass().nested.
#?
PropClass(1).nested2
#? []
PropClass().nested2.
#?
PropClass(1).join1
# -----------------
# staticmethod/classmethod
# -----------------
class E(object):
a = ''
def __init__(self, a):
self.a = a
def f(x):
return x
f = staticmethod(f)
@staticmethod
def g(x):
return x
def s(cls, x):
return x
s = classmethod(s)
@classmethod
def t(cls, x):
return x
@classmethod
def u(cls, x):
return cls.a
e = E(1)
#? int()
e.f(1)
#? int()
E.f(1)
#? int()
e.g(1)
#? int()
E.g(1)
#? int()
e.s(1)
#? int()
E.s(1)
#? int()
e.t(1)
#? int()
E.t(1)
#? str()
e.u(1)
#? str()
E.u(1)
# -----------------
# recursions
# -----------------
@@ -488,6 +309,8 @@ getattr()
getattr(str)
#?
getattr(getattr, 1)
#?
getattr(str, [])
class Base():

View File

@@ -76,7 +76,7 @@ exe[4]['d']
# -----------------
# class decorators
# Decorator is a class
# -----------------
class Decorator(object):
def __init__(self, func):
@@ -94,18 +94,65 @@ nothing("")[0]
#? str()
nothing("")[1]
@Decorator
def nothing(a,b,c):
return a,b,c
class MethodDecoratorAsClass():
class_var = 3
@Decorator
def func_without_self(arg, arg2):
return arg, arg2
@Decorator
def func_with_self(self, arg):
return self.class_var
#? int()
MethodDecoratorAsClass().func_without_self('')[0]
#? str()
MethodDecoratorAsClass().func_without_self('')[1]
#?
MethodDecoratorAsClass().func_with_self(1)
class SelfVars():
"""Init decorator problem as an instance, #247"""
@Decorator
def __init__(self):
"""
init decorators should be ignored when looking up variables in the
class.
"""
self.c = list
@Decorator
def shouldnt_expose_var(not_self):
"""
Even though in real Python this shouldn't expose the variable, in this
case Jedi exposes the variable, because these kinds of decorators are
normally descriptors, which SHOULD be exposed (at least 90%).
"""
not_self.b = 1.0
def other_method(self):
#? float()
self.b
#? list
self.c
# -----------------
# not found decorators
# not found decorators (are just ignored)
# -----------------
@not_found_decorator
def just_a_func():
return 1
#? []
#? int()
just_a_func()
#? []
just_a_func.
#? ['__closure__']
just_a_func.__closure__
class JustAClass:
@@ -113,14 +160,97 @@ class JustAClass:
def a(self):
return 1
#? []
JustAClass().a.
#? []
JustAClass().a()
#? []
JustAClass.a.
#? []
#? ['__closure__']
JustAClass().a.__closure__
#? int()
JustAClass().a()
#? ['__closure__']
JustAClass.a.__closure__
#? int()
JustAClass.a()
# -----------------
# illegal decorators
# -----------------
class DecoratorWithoutCall():
def __init__(self, func):
self.func = func
@DecoratorWithoutCall
def f():
return 1
# cannot be resolved - should be ignored
@DecoratorWithoutCall(None)
def g():
return 1
#?
f()
#? int()
g()
# -----------------
# method decorators
# -----------------
def dec(f):
def wrapper(s):
return f(s)
return wrapper
class MethodDecorators():
_class_var = 1
def __init__(self):
self._method_var = ''
@dec
def constant(self):
return 1.0
@dec
def class_var(self):
return self._class_var
@dec
def method_var(self):
return self._method_var
#? float()
MethodDecorators().constant()
#? int()
MethodDecorators().class_var()
#? str()
MethodDecorators().method_var()
class Base():
@not_existing
def __init__(self):
pass
@not_existing
def b(self):
return ''
@dec
def c(self):
return 1
class MethodDecoratorDoesntExist(Base):
"""#272 github: combination of method decorators and super()"""
def a(self):
#?
super().__init__()
#? str()
super().b()
#? int()
super().c()
#? float()
self.d()
@doesnt_exist
def d(self):
return 1.0
# -----------------
# others
@@ -148,5 +278,9 @@ follow_statement(1)
# class decorators should just be ignored
@should_ignore
class A(): pass
class A():
def ret(self):
return 1
#? int()
A().ret()

View File

@@ -61,5 +61,5 @@ isinstance(None,
def x(): pass # acts like EOF
#? isinstance
##? isinstance
isinstance(None,

View File

@@ -0,0 +1,182 @@
class RevealAccess(object):
"""
A data descriptor that sets and returns values
normally and prints a message logging their access.
"""
def __init__(self, initval=None, name='var'):
self.val = initval
self.name = name
def __get__(self, obj, objtype):
print('Retrieving', self.name)
return self.val
def __set__(self, obj, val):
print('Updating', self.name)
self.val = val
def just_a_method(self):
pass
class C(object):
x = RevealAccess(10, 'var "x"')
#? RevealAccess()
x
#? ['just_a_method']
x.just_a_method
y = 5.0
def __init__(self):
#? int()
self.x
#? []
self.just_a_method
#? []
C.just_a_method
m = C()
#? int()
m.x
#? float()
m.y
#? int()
C.x
#? []
m.just_a_method
#? []
C.just_a_method
# -----------------
# properties
# -----------------
class B():
@property
def r(self):
return 1
@r.setter
def r(self, value):
return ''
def t(self):
return ''
p = property(t)
#? []
B().r()
#? int()
B().r
#? str()
B().p
#? []
B().p()
class PropClass():
def __init__(self, a):
self.a = a
@property
def ret(self):
return self.a
@ret.setter
def ret(self, value):
return 1.0
def ret2(self):
return self.a
ret2 = property(ret2)
@property
def nested(self):
""" causes recusions in properties, should work """
return self.ret
@property
def nested2(self):
""" causes recusions in properties, should not work """
return self.nested2
@property
def join1(self):
""" mutual recusion """
return self.join2
@property
def join2(self):
""" mutual recusion """
return self.join1
#? str()
PropClass("").ret
#? []
PropClass().ret.
#? str()
PropClass("").ret2
#?
PropClass().ret2
#? int()
PropClass(1).nested
#? []
PropClass().nested.
#?
PropClass(1).nested2
#? []
PropClass().nested2.
#?
PropClass(1).join1
# -----------------
# staticmethod/classmethod
# -----------------
class E(object):
a = ''
def __init__(self, a):
self.a = a
def f(x):
return x
f = staticmethod(f)
@staticmethod
def g(x):
return x
def s(cls, x):
return x
s = classmethod(s)
@classmethod
def t(cls, x):
return x
@classmethod
def u(cls, x):
return cls.a
e = E(1)
#? int()
e.f(1)
#? int()
E.f(1)
#? int()
e.g(1)
#? int()
E.g(1)
#? int()
e.s(1)
#? int()
E.s(1)
#? int()
e.t(1)
#? int()
E.t(1)
#? str()
e.u(1)
#? str()
E.u(1)
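
The fixtures above lean on the descriptor protocol; here is a minimal runnable sketch of a data descriptor in the spirit of ``RevealAccess`` (logging stripped; the class names are illustrative, not part of the test suite):

```python
class Typed:
    # A minimal data descriptor: it defines both __get__ and __set__,
    # so it takes precedence over entries in an instance's __dict__.
    def __init__(self, initval=None):
        self.val = initval

    def __get__(self, obj, objtype=None):
        return self.val

    def __set__(self, obj, val):
        self.val = val


class C:
    x = Typed(10)


m = C()
assert m.x == 10   # attribute access goes through __get__
m.x = 20           # __set__ intercepts the assignment
assert C.x == 20   # the value lives on the descriptor, shared per class
```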

View File

@@ -1,5 +1,8 @@
""" Test docstrings in functions and classes, which are used to infer types """
# -----------------
# sphinx style
# -----------------
def f(a, b, c, d):
""" asdfasdf
:param a: blablabla
@@ -23,6 +26,26 @@ def f(a, b, c, d):
#? dict()
f()
# wrong declarations
def f(a, b):
"""
:param a: Forgot type declaration
:type a:
:param b: Just something
:type b: ``
:rtype:
"""
#?
a
#?
b
#?
f()
# -----------------
# epydoc style
# -----------------
def e(a, b):
""" asdfasdf
@type a: str
@@ -88,3 +111,11 @@ class Test(object):
def test(self):
#? ['teststr']
self.teststr
# -----------------
# statement docstrings
# -----------------
d = ''
""" bsdf """
#? str()
d.upper()

View File

@@ -63,6 +63,11 @@ def func(c=1):
func(1.0)
# Needs to be here, because in this case func is an import -> shouldn't lead to
# exceptions.
import sys as func
func.sys
# -----------------
# classes
# -----------------

View File

@@ -46,6 +46,15 @@ def multi_line_func(a, # comment blabla
#? str()
multi_line_func(1,'')
# nothing after comma
def asdf(a):
return a
x = asdf(a=1,
)
#? int()
x
# -----------------
# double execution
# -----------------
@@ -146,15 +155,29 @@ def a():
# -----------------
def args_func(*args):
#? tuple()
return args
exe = args_func(1, "")
#? int()
exe[0]
#? str()
exe[1]
# illegal args (TypeError)
#?
args_func(*1)[0]
# iterator
#? int()
args_func(*iter([1]))[0]
# different types
e = args_func(*[1+"", {}])
#? int() str()
e[0]
#? dict()
e[1]
_list = [1,""]
exe2 = args_func(_list)[0]
@@ -181,6 +204,9 @@ exe[1][1]
# ** kwargs
# -----------------
def kwargs_func(**kwargs):
#? ['keys']
kwargs.keys
#? dict()
return kwargs
exe = kwargs_func(a=3,b=4.0)
@@ -202,6 +228,12 @@ exe2['a']
# *args / ** kwargs
# -----------------
def func_without_call(*args, **kwargs):
#? tuple()
args
#? dict()
kwargs
def fu(a=1, b="", *args, **kwargs):
return a, b, args, kwargs
@@ -411,3 +443,9 @@ arg_func(1, 2, a=[], b=10)[1]
a = lambda: 3
#? ['__closure__']
a.__closure__
class C():
def __init__(self):
self.a = lambda: 1
#? int()
C().a()

View File

@@ -155,3 +155,19 @@ def ab2(param): pass
def ab3(a=param): pass
ab1(ClassDef);ab2(ClassDef);ab3(ClassDef)
# -----------------
# for loops
# -----------------
for i in range(1):
#! ['for i in range(1): i']
i
for key, value in [(1,2)]:
#! ['for key,value in [(1...']
key
for i in []:
#! ['for i in []: i']
i

View File

@@ -0,0 +1 @@
from . import mod1 as fake

View File

@@ -0,0 +1,5 @@
import recurse_class2
class C(recurse_class2.C):
def a(self):
pass

View File

@@ -0,0 +1,4 @@
import recurse_class1
class C(recurse_class1.C):
pass

View File

@@ -56,6 +56,53 @@ def scope_nested():
#? set
import_tree.random.a
def scope_nested2():
"""Multiple modules should be indexable, if imported"""
import import_tree.mod1
import import_tree.pkg
#? ['mod1']
import_tree.mod1
#? ['pkg']
import_tree.pkg
#? []
import_tree.rename1
def from_names():
#? ['mod1']
from import_tree.pkg.
#? ['path']
from os.
def builtin_test():
#? ['math']
import math
def scope_from_import_variable():
"""
None of them should work, because "fake" imports don't work in Python
without ``sys.modules`` modifications (e.g. ``os.path``); see also
github issue #213 for clarification.
"""
#?
from import_tree.mod2.fake import a
#?
from import_tree.mod2.fake import c
#?
a
#?
c
def scope_from_import_variable_with_parenthesis():
from import_tree.mod2.fake import (
a, c
)
#?
a
#?
c
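
The ``os.path`` exception the docstring above mentions works because ``os`` registers its platform-specific path module in ``sys.modules`` at import time; a quick check:

```python
import os
import sys

# os inserts "os.path" into sys.modules when it is first imported, which
# is why "import os.path" and "from os.path import join" work even though
# os.path is an attribute of os, not a real submodule file.
assert sys.modules['os.path'] is os.path
import os.path as p
assert p is os.path
```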
# -----------------
# std lib modules
# -----------------
@@ -72,6 +119,16 @@ import os
#? ['dirname']
os.path.dirname
#? os.path.join
from os.path import join
from os.path import (
expanduser
)
#? os.path.expanduser
expanduser
from itertools import (tee,
islice)
#? ['islice']
@@ -152,11 +209,11 @@ from .......import_tree import mod1
#?
mod1.a
from .. import base
from .. import helpers
#? int()
base.sample_int
helpers.sample_int
from ..base import sample_int as f
from ..helpers import sample_int as f
#? int()
f
@@ -196,7 +253,9 @@ import datetime.
#? []
import datetime.date
#? 18 ['mod1', 'random', 'pkg', 'rename1', 'rename2', 'import']
#? 18 ['import']
from import_tree. import pkg
#? 17 ['mod1', 'mod2', 'random', 'pkg', 'rename1', 'rename2', 'recurse_class1', 'recurse_class2']
from import_tree. import pkg
#? 18 ['pkg']
@@ -236,3 +295,11 @@ import json, datetime
from import_tree.mod1 import c
#? set
c
from import_tree import recurse_class1
#? ['a']
recurse_class1.C.a
# github #239 RecursionError
#? ['a']
recurse_class1.C().a

View File

@@ -6,9 +6,22 @@ Basically this file could change depending on the current implementation. But
there should never be any errors.
"""
# wait until keywords are out of definitions (pydoc function).
##? 5
's'()
#? ['upper']
str()).upper
# -----------------
# funcs
# -----------------
def asdf(a or b): # multiple param names
return a
#? int()
asdf(2)
from a import (b
def blub():
return 0
@@ -42,6 +55,22 @@ def normalfunc():
#? int()
normalfunc()
# dots in param
def f(seq1...=None):
return seq1
#? int()
f(1)
@
def test_empty_decorator():
return 1
#? int()
test_empty_decorator()
# -----------------
# flows
# -----------------
# first part not complete (raised errors)
if a
@@ -77,6 +106,10 @@ for_local
for_local
# -----------------
# list comprehensions
# -----------------
a2 = [for a2 in [0]]
#?
a2[0]
@@ -107,10 +140,17 @@ a[0]
#? []
int()).
def asdf(a or b): # multiple param names
return a
# -----------------
# keywords
# -----------------
#? int()
asdf(2)
#! []
as
def empty_assert():
x = 3
assert
#? int()
x
import datetime as

View File

@@ -1,5 +1,6 @@
""" named params:
>>> def a(abc): pass
...
>>> a(abc=3) # <- this stuff
"""

View File

@@ -10,7 +10,7 @@ el.description
scopes, path, dot, like = \
api._prepare_goto(source, row, column, source_path, True)
api._prepare_goto(source, row, column, path, True)
# has problems with that (sometimes) very deep nesting.
#? set()

View File

@@ -89,6 +89,9 @@ from import_tree.rename1 import abc
#< (0, 32),
from import_tree.rename1 import not_existing
# shouldn't work
#<
from not_existing import *
# -----------------
# classes
@@ -129,6 +132,16 @@ class TestInstanceVar():
self._instance_var
class NestedClass():
def __getattr__(self, name):
return self
# Shouldn't find a definition, because there's no name defined (used ``getattr``).
#< (0, 14),
NestedClass().instance
# -----------------
# inheritance
# -----------------
@@ -203,3 +216,18 @@ class TestProperty:
def b(self):
#< 13 (-5,8), (0,13)
self.rw_prop
# -----------------
# *args, **kwargs
# -----------------
#< 11 (1,11), (0,8)
def f(**kwargs):
return kwargs
# -----------------
# No result
# -----------------
if isinstance(j, int):
#<
j

View File

@@ -4,7 +4,7 @@ import tempfile
import pytest
from . import base
from . import helpers
from . import run
from . import refactor
@@ -12,11 +12,11 @@ from . import refactor
def pytest_addoption(parser):
parser.addoption(
"--integration-case-dir",
default=os.path.join(base.test_dir, 'completion'),
default=os.path.join(helpers.test_dir, 'completion'),
help="Directory in which integration test case files locate.")
parser.addoption(
"--refactor-case-dir",
default=os.path.join(base.test_dir, 'refactor'),
default=os.path.join(helpers.test_dir, 'refactor'),
help="Directory in which refactoring test case files locate.")
parser.addoption(
"--test-files", "-T", default=[], action='append',
@@ -73,7 +73,7 @@ def isolated_jedi_cache(monkeypatch, tmpdir):
Same as `clean_jedi_cache`, but create the temporary directory for
each test case (scope='function').
"""
settings = base.jedi.settings
from jedi import settings
monkeypatch.setattr(settings, 'cache_directory', str(tmpdir))
@@ -88,7 +88,7 @@ def clean_jedi_cache(request):
This fixture is activated in ../pytest.ini.
"""
settings = base.jedi.settings
from jedi import settings
old = settings.cache_directory
tmp = tempfile.mkdtemp(prefix='jedi-test-')
settings.cache_directory = tmp

View File

@@ -1,46 +1,24 @@
"""
A helper module for testing; it improves compatibility (like
``jedi._compatibility``) and introduces helper functions.
"""
import sys
if sys.hexversion < 0x02070000:
import unittest2 as unittest
else:
import unittest
TestCase = unittest.TestCase
import os
from os.path import abspath, dirname
import functools
import jedi
test_dir = dirname(abspath(__file__))
root_dir = dirname(test_dir)
sample_int = 1 # This is used in completion/imports.py
class TestBase(unittest.TestCase):
def get_script(self, src, pos, path=None):
if pos is None:
lines = src.splitlines()
pos = len(lines), len(lines[-1])
return jedi.Script(src, pos[0], pos[1], path)
def goto_definitions(self, src, pos=None):
script = self.get_script(src, pos)
return script.goto_definitions()
def completions(self, src, pos=None, path=None):
script = self.get_script(src, pos, path)
return script.completions()
def goto_assignments(self, src, pos=None):
script = self.get_script(src, pos)
return script.goto_assignments()
def function_definition(self, src, pos=None):
script = self.get_script(src, pos)
return script.function_definition()
def cwd_at(path):
"""
Decorator to run function at `path`.

View File

@@ -0,0 +1,9 @@
foo = 'ns1!'
# this is a namespace package
try:
import pkg_resources
pkg_resources.declare_namespace(__name__)
except ImportError:
import pkgutil
__path__ = pkgutil.extend_path(__path__, __name__)
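
The fallback branch above uses ``pkgutil.extend_path`` so that several ``sys.path`` roots can contribute modules to one package. A self-contained sketch of that mechanism (the package name ``demo_ns_pkg`` and the temporary directories are made up for the demo; only the ``pkgutil`` variant is shown):

```python
import os
import sys
import tempfile

init = ("import pkgutil\n"
        "__path__ = pkgutil.extend_path(__path__, __name__)\n")

roots = []
for part in ('part_a.py', 'part_b.py'):
    # Each root contributes one half of the same package.
    root = tempfile.mkdtemp()
    pkg = os.path.join(root, 'demo_ns_pkg')
    os.mkdir(pkg)
    with open(os.path.join(pkg, '__init__.py'), 'w') as f:
        f.write(init)
    with open(os.path.join(pkg, part), 'w') as f:
        f.write("marker = %r\n" % part)
    roots.append(root)

sys.path[:0] = roots  # both roots now provide demo_ns_pkg

# extend_path merges both directories, so both halves import.
from demo_ns_pkg import part_a, part_b
assert part_a.marker == 'part_a.py'
assert part_b.marker == 'part_b.py'
```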

View File

@@ -0,0 +1 @@
foo = 'ns1_file!'

View File

@@ -0,0 +1 @@
foo = 'ns1_folder!'

View File

@@ -0,0 +1 @@
foo = 'ns2_file!'

View File

@@ -0,0 +1 @@
foo = 'ns2_folder!'

View File

@@ -0,0 +1 @@
foo = 'nested!'

View File

@@ -100,12 +100,8 @@ import os
import re
from ast import literal_eval
if __name__ == '__main__':
import sys
sys.path.insert(0, '..')
import jedi
from jedi._compatibility import unicode, reduce, StringIO
from jedi._compatibility import unicode, reduce, StringIO, is_py3k
TEST_COMPLETIONS = 0
@@ -182,7 +178,7 @@ class IntegrationTestCase(object):
return should_str
script = self.script()
should_str = definition(self.correct, self.start, script.source_path)
should_str = definition(self.correct, self.start, script.path)
result = script.goto_definitions()
is_str = set(r.desc_with_module for r in result)
return compare_cb(self, is_str, should_str)
@@ -195,8 +191,7 @@ class IntegrationTestCase(object):
def run_usages(self, compare_cb):
result = self.script().usages()
self.correct = self.correct.strip()
compare = sorted((r.module_name, r.start_pos[0], r.start_pos[1])
for r in result)
compare = sorted((r.module_name, r.line, r.column) for r in result)
wanted = []
if not self.correct:
positions = []
@@ -221,7 +216,8 @@ def collect_file_tests(lines, lines_to_execute):
test_type = None
for line_nr, line in enumerate(lines):
line_nr += 1 # py2.5 doesn't know about the additional enumerate param
line = unicode(line)
if not is_py3k:
line = unicode(line, 'UTF-8')
if correct:
r = re.match('^(\d+)\s*(.*)$', correct)
if r:
@@ -280,37 +276,40 @@ def collect_dir_tests(base_dir, test_files, check_thirdparty=False):
yield case
docoptstr = """
Using run.py to make debugging easier with integration tests.
An alternative testing format: much hackier, but very nice to work with.
Usage:
run.py [--pdb] [--debug] [--thirdparty] [<rest>...]
run.py --help
Options:
-h --help Show this screen.
--pdb Enable pdb debugging on fail.
-d, --debug Enable text output debugging (please install ``colorama``).
--thirdparty Also run thirdparty tests (in ``completion/thirdparty``).
"""
if __name__ == '__main__':
# an alternative testing format, this is much more hacky, but very nice to
# work with.
import docopt
arguments = docopt.docopt(docoptstr)
import time
t_start = time.time()
# Sorry I didn't use argparse here. It's because argparse is not in the
# stdlib in 2.5.
args = sys.argv[1:]
try:
i = args.index('--thirdparty')
thirdparty = True
args = args[:i] + args[i + 1:]
except ValueError:
thirdparty = False
import sys
print_debug = False
try:
i = args.index('--debug')
args = args[:i] + args[i + 1:]
except ValueError:
pass
else:
from jedi import api, debug
print_debug = True
api.set_debug_function(debug.print_to_stdout)
if arguments['--debug']:
jedi.set_debug_function()
# get test list, that should be executed
test_files = {}
last = None
for arg in args:
for arg in arguments['<rest>']:
if arg.isdigit():
if last is None:
continue
@@ -326,7 +325,7 @@ if __name__ == '__main__':
# execute tests
cases = list(collect_dir_tests(completion_test_dir, test_files))
if test_files or thirdparty:
if test_files or arguments['--thirdparty']:
completion_test_dir += '/thirdparty'
cases += collect_dir_tests(completion_test_dir, test_files, True)
@@ -356,6 +355,10 @@ if __name__ == '__main__':
print("\ttest fail @%d" % (c.line_nr - 1))
tests_fail += 1
fails += 1
if arguments['--pdb']:
import pdb
pdb.post_mortem()
count += 1
if current != c.path:

View File

@@ -0,0 +1,43 @@
"""
Tests ``from __future__ import absolute_import`` (only important for
Python 2.X)
"""
import jedi
from jedi.parsing import Parser
from . import helpers
def test_explicit_absolute_imports():
"""
Detect modules with ``from __future__ import absolute_import``.
"""
parser = Parser("from __future__ import absolute_import", "test.py")
assert parser.module.has_explicit_absolute_import
def test_no_explicit_absolute_imports():
"""
Detect modules without ``from __future__ import absolute_import``.
"""
parser = Parser("1", "test.py")
assert not parser.module.has_explicit_absolute_import
def test_dont_break_imports_without_namespaces():
"""
The code checking for ``from __future__ import absolute_import`` shouldn't
assume that all imports have non-``None`` namespaces.
"""
src = "from __future__ import absolute_import\nimport xyzzy"
parser = Parser(src, "test.py")
assert parser.module.has_explicit_absolute_import
@helpers.cwd_at("test/absolute_import")
def test_can_complete_when_shadowing():
filename = "unittest.py"
with open(filename) as f:
lines = f.readlines()
src = "".join(lines)
script = jedi.Script(src, len(lines), len(lines[1]), filename)
assert script.completions()

26
test/test_api.py Normal file
View File

@@ -0,0 +1,26 @@
"""
Test all things related to the ``jedi.api`` module.
"""
from jedi import common, api
def test_preload_modules():
def check_loaded(*modules):
# + 1 for builtin, +1 for None module (currently used)
assert len(new) == len(modules) + 2
for i in modules + ('__builtin__',):
assert [i in k for k in new.keys() if k is not None]
from jedi import cache
temp_cache, cache.parser_cache = cache.parser_cache, {}
new = cache.parser_cache
with common.ignored(KeyError): # performance of tests -> no reload
new['__builtin__'] = temp_cache['__builtin__']
api.preload_module('datetime')
check_loaded('datetime')
api.preload_module('json', 'token')
check_loaded('datetime', 'json', 'token')
cache.parser_cache = temp_cache

View File

@@ -1,10 +1,20 @@
""" Test all things related to the ``jedi.api_classes`` module.
"""
import textwrap
import pytest
from jedi import api
from jedi import Script
import jedi
def test_is_keyword():
results = Script('import ', 1, 1, None).goto_definitions()
assert len(results) == 1 and results[0].is_keyword == True
results = Script('str', 1, 1, None).goto_definitions()
assert len(results) == 1 and results[0].is_keyword == False
def make_definitions():
"""
Return a list of definitions for parametrized tests.
@@ -29,19 +39,19 @@ def make_definitions():
""")
definitions = []
definitions += api.defined_names(source)
definitions += jedi.defined_names(source)
source += textwrap.dedent("""
variable = sys or C or x or f or g or g() or h""")
lines = source.splitlines()
script = api.Script(source, len(lines), len('variable'), None)
script = Script(source, len(lines), len('variable'), None)
definitions += script.goto_definitions()
script2 = api.Script(source, 4, len('class C'), None)
script2 = Script(source, 4, len('class C'), None)
definitions += script2.usages()
source_param = "def f(a): return a"
script_param = api.Script(source_param, 1, len(source_param), None)
script_param = Script(source_param, 1, len(source_param), None)
definitions += script_param.goto_assignments()
return definitions
@@ -51,3 +61,27 @@ def make_definitions():
def test_basedefinition_type(definition):
assert definition.type in ('module', 'class', 'instance', 'function',
'generator', 'statement', 'import', 'param')
def test_function_call_signature_in_doc():
defs = Script("""
def f(x, y=1, z='a'):
pass
f""").goto_definitions()
doc = defs[0].doc
assert "f(x, y = 1, z = 'a')" in doc
def test_class_call_signature():
defs = Script("""
class Foo:
def __init__(self, x, y=1, z='a'):
pass
Foo""").goto_definitions()
doc = defs[0].doc
assert "Foo(self, x, y = 1, z = 'a')" in doc
def test_position_none_if_builtin():
gotos = Script('import sys; sys.path').goto_assignments()
assert gotos[0].line is None
assert gotos[0].column is None

View File

@@ -1,6 +1,13 @@
"""
Test all things related to the ``jedi.cache`` module.
"""
import time
import pytest
from jedi import settings
import jedi
from jedi import settings, cache
from jedi.cache import ParserCacheItem, _ModulePickling
@@ -52,3 +59,21 @@ def test_modulepickling_delete_incompatible_cache():
cache2.version = 2
cached2 = load_stored_item(cache2, path, item)
assert cached2 is None
def test_star_import_cache_duration():
new = 0.01
old, jedi.settings.star_import_cache_validity = \
jedi.settings.star_import_cache_validity, new
cache.star_import_cache = {} # first empty...
# path needs to be not-None (otherwise caching effects are not visible)
jedi.Script('', 1, 0, '').completions()
time.sleep(2 * new)
jedi.Script('', 1, 0, '').completions()
# reset values
jedi.settings.star_import_cache_validity = old
length = len(cache.star_import_cache)
cache.star_import_cache = {}
assert length == 1
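
The validity-window behaviour this test exercises can be sketched generically as a time-stamped cache (a simplified illustration, not Jedi's actual implementation):

```python
import time

class TimedCache:
    """Cache whose entries expire after `validity` seconds."""
    def __init__(self, validity):
        self.validity = validity
        self._store = {}  # key -> (timestamp, value)

    def get(self, key, compute):
        now = time.monotonic()
        hit = self._store.get(key)
        if hit is not None and now - hit[0] < self.validity:
            return hit[1]          # still fresh: reuse the cached value
        value = compute()
        self._store[key] = (now, value)
        return value

calls = []
cache = TimedCache(validity=0.1)
cache.get('mod', lambda: calls.append(1))  # miss: computed
cache.get('mod', lambda: calls.append(1))  # hit: served from cache
time.sleep(0.25)                           # entry expires
cache.get('mod', lambda: calls.append(1))  # miss again: recomputed
assert len(calls) == 2
```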

View File

@@ -0,0 +1,114 @@
import textwrap
from .helpers import TestCase
from jedi import Script
class TestCallSignatures(TestCase):
def _run(self, source, expected_name, expected_index=0, line=None, column=None):
signatures = Script(source, line, column).call_signatures()
assert len(signatures) <= 1
if not signatures:
assert expected_name is None
else:
assert signatures[0].call_name == expected_name
assert signatures[0].index == expected_index
def test_call_signatures(self):
def run(source, name, index=0, column=None, line=1):
self._run(source, name, index, line, column)
# simple
s1 = "abs(a, str("
run(s1, 'abs', 0, 4)
run(s1, 'abs', 1, 6)
run(s1, 'abs', 1, 7)
run(s1, 'abs', 1, 8)
run(s1, 'str', 0, 11)
s2 = "abs(), "
run(s2, 'abs', 0, 4)
run(s2, None, column=5)
run(s2, None)
s3 = "abs()."
run(s3, None, column=5)
run(s3, None)
# more complicated
s4 = 'abs(zip(), , set,'
run(s4, None, column=3)
run(s4, 'abs', 0, 4)
run(s4, 'zip', 0, 8)
run(s4, 'abs', 0, 9)
#run(s4, 'abs', 1, 10)
s5 = "abs(1,\nif 2:\n def a():"
run(s5, 'abs', 0, 4)
run(s5, 'abs', 1, 6)
s6 = "str().center("
run(s6, 'center', 0)
run(s6, 'str', 0, 4)
s7 = "str().upper().center("
s8 = "str(int[zip("
run(s7, 'center', 0)
run(s8, 'zip', 0)
run(s8, 'str', 0, 8)
run("import time; abc = time; abc.sleep(", 'sleep', 0)
# jedi-vim #9
run("with open(", 'open', 0)
# jedi-vim #11
run("for sorted(", 'sorted', 0)
run("for s in sorted(", 'sorted', 0)
# jedi #57
s = "def func(alpha, beta): pass\n" \
"func(alpha='101',"
run(s, 'func', 0, column=13, line=2)
def test_function_definition_complex(self):
s = """
def abc(a,b):
pass
def a(self):
abc(
if 1:
pass
"""
self._run(s, 'abc', 0, line=6, column=24)
s = """
import re
def huhu(it):
re.compile(
return it * 2
"""
self._run(s, 'compile', 0, line=4, column=31)
# jedi-vim #70
s = """def foo("""
assert Script(s).call_signatures() == []
# jedi-vim #116
s = """import functools; test = getattr(functools, 'partial'); test("""
self._run(s, 'partial', 0)
def test_call_signature_on_module(self):
"""github issue #240"""
s = 'import datetime; datetime('
# just don't throw an exception (if numpy doesn't exist, just ignore it)
assert Script(s).call_signatures() == []
def test_function_definition_empty_paren_pre_space(self):
s = textwrap.dedent("""\
def f(a, b):
pass
f( )""")
self._run(s, 'f', 0, line=3, column=3)

View File

@@ -0,0 +1,74 @@
"""
Tests for `api.defined_names`.
"""
import textwrap
from jedi import api
from .helpers import TestCase
class TestDefinedNames(TestCase):
def assert_definition_names(self, definitions, names):
self.assertEqual([d.name for d in definitions], names)
def check_defined_names(self, source, names):
definitions = api.defined_names(textwrap.dedent(source))
self.assert_definition_names(definitions, names)
return definitions
def test_get_definitions_flat(self):
self.check_defined_names("""
import module
class Class:
pass
def func():
pass
data = None
""", ['module', 'Class', 'func', 'data'])
def test_dotted_assignment(self):
self.check_defined_names("""
x = Class()
x.y.z = None
""", ['x'])
def test_multiple_assignment(self):
self.check_defined_names("""
x = y = None
""", ['x', 'y'])
def test_multiple_imports(self):
self.check_defined_names("""
from module import a, b
from another_module import *
""", ['a', 'b'])
def test_nested_definitions(self):
definitions = self.check_defined_names("""
class Class:
def f():
pass
def g():
pass
""", ['Class'])
subdefinitions = definitions[0].defined_names()
self.assert_definition_names(subdefinitions, ['f', 'g'])
self.assertEqual([d.full_name for d in subdefinitions],
['Class.f', 'Class.g'])
def test_nested_class(self):
definitions = self.check_defined_names("""
class L1:
class L2:
class L3:
def f(): pass
def f(): pass
def f(): pass
def f(): pass
""", ['L1', 'f'])
subdefs = definitions[0].defined_names()
subsubdefs = subdefs[0].defined_names()
self.assert_definition_names(subdefs, ['L2', 'f'])
self.assert_definition_names(subsubdefs, ['L3', 'f'])
self.assert_definition_names(subsubdefs[0].defined_names(), ['f'])

61
test/test_docstring.py Normal file
View File

@@ -0,0 +1,61 @@
"""
Testing of docstring related issues and especially ``jedi.docstrings``.
"""
import jedi
from .helpers import unittest
class TestDocstring(unittest.TestCase):
def test_function_doc(self):
defs = jedi.Script("""
def func():
'''Docstring of `func`.'''
func""").goto_definitions()
self.assertEqual(defs[0].raw_doc, 'Docstring of `func`.')
@unittest.skip('need evaluator class for that')
def test_attribute_docstring(self):
defs = jedi.Script("""
x = None
'''Docstring of `x`.'''
x""").goto_definitions()
self.assertEqual(defs[0].raw_doc, 'Docstring of `x`.')
@unittest.skip('need evaluator class for that')
def test_multiple_docstrings(self):
defs = jedi.Script("""
def func():
'''Original docstring.'''
x = func
'''Docstring of `x`.'''
x""").goto_definitions()
docs = [d.raw_doc for d in defs]
self.assertEqual(docs, ['Original docstring.', 'Docstring of `x`.'])
def test_completion(self):
assert jedi.Script('''
class DocstringCompletion():
#? []
""" asdfas """''').completions()
def test_docstrings_type_dotted_import(self):
s = """
def func(arg):
'''
:type arg: threading.Thread
'''
arg."""
names = [c.name for c in jedi.Script(s).completions()]
assert 'start' in names
def test_docstrings_type_str(self):
s = """
def func(arg):
'''
:type arg: str
'''
arg."""
names = [c.name for c in jedi.Script(s).completions()]
assert 'join' in names
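The `:type arg: str` convention exercised by the two tests above can be matched with a small regular expression. A stdlib-only sketch, not jedi's actual docstring parser (the pattern and variable names here are illustrative):

```python
import re

# Illustrative pattern for Sphinx-style ``:type name: dotted.path`` lines.
TYPE_RE = re.compile(r':type\s+(\w+):\s*([\w.]+)')

doc = '''
:type arg: threading.Thread
'''
match = TYPE_RE.search(doc)
param, dotted_type = match.group(1), match.group(2)
```

Given the type's dotted path, a completion engine can then resolve it like any other import and complete attributes on it.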

test/test_fast_parser.py Normal file

@@ -0,0 +1,26 @@
import jedi
def test_add_to_end():
"""
fast_parser doesn't parse everything again. It just updates with the
help of caches; this is an example that didn't work.
"""
a = """
class Abc():
def abc(self):
self.x = 3
class Two(Abc):
def h(self):
self
""" # ^ here is the first completion
b = " def g(self):\n" \
" self."
assert jedi.Script(a, 8, 12, 'example.py').completions()
assert jedi.Script(a + b, path='example.py').completions()
a = a[:-1] + '.\n'
assert jedi.Script(a, 8, 13, 'example.py').completions()
assert jedi.Script(a + b, path='example.py').completions()

test/test_full_name.py Normal file

@@ -0,0 +1,86 @@
"""
Tests for :attr:`.BaseDefinition.full_name`.
There are three kinds of test:
#. Test classes derived from :class:`MixinTestFullName`.
Child class defines :attr:`.operation` to alter how
the api definition instance is created.
#. :class:`TestFullDefinedName` is to test combination of
``obj.full_name`` and ``jedi.defined_names``.
#. Misc single-function tests.
"""
import textwrap
import jedi
from jedi import api_classes
from .helpers import TestCase
class MixinTestFullName(object):
operation = None
def check(self, source, desired):
script = jedi.Script(textwrap.dedent(source))
definitions = getattr(script, type(self).operation)()
self.assertEqual(definitions[0].full_name, desired)
def test_os_path_join(self):
self.check('import os; os.path.join', 'os.path.join')
def test_builtin(self):
self.check('type', 'type')
def test_from_import(self):
self.check('from os import path', 'os.path')
class TestFullNameWithGotoDefinitions(MixinTestFullName, TestCase):
operation = 'goto_definitions'
def test_tuple_mapping(self):
self.check("""
import re
any_re = re.compile('.*')
any_re""", 're.RegexObject')
class TestFullNameWithCompletions(MixinTestFullName, TestCase):
operation = 'completions'
class TestFullDefinedName(TestCase):
"""
Test combination of ``obj.full_name`` and ``jedi.defined_names``.
"""
def check(self, source, desired):
definitions = jedi.defined_names(textwrap.dedent(source))
full_names = [d.full_name for d in definitions]
self.assertEqual(full_names, desired)
def test_local_names(self):
self.check("""
def f(): pass
class C: pass
""", ['f', 'C'])
def test_imports(self):
self.check("""
import os
from os import path
from os.path import join
from os import path as opath
""", ['os', 'os.path', 'os.path.join', 'os.path'])
def test_keyword_full_name_should_be_none():
"""issue #94"""
# Using `from jedi.keywords import Keyword` here does NOT work
# in Python 3. This is due to the import hack jedi is using.
Keyword = api_classes.keywords.Keyword
d = api_classes.Definition(Keyword('(', (0, 0)))
assert d.full_name is None


@@ -2,7 +2,7 @@ import os
import pytest
from . import base
from . import helpers
def assert_case_equal(case, actual, desired):
@@ -23,7 +23,7 @@ desired = %s
def test_integration(case, monkeypatch, pytestconfig):
if case.skip is not None:
pytest.skip(case.skip)
repo_root = base.root_dir
repo_root = helpers.root_dir
monkeypatch.chdir(os.path.join(repo_root, 'jedi'))
case.run(assert_case_equal)


@@ -0,0 +1,79 @@
"""
Tests of various import related things that could not be tested with "Black Box
Tests".
"""
import itertools
from jedi import Script
from .helpers import cwd_at
def test_goto_definition_on_import():
assert Script("import sys_blabla", 1, 8).goto_definitions() == []
assert len(Script("import sys", 1, 8).goto_definitions()) == 1
@cwd_at('jedi')
def test_complete_on_empty_import():
# should just list the files in the directory
assert 10 < len(Script("from .", path='').completions()) < 30
assert 10 < len(Script("from . import", 1, 5, '').completions()) < 30
assert 10 < len(Script("from . import classes", 1, 5, '').completions()) < 30
assert len(Script("import").completions()) == 0
assert len(Script("import import", path='').completions()) > 0
# 111
assert Script("from datetime import").completions()[0].name == 'import'
assert Script("from datetime import ").completions()
def test_imports_on_global_namespace_without_path():
"""If the path is None, there shouldn't be any import problem"""
completions = Script("import operator").completions()
assert [c.name for c in completions] == ['operator']
completions = Script("import operator", path='example.py').completions()
assert [c.name for c in completions] == ['operator']
# the first one has a path the second doesn't
completions = Script("import keyword", path='example.py').completions()
assert [c.name for c in completions] == ['keyword']
completions = Script("import keyword").completions()
assert [c.name for c in completions] == ['keyword']
def test_named_import():
"""named import - jedi-vim issue #8"""
s = "import time as dt"
assert len(Script(s, 1, 15, '/').goto_definitions()) == 1
assert len(Script(s, 1, 10, '/').goto_definitions()) == 1
def test_goto_following_on_imports():
s = "import multiprocessing.dummy; multiprocessing.dummy"
g = Script(s).goto_assignments()
assert len(g) == 1
assert (g[0].line, g[0].column) != (0, 0)
def test_after_from():
def check(source, result, column=None):
completions = Script(source, column=column).completions()
assert [c.name for c in completions] == result
check('from os ', ['import'])
check('\nfrom os ', ['import'])
check('\nfrom os import whatever', ['import'], len('from os im'))
check('from os\\\n', ['import'])
check('from os \\\n', ['import'])
def test_follow_definition():
""" github issue #45 """
c = Script("from datetime import timedelta; timedelta").completions()
# type can also point to import, but there will be additional
# attributes
objs = itertools.chain.from_iterable(r.follow_definition() for r in c)
types = [o.type for o in objs]
assert 'import' not in types and 'class' in types


@@ -0,0 +1,46 @@
"""
Test of keywords and ``jedi.keywords``
"""
import jedi
from jedi import Script, common
def test_goto_assignments_keyword():
"""
Bug: goto assignments on ``in`` used to raise AttributeError::
'unicode' object has no attribute 'generate_call_path'
"""
Script('in').goto_assignments()
def test_keyword_doc():
r = list(Script("or", 1, 1).goto_definitions())
assert len(r) == 1
assert len(r[0].doc) > 100
r = list(Script("asfdasfd", 1, 1).goto_definitions())
assert len(r) == 0
k = Script("fro").completions()[0]
imp_start = '\nThe ``import'
assert k.raw_doc.startswith(imp_start)
assert k.doc.startswith(imp_start)
def test_keyword():
""" github jedi-vim issue #44 """
defs = Script("print").goto_definitions()
assert [d.doc for d in defs]
defs = Script("import").goto_definitions()
assert len(defs) == 1 and [1 for d in defs if d.doc]
# unrelated to #44
defs = Script("import").goto_assignments()
assert len(defs) == 0
completions = Script("import", 1, 1).completions()
assert len(completions) == 0
with common.ignored(jedi.NotFoundError): # TODO shouldn't throw that.
defs = Script("assert").goto_definitions()
assert len(defs) == 1
def test_lambda():
defs = Script('lambda x: x', column=0).goto_definitions()
assert [d.type for d in defs] == ['keyword']

test/test_interpreter.py Normal file

@@ -0,0 +1,44 @@
"""
Tests of ``jedi.api.Interpreter``.
"""
from .helpers import TestCase
import jedi
from jedi._compatibility import is_py33
class TestInterpreterAPI(TestCase):
def check_interpreter_complete(self, source, namespace, completions,
**kwds):
script = jedi.Interpreter(source, [namespace], **kwds)
cs = script.completions()
actual = [c.name for c in cs]
self.assertEqual(sorted(actual), sorted(completions))
def test_complete_raw_function(self):
from os.path import join
self.check_interpreter_complete('join().up',
locals(),
['upper'])
def test_complete_raw_function_different_name(self):
from os.path import join as pjoin
self.check_interpreter_complete('pjoin().up',
locals(),
['upper'])
def test_complete_raw_module(self):
import os
self.check_interpreter_complete('os.path.join().up',
locals(),
['upper'])
def test_complete_raw_instance(self):
import datetime
dt = datetime.datetime(2013, 1, 1)
completions = ['time', 'timetz', 'timetuple']
if is_py33:
completions += ['timestamp']
self.check_interpreter_complete('(dt - dt).ti',
locals(),
completions)

test/test_jedi_system.py Normal file

@@ -0,0 +1,61 @@
"""
Test the Jedi "System" which means for example to test if imports are
correctly used.
"""
import os
import inspect
import jedi
def test_settings_module():
"""
jedi.settings and jedi.cache.settings must be the same module.
"""
from jedi import cache
from jedi import settings
assert cache.settings is settings
def test_no_duplicate_modules():
"""
Make sure that import hack works as expected.
Jedi does an import hack (see: jedi/__init__.py) to have submodules
with circular dependencies. The modules in this circular dependency
"loop" must be imported by ``import <module>`` rather than normal
``from jedi import <module>`` (or ``from . jedi ...``). This test
makes sure that this is satisfied.
See also:
- `#160 <https://github.com/davidhalter/jedi/issues/160>`_
- `#161 <https://github.com/davidhalter/jedi/issues/161>`_
"""
import sys
jedipath = os.path.dirname(os.path.abspath(jedi.__file__))
def is_submodule(m):
try:
filepath = m.__file__
except AttributeError:
return False
return os.path.abspath(filepath).startswith(jedipath)
modules = list(filter(is_submodule, sys.modules.values()))
top_modules = [m for m in modules if not m.__name__.startswith('jedi.')]
for m in modules:
if m is jedi:
# py.test automatically imports `jedi.*` when --doctest-modules
# is given. So this test cannot succeed.
continue
for tm in top_modules:
try:
imported = getattr(m, tm.__name__)
except AttributeError:
continue
if inspect.ismodule(imported):
# module could have a function with the same name, e.g.
# `keywords.keywords`.
assert imported is tm


@@ -0,0 +1,53 @@
import jedi
import sys
from os.path import dirname, join
def test_namespace_package():
sys.path.insert(0, join(dirname(__file__), 'namespace_package/ns1'))
sys.path.insert(1, join(dirname(__file__), 'namespace_package/ns2'))
try:
# goto definition
assert jedi.Script('from pkg import ns1_file').goto_definitions()
assert jedi.Script('from pkg import ns2_file').goto_definitions()
assert not jedi.Script('from pkg import ns3_file').goto_definitions()
# goto assignment
tests = {
'from pkg.ns2_folder.nested import foo': 'nested!',
'from pkg.ns2_folder import foo': 'ns2_folder!',
'from pkg.ns2_file import foo': 'ns2_file!',
'from pkg.ns1_folder import foo': 'ns1_folder!',
'from pkg.ns1_file import foo': 'ns1_file!',
'from pkg import foo': 'ns1!',
}
for source, solution in tests.items():
ass = jedi.Script(source).goto_assignments()
assert len(ass) == 1
assert ass[0].description == "foo = '%s'" % solution
# completion
completions = jedi.Script('from pkg import ').completions()
names = [str(c.name) for c in completions] # str because of unicode
compare = ['foo', 'ns1_file', 'ns1_folder', 'ns2_folder', 'ns2_file']
# must at least contain these items, other items are not important
assert not (set(compare) - set(names))
tests = {
'from pkg import ns2_folder as x': 'ns2_folder!',
'from pkg import ns2_file as x': 'ns2_file!',
'from pkg.ns2_folder import nested as x': 'nested!',
'from pkg import ns1_folder as x': 'ns1_folder!',
'from pkg import ns1_file as x': 'ns1_file!',
'import pkg as x': 'ns1!',
}
for source, solution in tests.items():
for c in jedi.Script(source + '; x.').completions():
if c.name == 'foo':
completion = c
solution = "statement: foo = '%s'" % solution
assert completion.description == solution
finally:
sys.path.pop(0)
sys.path.pop(0)

test/test_parsing.py Normal file

@@ -0,0 +1,90 @@
from jedi.parsing import Parser
from jedi import parsing_representation as pr
def test_user_statement_on_import():
"""github #285"""
s = "from datetime import (\n" \
" time)"
for pos in [(2, 1), (2, 4)]:
u = Parser(s, user_position=pos).user_stmt
assert isinstance(u, pr.Import)
assert u.defunct == False
assert [str(n) for n in u.get_defined_names()] == ['time']
class TestCallAndName():
def get_call(self, source):
stmt = Parser(source, no_docstr=True).module.statements[0]
return stmt.get_commands()[0]
def test_name_and_call_positions(self):
call = self.get_call('name\nsomething_else')
assert str(call.name) == 'name'
assert call.name.start_pos == call.start_pos == (1, 0)
assert call.name.end_pos == call.end_pos == (1, 4)
call = self.get_call('1.0\n')
assert call.name == 1.0
assert call.start_pos == (1, 0)
assert call.end_pos == (1, 3)
def test_call_type(self):
call = self.get_call('hello')
assert call.type == pr.Call.NAME
assert type(call.name) == pr.Name
call = self.get_call('1.0')
assert type(call.name) == float
assert call.type == pr.Call.NUMBER
call = self.get_call('1')
assert type(call.name) == int
assert call.type == pr.Call.NUMBER
call = self.get_call('"hello"')
assert call.type == pr.Call.STRING
assert call.name == 'hello'
class TestSubscopes():
def get_sub(self, source):
return Parser(source).module.subscopes[0]
def test_subscope_names(self):
name = self.get_sub('class Foo: pass').name
assert name.start_pos == (1, len('class '))
assert name.end_pos == (1, len('class Foo'))
assert str(name) == 'Foo'
name = self.get_sub('def foo(): pass').name
assert name.start_pos == (1, len('def '))
assert name.end_pos == (1, len('def foo'))
assert str(name) == 'foo'
class TestImports():
def get_import(self, source):
return Parser(source).module.imports[0]
def test_import_names(self):
imp = self.get_import('import math\n')
names = imp.get_defined_names()
assert len(names) == 1
assert str(names[0]) == 'math'
assert names[0].start_pos == (1, len('import '))
assert names[0].end_pos == (1, len('import math'))
assert imp.start_pos == (1, 0)
assert imp.end_pos == (1, len('import math'))
def test_module():
module = Parser('asdf', 'example.py', no_docstr=True).module
name = module.name
assert str(name) == 'example'
assert name.start_pos == (0, 0)
assert name.end_pos == (0, 0)
module = Parser('asdf', no_docstr=True).module
name = module.name
assert str(name) == ''
assert name.start_pos == (0, 0)
assert name.end_pos == (0, 0)

test/test_regression.py Executable file → Normal file

@@ -1,45 +1,21 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Unit tests to avoid errors of the past. Makes use of Python's ``unittest``
module.
Unit tests to avoid errors of the past. These are also all tests that didn't
find a good place in any other testing module.
"""
import time
import functools
import itertools
import os
import textwrap
from .base import TestBase, unittest, cwd_at
from .helpers import TestCase, cwd_at
import jedi
from jedi._compatibility import utf8, unicode
from jedi import api, parsing, common
api_classes = api.api_classes
from jedi import Script
from jedi import api, parsing
#jedi.set_debug_function(jedi.debug.print_to_stdout)
#jedi.set_debug_function()
class TestRegression(TestBase):
def test_star_import_cache_duration(self):
new = 0.01
old, jedi.settings.star_import_cache_validity = \
jedi.settings.star_import_cache_validity, new
cache = api.cache
cache.star_import_cache = {} # first empty...
# path needs to be not-None (otherwise caching effects are not visible)
jedi.Script('', 1, 0, '').completions()
time.sleep(2 * new)
jedi.Script('', 1, 0, '').completions()
# reset values
jedi.settings.star_import_cache_validity = old
length = len(cache.star_import_cache)
cache.star_import_cache = {}
self.assertEqual(length, 1)
class TestRegression(TestCase):
def test_goto_definition_cursor(self):
s = ("class A():\n"
@@ -59,7 +35,9 @@ class TestRegression(TestBase):
diff_line = 4, 10
should2 = 8, 10
get_def = lambda pos: [d.description for d in self.goto_definitions(s, pos)]
def get_def(pos):
return [d.description for d in Script(s, *pos).goto_definitions()]
in_name = get_def(in_name)
under_score = get_def(under_score)
should1 = get_def(should1)
@@ -74,173 +52,26 @@ class TestRegression(TestBase):
self.assertRaises(jedi.NotFoundError, get_def, cls)
def test_keyword_doc(self):
r = list(self.goto_definitions("or", (1, 1)))
assert len(r) == 1
assert len(r[0].doc) > 100
r = list(self.goto_definitions("asfdasfd", (1, 1)))
assert len(r) == 0
def test_operator_doc(self):
r = list(self.goto_definitions("a == b", (1, 3)))
r = list(Script("a == b", 1, 3).goto_definitions())
assert len(r) == 1
assert len(r[0].doc) > 100
def test_function_call_signature(self):
defs = self.goto_definitions("""
def f(x, y=1, z='a'):
pass
f""")
doc = defs[0].doc
assert "f(x, y = 1, z = 'a')" in doc
def test_class_call_signature(self):
defs = self.goto_definitions("""
class Foo:
def __init__(self, x, y=1, z='a'):
pass
Foo""")
doc = defs[0].doc
assert "Foo(self, x, y = 1, z = 'a')" in doc
def test_goto_definition_at_zero(self):
assert self.goto_definitions("a", (1, 1)) == []
s = self.goto_definitions("str", (1, 1))
assert Script("a", 1, 1).goto_definitions() == []
s = Script("str", 1, 1).goto_definitions()
assert len(s) == 1
assert list(s)[0].description == 'class str'
assert self.goto_definitions("", (1, 0)) == []
assert Script("", 1, 0).goto_definitions() == []
def test_complete_at_zero(self):
s = self.completions("str", (1, 3))
s = Script("str", 1, 3).completions()
assert len(s) == 1
assert list(s)[0].name == 'str'
s = self.completions("", (1, 0))
s = Script("", 1, 0).completions()
assert len(s) > 0
def test_goto_definition_on_import(self):
assert self.goto_definitions("import sys_blabla", (1, 8)) == []
assert len(self.goto_definitions("import sys", (1, 8))) == 1
@cwd_at('jedi')
def test_complete_on_empty_import(self):
# should just list the files in the directory
assert 10 < len(self.completions("from .", path='')) < 30
assert 10 < len(self.completions("from . import", (1, 5), '')) < 30
assert 10 < len(self.completions("from . import classes",
(1, 5), '')) < 30
assert len(self.completions("import")) == 0
assert len(self.completions("import import", path='')) > 0
# 111
assert self.completions("from datetime import")[0].name == 'import'
assert self.completions("from datetime import ")
def assert_call_def(self, call_def, name, index):
self.assertEqual(
{'call_name': getattr(call_def, 'call_name', None),
'index': getattr(call_def, 'index', None)},
{'call_name': name, 'index': index},
)
def test_function_definition(self):
check = self.assert_call_def
# simple
s = "abs(a, str("
s2 = "abs(), "
s3 = "abs()."
# more complicated
s4 = 'abs(zip(), , set,'
s5 = "abs(1,\nif 2:\n def a():"
s6 = "str().center("
s7 = "str().upper().center("
s8 = "str(int[zip("
check(self.function_definition(s, (1, 4)), 'abs', 0)
check(self.function_definition(s, (1, 6)), 'abs', 1)
check(self.function_definition(s, (1, 7)), 'abs', 1)
check(self.function_definition(s, (1, 8)), 'abs', 1)
check(self.function_definition(s, (1, 11)), 'str', 0)
check(self.function_definition(s2, (1, 4)), 'abs', 0)
assert self.function_definition(s2, (1, 5)) is None
assert self.function_definition(s2) is None
assert self.function_definition(s3, (1, 5)) is None
assert self.function_definition(s3) is None
assert self.function_definition(s4, (1, 3)) is None
check(self.function_definition(s4, (1, 4)), 'abs', 0)
check(self.function_definition(s4, (1, 8)), 'zip', 0)
check(self.function_definition(s4, (1, 9)), 'abs', 0)
#check(self.function_definition(s4, (1, 10)), 'abs', 1)
check(self.function_definition(s5, (1, 4)), 'abs', 0)
check(self.function_definition(s5, (1, 6)), 'abs', 1)
check(self.function_definition(s6), 'center', 0)
check(self.function_definition(s6, (1, 4)), 'str', 0)
check(self.function_definition(s7), 'center', 0)
check(self.function_definition(s8), 'zip', 0)
check(self.function_definition(s8, (1, 8)), 'str', 0)
s = "import time; abc = time; abc.sleep("
check(self.function_definition(s), 'sleep', 0)
# jedi-vim #9
s = "with open("
check(self.function_definition(s), 'open', 0)
# jedi-vim #11
s1 = "for sorted("
check(self.function_definition(s1), 'sorted', 0)
s2 = "for s in sorted("
check(self.function_definition(s2), 'sorted', 0)
# jedi #57
s = "def func(alpha, beta): pass\n" \
"func(alpha='101',"
check(self.function_definition(s, (2, 13)), 'func', 0)
def test_function_definition_complex(self):
check = self.assert_call_def
s = """
def abc(a,b):
pass
def a(self):
abc(
if 1:
pass
"""
check(self.function_definition(s, (6, 24)), 'abc', 0)
s = """
import re
def huhu(it):
re.compile(
return it * 2
"""
check(self.function_definition(s, (4, 31)), 'compile', 0)
# jedi-vim #70
s = """def foo("""
assert self.function_definition(s) is None
# jedi-vim #116
s = """import functools; test = getattr(functools, 'partial'); test("""
check(self.function_definition(s), 'partial', 0)
def test_function_definition_empty_paren_pre_space(self):
s = textwrap.dedent("""\
def f(a, b):
pass
f( )""")
call_def = self.function_definition(s, (3, 3))
self.assert_call_def(call_def, 'f', 0)
@cwd_at('jedi')
def test_add_dynamic_mods(self):
api.settings.additional_dynamic_modules = ['dynamic.py']
@@ -250,114 +81,23 @@ class TestRegression(TestBase):
src2 = 'from .. import setup; setup.ret(1)'
# .parser to load the module
api.modules.Module(os.path.abspath('dynamic.py'), src2).parser
script = jedi.Script(src1, 1, len(src1), '../setup.py')
result = script.goto_definitions()
result = Script(src1, path='../setup.py').goto_definitions()
assert len(result) == 1
assert result[0].description == 'class int'
def test_named_import(self):
""" named import - jedi-vim issue #8 """
s = "import time as dt"
assert len(jedi.Script(s, 1, 15, '/').goto_definitions()) == 1
assert len(jedi.Script(s, 1, 10, '/').goto_definitions()) == 1
def test_unicode_script(self):
""" normally no unicode objects are being used. (<=2.7) """
s = unicode("import datetime; datetime.timedelta")
completions = self.completions(s)
assert len(completions)
assert type(completions[0].description) is unicode
s = utf8("author='öä'; author")
completions = self.completions(s)
x = completions[0].description
assert type(x) is unicode
s = utf8("#-*- coding: iso-8859-1 -*-\nauthor='öä'; author")
s = s.encode('latin-1')
completions = self.completions(s)
assert type(completions[0].description) is unicode
def test_multibyte_script(self):
""" `jedi.Script` must accept multi-byte string source. """
try:
code = unicode("import datetime; datetime.d")
comment = utf8("# multi-byte comment あいうえおä")
s = (unicode('%s\n%s') % (code, comment)).encode('utf-8')
except NameError:
pass # python 3 has no unicode method
else:
assert len(self.completions(s, (1, len(code))))
def test_unicode_attribute(self):
""" github jedi-vim issue #94 """
s1 = utf8('#-*- coding: utf-8 -*-\nclass Person():\n'
' name = "e"\n\nPerson().name.')
completions1 = self.completions(s1)
assert 'strip' in [c.name for c in completions1]
s2 = utf8('#-*- coding: utf-8 -*-\nclass Person():\n'
' name = "é"\n\nPerson().name.')
completions2 = self.completions(s2)
assert 'strip' in [c.name for c in completions2]
def test_os_nowait(self):
""" github issue #45 """
s = self.completions("import os; os.P_")
s = Script("import os; os.P_").completions()
assert 'P_NOWAIT' in [i.name for i in s]
def test_follow_definition(self):
""" github issue #45 """
c = self.completions("from datetime import timedelta; timedelta")
# type can also point to import, but there will be additional
# attributes
objs = itertools.chain.from_iterable(r.follow_definition() for r in c)
types = [o.type for o in objs]
assert 'import' not in types and 'class' in types
def test_keyword_definition_doc(self):
""" github jedi-vim issue #44 """
defs = self.goto_definitions("print")
assert [d.doc for d in defs]
defs = self.goto_definitions("import")
assert len(defs) == 1
assert [d.doc for d in defs]
def test_goto_following_on_imports(self):
s = "import multiprocessing.dummy; multiprocessing.dummy"
g = self.goto_assignments(s)
assert len(g) == 1
assert g[0].start_pos != (0, 0)
def test_points_in_completion(self):
"""At some point, points were inserted into the completions; this
sometimes caused problems.
"""
c = self.completions("if IndentationErr")
c = Script("if IndentationErr").completions()
assert c[0].name == 'IndentationError'
self.assertEqual(c[0].complete, 'or')
def test_docstrings_type_str(self):
s = """
def func(arg):
'''
:type arg: str
'''
arg."""
names = [c.name for c in self.completions(s)]
assert 'join' in names
def test_docstrings_type_dotted_import(self):
s = """
def func(arg):
'''
:type arg: threading.Thread
'''
arg."""
names = [c.name for c in self.completions(s)]
assert 'start' in names
def test_no_statement_parent(self):
source = textwrap.dedent("""
def f():
@@ -367,8 +107,7 @@ class TestRegression(TestBase):
pass
variable = f or C""")
lines = source.splitlines()
defs = self.goto_definitions(source, (len(lines), 3))
defs = Script(source, column=3).goto_definitions()
defs = sorted(defs, key=lambda d: d.line)
self.assertEqual([d.description for d in defs],
['def f', 'class C'])
@@ -377,239 +116,55 @@ class TestRegression(TestBase):
# jedi issue #150
s = "x()\nx( )\nx( )\nx ( )"
parser = parsing.Parser(s)
for i, s in enumerate(parser.scope.statements, 3):
for i, s in enumerate(parser.module.statements, 3):
for c in s.get_commands():
self.assertEqual(c.execution.end_pos[1], i)
def check_definition_by_marker(self, source, after_cursor, names):
r"""
Find definitions specified by `after_cursor` and check what found
class TestDocstring(TestBase):
For example, for the following configuration, you can pass
``after_cursor = 'y)'``.::
def test_function_doc(self):
defs = self.goto_definitions("""
def func():
'''Docstring of `func`.'''
func""")
self.assertEqual(defs[0].raw_doc, 'Docstring of `func`.')
function(
x, y)
\
`- You want cursor to be here
"""
source = textwrap.dedent(source)
for (i, line) in enumerate(source.splitlines()):
if after_cursor in line:
break
column = len(line) - len(after_cursor)
defs = Script(source, i + 1, column).goto_definitions()
self.assertEqual([d.name for d in defs], names)
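The marker lookup described in the docstring above can be written as a standalone helper: scan the source for the line containing `after_cursor` and subtract its length from the line length to get the column. A sketch under the same assumption as the test (the marker is a suffix of its line; the function name here is made up):

```python
def marker_position(source, after_cursor):
    """Return (1-based line, 0-based column) for a cursor placed just
    before `after_cursor`, assuming the marker ends its line."""
    for i, line in enumerate(source.splitlines()):
        if after_cursor in line:
            return i + 1, len(line) - len(after_cursor)
    raise ValueError('marker %r not found' % after_cursor)

# Cursor lands between the `,` and `y` of the call's second argument.
line, column = marker_position('function(\n    x, y)', 'y)')
```

This keeps the tests readable: positions are expressed by what follows the cursor instead of brittle hand-counted (line, column) tuples.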
def test_attribute_docstring(self):
defs = self.goto_definitions("""
x = None
'''Docstring of `x`.'''
x""")
self.assertEqual(defs[0].raw_doc, 'Docstring of `x`.')
def test_backslash_continuation(self):
"""
Test that ModuleWithCursor.get_path_until_cursor handles continuation
"""
self.check_definition_by_marker(r"""
x = 0
a = \
[1, 2, 3, 4, 5, 6, 7, 8, 9, x] # <-- here
""", '] # <-- here', ['int'])
def test_multiple_docstrings(self):
defs = self.goto_definitions("""
def func():
'''Original docstring.'''
x = func
'''Docstring of `x`.'''
x""")
docs = [d.raw_doc for d in defs]
self.assertEqual(docs, ['Original docstring.', 'Docstring of `x`.'])
# completion in whitespace
s = 'asdfxyxxxxxxxx sds\\\n hello'
assert Script(s, 2, 4).goto_assignments() == []
def test_backslash_continuation_and_bracket(self):
self.check_definition_by_marker(r"""
x = 0
a = \
[1, 2, 3, 4, 5, 6, 7, 8, 9, (x)] # <-- here
""", '(x)] # <-- here', [None])
class TestFeature(TestBase):
def test_full_name(self):
""" feature request #61"""
assert self.completions('import os; os.path.join')[0].full_name \
== 'os.path.join'
def test_keyword_full_name_should_be_none(self):
"""issue #94"""
# Using `from jedi.keywords import Keyword` here does NOT work
# in Python 3. This is due to the import hack jedi is using.
Keyword = api_classes.keywords.Keyword
d = api_classes.Definition(Keyword('(', (0, 0)))
assert d.full_name is None
def test_full_name_builtin(self):
self.assertEqual(self.completions('type')[0].full_name, 'type')
def test_full_name_tuple_mapping(self):
s = """
import re
any_re = re.compile('.*')
any_re"""
self.assertEqual(self.goto_definitions(s)[0].full_name, 're.RegexObject')
def test_preload_modules(self):
def check_loaded(*modules):
# + 1 for builtin, +1 for None module (currently used)
assert len(new) == len(modules) + 2
for i in modules + ('__builtin__',):
assert [i in k for k in new.keys() if k is not None]
from jedi import cache
temp_cache, cache.parser_cache = cache.parser_cache, {}
new = cache.parser_cache
with common.ignored(KeyError): # performance of tests -> no reload
new['__builtin__'] = temp_cache['__builtin__']
jedi.preload_module('datetime')
check_loaded('datetime')
jedi.preload_module('json', 'token')
check_loaded('datetime', 'json', 'token')
cache.parser_cache = temp_cache
def test_quick_completion(self):
sources = [
('import json; json.l', (1, 19)),
('import json; json.l ', (1, 19)),
('import json\njson.l', (2, 6)),
('import json\njson.l ', (2, 6)),
('import json\njson.l\n\n', (2, 6)),
('import json\njson.l \n\n', (2, 6)),
('import json\njson.l \n \n\n', (2, 6)),
]
for source, pos in sources:
# Run quick_complete
quick_completions = api._quick_complete(source)
# Run real completion
script = jedi.Script(source, pos[0], pos[1], '')
real_completions = script.completions()
# Compare results
quick_values = [(c.full_name, c.line, c.column) for c in quick_completions]
real_values = [(c.full_name, c.line, c.column) for c in real_completions]
self.assertEqual(quick_values, real_values)
class TestGetDefinitions(TestBase):
def test_get_definitions_flat(self):
definitions = api.defined_names("""
import module
class Class:
pass
def func():
pass
data = None
""")
self.assertEqual([d.name for d in definitions],
['module', 'Class', 'func', 'data'])
def test_dotted_assignment(self):
definitions = api.defined_names("""
x = Class()
x.y.z = None
""")
self.assertEqual([d.name for d in definitions],
['x'])
def test_multiple_assignment(self):
definitions = api.defined_names("""
x = y = None
""")
self.assertEqual([d.name for d in definitions],
['x', 'y'])
def test_multiple_imports(self):
definitions = api.defined_names("""
from module import a, b
from another_module import *
""")
self.assertEqual([d.name for d in definitions],
['a', 'b'])
def test_nested_definitions(self):
definitions = api.defined_names("""
class Class:
def f():
pass
def g():
pass
""")
self.assertEqual([d.name for d in definitions],
['Class'])
subdefinitions = definitions[0].defined_names()
self.assertEqual([d.name for d in subdefinitions],
['f', 'g'])
self.assertEqual([d.full_name for d in subdefinitions],
['Class.f', 'Class.g'])
class TestSpeed(TestBase):
def _check_speed(time_per_run, number=4, run_warm=True):
""" Speed checks should typically be very tolerant. Some machines are
faster than others, but the tests should still pass. These tests are
here to assure that certain effects that kill jedi performance are not
reintroduced to Jedi."""
def decorated(func):
@functools.wraps(func)
def wrapper(self):
if run_warm:
func(self)
first = time.time()
for i in range(number):
func(self)
single_time = (time.time() - first) / number
print('\nspeed', func, single_time)
assert single_time < time_per_run
return wrapper
return decorated
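The `_check_speed` decorator above can be distilled into a stdlib-only sketch: optionally warm caches with one untimed run, then fail if the average of `number` timed runs exceeds the budget (names here are illustrative, not the project's API):

```python
import functools
import time

def check_speed(time_per_run, number=4, run_warm=True):
    """Fail when the wrapped callable averages more than
    `time_per_run` seconds over `number` runs."""
    def decorated(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if run_warm:
                func(*args, **kwargs)  # warm any caches, untimed
            start = time.time()
            for _ in range(number):
                func(*args, **kwargs)
            average = (time.time() - start) / number
            assert average < time_per_run, average
        return wrapper
    return decorated

@check_speed(1.0, number=2)
def fast_enough():
    pass
```

Averaging over several runs and warming first keeps the check tolerant of slow machines while still catching order-of-magnitude regressions.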
@_check_speed(0.2)
def test_os_path_join(self):
s = "from posixpath import join; join('', '')."
assert len(self.completions(s)) > 10 # is a str completion
@_check_speed(0.1)
def test_scipy_speed(self):
s = 'import scipy.weave; scipy.weave.inline('
script = jedi.Script(s, 1, len(s), '')
script.function_definition()
#print(jedi.imports.imports_processed)
def test_settings_module():
"""
jedi.settings and jedi.cache.settings must be the same module.
"""
from jedi import cache
from jedi import settings
assert cache.settings is settings
def test_no_duplicate_modules():
"""
Make sure that import hack works as expected.
Jedi does an import hack (see: jedi/__init__.py) to have submodules
with circular dependencies. The modules in this circular dependency
"loop" must be imported by ``import <module>`` rather than normal
``from jedi import <module>`` (or ``from . import <module>``). This test
makes sure that this is satisfied.
See also:
- `#160 <https://github.com/davidhalter/jedi/issues/160>`_
- `#161 <https://github.com/davidhalter/jedi/issues/161>`_
"""
import sys
jedipath = os.path.dirname(os.path.abspath(jedi.__file__))
def is_submodule(m):
try:
filepath = m.__file__
except AttributeError:
return False
return os.path.abspath(filepath).startswith(jedipath)
modules = list(filter(is_submodule, sys.modules.values()))
top_modules = [m for m in modules if not m.__name__.startswith('jedi.')]
for m in modules:
if m is jedi:
# py.test automatically imports `jedi.*` when --doctest-modules
# is given. So this test cannot succeed.
continue
for tm in top_modules:
try:
imported = getattr(m, tm.__name__)
except AttributeError:
continue
assert imported is tm
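The ``import <module>`` requirement described in the docstring above can be sketched with a throwaway package whose submodules import each other (the package name ``circpkg`` and its contents are made up for this example). Plain ``import pkg.mod`` only binds the top-level name, so a half-initialized sibling module is never touched while the cycle is being resolved, whereas ``from pkg import mod`` historically failed during such cycles:

```python
import os
import sys
import tempfile
import importlib

# Build a throwaway package whose two submodules import each other.
tmp = tempfile.mkdtemp()
pkg_dir = os.path.join(tmp, 'circpkg')
os.mkdir(pkg_dir)
with open(os.path.join(pkg_dir, '__init__.py'), 'w') as f:
    f.write('import circpkg.a\n')
with open(os.path.join(pkg_dir, 'a.py'), 'w') as f:
    # ``import circpkg.b`` only binds the name ``circpkg`` here, so the
    # partially initialized sibling is not accessed during import.
    f.write('import circpkg.b\nVALUE = 1\n')
with open(os.path.join(pkg_dir, 'b.py'), 'w') as f:
    f.write('import circpkg.a\nVALUE = 2\n')

sys.path.insert(0, tmp)
importlib.invalidate_caches()
import circpkg

# Both modules finished initializing despite the cycle.
assert circpkg.a.VALUE == 1
assert circpkg.b.VALUE == 2
```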
if __name__ == '__main__':
unittest.main()
def test_generator(self):
# There used to be problems with generator completions used this
# way.
s = "def abc():\n" \
" yield 1\n" \
"abc()."
assert Script(s).completions()

42
test/test_speed.py Normal file

@@ -0,0 +1,42 @@
"""
Speed tests of Jedi. To prove that certain things don't take longer than they
should.
"""
import time
import functools
from .helpers import TestCase
import jedi
class TestSpeed(TestCase):
def _check_speed(time_per_run, number=4, run_warm=True):
""" Speed checks should typically be very tolerant. Some machines are
faster than others, but the tests should still pass. These tests are
here to assure that certain effects that kill jedi performance are not
reintroduced to Jedi."""
def decorated(func):
@functools.wraps(func)
def wrapper(self):
if run_warm:
func(self)
first = time.time()
for i in range(number):
func(self)
single_time = (time.time() - first) / number
print('\nspeed', func, single_time)
assert single_time < time_per_run
return wrapper
return decorated
@_check_speed(0.2)
def test_os_path_join(self):
s = "from posixpath import join; join('', '')."
assert len(jedi.Script(s).completions()) > 10 # is a str completion
@_check_speed(0.1)
def test_scipy_speed(self):
s = 'import scipy.weave; scipy.weave.inline('
script = jedi.Script(s, 1, len(s), '')
script.call_signatures()
#print(jedi.imports.imports_processed)

46
test/test_unicode.py Normal file

@@ -0,0 +1,46 @@
# -*- coding: utf-8 -*-
"""
All character set and unicode related tests.
"""
from jedi import Script
from jedi._compatibility import utf8, unicode
def test_unicode_script():
""" normally no unicode objects are being used. (<=2.7) """
s = unicode("import datetime; datetime.timedelta")
completions = Script(s).completions()
assert len(completions)
assert type(completions[0].description) is unicode
s = utf8("author='öä'; author")
completions = Script(s).completions()
x = completions[0].description
assert type(x) is unicode
s = utf8("#-*- coding: iso-8859-1 -*-\nauthor='öä'; author")
s = s.encode('latin-1')
completions = Script(s).completions()
assert type(completions[0].description) is unicode
def test_unicode_attribute():
""" github jedi-vim issue #94 """
s1 = utf8('#-*- coding: utf-8 -*-\nclass Person():\n'
' name = "e"\n\nPerson().name.')
completions1 = Script(s1).completions()
assert 'strip' in [c.name for c in completions1]
s2 = utf8('#-*- coding: utf-8 -*-\nclass Person():\n'
' name = "é"\n\nPerson().name.')
completions2 = Script(s2).completions()
assert 'strip' in [c.name for c in completions2]
def test_multibyte_script():
""" `jedi.Script` must accept multi-byte string source. """
try:
code = unicode("import datetime; datetime.d")
comment = utf8("# multi-byte comment あいうえおä")
s = (unicode('%s\n%s') % (code, comment)).encode('utf-8')
except NameError:
pass  # Python 3 has no unicode built-in
else:
assert len(Script(s, 1, len(code)).completions())
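The column argument in the test above is counted on the decoded text, not the encoded bytes. A small sketch of why that distinction matters for multi-byte sources (the comment string is taken from the test above):

```python
# Multi-byte characters make byte length diverge from character
# length, so column positions must be computed on the decoded text.
comment = '# あいうえおä'
encoded = comment.encode('utf-8')
assert len(comment) == 8             # characters
assert len(encoded) > len(comment)   # utf-8 needs more bytes
```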

100
test/test_utils.py Normal file

@@ -0,0 +1,100 @@
import readline
from jedi import utils
from .helpers import TestCase, cwd_at
class TestSetupReadline(TestCase):
class NameSpace():
pass
def __init__(self, *args, **kwargs):
super(type(self), self).__init__(*args, **kwargs)
self.namespace = self.NameSpace()
utils.setup_readline(self.namespace)
def completions(self, text):
completer = readline.get_completer()
i = 0
completions = []
while True:
completion = completer(text, i)
if completion is None:
break
completions.append(completion)
i += 1
return completions
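The helper above drives GNU readline's completer protocol: the completer is called repeatedly with the same text and an increasing state index, returns one match per call, and returns ``None`` when the matches are exhausted. A standalone sketch of that contract, with no real readline involved (the word list is made up for illustration):

```python
WORDS = ['import', 'in', 'is']

def completer(text, state):
    # One match per call, indexed by ``state``; None ends the sequence.
    matches = [w for w in WORDS if w.startswith(text)]
    return matches[state] if state < len(matches) else None

def all_completions(text):
    # Same loop shape as the test helper: bump the state until None.
    results, i = [], 0
    while True:
        match = completer(text, i)
        if match is None:
            break
        results.append(match)
        i += 1
    return results

print(all_completions('i'))  # -> ['import', 'in', 'is']
```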
def test_simple(self):
assert self.completions('list') == ['list']
assert self.completions('importerror') == ['ImportError']
s = "print BaseE"
assert self.completions(s) == [s + 'xception']
def test_nested(self):
assert self.completions('list.Insert') == ['list.insert']
assert self.completions('list().Insert') == ['list().insert']
def test_magic_methods(self):
assert self.completions('list.__getitem__') == ['list.__getitem__']
assert self.completions('list().__getitem__') == ['list().__getitem__']
def test_modules(self):
import sys
import os
self.namespace.sys = sys
self.namespace.os = os
assert self.completions('os.path.join') == ['os.path.join']
assert self.completions('os.path.join().upper') == ['os.path.join().upper']
c = set(['os.' + d for d in dir(os) if d.startswith('ch')])
assert set(self.completions('os.ch')) == set(c)
del self.namespace.sys
del self.namespace.os
def test_calls(self):
s = 'str(bytes'
assert self.completions(s) == [s, 'str(BytesWarning']
def test_import(self):
s = 'from os.path import a'
assert set(self.completions(s)) == set([s + 'ltsep', s + 'bspath'])
assert self.completions('import keyword') == ['import keyword']
import os
s = 'from os import '
goal = set([s + el for el in dir(os)])
# There are minor differences, e.g. dir() doesn't include deleted
# items, and some items are only available on Linux.
assert len(set(self.completions(s)).symmetric_difference(goal)) < 20
@cwd_at('test')
def test_local_import(self):
s = 'import test_utils'
assert self.completions(s) == [s]
def test_preexisting_values(self):
self.namespace.a = range(10)
assert set(self.completions('a.')) == set(['a.' + n for n in dir(range(1))])
del self.namespace.a
def test_colorama(self):
"""
Only test it if colorama library is available.
This module is being tested because it uses ``setattr`` at some point,
which Jedi doesn't understand, but it should still work in the REPL.
"""
try:
# if colorama is installed
import colorama
except ImportError:
pass
else:
self.namespace.colorama = colorama
assert self.completions('colorama')
assert self.completions('colorama.Fore.BLACK') == ['colorama.Fore.BLACK']
del self.namespace.colorama


@@ -3,6 +3,8 @@ envlist = py26, py27, py32, py33
[testenv]
deps =
https://bitbucket.org/hpk42/pytest/get/c4f58165e0d4.zip
# docopt for sith doctests
docopt
commands =
py.test []
[testenv:py26]
@@ -16,3 +18,6 @@ deps =
commands =
coverage run --source jedi -m py.test
coverage report
[testenv:sith]
commands =
{envpython} sith.py --record {envtmpdir}/record.json random {posargs:jedi}