Snapshot idea/138.1696 from git://git.jetbrains.org/idea/community.git

Change-Id: I50c97b83a815ce635e49a38380ba5b8765e4b16a
diff --git a/python/helpers/pydev/LICENSE b/python/helpers/pydev/LICENSE
new file mode 100644
index 0000000..5032843
--- /dev/null
+++ b/python/helpers/pydev/LICENSE
@@ -0,0 +1,203 @@
+Eclipse Public License - v 1.0
+
+THE ACCOMPANYING PROGRAM IS PROVIDED UNDER THE TERMS OF THIS ECLIPSE PUBLIC
+LICENSE ("AGREEMENT"). ANY USE, REPRODUCTION OR DISTRIBUTION OF THE PROGRAM
+CONSTITUTES RECIPIENT'S ACCEPTANCE OF THIS AGREEMENT.
+
+1. DEFINITIONS
+
+"Contribution" means:
+
+a) in the case of the initial Contributor, the initial code and documentation
+   distributed under this Agreement, and
+b) in the case of each subsequent Contributor:
+    i) changes to the Program, and
+   ii) additions to the Program;
+
+   where such changes and/or additions to the Program originate from and are
+   distributed by that particular Contributor. A Contribution 'originates'
+   from a Contributor if it was added to the Program by such Contributor
+   itself or anyone acting on such Contributor's behalf. Contributions do not
+   include additions to the Program which: (i) are separate modules of
+   software distributed in conjunction with the Program under their own
+   license agreement, and (ii) are not derivative works of the Program.
+
+"Contributor" means any person or entity that distributes the Program.
+
+"Licensed Patents" mean patent claims licensable by a Contributor which are
+necessarily infringed by the use or sale of its Contribution alone or when
+combined with the Program.
+
+"Program" means the Contributions distributed in accordance with this
+Agreement.
+
+"Recipient" means anyone who receives the Program under this Agreement,
+including all Contributors.
+
+2. GRANT OF RIGHTS
+  a) Subject to the terms of this Agreement, each Contributor hereby grants
+     Recipient a non-exclusive, worldwide, royalty-free copyright license to
+     reproduce, prepare derivative works of, publicly display, publicly
+     perform, distribute and sublicense the Contribution of such Contributor,
+     if any, and such derivative works, in source code and object code form.
+  b) Subject to the terms of this Agreement, each Contributor hereby grants
+     Recipient a non-exclusive, worldwide, royalty-free patent license under
+     Licensed Patents to make, use, sell, offer to sell, import and otherwise
+     transfer the Contribution of such Contributor, if any, in source code and
+     object code form. This patent license shall apply to the combination of
+     the Contribution and the Program if, at the time the Contribution is
+     added by the Contributor, such addition of the Contribution causes such
+     combination to be covered by the Licensed Patents. The patent license
+     shall not apply to any other combinations which include the Contribution.
+     No hardware per se is licensed hereunder.
+  c) Recipient understands that although each Contributor grants the licenses
+     to its Contributions set forth herein, no assurances are provided by any
+     Contributor that the Program does not infringe the patent or other
+     intellectual property rights of any other entity. Each Contributor
+     disclaims any liability to Recipient for claims brought by any other
+     entity based on infringement of intellectual property rights or
+     otherwise. As a condition to exercising the rights and licenses granted
+     hereunder, each Recipient hereby assumes sole responsibility to secure
+     any other intellectual property rights needed, if any. For example, if a
+     third party patent license is required to allow Recipient to distribute
+     the Program, it is Recipient's responsibility to acquire that license
+     before distributing the Program.
+  d) Each Contributor represents that to its knowledge it has sufficient
+     copyright rights in its Contribution, if any, to grant the copyright
+     license set forth in this Agreement.
+
+3. REQUIREMENTS
+
+A Contributor may choose to distribute the Program in object code form under
+its own license agreement, provided that:
+
+  a) it complies with the terms and conditions of this Agreement; and
+  b) its license agreement:
+      i) effectively disclaims on behalf of all Contributors all warranties
+         and conditions, express and implied, including warranties or
+         conditions of title and non-infringement, and implied warranties or
+         conditions of merchantability and fitness for a particular purpose;
+     ii) effectively excludes on behalf of all Contributors all liability for
+         damages, including direct, indirect, special, incidental and
+         consequential damages, such as lost profits;
+    iii) states that any provisions which differ from this Agreement are
+         offered by that Contributor alone and not by any other party; and
+     iv) states that source code for the Program is available from such
+         Contributor, and informs licensees how to obtain it in a reasonable
+         manner on or through a medium customarily used for software exchange.
+
+When the Program is made available in source code form:
+
+  a) it must be made available under this Agreement; and
+  b) a copy of this Agreement must be included with each copy of the Program.
+     Contributors may not remove or alter any copyright notices contained
+     within the Program.
+
+Each Contributor must identify itself as the originator of its Contribution,
+if
+any, in a manner that reasonably allows subsequent Recipients to identify the
+originator of the Contribution.
+
+4. COMMERCIAL DISTRIBUTION
+
+Commercial distributors of software may accept certain responsibilities with
+respect to end users, business partners and the like. While this license is
+intended to facilitate the commercial use of the Program, the Contributor who
+includes the Program in a commercial product offering should do so in a manner
+which does not create potential liability for other Contributors. Therefore,
+if a Contributor includes the Program in a commercial product offering, such
+Contributor ("Commercial Contributor") hereby agrees to defend and indemnify
+every other Contributor ("Indemnified Contributor") against any losses,
+damages and costs (collectively "Losses") arising from claims, lawsuits and
+other legal actions brought by a third party against the Indemnified
+Contributor to the extent caused by the acts or omissions of such Commercial
+Contributor in connection with its distribution of the Program in a commercial
+product offering. The obligations in this section do not apply to any claims
+or Losses relating to any actual or alleged intellectual property
+infringement. In order to qualify, an Indemnified Contributor must:
+a) promptly notify the Commercial Contributor in writing of such claim, and
+b) allow the Commercial Contributor to control, and cooperate with the
+Commercial Contributor in, the defense and any related settlement
+negotiations. The Indemnified Contributor may participate in any such claim at
+its own expense.
+
+For example, a Contributor might include the Program in a commercial product
+offering, Product X. That Contributor is then a Commercial Contributor. If
+that Commercial Contributor then makes performance claims, or offers
+warranties related to Product X, those performance claims and warranties are
+such Commercial Contributor's responsibility alone. Under this section, the
+Commercial Contributor would have to defend claims against the other
+Contributors related to those performance claims and warranties, and if a
+court requires any other Contributor to pay any damages as a result, the
+Commercial Contributor must pay those damages.
+
+5. NO WARRANTY
+
+EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, THE PROGRAM IS PROVIDED ON AN
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, EITHER EXPRESS OR
+IMPLIED INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE,
+NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each
+Recipient is solely responsible for determining the appropriateness of using
+and distributing the Program and assumes all risks associated with its
+exercise of rights under this Agreement , including but not limited to the
+risks and costs of program errors, compliance with applicable laws, damage to
+or loss of data, programs or equipment, and unavailability or interruption of
+operations.
+
+6. DISCLAIMER OF LIABILITY
+
+EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, NEITHER RECIPIENT NOR ANY
+CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING WITHOUT LIMITATION
+LOST PROFITS), HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OR DISTRIBUTION OF THE PROGRAM OR THE
+EXERCISE OF ANY RIGHTS GRANTED HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY
+OF SUCH DAMAGES.
+
+7. GENERAL
+
+If any provision of this Agreement is invalid or unenforceable under
+applicable law, it shall not affect the validity or enforceability of the
+remainder of the terms of this Agreement, and without further action by the
+parties hereto, such provision shall be reformed to the minimum extent
+necessary to make such provision valid and enforceable.
+
+If Recipient institutes patent litigation against any entity (including a
+cross-claim or counterclaim in a lawsuit) alleging that the Program itself
+(excluding combinations of the Program with other software or hardware)
+infringes such Recipient's patent(s), then such Recipient's rights granted
+under Section 2(b) shall terminate as of the date such litigation is filed.
+
+All Recipient's rights under this Agreement shall terminate if it fails to
+comply with any of the material terms or conditions of this Agreement and does
+not cure such failure in a reasonable period of time after becoming aware of
+such noncompliance. If all Recipient's rights under this Agreement terminate,
+Recipient agrees to cease use and distribution of the Program as soon as
+reasonably practicable. However, Recipient's obligations under this Agreement
+and any licenses granted by Recipient relating to the Program shall continue
+and survive.
+
+Everyone is permitted to copy and distribute copies of this Agreement, but in
+order to avoid inconsistency the Agreement is copyrighted and may only be
+modified in the following manner. The Agreement Steward reserves the right to
+publish new versions (including revisions) of this Agreement from time to
+time. No one other than the Agreement Steward has the right to modify this
+Agreement. The Eclipse Foundation is the initial Agreement Steward. The
+Eclipse Foundation may assign the responsibility to serve as the Agreement
+Steward to a suitable separate entity. Each new version of the Agreement will
+be given a distinguishing version number. The Program (including
+Contributions) may always be distributed subject to the version of the
+Agreement under which it was received. In addition, after a new version of the
+Agreement is published, Contributor may elect to distribute the Program
+(including its Contributions) under the new version. Except as expressly
+stated in Sections 2(a) and 2(b) above, Recipient receives no rights or
+licenses to the intellectual property of any Contributor under this Agreement,
+whether expressly, by implication, estoppel or otherwise. All rights in the
+Program not expressly granted under this Agreement are reserved.
+
+This Agreement is governed by the laws of the State of New York and the
+intellectual property laws of the United States of America. No party to this
+Agreement will bring a legal action under this Agreement more than one year
+after the cause of action arose. Each party waives its rights to a jury trial in
+any resulting litigation.
\ No newline at end of file
diff --git a/python/helpers/pydev/README.md b/python/helpers/pydev/README.md
new file mode 100644
index 0000000..7b22116
--- /dev/null
+++ b/python/helpers/pydev/README.md
@@ -0,0 +1,2 @@
+PyDev.Debugger
+==============
diff --git a/python/helpers/pydev/_pydev_execfile.py b/python/helpers/pydev/_pydev_execfile.py
deleted file mode 100644
index d60d7ed..0000000
--- a/python/helpers/pydev/_pydev_execfile.py
+++ /dev/null
@@ -1,38 +0,0 @@
-#We must redefine it in Py3k if it's not already there
-def execfile(file, glob=None, loc=None):
-    if glob is None:
-        import sys
-        glob = sys._getframe().f_back.f_globals
-    if loc is None:
-        loc = glob
-    stream = open(file, 'rb')
-    try:
-        encoding = None
-        #Get encoding!
-        for _i in range(2):
-            line = stream.readline() #Should not raise an exception even if there are no more contents
-            #Must be a comment line
-            if line.strip().startswith(b'#'):
-                #Don't import re if there's no chance that there's an encoding in the line
-                if b'coding' in line:
-                    import re
-                    p = re.search(br"coding[:=]\s*([-\w.]+)", line)
-                    if p:
-                        try:
-                            encoding = p.group(1).decode('ascii')
-                            break
-                        except:
-                            encoding = None
-    finally:
-        stream.close()
-
-    if encoding:
-        stream = open(file, encoding=encoding)
-    else:
-        stream = open(file)
-    try:
-        contents = stream.read()
-    finally:
-        stream.close()
-        
-    exec(compile(contents+"\n", file, 'exec'), glob, loc) #execute the script
\ No newline at end of file
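The file deleted above redefined execfile() for Python 3, where the builtin was removed: it read the script, honoured a PEP 263 coding comment if one appeared in the first two lines, and then compiled and exec'd the source in the caller's namespaces. A minimal sketch of the same idea on Python 3 follows; exec_file is an illustrative name, not part of pydev, and tokenize.open() stands in for the hand-rolled encoding regex.

    # Sketch of an execfile() replacement for Python 3 (illustrative, not the pydev helper).
    import sys
    import tokenize

    def exec_file(path, globs=None, locs=None):
        if globs is None:
            # Like the removed helper: default to the caller's globals.
            globs = sys._getframe(1).f_globals
        if locs is None:
            locs = globs
        # tokenize.open() decodes the file using its PEP 263 coding declaration, if any.
        with tokenize.open(path) as stream:
            source = stream.read()
        # Append a newline so a missing trailing newline cannot break compile().
        exec(compile(source + "\n", path, 'exec'), globs, locs)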
diff --git a/python/helpers/pydev/_pydev_getopt.py b/python/helpers/pydev/_pydev_getopt.py
new file mode 100644
index 0000000..5548651
--- /dev/null
+++ b/python/helpers/pydev/_pydev_getopt.py
@@ -0,0 +1,130 @@
+
+#=======================================================================================================================
+# getopt code copied since gnu_getopt is not available on jython 2.1
+#=======================================================================================================================
+class GetoptError(Exception):
+    opt = ''
+    msg = ''
+    def __init__(self, msg, opt=''):
+        self.msg = msg
+        self.opt = opt
+        Exception.__init__(self, msg, opt)
+
+    def __str__(self):
+        return self.msg
+
+
+def gnu_getopt(args, shortopts, longopts=[]):
+    """getopt(args, options[, long_options]) -> opts, args
+
+    This function works like getopt(), except that GNU style scanning
+    mode is used by default. This means that option and non-option
+    arguments may be intermixed. The getopt() function stops
+    processing options as soon as a non-option argument is
+    encountered.
+
+    If the first character of the option string is `+', or if the
+    environment variable POSIXLY_CORRECT is set, then option
+    processing stops as soon as a non-option argument is encountered.
+    """
+
+    opts = []
+    prog_args = []
+    if type('') == type(longopts):
+        longopts = [longopts]
+    else:
+        longopts = list(longopts)
+
+    # Allow options after non-option arguments?
+    all_options_first = False
+    if shortopts.startswith('+'):
+        shortopts = shortopts[1:]
+        all_options_first = True
+
+    while args:
+        if args[0] == '--':
+            prog_args += args[1:]
+            break
+
+        if args[0][:2] == '--':
+            opts, args = do_longs(opts, args[0][2:], longopts, args[1:])
+        elif args[0][:1] == '-':
+            opts, args = do_shorts(opts, args[0][1:], shortopts, args[1:])
+        else:
+            if all_options_first:
+                prog_args += args
+                break
+            else:
+                prog_args.append(args[0])
+                args = args[1:]
+
+    return opts, prog_args
+
+def do_longs(opts, opt, longopts, args):
+    try:
+        i = opt.index('=')
+    except ValueError:
+        optarg = None
+    else:
+        opt, optarg = opt[:i], opt[i + 1:]
+
+    has_arg, opt = long_has_args(opt, longopts)
+    if has_arg:
+        if optarg is None:
+            if not args:
+                raise GetoptError('option --%s requires argument' % opt, opt)
+            optarg, args = args[0], args[1:]
+    elif optarg:
+        raise GetoptError('option --%s must not have an argument' % opt, opt)
+    opts.append(('--' + opt, optarg or ''))
+    return opts, args
+
+# Return:
+#   has_arg?
+#   full option name
+def long_has_args(opt, longopts):
+    possibilities = [o for o in longopts if o.startswith(opt)]
+    if not possibilities:
+        raise GetoptError('option --%s not recognized' % opt, opt)
+    # Is there an exact match?
+    if opt in possibilities:
+        return False, opt
+    elif opt + '=' in possibilities:
+        return True, opt
+    # No exact match, so better be unique.
+    if len(possibilities) > 1:
+        # XXX since possibilities contains all valid continuations, might be
+        # nice to work them into the error msg
+        raise GetoptError('option --%s not a unique prefix' % opt, opt)
+    assert len(possibilities) == 1
+    unique_match = possibilities[0]
+    has_arg = unique_match.endswith('=')
+    if has_arg:
+        unique_match = unique_match[:-1]
+    return has_arg, unique_match
+
+def do_shorts(opts, optstring, shortopts, args):
+    while optstring != '':
+        opt, optstring = optstring[0], optstring[1:]
+        if short_has_arg(opt, shortopts):
+            if optstring == '':
+                if not args:
+                    raise GetoptError('option -%s requires argument' % opt,
+                                      opt)
+                optstring, args = args[0], args[1:]
+            optarg, optstring = optstring, ''
+        else:
+            optarg = ''
+        opts.append(('-' + opt, optarg))
+    return opts, args
+
+def short_has_arg(opt, shortopts):
+    for i in range(len(shortopts)):
+        if opt == shortopts[i] != ':':
+            return shortopts.startswith(':', i + 1)
+    raise GetoptError('option -%s not recognized' % opt, opt)
+
+
+#=======================================================================================================================
+# End getopt code
+#=======================================================================================================================
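The module above vendors GNU-style option parsing because gnu_getopt is missing from the getopt module shipped with Jython 2.1. Its interface follows the stdlib getopt API: short options as a string (a trailing ':' marks an argument), long options as a list of names (a trailing '=' marks an argument), and option and positional arguments may be intermixed. A small usage sketch, assuming _pydev_getopt is importable from the helpers directory and using illustrative argument values:

    # Parse a mixed command line with the vendored gnu_getopt.
    from _pydev_getopt import gnu_getopt

    opts, args = gnu_getopt(
        ['--port', '5678', 'script.py', '--server'],
        '',                       # no short options
        ['port=', 'server'])      # --port takes a value, --server is a flag
    print(opts)  # [('--port', '5678'), ('--server', '')]
    print(args)  # ['script.py']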
diff --git a/python/helpers/pydev/_pydev_imports_tipper.py b/python/helpers/pydev/_pydev_imports_tipper.py
index e4b3b86..76cf2cd 100644
--- a/python/helpers/pydev/_pydev_imports_tipper.py
+++ b/python/helpers/pydev/_pydev_imports_tipper.py
@@ -4,6 +4,10 @@
 
 from _pydev_tipper_common import DoFind
 
+try:
+    xrange
+except:
+    xrange = range
 
 #completion types.
 TYPE_IMPORT = '0'
@@ -19,20 +23,20 @@
     except:
         if '.' in name:
             sub = name[0:name.rfind('.')]
-            
+
             if log is not None:
                 log.AddContent('Unable to import', name, 'trying with', sub)
                 log.AddException()
-            
+
             return _imp(sub, log)
         else:
             s = 'Unable to import module: %s - sys.path: %s' % (str(name), sys.path)
             if log is not None:
                 log.AddContent(s)
                 log.AddException()
-            
+
             raise ImportError(s)
-        
+
 
 IS_IPY = False
 if sys.platform == 'cli':
@@ -53,9 +57,9 @@
                 clr.AddReference(name)
             except:
                 pass #That's OK (not dot net module).
-        
+
         return _old_imp(initial_name, log)
-        
+
 
 
 def GetFile(mod):
@@ -69,19 +73,19 @@
                 filename = f[:-4] + '.py'
                 if os.path.exists(filename):
                     f = filename
-            
+
     return f
 
 def Find(name, log=None):
     f = None
-    
+
     mod = _imp(name, log)
     parent = mod
     foundAs = ''
-    
+
     if inspect.ismodule(mod):
         f = GetFile(mod)
-        
+
     components = name.split('.')
 
     old_comp = None
@@ -94,22 +98,22 @@
         except AttributeError:
             if old_comp != comp:
                 raise
-        
+
         if inspect.ismodule(mod):
             f = GetFile(mod)
         else:
             if len(foundAs) > 0:
                 foundAs = foundAs + '.'
             foundAs = foundAs + comp
-            
+
         old_comp = comp
-        
+
     return f, mod, parent, foundAs
 
 def Search(data):
     '''@return file, line, col
     '''
-    
+
     data = data.replace('\n', '')
     if data.endswith('.'):
         data = data.rstrip('.')
@@ -118,19 +122,19 @@
         return DoFind(f, mod), foundAs
     except:
         return DoFind(f, parent), foundAs
-    
-    
+
+
 def GenerateTip(data, log=None):
     data = data.replace('\n', '')
     if data.endswith('.'):
         data = data.rstrip('.')
-        
+
     f, mod, parent, foundAs = Find(data, log)
     #print_ >> open('temp.txt', 'w'), f
     tips = GenerateImportsTipForModule(mod)
     return f, tips
-    
-    
+
+
 def CheckChar(c):
     if c == '-' or c == '.':
         return '_'
@@ -146,7 +150,7 @@
             name, doc, args, type (from the TYPE_* constants)
     '''
     ret = []
-    
+
     if dirComps is None:
         dirComps = dir(obj_to_complete)
         if hasattr(obj_to_complete, '__dict__'):
@@ -155,22 +159,22 @@
             dirComps.append('__class__')
 
     getCompleteInfo = True
-    
+
     if len(dirComps) > 1000:
-        #ok, we don't want to let our users wait forever... 
+        #ok, we don't want to let our users wait forever...
         #no complete info for you...
-        
+
         getCompleteInfo = False
-    
+
     dontGetDocsOn = (float, int, str, tuple, list)
     for d in dirComps:
-        
+
         if d is None:
             continue
-            
+
         if not filter(d):
             continue
-        
+
         args = ''
 
         try:
@@ -182,18 +186,18 @@
             if getCompleteInfo:
                 try:
                     retType = TYPE_BUILTIN
-        
+
                     #check if we have to get docs
                     getDoc = True
                     for class_ in dontGetDocsOn:
-                            
+
                         if isinstance(obj, class_):
                             getDoc = False
                             break
-                        
+
                     doc = ''
                     if getDoc:
-                        #no need to get this info... too many constants are defined and 
+                        #no need to get this info... too many constants are defined and
                         #makes things much slower (passing all that through sockets takes quite some time)
                         try:
                             doc = inspect.getdoc(obj)
@@ -201,12 +205,12 @@
                                 doc = ''
                         except: #may happen on jython when checking java classes (so, just ignore it)
                             doc = ''
-                            
-                            
+
+
                     if inspect.ismethod(obj) or inspect.isbuiltin(obj) or inspect.isfunction(obj) or inspect.isroutine(obj):
                         try:
                             args, vargs, kwargs, defaults = inspect.getargspec(obj)
-                                
+
                             r = ''
                             for a in (args):
                                 if len(r) > 0:
@@ -250,7 +254,7 @@
                                             if major:
                                                 args = major[major.index('('):]
                                                 found = True
-                                            
+
 
                                     if not found:
                                         i = doc.find('->')
@@ -260,12 +264,12 @@
                                                 i = doc.find('\n')
                                                 if i < 0:
                                                     i = doc.find('\r')
-                                                
-                                                
+
+
                                         if i > 0:
                                             s = doc[0:i]
                                             s = s.strip()
-                                            
+
                                             #let's see if we have a docstring in the first line
                                             if s[-1] == ')':
                                                 start = s.find('(')
@@ -275,21 +279,21 @@
                                                         end = s.find(')')
                                                         if end <= 0:
                                                             end = len(s)
-                                                    
+
                                                     args = s[start:end]
                                                     if not args[-1] == ')':
                                                         args = args + ')'
-        
-                                                    
+
+
                                                     #now, get rid of unwanted chars
                                                     l = len(args) - 1
                                                     r = []
-                                                    for i in range(len(args)):
+                                                    for i in xrange(len(args)):
                                                         if i == 0 or i == l:
                                                             r.append(args[i])
                                                         else:
                                                             r.append(CheckChar(args[i]))
-                                                            
+
                                                     args = ''.join(r)
 
                                     if IS_IPY:
@@ -305,43 +309,43 @@
 
                             except:
                                 pass
-        
+
                         retType = TYPE_FUNCTION
-                        
+
                     elif inspect.isclass(obj):
                         retType = TYPE_CLASS
-                        
+
                     elif inspect.ismodule(obj):
                         retType = TYPE_IMPORT
-                        
+
                     else:
                         retType = TYPE_ATTR
-                    
-                    
+
+
                     #add token and doc to return - assure only strings.
                     ret.append((d, doc, args, retType))
-                    
+
                 except: #just ignore and get it without aditional info
                     ret.append((d, '', args, TYPE_BUILTIN))
-                
+
             else: #getCompleteInfo == False
                 if inspect.ismethod(obj) or inspect.isbuiltin(obj) or inspect.isfunction(obj) or inspect.isroutine(obj):
                     retType = TYPE_FUNCTION
-                    
+
                 elif inspect.isclass(obj):
                     retType = TYPE_CLASS
-                    
+
                 elif inspect.ismodule(obj):
                     retType = TYPE_IMPORT
-                    
+
                 else:
                     retType = TYPE_ATTR
                 #ok, no complete info, let's try to do this as fast and clean as possible
                 #so, no docs for this kind of information, only the signatures
                 ret.append((d, '', str(args), retType))
-            
+
     return ret
 
 
-    
-    
+
+
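Besides trailing-whitespace cleanup, the hunk above adds a 2/3 compatibility shim so the module can keep calling xrange(): on Python 3, where that builtin is gone, the name lookup fails and the bare except rebinds it to range. The same pattern in isolation, narrowed here to NameError (the exception actually raised):

    # Bind xrange to range when the builtin is absent (Python 3).
    try:
        xrange
    except NameError:
        xrange = range

    # After the shim, xrange() is callable on both interpreters.
    print(list(xrange(3)))  # [0, 1, 2]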
diff --git a/python/helpers/pydev/_pydev_imps/__init__.py b/python/helpers/pydev/_pydev_imps/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/python/helpers/pydev/_pydev_imps/__init__.py
diff --git a/python/helpers/pydev/_pydev_imps/_pydev_BaseHTTPServer.py b/python/helpers/pydev/_pydev_imps/_pydev_BaseHTTPServer.py
new file mode 100644
index 0000000..5f9dbfd
--- /dev/null
+++ b/python/helpers/pydev/_pydev_imps/_pydev_BaseHTTPServer.py
@@ -0,0 +1,604 @@
+"""HTTP server base class.
+
+Note: the class in this module doesn't implement any HTTP request; see
+SimpleHTTPServer for simple implementations of GET, HEAD and POST
+(including CGI scripts).  It does, however, optionally implement HTTP/1.1
+persistent connections, as of version 0.3.
+
+Contents:
+
+- BaseHTTPRequestHandler: HTTP request handler base class
+- test: test function
+
+XXX To do:
+
+- log requests even later (to capture byte count)
+- log user-agent header and other interesting goodies
+- send error log to separate file
+"""
+
+
+# See also:
+#
+# HTTP Working Group                                        T. Berners-Lee
+# INTERNET-DRAFT                                            R. T. Fielding
+# <draft-ietf-http-v10-spec-00.txt>                     H. Frystyk Nielsen
+# Expires September 8, 1995                                  March 8, 1995
+#
+# URL: http://www.ics.uci.edu/pub/ietf/http/draft-ietf-http-v10-spec-00.txt
+#
+# and
+#
+# Network Working Group                                      R. Fielding
+# Request for Comments: 2616                                       et al
+# Obsoletes: 2068                                              June 1999
+# Category: Standards Track
+#
+# URL: http://www.faqs.org/rfcs/rfc2616.html
+
+# Log files
+# ---------
+#
+# Here's a quote from the NCSA httpd docs about log file format.
+#
+# | The logfile format is as follows. Each line consists of:
+# |
+# | host rfc931 authuser [DD/Mon/YYYY:hh:mm:ss] "request" ddd bbbb
+# |
+# |        host: Either the DNS name or the IP number of the remote client
+# |        rfc931: Any information returned by identd for this person,
+# |                - otherwise.
+# |        authuser: If user sent a userid for authentication, the user name,
+# |                  - otherwise.
+# |        DD: Day
+# |        Mon: Month (calendar name)
+# |        YYYY: Year
+# |        hh: hour (24-hour format, the machine's timezone)
+# |        mm: minutes
+# |        ss: seconds
+# |        request: The first line of the HTTP request as sent by the client.
+# |        ddd: the status code returned by the server, - if not available.
+# |        bbbb: the total number of bytes sent,
+# |              *not including the HTTP/1.0 header*, - if not available
+# |
+# | You can determine the name of the file accessed through request.
+#
+# (Actually, the latter is only true if you know the server configuration
+# at the time the request was made!)
+
+__version__ = "0.3"
+
+__all__ = ["HTTPServer", "BaseHTTPRequestHandler"]
+
+import sys
+from _pydev_imps import _pydev_time as time
+from _pydev_imps import _pydev_socket as socket
+from warnings import filterwarnings, catch_warnings
+with catch_warnings():
+    if sys.py3kwarning:
+        filterwarnings("ignore", ".*mimetools has been removed",
+                        DeprecationWarning)
+    import mimetools
+
+from _pydev_imps import _pydev_SocketServer as SocketServer
+
+# Default error message template
+DEFAULT_ERROR_MESSAGE = """\
+<head>
+<title>Error response</title>
+</head>
+<body>
+<h1>Error response</h1>
+<p>Error code %(code)d.
+<p>Message: %(message)s.
+<p>Error code explanation: %(code)s = %(explain)s.
+</body>
+"""
+
+DEFAULT_ERROR_CONTENT_TYPE = "text/html"
+
+def _quote_html(html):
+    return html.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;")
+
+class HTTPServer(SocketServer.TCPServer):
+
+    allow_reuse_address = 1    # Seems to make sense in testing environment
+
+    def server_bind(self):
+        """Override server_bind to store the server name."""
+        SocketServer.TCPServer.server_bind(self)
+        host, port = self.socket.getsockname()[:2]
+        self.server_name = socket.getfqdn(host)
+        self.server_port = port
+
+
+class BaseHTTPRequestHandler(SocketServer.StreamRequestHandler):
+
+    """HTTP request handler base class.
+
+    The following explanation of HTTP serves to guide you through the
+    code as well as to expose any misunderstandings I may have about
+    HTTP (so you don't need to read the code to figure out I'm wrong
+    :-).
+
+    HTTP (HyperText Transfer Protocol) is an extensible protocol on
+    top of a reliable stream transport (e.g. TCP/IP).  The protocol
+    recognizes three parts to a request:
+
+    1. One line identifying the request type and path
+    2. An optional set of RFC-822-style headers
+    3. An optional data part
+
+    The headers and data are separated by a blank line.
+
+    The first line of the request has the form
+
+    <command> <path> <version>
+
+    where <command> is a (case-sensitive) keyword such as GET or POST,
+    <path> is a string containing path information for the request,
+    and <version> should be the string "HTTP/1.0" or "HTTP/1.1".
+    <path> is encoded using the URL encoding scheme (using %xx to signify
+    the ASCII character with hex code xx).
+
+    The specification specifies that lines are separated by CRLF but
+    for compatibility with the widest range of clients recommends
+    servers also handle LF.  Similarly, whitespace in the request line
+    is treated sensibly (allowing multiple spaces between components
+    and allowing trailing whitespace).
+
+    Similarly, for output, lines ought to be separated by CRLF pairs
+    but most clients grok LF characters just fine.
+
+    If the first line of the request has the form
+
+    <command> <path>
+
+    (i.e. <version> is left out) then this is assumed to be an HTTP
+    0.9 request; this form has no optional headers and data part and
+    the reply consists of just the data.
+
+    The reply form of the HTTP 1.x protocol again has three parts:
+
+    1. One line giving the response code
+    2. An optional set of RFC-822-style headers
+    3. The data
+
+    Again, the headers and data are separated by a blank line.
+
+    The response code line has the form
+
+    <version> <responsecode> <responsestring>
+
+    where <version> is the protocol version ("HTTP/1.0" or "HTTP/1.1"),
+    <responsecode> is a 3-digit response code indicating success or
+    failure of the request, and <responsestring> is an optional
+    human-readable string explaining what the response code means.
+
+    This server parses the request and the headers, and then calls a
+    function specific to the request type (<command>).  Specifically,
+    a request SPAM will be handled by a method do_SPAM().  If no
+    such method exists the server sends an error response to the
+    client.  If it exists, it is called with no arguments:
+
+    do_SPAM()
+
+    Note that the request name is case sensitive (i.e. SPAM and spam
+    are different requests).
+
+    The various request details are stored in instance variables:
+
+    - client_address is the client IP address in the form (host,
+    port);
+
+    - command, path and version are the broken-down request line;
+
+    - headers is an instance of mimetools.Message (or a derived
+    class) containing the header information;
+
+    - rfile is a file object open for reading positioned at the
+    start of the optional input data part;
+
+    - wfile is a file object open for writing.
+
+    IT IS IMPORTANT TO ADHERE TO THE PROTOCOL FOR WRITING!
+
+    The first thing to be written must be the response line.  Then
+    follow 0 or more header lines, then a blank line, and then the
+    actual data (if any).  The meaning of the header lines depends on
+    the command executed by the server; in most cases, when data is
+    returned, there should be at least one header line of the form
+
+    Content-type: <type>/<subtype>
+
+    where <type> and <subtype> should be registered MIME types,
+    e.g. "text/html" or "text/plain".
+
+    """
+
+    # The Python system version, truncated to its first component.
+    sys_version = "Python/" + sys.version.split()[0]
+
+    # The server software version.  You may want to override this.
+    # The format is multiple whitespace-separated strings,
+    # where each string is of the form name[/version].
+    server_version = "BaseHTTP/" + __version__
+
+    # The default request version.  This only affects responses up until
+    # the point where the request line is parsed, so it mainly decides what
+    # the client gets back when sending a malformed request line.
+    # Most web servers default to HTTP 0.9, i.e. don't send a status line.
+    default_request_version = "HTTP/0.9"
+
+    def parse_request(self):
+        """Parse a request (internal).
+
+        The request should be stored in self.raw_requestline; the results
+        are in self.command, self.path, self.request_version and
+        self.headers.
+
+        Return True for success, False for failure; on failure, an
+        error is sent back.
+
+        """
+        self.command = None  # set in case of error on the first line
+        self.request_version = version = self.default_request_version
+        self.close_connection = 1
+        requestline = self.raw_requestline
+        requestline = requestline.rstrip('\r\n')
+        self.requestline = requestline
+        words = requestline.split()
+        if len(words) == 3:
+            command, path, version = words
+            if version[:5] != 'HTTP/':
+                self.send_error(400, "Bad request version (%r)" % version)
+                return False
+            try:
+                base_version_number = version.split('/', 1)[1]
+                version_number = base_version_number.split(".")
+                # RFC 2145 section 3.1 says there can be only one "." and
+                #   - major and minor numbers MUST be treated as
+                #      separate integers;
+                #   - HTTP/2.4 is a lower version than HTTP/2.13, which in
+                #      turn is lower than HTTP/12.3;
+                #   - Leading zeros MUST be ignored by recipients.
+                if len(version_number) != 2:
+                    raise ValueError
+                version_number = int(version_number[0]), int(version_number[1])
+            except (ValueError, IndexError):
+                self.send_error(400, "Bad request version (%r)" % version)
+                return False
+            if version_number >= (1, 1) and self.protocol_version >= "HTTP/1.1":
+                self.close_connection = 0
+            if version_number >= (2, 0):
+                self.send_error(505,
+                          "Invalid HTTP Version (%s)" % base_version_number)
+                return False
+        elif len(words) == 2:
+            command, path = words
+            self.close_connection = 1
+            if command != 'GET':
+                self.send_error(400,
+                                "Bad HTTP/0.9 request type (%r)" % command)
+                return False
+        elif not words:
+            return False
+        else:
+            self.send_error(400, "Bad request syntax (%r)" % requestline)
+            return False
+        self.command, self.path, self.request_version = command, path, version
+
+        # Examine the headers and look for a Connection directive
+        self.headers = self.MessageClass(self.rfile, 0)
+
+        conntype = self.headers.get('Connection', "")
+        if conntype.lower() == 'close':
+            self.close_connection = 1
+        elif (conntype.lower() == 'keep-alive' and
+              self.protocol_version >= "HTTP/1.1"):
+            self.close_connection = 0
+        return True
+
+    def handle_one_request(self):
+        """Handle a single HTTP request.
+
+        You normally don't need to override this method; see the class
+        __doc__ string for information on how to handle specific HTTP
+        commands such as GET and POST.
+
+        """
+        try:
+            self.raw_requestline = self.rfile.readline(65537)
+            if len(self.raw_requestline) > 65536:
+                self.requestline = ''
+                self.request_version = ''
+                self.command = ''
+                self.send_error(414)
+                return
+            if not self.raw_requestline:
+                self.close_connection = 1
+                return
+            if not self.parse_request():
+                # An error code has been sent, just exit
+                return
+            mname = 'do_' + self.command
+            if not hasattr(self, mname):
+                self.send_error(501, "Unsupported method (%r)" % self.command)
+                return
+            method = getattr(self, mname)
+            method()
+            self.wfile.flush() #actually send the response if not already done.
+        except socket.timeout:
+            #a read or a write timed out.  Discard this connection
+            self.log_error("Request timed out: %r", sys.exc_info()[1])
+            self.close_connection = 1
+            return
+
+    def handle(self):
+        """Handle multiple requests if necessary."""
+        self.close_connection = 1
+
+        self.handle_one_request()
+        while not self.close_connection:
+            self.handle_one_request()
+
+    def send_error(self, code, message=None):
+        """Send and log an error reply.
+
+        Arguments are the error code, and a detailed message.
+        The detailed message defaults to the short entry matching the
+        response code.
+
+        This sends an error response (so it must be called before any
+        output has been generated), logs the error, and finally sends
+        a piece of HTML explaining the error to the user.
+
+        """
+
+        try:
+            short, long = self.responses[code]
+        except KeyError:
+            short, long = '???', '???'
+        if message is None:
+            message = short
+        explain = long
+        self.log_error("code %d, message %s", code, message)
+        # using _quote_html to prevent Cross Site Scripting attacks (see bug #1100201)
+        content = (self.error_message_format %
+                   {'code': code, 'message': _quote_html(message), 'explain': explain})
+        self.send_response(code, message)
+        self.send_header("Content-Type", self.error_content_type)
+        self.send_header('Connection', 'close')
+        self.end_headers()
+        if self.command != 'HEAD' and code >= 200 and code not in (204, 304):
+            self.wfile.write(content)
+
+    error_message_format = DEFAULT_ERROR_MESSAGE
+    error_content_type = DEFAULT_ERROR_CONTENT_TYPE
+
+    def send_response(self, code, message=None):
+        """Send the response header and log the response code.
+
+        Also send two standard headers with the server software
+        version and the current date.
+
+        """
+        self.log_request(code)
+        if message is None:
+            if code in self.responses:
+                message = self.responses[code][0]
+            else:
+                message = ''
+        if self.request_version != 'HTTP/0.9':
+            self.wfile.write("%s %d %s\r\n" %
+                             (self.protocol_version, code, message))
+            # print (self.protocol_version, code, message)
+        self.send_header('Server', self.version_string())
+        self.send_header('Date', self.date_time_string())
+
+    def send_header(self, keyword, value):
+        """Send a MIME header."""
+        if self.request_version != 'HTTP/0.9':
+            self.wfile.write("%s: %s\r\n" % (keyword, value))
+
+        if keyword.lower() == 'connection':
+            if value.lower() == 'close':
+                self.close_connection = 1
+            elif value.lower() == 'keep-alive':
+                self.close_connection = 0
+
+    def end_headers(self):
+        """Send the blank line ending the MIME headers."""
+        if self.request_version != 'HTTP/0.9':
+            self.wfile.write("\r\n")
+
+    def log_request(self, code='-', size='-'):
+        """Log an accepted request.
+
+        This is called by send_response().
+
+        """
+
+        self.log_message('"%s" %s %s',
+                         self.requestline, str(code), str(size))
+
+    def log_error(self, format, *args):
+        """Log an error.
+
+        This is called when a request cannot be fulfilled.  By
+        default it passes the message on to log_message().
+
+        Arguments are the same as for log_message().
+
+        XXX This should go to the separate error log.
+
+        """
+
+        self.log_message(format, *args)
+
+    def log_message(self, format, *args):
+        """Log an arbitrary message.
+
+        This is used by all other logging functions.  Override
+        it if you have specific logging wishes.
+
+        The first argument, FORMAT, is a format string for the
+        message to be logged.  If the format string contains
+        any % escapes requiring parameters, they should be
+        specified as subsequent arguments (it's just like
+        printf!).
+
+        The client host and current date/time are prefixed to
+        every message.
+
+        """
+
+        sys.stderr.write("%s - - [%s] %s\n" %
+                         (self.address_string(),
+                          self.log_date_time_string(),
+                          format%args))
+
+    def version_string(self):
+        """Return the server software version string."""
+        return self.server_version + ' ' + self.sys_version
+
+    def date_time_string(self, timestamp=None):
+        """Return the current date and time formatted for a message header."""
+        if timestamp is None:
+            timestamp = time.time()
+        year, month, day, hh, mm, ss, wd, y, z = time.gmtime(timestamp)
+        s = "%s, %02d %3s %4d %02d:%02d:%02d GMT" % (
+                self.weekdayname[wd],
+                day, self.monthname[month], year,
+                hh, mm, ss)
+        return s
+
+    def log_date_time_string(self):
+        """Return the current time formatted for logging."""
+        now = time.time()
+        year, month, day, hh, mm, ss, x, y, z = time.localtime(now)
+        s = "%02d/%3s/%04d %02d:%02d:%02d" % (
+                day, self.monthname[month], year, hh, mm, ss)
+        return s
+
+    weekdayname = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']
+
+    monthname = [None,
+                 'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
+                 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']
+
+    def address_string(self):
+        """Return the client address formatted for logging.
+
+        This version looks up the full hostname using gethostbyaddr(),
+        and tries to find a name that contains at least one dot.
+
+        """
+
+        host, port = self.client_address[:2]
+        return socket.getfqdn(host)
+
+    # Essentially static class variables
+
+    # The version of the HTTP protocol we support.
+    # Set this to HTTP/1.1 to enable automatic keepalive
+    protocol_version = "HTTP/1.0"
+
+    # The Message-like class used to parse headers
+    MessageClass = mimetools.Message
+
+    # Table mapping response codes to messages; entries have the
+    # form {code: (shortmessage, longmessage)}.
+    # See RFC 2616.
+    responses = {
+        100: ('Continue', 'Request received, please continue'),
+        101: ('Switching Protocols',
+              'Switching to new protocol; obey Upgrade header'),
+
+        200: ('OK', 'Request fulfilled, document follows'),
+        201: ('Created', 'Document created, URL follows'),
+        202: ('Accepted',
+              'Request accepted, processing continues off-line'),
+        203: ('Non-Authoritative Information', 'Request fulfilled from cache'),
+        204: ('No Content', 'Request fulfilled, nothing follows'),
+        205: ('Reset Content', 'Clear input form for further input.'),
+        206: ('Partial Content', 'Partial content follows.'),
+
+        300: ('Multiple Choices',
+              'Object has several resources -- see URI list'),
+        301: ('Moved Permanently', 'Object moved permanently -- see URI list'),
+        302: ('Found', 'Object moved temporarily -- see URI list'),
+        303: ('See Other', 'Object moved -- see Method and URL list'),
+        304: ('Not Modified',
+              'Document has not changed since given time'),
+        305: ('Use Proxy',
+              'You must use proxy specified in Location to access this '
+              'resource.'),
+        307: ('Temporary Redirect',
+              'Object moved temporarily -- see URI list'),
+
+        400: ('Bad Request',
+              'Bad request syntax or unsupported method'),
+        401: ('Unauthorized',
+              'No permission -- see authorization schemes'),
+        402: ('Payment Required',
+              'No payment -- see charging schemes'),
+        403: ('Forbidden',
+              'Request forbidden -- authorization will not help'),
+        404: ('Not Found', 'Nothing matches the given URI'),
+        405: ('Method Not Allowed',
+              'Specified method is invalid for this resource.'),
+        406: ('Not Acceptable', 'URI not available in preferred format.'),
+        407: ('Proxy Authentication Required', 'You must authenticate with '
+              'this proxy before proceeding.'),
+        408: ('Request Timeout', 'Request timed out; try again later.'),
+        409: ('Conflict', 'Request conflict.'),
+        410: ('Gone',
+              'URI no longer exists and has been permanently removed.'),
+        411: ('Length Required', 'Client must specify Content-Length.'),
+        412: ('Precondition Failed', 'Precondition in headers is false.'),
+        413: ('Request Entity Too Large', 'Entity is too large.'),
+        414: ('Request-URI Too Long', 'URI is too long.'),
+        415: ('Unsupported Media Type', 'Entity body in unsupported format.'),
+        416: ('Requested Range Not Satisfiable',
+              'Cannot satisfy request range.'),
+        417: ('Expectation Failed',
+              'Expect condition could not be satisfied.'),
+
+        500: ('Internal Server Error', 'Server got itself in trouble'),
+        501: ('Not Implemented',
+              'Server does not support this operation'),
+        502: ('Bad Gateway', 'Invalid responses from another server/proxy.'),
+        503: ('Service Unavailable',
+              'The server cannot process the request due to a high load'),
+        504: ('Gateway Timeout',
+              'The gateway server did not receive a timely response'),
+        505: ('HTTP Version Not Supported', 'Cannot fulfill request.'),
+        }
+
+
+def test(HandlerClass = BaseHTTPRequestHandler,
+         ServerClass = HTTPServer, protocol="HTTP/1.0"):
+    """Test the HTTP request handler class.
+
+    This runs an HTTP server on port 8000 (or the first command line
+    argument).
+
+    """
+
+    if sys.argv[1:]:
+        port = int(sys.argv[1])
+    else:
+        port = 8000
+    server_address = ('', port)
+
+    HandlerClass.protocol_version = protocol
+    httpd = ServerClass(server_address, HandlerClass)
+
+    sa = httpd.socket.getsockname()
+    print ("Serving HTTP on", sa[0], "port", sa[1], "...")
+    httpd.serve_forever()
+
+
+if __name__ == '__main__':
+    test()
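The vendored BaseHTTPServer mirrors the Python 2 stdlib module: as its class docstring explains, a request whose method is SPAM is dispatched to a do_SPAM() method on the handler, and the handler must write the status line, then headers, then a blank line, then the body, in that order. A Python 2-style sketch of a trivial handler built on the vendored classes (illustrative only; assumes the _pydev_imps package is on sys.path):

    # Minimal GET handler for the vendored server.
    from _pydev_imps._pydev_BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer

    class HelloHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = 'hello\n'
            self.send_response(200)                        # status line + Server/Date headers
            self.send_header('Content-Type', 'text/plain')
            self.send_header('Content-Length', str(len(body)))
            self.end_headers()                             # blank line ends the headers
            self.wfile.write(body)

    if __name__ == '__main__':
        # Bind to a local port and serve until interrupted.
        HTTPServer(('127.0.0.1', 8000), HelloHandler).serve_forever()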
diff --git a/python/helpers/pydev/_pydev_imps/_pydev_Queue.py b/python/helpers/pydev/_pydev_imps/_pydev_Queue.py
new file mode 100644
index 0000000..d351b50
--- /dev/null
+++ b/python/helpers/pydev/_pydev_imps/_pydev_Queue.py
@@ -0,0 +1,244 @@
+"""A multi-producer, multi-consumer queue."""
+
+from _pydev_imps._pydev_time import time as _time
+try:
+    import _pydev_threading as _threading
+except ImportError:
+    import dummy_threading as _threading
+from collections import deque
+import heapq
+
+__all__ = ['Empty', 'Full', 'Queue', 'PriorityQueue', 'LifoQueue']
+
+class Empty(Exception):
+    "Exception raised by Queue.get(block=0)/get_nowait()."
+    pass
+
+class Full(Exception):
+    "Exception raised by Queue.put(block=0)/put_nowait()."
+    pass
+
+class Queue:
+    """Create a queue object with a given maximum size.
+
+    If maxsize is <= 0, the queue size is infinite.
+    """
+    def __init__(self, maxsize=0):
+        self.maxsize = maxsize
+        self._init(maxsize)
+        # mutex must be held whenever the queue is mutating.  All methods
+        # that acquire mutex must release it before returning.  mutex
+        # is shared between the three conditions, so acquiring and
+        # releasing the conditions also acquires and releases mutex.
+        self.mutex = _threading.Lock()
+        # Notify not_empty whenever an item is added to the queue; a
+        # thread waiting to get is notified then.
+        self.not_empty = _threading.Condition(self.mutex)
+        # Notify not_full whenever an item is removed from the queue;
+        # a thread waiting to put is notified then.
+        self.not_full = _threading.Condition(self.mutex)
+        # Notify all_tasks_done whenever the number of unfinished tasks
+        # drops to zero; thread waiting to join() is notified to resume
+        self.all_tasks_done = _threading.Condition(self.mutex)
+        self.unfinished_tasks = 0
+
+    def task_done(self):
+        """Indicate that a formerly enqueued task is complete.
+
+        Used by Queue consumer threads.  For each get() used to fetch a task,
+        a subsequent call to task_done() tells the queue that the processing
+        on the task is complete.
+
+        If a join() is currently blocking, it will resume when all items
+        have been processed (meaning that a task_done() call was received
+        for every item that had been put() into the queue).
+
+        Raises a ValueError if called more times than there were items
+        placed in the queue.
+        """
+        self.all_tasks_done.acquire()
+        try:
+            unfinished = self.unfinished_tasks - 1
+            if unfinished <= 0:
+                if unfinished < 0:
+                    raise ValueError('task_done() called too many times')
+                self.all_tasks_done.notify_all()
+            self.unfinished_tasks = unfinished
+        finally:
+            self.all_tasks_done.release()
+
+    def join(self):
+        """Blocks until all items in the Queue have been gotten and processed.
+
+        The count of unfinished tasks goes up whenever an item is added to the
+        queue. The count goes down whenever a consumer thread calls task_done()
+        to indicate the item was retrieved and all work on it is complete.
+
+        When the count of unfinished tasks drops to zero, join() unblocks.
+        """
+        self.all_tasks_done.acquire()
+        try:
+            while self.unfinished_tasks:
+                self.all_tasks_done.wait()
+        finally:
+            self.all_tasks_done.release()
+
+    def qsize(self):
+        """Return the approximate size of the queue (not reliable!)."""
+        self.mutex.acquire()
+        n = self._qsize()
+        self.mutex.release()
+        return n
+
+    def empty(self):
+        """Return True if the queue is empty, False otherwise (not reliable!)."""
+        self.mutex.acquire()
+        n = not self._qsize()
+        self.mutex.release()
+        return n
+
+    def full(self):
+        """Return True if the queue is full, False otherwise (not reliable!)."""
+        self.mutex.acquire()
+        n = 0 < self.maxsize == self._qsize()
+        self.mutex.release()
+        return n
+
+    def put(self, item, block=True, timeout=None):
+        """Put an item into the queue.
+
+        If optional args 'block' is true and 'timeout' is None (the default),
+        block if necessary until a free slot is available. If 'timeout' is
+        a positive number, it blocks at most 'timeout' seconds and raises
+        the Full exception if no free slot was available within that time.
+        Otherwise ('block' is false), put an item on the queue if a free slot
+        is immediately available, else raise the Full exception ('timeout'
+        is ignored in that case).
+        """
+        self.not_full.acquire()
+        try:
+            if self.maxsize > 0:
+                if not block:
+                    if self._qsize() == self.maxsize:
+                        raise Full
+                elif timeout is None:
+                    while self._qsize() == self.maxsize:
+                        self.not_full.wait()
+                elif timeout < 0:
+                    raise ValueError("'timeout' must be a positive number")
+                else:
+                    endtime = _time() + timeout
+                    while self._qsize() == self.maxsize:
+                        remaining = endtime - _time()
+                        if remaining <= 0.0:
+                            raise Full
+                        self.not_full.wait(remaining)
+            self._put(item)
+            self.unfinished_tasks += 1
+            self.not_empty.notify()
+        finally:
+            self.not_full.release()
+
+    def put_nowait(self, item):
+        """Put an item into the queue without blocking.
+
+        Only enqueue the item if a free slot is immediately available.
+        Otherwise raise the Full exception.
+        """
+        return self.put(item, False)
+
+    def get(self, block=True, timeout=None):
+        """Remove and return an item from the queue.
+
+        If optional arg 'block' is true and 'timeout' is None (the default),
+        block if necessary until an item is available. If 'timeout' is
+        a positive number, it blocks at most 'timeout' seconds and raises
+        the Empty exception if no item was available within that time.
+        Otherwise ('block' is false), return an item if one is immediately
+        available, else raise the Empty exception ('timeout' is ignored
+        in that case).
+        """
+        self.not_empty.acquire()
+        try:
+            if not block:
+                if not self._qsize():
+                    raise Empty
+            elif timeout is None:
+                while not self._qsize():
+                    self.not_empty.wait()
+            elif timeout < 0:
+                raise ValueError("'timeout' must be a positive number")
+            else:
+                endtime = _time() + timeout
+                while not self._qsize():
+                    remaining = endtime - _time()
+                    if remaining <= 0.0:
+                        raise Empty
+                    self.not_empty.wait(remaining)
+            item = self._get()
+            self.not_full.notify()
+            return item
+        finally:
+            self.not_empty.release()
+
+    def get_nowait(self):
+        """Remove and return an item from the queue without blocking.
+
+        Only get an item if one is immediately available. Otherwise
+        raise the Empty exception.
+        """
+        return self.get(False)
+
+    # Override these methods to implement other queue organizations
+    # (e.g. stack or priority queue).
+    # These will only be called with appropriate locks held
+
+    # Initialize the queue representation
+    def _init(self, maxsize):
+        self.queue = deque()
+
+    def _qsize(self, len=len):
+        return len(self.queue)
+
+    # Put a new item in the queue
+    def _put(self, item):
+        self.queue.append(item)
+
+    # Get an item from the queue
+    def _get(self):
+        return self.queue.popleft()
+
+
+class PriorityQueue(Queue):
+    '''Variant of Queue that retrieves open entries in priority order (lowest first).
+
+    Entries are typically tuples of the form:  (priority number, data).
+    '''
+
+    def _init(self, maxsize):
+        self.queue = []
+
+    def _qsize(self, len=len):
+        return len(self.queue)
+
+    def _put(self, item, heappush=heapq.heappush):
+        heappush(self.queue, item)
+
+    def _get(self, heappop=heapq.heappop):
+        return heappop(self.queue)
+
+
+class LifoQueue(Queue):
+    '''Variant of Queue that retrieves most recently added entries first.'''
+
+    def _init(self, maxsize):
+        self.queue = []
+
+    def _qsize(self, len=len):
+        return len(self.queue)
+
+    def _put(self, item):
+        self.queue.append(item)
+
+    def _get(self):
+        return self.queue.pop()
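+
+
+if __name__ == '__main__':
+    # Minimal usage sketch, not part of the upstream Queue module: it exercises
+    # the task_done()/join() protocol described above and the ordering of the
+    # PriorityQueue and LifoQueue variants.  The standard 'threading' module is
+    # assumed to be importable here.
+    import threading
+
+    q = Queue()
+
+    def _worker():
+        while True:
+            item = q.get()
+            q.task_done()              # one task_done() per successful get()
+            if item is None:           # sentinel tells the worker to stop
+                break
+
+    t = threading.Thread(target=_worker)
+    t.start()
+    for i in range(5):
+        q.put(i)                       # each put() bumps unfinished_tasks
+    q.put(None)
+    q.join()                           # returns once task_done() matched every put()
+    t.join()
+
+    try:
+        q.get_nowait()                 # queue is drained at this point
+    except Empty:
+        pass
+
+    pq = PriorityQueue()
+    for entry in [(3, 'c'), (1, 'a'), (2, 'b')]:
+        pq.put(entry)
+    assert pq.get() == (1, 'a')        # lowest priority value comes out first
+
+    lq = LifoQueue()
+    lq.put('first')
+    lq.put('second')
+    assert lq.get() == 'second'        # most recently added entry comes out first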
diff --git a/python/helpers/pydev/_pydev_imps/_pydev_SimpleXMLRPCServer.py b/python/helpers/pydev/_pydev_imps/_pydev_SimpleXMLRPCServer.py
new file mode 100644
index 0000000..5a0c2af
--- /dev/null
+++ b/python/helpers/pydev/_pydev_imps/_pydev_SimpleXMLRPCServer.py
@@ -0,0 +1,610 @@
+# Just a copy of the version in Python 2.5, to be used if it's not available in Jython 2.1
+
+"""Simple XML-RPC Server.
+
+This module can be used to create simple XML-RPC servers
+by creating a server and either installing functions, a
+class instance, or by extending the SimpleXMLRPCServer
+class.
+
+It can also be used to handle XML-RPC requests in a CGI
+environment using CGIXMLRPCRequestHandler.
+
+A list of possible usage patterns follows:
+
+1. Install functions:
+
+server = SimpleXMLRPCServer(("localhost", 8000))
+server.register_function(pow)
+server.register_function(lambda x,y: x+y, 'add')
+server.serve_forever()
+
+2. Install an instance:
+
+class MyFuncs:
+    def __init__(self):
+        # make all of the string functions available through
+        # string.func_name
+        import string
+        self.string = string
+    def _listMethods(self):
+        # implement this method so that system.listMethods
+        # knows to advertise the strings methods
+        return list_public_methods(self) + \
+                ['string.' + method for method in list_public_methods(self.string)]
+    def pow(self, x, y): return pow(x, y)
+    def add(self, x, y) : return x + y
+
+server = SimpleXMLRPCServer(("localhost", 8000))
+server.register_introspection_functions()
+server.register_instance(MyFuncs())
+server.serve_forever()
+
+3. Install an instance with custom dispatch method:
+
+class Math:
+    def _listMethods(self):
+        # this method must be present for system.listMethods
+        # to work
+        return ['add', 'pow']
+    def _methodHelp(self, method):
+        # this method must be present for system.methodHelp
+        # to work
+        if method == 'add':
+            return "add(2,3) => 5"
+        elif method == 'pow':
+            return "pow(x, y[, z]) => number"
+        else:
+            # By convention, return empty
+            # string if no help is available
+            return ""
+    def _dispatch(self, method, params):
+        if method == 'pow':
+            return pow(*params)
+        elif method == 'add':
+            return params[0] + params[1]
+        else:
+            raise 'bad method'
+
+server = SimpleXMLRPCServer(("localhost", 8000))
+server.register_introspection_functions()
+server.register_instance(Math())
+server.serve_forever()
+
+4. Subclass SimpleXMLRPCServer:
+
+class MathServer(SimpleXMLRPCServer):
+    def _dispatch(self, method, params):
+        try:
+            # We are forcing the 'export_' prefix on methods that are
+            # callable through XML-RPC to prevent potential security
+            # problems
+            func = getattr(self, 'export_' + method)
+        except AttributeError:
+            raise Exception('method "%s" is not supported' % method)
+        else:
+            return func(*params)
+
+    def export_add(self, x, y):
+        return x + y
+
+server = MathServer(("localhost", 8000))
+server.serve_forever()
+
+5. CGI script:
+
+server = CGIXMLRPCRequestHandler()
+server.register_function(pow)
+server.handle_request()
+"""
+
+# Written by Brian Quinlan (brian@sweetapp.com).
+# Based on code written by Fredrik Lundh.
+
+try:
+    True
+    False
+except:
+    import __builtin__
+    setattr(__builtin__, 'True', 1) #Python 3.0 does not accept __builtin__.True = 1 in its syntax
+    setattr(__builtin__, 'False', 0)
+
+
+from _pydev_imps import _pydev_xmlrpclib as xmlrpclib
+from _pydev_imps._pydev_xmlrpclib import Fault
+from _pydev_imps import _pydev_SocketServer as SocketServer
+from _pydev_imps import _pydev_BaseHTTPServer as BaseHTTPServer
+import sys
+import os
+try:
+    import fcntl
+except ImportError:
+    fcntl = None
+
+def resolve_dotted_attribute(obj, attr, allow_dotted_names=True):
+    """resolve_dotted_attribute(a, 'b.c.d') => a.b.c.d
+
+    Resolves a dotted attribute name to an object.  Raises
+    an AttributeError if any attribute in the chain starts with a '_'.
+
+    If the optional allow_dotted_names argument is false, dots are not
+    supported and this function operates similar to getattr(obj, attr).
+    """
+
+    if allow_dotted_names:
+        attrs = attr.split('.')
+    else:
+        attrs = [attr]
+
+    for i in attrs:
+        if i.startswith('_'):
+            raise AttributeError(
+                'attempt to access private attribute "%s"' % i
+                )
+        else:
+            obj = getattr(obj, i)
+    return obj
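+
+# Illustrative sketch only (the object names below are hypothetical, not part
+# of this module): given an object 'a' whose attribute chain a.b.c exists and
+# where no segment starts with '_',
+#
+#     resolve_dotted_attribute(a, 'b.c')      # -> a.b.c
+#     resolve_dotted_attribute(a, 'b._c')     # raises AttributeError
+#     resolve_dotted_attribute(a, 'b.c', allow_dotted_names=False)
+#                                             # looks up the literal name 'b.c'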
+
+def list_public_methods(obj):
+    """Returns a list of attribute strings, found in the specified
+    object, which represent callable attributes"""
+
+    return [member for member in dir(obj)
+                if not member.startswith('_') and
+                    callable(getattr(obj, member))]
+
+def remove_duplicates(lst):
+    """remove_duplicates([2,2,2,1,3,3]) => [3,1,2]
+
+    Returns a copy of a list without duplicates. Every list
+    item must be hashable and the order of the items in the
+    resulting list is not defined.
+    """
+    u = {}
+    for x in lst:
+        u[x] = 1
+
+    return u.keys()
+
+class SimpleXMLRPCDispatcher:
+    """Mix-in class that dispatches XML-RPC requests.
+
+    This class is used to register XML-RPC method handlers
+    and then to dispatch them. There should never be any
+    reason to instantiate this class directly.
+    """
+
+    def __init__(self, allow_none, encoding):
+        self.funcs = {}
+        self.instance = None
+        self.allow_none = allow_none
+        self.encoding = encoding
+
+    def register_instance(self, instance, allow_dotted_names=False):
+        """Registers an instance to respond to XML-RPC requests.
+
+        Only one instance can be installed at a time.
+
+        If the registered instance has a _dispatch method then that
+        method will be called with the name of the XML-RPC method and
+        its parameters as a tuple
+        e.g. instance._dispatch('add',(2,3))
+
+        If the registered instance does not have a _dispatch method
+        then the instance will be searched to find a matching method
+        and, if found, will be called. Methods beginning with an '_'
+        are considered private and will not be called by
+        SimpleXMLRPCServer.
+
+        If a registered function matches an XML-RPC request, then it
+        will be called instead of the registered instance.
+
+        If the optional allow_dotted_names argument is true and the
+        instance does not have a _dispatch method, method names
+        containing dots are supported and resolved, as long as none of
+        the name segments start with an '_'.
+
+            *** SECURITY WARNING: ***
+
+            Enabling the allow_dotted_names option allows intruders
+            to access your module's global variables and may allow
+            intruders to execute arbitrary code on your machine.  Only
+            use this option on a secure, closed network.
+
+        """
+
+        self.instance = instance
+        self.allow_dotted_names = allow_dotted_names
+
+    def register_function(self, function, name=None):
+        """Registers a function to respond to XML-RPC requests.
+
+        The optional name argument can be used to set a Unicode name
+        for the function.
+        """
+
+        if name is None:
+            name = function.__name__
+        self.funcs[name] = function
+
+    def register_introspection_functions(self):
+        """Registers the XML-RPC introspection methods in the system
+        namespace.
+
+        see http://xmlrpc.usefulinc.com/doc/reserved.html
+        """
+
+        self.funcs.update({'system.listMethods' : self.system_listMethods,
+                      'system.methodSignature' : self.system_methodSignature,
+                      'system.methodHelp' : self.system_methodHelp})
+
+    def register_multicall_functions(self):
+        """Registers the XML-RPC multicall method in the system
+        namespace.
+
+        see http://www.xmlrpc.com/discuss/msgReader$1208"""
+
+        self.funcs.update({'system.multicall' : self.system_multicall})
+
+    def _marshaled_dispatch(self, data, dispatch_method=None):
+        """Dispatches an XML-RPC method from marshalled (XML) data.
+
+        XML-RPC methods are dispatched from the marshalled (XML) data
+        using the _dispatch method and the result is returned as
+        marshalled data. For backwards compatibility, a dispatch
+        function can be provided as an argument (see comment in
+        SimpleXMLRPCRequestHandler.do_POST) but overriding the
+        existing method through subclassing is the preferred means
+        of changing method dispatch behavior.
+        """
+        try:
+            params, method = xmlrpclib.loads(data)
+
+            # generate response
+            if dispatch_method is not None:
+                response = dispatch_method(method, params)
+            else:
+                response = self._dispatch(method, params)
+            # wrap response in a singleton tuple
+            response = (response,)
+            response = xmlrpclib.dumps(response, methodresponse=1,
+                                       allow_none=self.allow_none, encoding=self.encoding)
+        except Fault, fault:
+            response = xmlrpclib.dumps(fault, allow_none=self.allow_none,
+                                       encoding=self.encoding)
+        except:
+            # report exception back to server
+            response = xmlrpclib.dumps(
+                xmlrpclib.Fault(1, "%s:%s" % (sys.exc_type, sys.exc_value)), #@UndefinedVariable exc_value only available when we actually have an exception
+                encoding=self.encoding, allow_none=self.allow_none,
+                )
+
+        return response
+
+    def system_listMethods(self):
+        """system.listMethods() => ['add', 'subtract', 'multiple']
+
+        Returns a list of the methods supported by the server."""
+
+        methods = self.funcs.keys()
+        if self.instance is not None:
+            # Instance can implement _listMethods to return a list of
+            # methods
+            if hasattr(self.instance, '_listMethods'):
+                methods = remove_duplicates(
+                        methods + self.instance._listMethods()
+                    )
+            # if the instance has a _dispatch method then we
+            # don't have enough information to provide a list
+            # of methods
+            elif not hasattr(self.instance, '_dispatch'):
+                methods = remove_duplicates(
+                        methods + list_public_methods(self.instance)
+                    )
+        methods.sort()
+        return methods
+
+    def system_methodSignature(self, method_name):
+        """system.methodSignature('add') => [double, int, int]
+
+        Returns a list describing the signature of the method. In the
+        above example, the add method takes two integers as arguments
+        and returns a double result.
+
+        This server does NOT support system.methodSignature."""
+
+        # See http://xmlrpc.usefulinc.com/doc/sysmethodsig.html
+
+        return 'signatures not supported'
+
+    def system_methodHelp(self, method_name):
+        """system.methodHelp('add') => "Adds two integers together"
+
+        Returns a string containing documentation for the specified method."""
+
+        method = None
+        if self.funcs.has_key(method_name):
+            method = self.funcs[method_name]
+        elif self.instance is not None:
+            # Instance can implement _methodHelp to return help for a method
+            if hasattr(self.instance, '_methodHelp'):
+                return self.instance._methodHelp(method_name)
+            # if the instance has a _dispatch method then we
+            # don't have enough information to provide help
+            elif not hasattr(self.instance, '_dispatch'):
+                try:
+                    method = resolve_dotted_attribute(
+                                self.instance,
+                                method_name,
+                                self.allow_dotted_names
+                                )
+                except AttributeError:
+                    pass
+
+        # Note that we aren't checking that the method is actually
+        # a callable object of some kind
+        if method is None:
+            return ""
+        else:
+            try:
+                import pydoc
+            except ImportError:
+                return "" #not there for jython
+            else:
+                return pydoc.getdoc(method)
+
+    def system_multicall(self, call_list):
+        """system.multicall([{'methodName': 'add', 'params': [2, 2]}, ...]) => \
+[[4], ...]
+
+        Allows the caller to package multiple XML-RPC calls into a single
+        request.
+
+        See http://www.xmlrpc.com/discuss/msgReader$1208
+        """
+
+        results = []
+        for call in call_list:
+            method_name = call['methodName']
+            params = call['params']
+
+            try:
+                # XXX A marshalling error in any response will fail the entire
+                # multicall. If someone cares they should fix this.
+                results.append([self._dispatch(method_name, params)])
+            except Fault, fault:
+                results.append(
+                    {'faultCode' : fault.faultCode,
+                     'faultString' : fault.faultString}
+                    )
+            except:
+                results.append(
+                    {'faultCode' : 1,
+                     'faultString' : "%s:%s" % (sys.exc_type, sys.exc_value)} #@UndefinedVariable exc_value only available when we actually have an exception
+                    )
+        return results
+
+    def _dispatch(self, method, params):
+        """Dispatches the XML-RPC method.
+
+        XML-RPC calls are forwarded to a registered function that
+        matches the called XML-RPC method name. If no such function
+        exists then the call is forwarded to the registered instance,
+        if available.
+
+        If the registered instance has a _dispatch method then that
+        method will be called with the name of the XML-RPC method and
+        its parameters as a tuple
+        e.g. instance._dispatch('add',(2,3))
+
+        If the registered instance does not have a _dispatch method
+        then the instance will be searched to find a matching method
+        and, if found, will be called.
+
+        Methods beginning with an '_' are considered private and will
+        not be called.
+        """
+
+        func = None
+        try:
+            # check to see if a matching function has been registered
+            func = self.funcs[method]
+        except KeyError:
+            if self.instance is not None:
+                # check for a _dispatch method
+                if hasattr(self.instance, '_dispatch'):
+                    return self.instance._dispatch(method, params)
+                else:
+                    # call instance method directly
+                    try:
+                        func = resolve_dotted_attribute(
+                            self.instance,
+                            method,
+                            self.allow_dotted_names
+                            )
+                    except AttributeError:
+                        pass
+
+        if func is not None:
+            return func(*params)
+        else:
+            raise Exception('method "%s" is not supported' % method)
+
+class SimpleXMLRPCRequestHandler(BaseHTTPServer.BaseHTTPRequestHandler):
+    """Simple XML-RPC request handler class.
+
+    Handles all HTTP POST requests and attempts to decode them as
+    XML-RPC requests.
+    """
+
+    # Class attribute listing the accessible path components;
+    # paths not on this list will result in a 404 error.
+    rpc_paths = ('/', '/RPC2')
+
+    def is_rpc_path_valid(self):
+        if self.rpc_paths:
+            return self.path in self.rpc_paths
+        else:
+            # If .rpc_paths is empty, just assume all paths are legal
+            return True
+
+    def do_POST(self):
+        """Handles the HTTP POST request.
+
+        Attempts to interpret all HTTP POST requests as XML-RPC calls,
+        which are forwarded to the server's _dispatch method for handling.
+        """
+
+        # Check that the path is legal
+        if not self.is_rpc_path_valid():
+            self.report_404()
+            return
+
+        try:
+            # Get arguments by reading body of request.
+            # We read this in chunks to avoid straining
+            # socket.read(); around the 10 or 15Mb mark, some platforms
+            # begin to have problems (bug #792570).
+            max_chunk_size = 10 * 1024 * 1024
+            size_remaining = int(self.headers["content-length"])
+            L = []
+            while size_remaining:
+                chunk_size = min(size_remaining, max_chunk_size)
+                L.append(self.rfile.read(chunk_size))
+                size_remaining -= len(L[-1])
+            data = ''.join(L)
+
+            # In previous versions of SimpleXMLRPCServer, _dispatch
+            # could be overridden in this class, instead of in
+            # SimpleXMLRPCDispatcher. To maintain backwards compatibility,
+            # check to see if a subclass implements _dispatch and dispatch
+            # using that method if present.
+            response = self.server._marshaled_dispatch(
+                    data, getattr(self, '_dispatch', None)
+                )
+        except: # This should only happen if the module is buggy
+            # internal error, report as HTTP server error
+            self.send_response(500)
+            self.end_headers()
+        else:
+            # got a valid XML RPC response
+            self.send_response(200)
+            self.send_header("Content-type", "text/xml")
+            self.send_header("Content-length", str(len(response)))
+            self.end_headers()
+            self.wfile.write(response)
+
+            # shut down the connection
+            self.wfile.flush()
+            self.connection.shutdown(1)
+
+    def report_404(self):
+        # Report a 404 error
+        self.send_response(404)
+        response = 'No such page'
+        self.send_header("Content-type", "text/plain")
+        self.send_header("Content-length", str(len(response)))
+        self.end_headers()
+        self.wfile.write(response)
+        # shut down the connection
+        self.wfile.flush()
+        self.connection.shutdown(1)
+
+    def log_request(self, code='-', size='-'):
+        """Selectively log an accepted request."""
+
+        if self.server.logRequests:
+            BaseHTTPServer.BaseHTTPRequestHandler.log_request(self, code, size)
+
+class SimpleXMLRPCServer(SocketServer.TCPServer,
+                         SimpleXMLRPCDispatcher):
+    """Simple XML-RPC server.
+
+    Simple XML-RPC server that allows functions and a single instance
+    to be installed to handle requests. The default implementation
+    attempts to dispatch XML-RPC calls to the functions or instance
+    installed in the server. Override the _dispatch method inherited
+    from SimpleXMLRPCDispatcher to change this behavior.
+    """
+
+    allow_reuse_address = True
+
+    def __init__(self, addr, requestHandler=SimpleXMLRPCRequestHandler,
+                 logRequests=True, allow_none=False, encoding=None):
+        self.logRequests = logRequests
+
+        SimpleXMLRPCDispatcher.__init__(self, allow_none, encoding)
+        SocketServer.TCPServer.__init__(self, addr, requestHandler)
+
+        # [Bug #1222790] If possible, set close-on-exec flag; if a
+        # method spawns a subprocess, the subprocess shouldn't have
+        # the listening socket open.
+        if fcntl is not None and hasattr(fcntl, 'FD_CLOEXEC'):
+            flags = fcntl.fcntl(self.fileno(), fcntl.F_GETFD)
+            flags |= fcntl.FD_CLOEXEC
+            fcntl.fcntl(self.fileno(), fcntl.F_SETFD, flags)
+
+class CGIXMLRPCRequestHandler(SimpleXMLRPCDispatcher):
+    """Simple handler for XML-RPC data passed through CGI."""
+
+    def __init__(self, allow_none=False, encoding=None):
+        SimpleXMLRPCDispatcher.__init__(self, allow_none, encoding)
+
+    def handle_xmlrpc(self, request_text):
+        """Handle a single XML-RPC request"""
+
+        response = self._marshaled_dispatch(request_text)
+
+        sys.stdout.write('Content-Type: text/xml\n')
+        sys.stdout.write('Content-Length: %d\n' % len(response))
+        sys.stdout.write('\n')
+
+        sys.stdout.write(response)
+
+    def handle_get(self):
+        """Handle a single HTTP GET request.
+
+        Default implementation indicates an error because
+        XML-RPC uses the POST method.
+        """
+
+        code = 400
+        message, explain = \
+                 BaseHTTPServer.BaseHTTPRequestHandler.responses[code]
+
+        response = BaseHTTPServer.DEFAULT_ERROR_MESSAGE % { #@UndefinedVariable
+             'code' : code,
+             'message' : message,
+             'explain' : explain
+            }
+        sys.stdout.write('Status: %d %s\n' % (code, message))
+        sys.stdout.write('Content-Type: text/html\n')
+        sys.stdout.write('Content-Length: %d\n' % len(response))
+        sys.stdout.write('\n')
+
+        sys.stdout.write(response)
+
+    def handle_request(self, request_text=None):
+        """Handle a single XML-RPC request passed through a CGI post method.
+
+        If no XML data is given then it is read from stdin. The resulting
+        XML-RPC response is printed to stdout along with the correct HTTP
+        headers.
+        """
+
+        if request_text is None and \
+            os.environ.get('REQUEST_METHOD', None) == 'GET':
+            self.handle_get()
+        else:
+            # POST data is normally available through stdin
+            if request_text is None:
+                request_text = sys.stdin.read()
+
+            self.handle_xmlrpc(request_text)
+
+if __name__ == '__main__':
+    sys.stdout.write('Running XML-RPC server on port 8000\n')
+    server = SimpleXMLRPCServer(("localhost", 8000))
+    server.register_function(pow)
+    server.register_function(lambda x, y: x + y, 'add')
+    server.serve_forever()
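+
+    # A matching client for the demo above might look like this (illustrative
+    # only; in this pydev copy a client would use _pydev_xmlrpclib, in stock
+    # Python 2 the xmlrpclib module):
+    #
+    #     import xmlrpclib
+    #     proxy = xmlrpclib.ServerProxy('http://localhost:8000')
+    #     proxy.add(2, 3)    # -> 5
+    #     proxy.pow(2, 10)   # -> 1024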
diff --git a/python/helpers/pydev/_pydev_imps/_pydev_SocketServer.py b/python/helpers/pydev/_pydev_imps/_pydev_SocketServer.py
new file mode 100644
index 0000000..79cfb08
--- /dev/null
+++ b/python/helpers/pydev/_pydev_imps/_pydev_SocketServer.py
@@ -0,0 +1,715 @@
+"""Generic socket server classes.
+
+This module tries to capture the various aspects of defining a server:
+
+For socket-based servers:
+
+- address family:
+        - AF_INET{,6}: IP (Internet Protocol) sockets (default)
+        - AF_UNIX: Unix domain sockets
+        - others, e.g. AF_DECNET are conceivable (see <socket.h>
+- socket type:
+        - SOCK_STREAM (reliable stream, e.g. TCP)
+        - SOCK_DGRAM (datagrams, e.g. UDP)
+
+For request-based servers (including socket-based):
+
+- client address verification before further looking at the request
+        (This is actually a hook for any processing that needs to look
+         at the request before anything else, e.g. logging)
+- how to handle multiple requests:
+        - synchronous (one request is handled at a time)
+        - forking (each request is handled by a new process)
+        - threading (each request is handled by a new thread)
+
+The classes in this module favor the server type that is simplest to
+write: a synchronous TCP/IP server.  This is bad class design, but
+save some typing.  (There's also the issue that a deep class hierarchy
+slows down method lookups.)
+
+There are five classes in an inheritance diagram, four of which represent
+synchronous servers of four types:
+
+        +------------+
+        | BaseServer |
+        +------------+
+              |
+              v
+        +-----------+        +------------------+
+        | TCPServer |------->| UnixStreamServer |
+        +-----------+        +------------------+
+              |
+              v
+        +-----------+        +--------------------+
+        | UDPServer |------->| UnixDatagramServer |
+        +-----------+        +--------------------+
+
+Note that UnixDatagramServer derives from UDPServer, not from
+UnixStreamServer -- the only difference between an IP and a Unix
+stream server is the address family, which is simply repeated in both
+unix server classes.
+
+Forking and threading versions of each type of server can be created
+using the ForkingMixIn and ThreadingMixIn mix-in classes.  For
+instance, a threading UDP server class is created as follows:
+
+        class ThreadingUDPServer(ThreadingMixIn, UDPServer): pass
+
+The Mix-in class must come first, since it overrides a method defined
+in UDPServer! Setting the various member variables also changes
+the behavior of the underlying server mechanism.
+
+To implement a service, you must derive a class from
+BaseRequestHandler and redefine its handle() method.  You can then run
+various versions of the service by combining one of the server classes
+with your request handler class.
+
+The request handler class must be different for datagram or stream
+services.  This can be hidden by using the request handler
+subclasses StreamRequestHandler or DatagramRequestHandler.
+
+Of course, you still have to use your head!
+
+For instance, it makes no sense to use a forking server if the service
+contains state in memory that can be modified by requests (since the
+modifications in the child process would never reach the initial state
+kept in the parent process and passed to each child).  In this case,
+you can use a threading server, but you will probably have to use
+locks to keep two requests that arrive nearly simultaneously from applying
+conflicting changes to the server state.
+
+On the other hand, if you are building e.g. an HTTP server, where all
+data is stored externally (e.g. in the file system), a synchronous
+class will essentially render the service "deaf" while one request is
+being handled -- which may be for a very long time if a client is slow
+to read all the data it has requested.  Here a threading or forking
+server is appropriate.
+
+In some cases, it may be appropriate to process part of a request
+synchronously, but to finish processing in a forked child depending on
+the request data.  This can be implemented by using a synchronous
+server and doing an explicit fork in the request handler class
+handle() method.
+
+Another approach to handling multiple simultaneous requests in an
+environment that supports neither threads nor fork (or where these are
+too expensive or inappropriate for the service) is to maintain an
+explicit table of partially finished requests and to use select() to
+decide which request to work on next (or whether to handle a new
+incoming request).  This is particularly important for stream services
+where each client can potentially be connected for a long time (if
+threads or subprocesses cannot be used).
+
+Future work:
+- Standard classes for Sun RPC (which uses either UDP or TCP)
+- Standard mix-in classes to implement various authentication
+  and encryption schemes
+- Standard framework for select-based multiplexing
+
+XXX Open problems:
+- What to do with out-of-band data?
+
+BaseServer:
+- split generic "request" functionality out into BaseServer class.
+  Copyright (C) 2000  Luke Kenneth Casson Leighton <lkcl@samba.org>
+
+  example: read entries from a SQL database (requires overriding
+  get_request() to return a table entry from the database).
+  entry is processed by a RequestHandlerClass.
+
+"""
+
+# Author of the BaseServer patch: Luke Kenneth Casson Leighton
+
+# XXX Warning!
+# There is a test suite for this module, but it cannot be run by the
+# standard regression test.
+# To run it manually, run Lib/test/test_socketserver.py.
+
+__version__ = "0.4"
+
+
+from _pydev_imps import _pydev_socket as socket
+from _pydev_imps import _pydev_select as select
+import sys
+import os
+try:
+    import _pydev_threading as threading
+except ImportError:
+    import dummy_threading as threading
+
+__all__ = ["TCPServer","UDPServer","ForkingUDPServer","ForkingTCPServer",
+           "ThreadingUDPServer","ThreadingTCPServer","BaseRequestHandler",
+           "StreamRequestHandler","DatagramRequestHandler",
+           "ThreadingMixIn", "ForkingMixIn"]
+if hasattr(socket, "AF_UNIX"):
+    __all__.extend(["UnixStreamServer","UnixDatagramServer",
+                    "ThreadingUnixStreamServer",
+                    "ThreadingUnixDatagramServer"])
+
+class BaseServer:
+
+    """Base class for server classes.
+
+    Methods for the caller:
+
+    - __init__(server_address, RequestHandlerClass)
+    - serve_forever(poll_interval=0.5)
+    - shutdown()
+    - handle_request()  # if you do not use serve_forever()
+    - fileno() -> int   # for select()
+
+    Methods that may be overridden:
+
+    - server_bind()
+    - server_activate()
+    - get_request() -> request, client_address
+    - handle_timeout()
+    - verify_request(request, client_address)
+    - server_close()
+    - process_request(request, client_address)
+    - shutdown_request(request)
+    - close_request(request)
+    - handle_error()
+
+    Methods for derived classes:
+
+    - finish_request(request, client_address)
+
+    Class variables that may be overridden by derived classes or
+    instances:
+
+    - timeout
+    - address_family
+    - socket_type
+    - allow_reuse_address
+
+    Instance variables:
+
+    - RequestHandlerClass
+    - socket
+
+    """
+
+    timeout = None
+
+    def __init__(self, server_address, RequestHandlerClass):
+        """Constructor.  May be extended, do not override."""
+        self.server_address = server_address
+        self.RequestHandlerClass = RequestHandlerClass
+        self.__is_shut_down = threading.Event()
+        self.__shutdown_request = False
+
+    def server_activate(self):
+        """Called by constructor to activate the server.
+
+        May be overridden.
+
+        """
+        pass
+
+    def serve_forever(self, poll_interval=0.5):
+        """Handle one request at a time until shutdown.
+
+        Polls for shutdown every poll_interval seconds. Ignores
+        self.timeout. If you need to do periodic tasks, do them in
+        another thread.
+        """
+        self.__is_shut_down.clear()
+        try:
+            while not self.__shutdown_request:
+                # XXX: Consider using another file descriptor or
+                # connecting to the socket to wake this up instead of
+                # polling. Polling reduces our responsiveness to a
+                # shutdown request and wastes cpu at all other times.
+                r, w, e = select.select([self], [], [], poll_interval)
+                if self in r:
+                    self._handle_request_noblock()
+        finally:
+            self.__shutdown_request = False
+            self.__is_shut_down.set()
+
+    def shutdown(self):
+        """Stops the serve_forever loop.
+
+        Blocks until the loop has finished. This must be called while
+        serve_forever() is running in another thread, or it will
+        deadlock.
+        """
+        self.__shutdown_request = True
+        self.__is_shut_down.wait()
+
+    # The distinction between handling, getting, processing and
+    # finishing a request is fairly arbitrary.  Remember:
+    #
+    # - handle_request() is the top-level call.  It calls
+    #   select, get_request(), verify_request() and process_request()
+    # - get_request() is different for stream or datagram sockets
+    # - process_request() is the place that may fork a new process
+    #   or create a new thread to finish the request
+    # - finish_request() instantiates the request handler class;
+    #   this constructor will handle the request all by itself
+
+    def handle_request(self):
+        """Handle one request, possibly blocking.
+
+        Respects self.timeout.
+        """
+        # Support people who used socket.settimeout() to escape
+        # handle_request before self.timeout was available.
+        timeout = self.socket.gettimeout()
+        if timeout is None:
+            timeout = self.timeout
+        elif self.timeout is not None:
+            timeout = min(timeout, self.timeout)
+        fd_sets = select.select([self], [], [], timeout)
+        if not fd_sets[0]:
+            self.handle_timeout()
+            return
+        self._handle_request_noblock()
+
+    def _handle_request_noblock(self):
+        """Handle one request, without blocking.
+
+        I assume that select.select has returned that the socket is
+        readable before this function was called, so there should be
+        no risk of blocking in get_request().
+        """
+        try:
+            request, client_address = self.get_request()
+        except socket.error:
+            return
+        if self.verify_request(request, client_address):
+            try:
+                self.process_request(request, client_address)
+            except:
+                self.handle_error(request, client_address)
+                self.shutdown_request(request)
+
+    def handle_timeout(self):
+        """Called if no new request arrives within self.timeout.
+
+        Overridden by ForkingMixIn.
+        """
+        pass
+
+    def verify_request(self, request, client_address):
+        """Verify the request.  May be overridden.
+
+        Return True if we should proceed with this request.
+
+        """
+        return True
+
+    def process_request(self, request, client_address):
+        """Call finish_request.
+
+        Overridden by ForkingMixIn and ThreadingMixIn.
+
+        """
+        self.finish_request(request, client_address)
+        self.shutdown_request(request)
+
+    def server_close(self):
+        """Called to clean-up the server.
+
+        May be overridden.
+
+        """
+        pass
+
+    def finish_request(self, request, client_address):
+        """Finish one request by instantiating RequestHandlerClass."""
+        self.RequestHandlerClass(request, client_address, self)
+
+    def shutdown_request(self, request):
+        """Called to shutdown and close an individual request."""
+        self.close_request(request)
+
+    def close_request(self, request):
+        """Called to clean up an individual request."""
+        pass
+
+    def handle_error(self, request, client_address):
+        """Handle an error gracefully.  May be overridden.
+
+        The default is to print a traceback and continue.
+
+        """
+        print '-'*40
+        print 'Exception happened during processing of request from',
+        print client_address
+        import traceback
+        traceback.print_exc() # XXX But this goes to stderr!
+        print '-'*40
+
+
+class TCPServer(BaseServer):
+
+    """Base class for various socket-based server classes.
+
+    Defaults to synchronous IP stream (i.e., TCP).
+
+    Methods for the caller:
+
+    - __init__(server_address, RequestHandlerClass, bind_and_activate=True)
+    - serve_forever(poll_interval=0.5)
+    - shutdown()
+    - handle_request()  # if you don't use serve_forever()
+    - fileno() -> int   # for select()
+
+    Methods that may be overridden:
+
+    - server_bind()
+    - server_activate()
+    - get_request() -> request, client_address
+    - handle_timeout()
+    - verify_request(request, client_address)
+    - process_request(request, client_address)
+    - shutdown_request(request)
+    - close_request(request)
+    - handle_error()
+
+    Methods for derived classes:
+
+    - finish_request(request, client_address)
+
+    Class variables that may be overridden by derived classes or
+    instances:
+
+    - timeout
+    - address_family
+    - socket_type
+    - request_queue_size (only for stream sockets)
+    - allow_reuse_address
+
+    Instance variables:
+
+    - server_address
+    - RequestHandlerClass
+    - socket
+
+    """
+
+    address_family = socket.AF_INET
+
+    socket_type = socket.SOCK_STREAM
+
+    request_queue_size = 5
+
+    allow_reuse_address = False
+
+    def __init__(self, server_address, RequestHandlerClass, bind_and_activate=True):
+        """Constructor.  May be extended, do not override."""
+        BaseServer.__init__(self, server_address, RequestHandlerClass)
+        self.socket = socket.socket(self.address_family,
+                                    self.socket_type)
+        if bind_and_activate:
+            self.server_bind()
+            self.server_activate()
+
+    def server_bind(self):
+        """Called by constructor to bind the socket.
+
+        May be overridden.
+
+        """
+        if self.allow_reuse_address:
+            self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
+        self.socket.bind(self.server_address)
+        self.server_address = self.socket.getsockname()
+
+    def server_activate(self):
+        """Called by constructor to activate the server.
+
+        May be overridden.
+
+        """
+        self.socket.listen(self.request_queue_size)
+
+    def server_close(self):
+        """Called to clean-up the server.
+
+        May be overridden.
+
+        """
+        self.socket.close()
+
+    def fileno(self):
+        """Return socket file number.
+
+        Interface required by select().
+
+        """
+        return self.socket.fileno()
+
+    def get_request(self):
+        """Get the request and client address from the socket.
+
+        May be overridden.
+
+        """
+        return self.socket.accept()
+
+    def shutdown_request(self, request):
+        """Called to shutdown and close an individual request."""
+        try:
+            #explicitly shutdown.  socket.close() merely releases
+            #the socket and waits for GC to perform the actual close.
+            request.shutdown(socket.SHUT_WR)
+        except socket.error:
+            pass #some platforms may raise ENOTCONN here
+        self.close_request(request)
+
+    def close_request(self, request):
+        """Called to clean up an individual request."""
+        request.close()
+
+
+class UDPServer(TCPServer):
+
+    """UDP server class."""
+
+    allow_reuse_address = False
+
+    socket_type = socket.SOCK_DGRAM
+
+    max_packet_size = 8192
+
+    def get_request(self):
+        data, client_addr = self.socket.recvfrom(self.max_packet_size)
+        return (data, self.socket), client_addr
+
+    def server_activate(self):
+        # No need to call listen() for UDP.
+        pass
+
+    def shutdown_request(self, request):
+        # No need to shutdown anything.
+        self.close_request(request)
+
+    def close_request(self, request):
+        # No need to close anything.
+        pass
+
+class ForkingMixIn:
+
+    """Mix-in class to handle each request in a new process."""
+
+    timeout = 300
+    active_children = None
+    max_children = 40
+
+    def collect_children(self):
+        """Internal routine to wait for children that have exited."""
+        if self.active_children is None: return
+        while len(self.active_children) >= self.max_children:
+            # XXX: This will wait for any child process, not just ones
+            # spawned by this library. This could confuse other
+            # libraries that expect to be able to wait for their own
+            # children.
+            try:
+                pid, status = os.waitpid(0, 0)
+            except os.error:
+                pid = None
+            if pid not in self.active_children: continue
+            self.active_children.remove(pid)
+
+        # XXX: This loop runs more system calls than it ought
+        # to. There should be a way to put the active_children into a
+        # process group and then use os.waitpid(-pgid) to wait for any
+        # of that set, but I couldn't find a way to allocate pgids
+        # that couldn't collide.
+        for child in self.active_children:
+            try:
+                pid, status = os.waitpid(child, os.WNOHANG)
+            except os.error:
+                pid = None
+            if not pid: continue
+            try:
+                self.active_children.remove(pid)
+            except ValueError, e:
+                raise ValueError('%s. x=%d and list=%r' % (e.message, pid,
+                                                           self.active_children))
+
+    def handle_timeout(self):
+        """Wait for zombies after self.timeout seconds of inactivity.
+
+        May be extended, do not override.
+        """
+        self.collect_children()
+
+    def process_request(self, request, client_address):
+        """Fork a new subprocess to process the request."""
+        self.collect_children()
+        pid = os.fork()
+        if pid:
+            # Parent process
+            if self.active_children is None:
+                self.active_children = []
+            self.active_children.append(pid)
+            self.close_request(request) #close handle in parent process
+            return
+        else:
+            # Child process.
+            # This must never return, hence os._exit()!
+            try:
+                self.finish_request(request, client_address)
+                self.shutdown_request(request)
+                os._exit(0)
+            except:
+                try:
+                    self.handle_error(request, client_address)
+                    self.shutdown_request(request)
+                finally:
+                    os._exit(1)
+
+
+class ThreadingMixIn:
+    """Mix-in class to handle each request in a new thread."""
+
+    # Decides how threads will act upon termination of the
+    # main process
+    daemon_threads = False
+
+    def process_request_thread(self, request, client_address):
+        """Same as in BaseServer but as a thread.
+
+        In addition, exception handling is done here.
+
+        """
+        try:
+            self.finish_request(request, client_address)
+            self.shutdown_request(request)
+        except:
+            self.handle_error(request, client_address)
+            self.shutdown_request(request)
+
+    def process_request(self, request, client_address):
+        """Start a new thread to process the request."""
+        t = threading.Thread(target = self.process_request_thread,
+                             args = (request, client_address))
+        t.daemon = self.daemon_threads
+        t.start()
+
+
+class ForkingUDPServer(ForkingMixIn, UDPServer): pass
+class ForkingTCPServer(ForkingMixIn, TCPServer): pass
+
+class ThreadingUDPServer(ThreadingMixIn, UDPServer): pass
+class ThreadingTCPServer(ThreadingMixIn, TCPServer): pass
+
+if hasattr(socket, 'AF_UNIX'):
+
+    class UnixStreamServer(TCPServer):
+        address_family = socket.AF_UNIX
+
+    class UnixDatagramServer(UDPServer):
+        address_family = socket.AF_UNIX
+
+    class ThreadingUnixStreamServer(ThreadingMixIn, UnixStreamServer): pass
+
+    class ThreadingUnixDatagramServer(ThreadingMixIn, UnixDatagramServer): pass
+
+class BaseRequestHandler:
+
+    """Base class for request handler classes.
+
+    This class is instantiated for each request to be handled.  The
+    constructor sets the instance variables request, client_address
+    and server, and then calls the handle() method.  To implement a
+    specific service, all you need to do is to derive a class which
+    defines a handle() method.
+
+    The handle() method can find the request as self.request, the
+    client address as self.client_address, and the server (in case it
+    needs access to per-server information) as self.server.  Since a
+    separate instance is created for each request, the handle() method
+    can define arbitrary other instance variables.
+
+    """
+
+    def __init__(self, request, client_address, server):
+        self.request = request
+        self.client_address = client_address
+        self.server = server
+        self.setup()
+        try:
+            self.handle()
+        finally:
+            self.finish()
+
+    def setup(self):
+        pass
+
+    def handle(self):
+        pass
+
+    def finish(self):
+        pass
+
+
+# The following two classes make it possible to use the same service
+# class for stream or datagram servers.
+# Each class sets up these instance variables:
+# - rfile: a file object from which the request is read
+# - wfile: a file object to which the reply is written
+# When the handle() method returns, wfile is flushed properly
+
+
+class StreamRequestHandler(BaseRequestHandler):
+
+    """Define self.rfile and self.wfile for stream sockets."""
+
+    # Default buffer sizes for rfile, wfile.
+    # We default rfile to buffered because otherwise it could be
+    # really slow for large data (a getc() call per byte); we make
+    # wfile unbuffered because (a) often after a write() we want to
+    # read and we need to flush the line; (b) big writes to unbuffered
+    # files are typically optimized by stdio even when big reads
+    # aren't.
+    rbufsize = -1
+    wbufsize = 0
+
+    # A timeout to apply to the request socket, if not None.
+    timeout = None
+
+    # Disable nagle algorithm for this socket, if True.
+    # Use only when wbufsize != 0, to avoid small packets.
+    disable_nagle_algorithm = False
+
+    def setup(self):
+        self.connection = self.request
+        if self.timeout is not None:
+            self.connection.settimeout(self.timeout)
+        if self.disable_nagle_algorithm:
+            self.connection.setsockopt(socket.IPPROTO_TCP,
+                                       socket.TCP_NODELAY, True)
+        self.rfile = self.connection.makefile('rb', self.rbufsize)
+        self.wfile = self.connection.makefile('wb', self.wbufsize)
+
+    def finish(self):
+        if not self.wfile.closed:
+            self.wfile.flush()
+        self.wfile.close()
+        self.rfile.close()
+
+
+class DatagramRequestHandler(BaseRequestHandler):
+
+    # XXX Regrettably, I cannot get this working on Linux;
+    # s.recvfrom() doesn't return a meaningful client address.
+
+    """Define self.rfile and self.wfile for datagram sockets."""
+
+    def setup(self):
+        try:
+            from cStringIO import StringIO
+        except ImportError:
+            from StringIO import StringIO
+        self.packet, self.socket = self.request
+        self.rfile = StringIO(self.packet)
+        self.wfile = StringIO()
+
+    def finish(self):
+        self.socket.sendto(self.wfile.getvalue(), self.client_address)
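+
+
+if __name__ == '__main__':
+    # Minimal usage sketch, not part of the upstream SocketServer module: a
+    # threading TCP echo server built from the classes above, along the lines
+    # the module docstring describes.  The port number is an arbitrary choice
+    # for the example.
+    class _EchoHandler(StreamRequestHandler):
+        def handle(self):
+            # read one line from the client and write it straight back
+            line = self.rfile.readline()
+            self.wfile.write(line)
+
+    server = ThreadingTCPServer(('localhost', 9999), _EchoHandler)
+    try:
+        server.serve_forever()
+    except KeyboardInterrupt:
+        server.server_close()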
diff --git a/python/helpers/pydev/_pydev_imps/_pydev_execfile.py b/python/helpers/pydev/_pydev_imps/_pydev_execfile.py
new file mode 100644
index 0000000..954783c
--- /dev/null
+++ b/python/helpers/pydev/_pydev_imps/_pydev_execfile.py
@@ -0,0 +1,18 @@
+#We must redefine it in Py3k if it's not already there
+def execfile(file, glob=None, loc=None):
+    if glob is None:
+        import sys
+        glob = sys._getframe().f_back.f_globals
+    if loc is None:
+        loc = glob
+
+    # It seems that the best way is using tokenize.open(): http://code.activestate.com/lists/python-dev/131251/
+    import tokenize
+    stream = tokenize.open(file)
+    try:
+        contents = stream.read()
+    finally:
+        stream.close()
+
+    #execute the script (note: it's important to compile first to have the filename set in debug mode)
+    exec(compile(contents+"\n", file, 'exec'), glob, loc) 
\ No newline at end of file
diff --git a/python/helpers/pydev/_pydev_imps/_pydev_inspect.py b/python/helpers/pydev/_pydev_imps/_pydev_inspect.py
new file mode 100644
index 0000000..5714764
--- /dev/null
+++ b/python/helpers/pydev/_pydev_imps/_pydev_inspect.py
@@ -0,0 +1,794 @@
+"""Get useful information from live Python objects.
+
+This module encapsulates the interface provided by the internal special
+attributes (func_*, co_*, im_*, tb_*, etc.) in a friendlier fashion.
+It also provides some help for examining source code and class layout.
+
+Here are some of the useful functions provided by this module:
+
+    ismodule(), isclass(), ismethod(), isfunction(), istraceback(),
+        isframe(), iscode(), isbuiltin(), isroutine() - check object types
+    getmembers() - get members of an object that satisfy a given condition
+
+    getfile(), getsourcefile(), getsource() - find an object's source code
+    getdoc(), getcomments() - get documentation on an object
+    getmodule() - determine the module that an object came from
+    getclasstree() - arrange classes so as to represent their hierarchy
+
+    getargspec(), getargvalues() - get info about function arguments
+    formatargspec(), formatargvalues() - format an argument spec
+    getouterframes(), getinnerframes() - get info about frames
+    currentframe() - get the current stack frame
+    stack(), trace() - get info about frames on the stack or in a traceback
+"""
+
+# This module is in the public domain.  No warranties.
+
+__author__ = 'Ka-Ping Yee <ping@lfw.org>'
+__date__ = '1 Jan 2001'
+
+import sys
+import os
+import types
+import string
+import re
+import imp
+import tokenize
+
+# ----------------------------------------------------------- type-checking
+def ismodule(object):
+    """Return true if the object is a module.
+
+    Module objects provide these attributes:
+        __doc__         documentation string
+        __file__        filename (missing for built-in modules)"""
+    return isinstance(object, types.ModuleType)
+
+def isclass(object):
+    """Return true if the object is a class.
+
+    Class objects provide these attributes:
+        __doc__         documentation string
+        __module__      name of module in which this class was defined"""
+    return isinstance(object, types.ClassType) or hasattr(object, '__bases__')
+
+def ismethod(object):
+    """Return true if the object is an instance method.
+
+    Instance method objects provide these attributes:
+        __doc__         documentation string
+        __name__        name with which this method was defined
+        im_class        class object in which this method belongs
+        im_func         function object containing implementation of method
+        im_self         instance to which this method is bound, or None"""
+    return isinstance(object, types.MethodType)
+
+def ismethoddescriptor(object):
+    """Return true if the object is a method descriptor.
+
+    But not if ismethod() or isclass() or isfunction() are true.
+
+    This is new in Python 2.2, and, for example, is true of int.__add__.
+    An object passing this test has a __get__ attribute but not a __set__
+    attribute, but beyond that the set of attributes varies.  __name__ is
+    usually sensible, and __doc__ often is.
+
+    Methods implemented via descriptors that also pass one of the other
+    tests return false from the ismethoddescriptor() test, simply because
+    the other tests promise more -- you can, e.g., count on having the
+    im_func attribute (etc) when an object passes ismethod()."""
+    return (hasattr(object, "__get__")
+            and not hasattr(object, "__set__") # else it's a data descriptor
+            and not ismethod(object)           # mutual exclusion
+            and not isfunction(object)
+            and not isclass(object))
+
+def isfunction(object):
+    """Return true if the object is a user-defined function.
+
+    Function objects provide these attributes:
+        __doc__         documentation string
+        __name__        name with which this function was defined
+        func_code       code object containing compiled function bytecode
+        func_defaults   tuple of any default values for arguments
+        func_doc        (same as __doc__)
+        func_globals    global namespace in which this function was defined
+        func_name       (same as __name__)"""
+    return isinstance(object, types.FunctionType)
+
+def istraceback(object):
+    """Return true if the object is a traceback.
+
+    Traceback objects provide these attributes:
+        tb_frame        frame object at this level
+        tb_lasti        index of last attempted instruction in bytecode
+        tb_lineno       current line number in Python source code
+        tb_next         next inner traceback object (called by this level)"""
+    return isinstance(object, types.TracebackType)
+
+def isframe(object):
+    """Return true if the object is a frame object.
+
+    Frame objects provide these attributes:
+        f_back          next outer frame object (this frame's caller)
+        f_builtins      built-in namespace seen by this frame
+        f_code          code object being executed in this frame
+        f_exc_traceback traceback if raised in this frame, or None
+        f_exc_type      exception type if raised in this frame, or None
+        f_exc_value     exception value if raised in this frame, or None
+        f_globals       global namespace seen by this frame
+        f_lasti         index of last attempted instruction in bytecode
+        f_lineno        current line number in Python source code
+        f_locals        local namespace seen by this frame
+        f_restricted    0 or 1 if frame is in restricted execution mode
+        f_trace         tracing function for this frame, or None"""
+    return isinstance(object, types.FrameType)
+
+def iscode(object):
+    """Return true if the object is a code object.
+
+    Code objects provide these attributes:
+        co_argcount     number of arguments (not including * or ** args)
+        co_code         string of raw compiled bytecode
+        co_consts       tuple of constants used in the bytecode
+        co_filename     name of file in which this code object was created
+        co_firstlineno  number of first line in Python source code
+        co_flags        bitmap: 1=optimized | 2=newlocals | 4=*arg | 8=**arg
+        co_lnotab       encoded mapping of line numbers to bytecode indices
+        co_name         name with which this code object was defined
+        co_names        tuple of names of local variables
+        co_nlocals      number of local variables
+        co_stacksize    virtual machine stack space required
+        co_varnames     tuple of names of arguments and local variables"""
+    return isinstance(object, types.CodeType)
+
+def isbuiltin(object):
+    """Return true if the object is a built-in function or method.
+
+    Built-in functions and methods provide these attributes:
+        __doc__         documentation string
+        __name__        original name of this function or method
+        __self__        instance to which a method is bound, or None"""
+    return isinstance(object, types.BuiltinFunctionType)
+
+def isroutine(object):
+    """Return true if the object is any kind of function or method."""
+    return (isbuiltin(object)
+            or isfunction(object)
+            or ismethod(object)
+            or ismethoddescriptor(object))
+
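+# A quick sketch of how these predicates differ under CPython (illustrative
+# only; Jython may report built-ins differently):
+#
+#     >>> isfunction(len)      # len is a built-in, not a user-defined function
+#     False
+#     >>> isbuiltin(len)
+#     True
+#     >>> isroutine(len)       # true for any flavor of function or method
+#     True
+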
+def getmembers(object, predicate=None):
+    """Return all members of an object as (name, value) pairs sorted by name.
+    Optionally, only return members that satisfy a given predicate."""
+    results = []
+    for key in dir(object):
+        value = getattr(object, key)
+        if not predicate or predicate(value):
+            results.append((key, value))
+    results.sort()
+    return results
+
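+# Usage sketch for getmembers ('os' below is just an arbitrary module picked
+# for illustration):
+#
+#     >>> import os
+#     >>> pairs = getmembers(os.path, isfunction)   # sorted (name, value) pairs
+#     >>> names = [name for name, value in pairs]
+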
+def classify_class_attrs(cls):
+    """Return list of attribute-descriptor tuples.
+
+    For each name in dir(cls), the return list contains a 4-tuple
+    with these elements:
+
+        0. The name (a string).
+
+        1. The kind of attribute this is, one of these strings:
+               'class method'    created via classmethod()
+               'static method'   created via staticmethod()
+               'property'        created via property()
+               'method'          any other flavor of method
+               'data'            not a method
+
+        2. The class which defined this attribute (a class).
+
+        3. The object as obtained directly from the defining class's
+           __dict__, not via getattr.  This is especially important for
+           data attributes:  C.data is just a data object, but
+           C.__dict__['data'] may be a data descriptor with additional
+           info, like a __doc__ string.
+    """
+
+    mro = getmro(cls)
+    names = dir(cls)
+    result = []
+    for name in names:
+        # Get the object associated with the name.
+        # Getting an obj from the __dict__ sometimes reveals more than
+        # using getattr.  Static and class methods are dramatic examples.
+        if name in cls.__dict__:
+            obj = cls.__dict__[name]
+        else:
+            obj = getattr(cls, name)
+
+        # Figure out where it was defined.
+        homecls = getattr(obj, "__objclass__", None)
+        if homecls is None:
+            # search the dicts.
+            for base in mro:
+                if name in base.__dict__:
+                    homecls = base
+                    break
+
+        # Get the object again, in order to get it from the defining
+        # __dict__ instead of via getattr (if possible).
+        if homecls is not None and name in homecls.__dict__:
+            obj = homecls.__dict__[name]
+
+        # Also get the object via getattr.
+        obj_via_getattr = getattr(cls, name)
+
+        # Classify the object.
+        if isinstance(obj, staticmethod):
+            kind = "static method"
+        elif isinstance(obj, classmethod):
+            kind = "class method"
+        elif isinstance(obj, property):
+            kind = "property"
+        elif (ismethod(obj_via_getattr) or
+              ismethoddescriptor(obj_via_getattr)):
+            kind = "method"
+        else:
+            kind = "data"
+
+        result.append((name, kind, homecls, obj))
+
+    return result
+
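+# Shape of the tuples produced by classify_class_attrs (sketch; the class C
+# below is purely hypothetical):
+#
+#     >>> class C(object):
+#     ...     def method(self): pass
+#     ...     data = 42
+#     >>> [(name, kind) for name, kind, homecls, obj in classify_class_attrs(C)
+#     ...  if name in ('data', 'method')]
+#     [('data', 'data'), ('method', 'method')]
+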
+# ----------------------------------------------------------- class helpers
+def _searchbases(cls, accum):
+    # Simulate the "classic class" search order.
+    if cls in accum:
+        return
+    accum.append(cls)
+    for base in cls.__bases__:
+        _searchbases(base, accum)
+
+def getmro(cls):
+    "Return tuple of base classes (including cls) in method resolution order."
+    if hasattr(cls, "__mro__"):
+        return cls.__mro__
+    else:
+        result = []
+        _searchbases(cls, result)
+        return tuple(result)
+
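+# getmro returns __mro__ for new-style classes and falls back to the classic
+# depth-first search above otherwise (sketch with throwaway classes):
+#
+#     >>> class Old: pass
+#     >>> class New(object): pass
+#     >>> getmro(Old) == (Old,)
+#     True
+#     >>> getmro(New) == New.__mro__
+#     True
+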
+# -------------------------------------------------- source code extraction
+def indentsize(line):
+    """Return the indent size, in spaces, at the start of a line of text."""
+    expline = string.expandtabs(line)
+    return len(expline) - len(string.lstrip(expline))
+
+def getdoc(object):
+    """Get the documentation string for an object.
+
+    All tabs are expanded to spaces.  To clean up docstrings that are
+    indented to line up with blocks of code, any whitespace that can be
+    uniformly removed from the second line onwards is removed."""
+    try:
+        doc = object.__doc__
+    except AttributeError:
+        return None
+    if not isinstance(doc, (str, unicode)):
+        return None
+    try:
+        lines = string.split(string.expandtabs(doc), '\n')
+    except UnicodeError:
+        return None
+    else:
+        margin = None
+        for line in lines[1:]:
+            content = len(string.lstrip(line))
+            if not content: continue
+            indent = len(line) - content
+            if margin is None: margin = indent
+            else: margin = min(margin, indent)
+        if margin is not None:
+            for i in range(1, len(lines)): lines[i] = lines[i][margin:]
+        return string.join(lines, '\n')
+
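+# getdoc strips the indentation a docstring picks up from the surrounding
+# code block (sketch; f is a throwaway function):
+#
+#     >>> def f():
+#     ...     """First line.
+#     ...        Indented second line."""
+#     >>> getdoc(f)
+#     'First line.\nIndented second line.'
+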
+def getfile(object):
+    """Work out which source or compiled file an object was defined in."""
+    if ismodule(object):
+        if hasattr(object, '__file__'):
+            return object.__file__
+        raise TypeError, 'arg is a built-in module'
+    if isclass(object):
+        object = sys.modules.get(object.__module__)
+        if hasattr(object, '__file__'):
+            return object.__file__
+        raise TypeError, 'arg is a built-in class'
+    if ismethod(object):
+        object = object.im_func
+    if isfunction(object):
+        object = object.func_code
+    if istraceback(object):
+        object = object.tb_frame
+    if isframe(object):
+        object = object.f_code
+    if iscode(object):
+        return object.co_filename
+    raise TypeError, 'arg is not a module, class, method, ' \
+                     'function, traceback, frame, or code object'
+
+def getmoduleinfo(path):
+    """Get the module name, suffix, mode, and module type for a given file."""
+    filename = os.path.basename(path)
+    suffixes = map(lambda (suffix, mode, mtype):
+                   (-len(suffix), suffix, mode, mtype), imp.get_suffixes())
+    suffixes.sort() # try longest suffixes first, in case they overlap
+    for neglen, suffix, mode, mtype in suffixes:
+        if filename[neglen:] == suffix:
+            return filename[:neglen], suffix, mode, mtype
+
+def getmodulename(path):
+    """Return the module name for a given file, or None."""
+    info = getmoduleinfo(path)
+    if info: return info[0]
+
+def getsourcefile(object):
+    """Return the Python source file an object was defined in, if it exists."""
+    filename = getfile(object)
+    if string.lower(filename[-4:]) in ['.pyc', '.pyo']:
+        filename = filename[:-4] + '.py'
+    for suffix, mode, kind in imp.get_suffixes():
+        if 'b' in mode and string.lower(filename[-len(suffix):]) == suffix:
+            # Looks like a binary file.  We want to only return a text file.
+            return None
+    if os.path.exists(filename):
+        return filename
+
+def getabsfile(object):
+    """Return an absolute path to the source or compiled file for an object.
+
+    The idea is for each object to have a unique origin, so this routine
+    normalizes the result as much as possible."""
+    return os.path.normcase(
+        os.path.abspath(getsourcefile(object) or getfile(object)))
+
+modulesbyfile = {}
+
+def getmodule(object):
+    """Return the module an object was defined in, or None if not found."""
+    if ismodule(object):
+        return object
+    if isclass(object):
+        return sys.modules.get(object.__module__)
+    try:
+        file = getabsfile(object)
+    except TypeError:
+        return None
+    if modulesbyfile.has_key(file):
+        return sys.modules[modulesbyfile[file]]
+    for module in sys.modules.values():
+        if hasattr(module, '__file__'):
+            modulesbyfile[getabsfile(module)] = module.__name__
+    if modulesbyfile.has_key(file):
+        return sys.modules[modulesbyfile[file]]
+    main = sys.modules['__main__']
+    if hasattr(main, object.__name__):
+        mainobject = getattr(main, object.__name__)
+        if mainobject is object:
+            return main
+    builtin = sys.modules['__builtin__']
+    if hasattr(builtin, object.__name__):
+        builtinobject = getattr(builtin, object.__name__)
+        if builtinobject is object:
+            return builtin
+
+def findsource(object):
+    """Return the entire source file and starting line number for an object.
+
+    The argument may be a module, class, method, function, traceback, frame,
+    or code object.  The source code is returned as a list of all the lines
+    in the file and the line number indexes a line in that list.  An IOError
+    is raised if the source code cannot be retrieved."""
+    try:
+        file = open(getsourcefile(object))
+    except (TypeError, IOError):
+        raise IOError, 'could not get source code'
+    lines = file.readlines()
+    file.close()
+
+    if ismodule(object):
+        return lines, 0
+
+    if isclass(object):
+        name = object.__name__
+        pat = re.compile(r'^\s*class\s*' + name + r'\b')
+        for i in range(len(lines)):
+            if pat.match(lines[i]): return lines, i
+        else: raise IOError, 'could not find class definition'
+
+    if ismethod(object):
+        object = object.im_func
+    if isfunction(object):
+        object = object.func_code
+    if istraceback(object):
+        object = object.tb_frame
+    if isframe(object):
+        object = object.f_code
+    if iscode(object):
+        if not hasattr(object, 'co_firstlineno'):
+            raise IOError, 'could not find function definition'
+        lnum = object.co_firstlineno - 1
+        pat = re.compile(r'^(\s*def\s)|(.*\slambda(:|\s))')
+        while lnum > 0:
+            if pat.match(lines[lnum]): break
+            lnum = lnum - 1
+        return lines, lnum
+    raise IOError, 'could not find code object'
+
+def getcomments(object):
+    """Get lines of comments immediately preceding an object's source code."""
+    try: lines, lnum = findsource(object)
+    except IOError: return None
+
+    if ismodule(object):
+        # Look for a comment block at the top of the file.
+        start = 0
+        if lines and lines[0][:2] == '#!': start = 1
+        while start < len(lines) and string.strip(lines[start]) in ['', '#']:
+            start = start + 1
+        if start < len(lines) and lines[start][:1] == '#':
+            comments = []
+            end = start
+            while end < len(lines) and lines[end][:1] == '#':
+                comments.append(string.expandtabs(lines[end]))
+                end = end + 1
+            return string.join(comments, '')
+
+    # Look for a preceding block of comments at the same indentation.
+    elif lnum > 0:
+        indent = indentsize(lines[lnum])
+        end = lnum - 1
+        if end >= 0 and string.lstrip(lines[end])[:1] == '#' and \
+            indentsize(lines[end]) == indent:
+            comments = [string.lstrip(string.expandtabs(lines[end]))]
+            if end > 0:
+                end = end - 1
+                comment = string.lstrip(string.expandtabs(lines[end]))
+                while comment[:1] == '#' and indentsize(lines[end]) == indent:
+                    comments[:0] = [comment]
+                    end = end - 1
+                    if end < 0: break
+                    comment = string.lstrip(string.expandtabs(lines[end]))
+            while comments and string.strip(comments[0]) == '#':
+                comments[:1] = []
+            while comments and string.strip(comments[-1]) == '#':
+                comments[-1:] = []
+            return string.join(comments, '')
+
+class ListReader:
+    """Provide a readline() method to return lines from a list of strings."""
+    def __init__(self, lines):
+        self.lines = lines
+        self.index = 0
+
+    def readline(self):
+        i = self.index
+        if i < len(self.lines):
+            self.index = i + 1
+            return self.lines[i]
+        else: return ''
+
+class EndOfBlock(Exception): pass
+
+class BlockFinder:
+    """Provide a tokeneater() method to detect the end of a code block."""
+    def __init__(self):
+        self.indent = 0
+        self.started = 0
+        self.last = 0
+
+    def tokeneater(self, type, token, (srow, scol), (erow, ecol), line):
+        if not self.started:
+            if type == tokenize.NAME: self.started = 1
+        elif type == tokenize.NEWLINE:
+            self.last = srow
+        elif type == tokenize.INDENT:
+            self.indent = self.indent + 1
+        elif type == tokenize.DEDENT:
+            self.indent = self.indent - 1
+            if self.indent == 0: raise EndOfBlock, self.last
+        elif type == tokenize.NAME and scol == 0:
+            raise EndOfBlock, self.last
+
+def getblock(lines):
+    """Extract the block of code at the top of the given list of lines."""
+    try:
+        tokenize.tokenize(ListReader(lines).readline, BlockFinder().tokeneater)
+    except EndOfBlock, eob:
+        return lines[:eob.args[0]]
+    # Fooling the indent/dedent logic implies a one-line definition
+    return lines[:1]
+
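+# getblock trims a list of source lines down to the leading definition
+# (sketch with hand-made lines):
+#
+#     >>> getblock(['def f():\n', '    return 1\n', '\n', 'x = 2\n'])
+#     ['def f():\n', '    return 1\n']
+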
+def getsourcelines(object):
+    """Return a list of source lines and starting line number for an object.
+
+    The argument may be a module, class, method, function, traceback, frame,
+    or code object.  The source code is returned as a list of the lines
+    corresponding to the object and the line number indicates where in the
+    original source file the first line of code was found.  An IOError is
+    raised if the source code cannot be retrieved."""
+    lines, lnum = findsource(object)
+
+    if ismodule(object): return lines, 0
+    else: return getblock(lines[lnum:]), lnum + 1
+
+def getsource(object):
+    """Return the text of the source code for an object.
+
+    The argument may be a module, class, method, function, traceback, frame,
+    or code object.  The source code is returned as a single string.  An
+    IOError is raised if the source code cannot be retrieved."""
+    lines, lnum = getsourcelines(object)
+    return string.join(lines, '')
+
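+# getsource/getsourcelines sketch (assumes the module's .py source file is
+# present on disk, which findsource requires):
+#
+#     >>> src = getsource(getmro)              # a function defined above
+#     >>> src.startswith('def getmro')
+#     True
+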
+# --------------------------------------------------- class tree extraction
+def walktree(classes, children, parent):
+    """Recursive helper function for getclasstree()."""
+    results = []
+    classes.sort(lambda a, b: cmp(a.__name__, b.__name__))
+    for c in classes:
+        results.append((c, c.__bases__))
+        if children.has_key(c):
+            results.append(walktree(children[c], children, c))
+    return results
+
+def getclasstree(classes, unique=0):
+    """Arrange the given list of classes into a hierarchy of nested lists.
+
+    Where a nested list appears, it contains classes derived from the class
+    whose entry immediately precedes the list.  Each entry is a 2-tuple
+    containing a class and a tuple of its base classes.  If the 'unique'
+    argument is true, exactly one entry appears in the returned structure
+    for each class in the given list.  Otherwise, classes using multiple
+    inheritance and their descendants will appear multiple times."""
+    children = {}
+    roots = []
+    for c in classes:
+        if c.__bases__:
+            for parent in c.__bases__:
+                if not children.has_key(parent):
+                    children[parent] = []
+                children[parent].append(c)
+                if unique and parent in classes: break
+        elif c not in roots:
+            roots.append(c)
+    for parent in children.keys():
+        if parent not in classes:
+            roots.append(parent)
+    return walktree(roots, children, None)
+
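+# Shape of the nested structure built by getclasstree (sketch; A, B and C are
+# throwaway classes):
+#
+#     >>> class A: pass
+#     >>> class B(A): pass
+#     >>> class C(A): pass
+#     >>> tree = getclasstree([A, B, C])
+#     # tree has the shape [(A, ()), [(B, (A,)), (C, (A,))]]: the subclasses
+#     # of A appear in a nested list immediately after A's own entry.
+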
+# ------------------------------------------------ argument list extraction
+# These constants are from Python's compile.h.
+CO_OPTIMIZED, CO_NEWLOCALS, CO_VARARGS, CO_VARKEYWORDS = 1, 2, 4, 8
+
+def getargs(co):
+    """Get information about the arguments accepted by a code object.
+
+    Three things are returned: (args, varargs, varkw), where 'args' is
+    a list of argument names (possibly containing nested lists), and
+    'varargs' and 'varkw' are the names of the * and ** arguments or None."""
+    if not iscode(co): raise TypeError, 'arg is not a code object'
+
+    nargs = co.co_argcount
+    names = co.co_varnames
+    args = list(names[:nargs])
+    step = 0
+
+    # The following acrobatics are for anonymous (tuple) arguments.
+    if not sys.platform.startswith('java'):  # Jython doesn't have co_code
+        code = co.co_code
+        import dis
+        for i in range(nargs):
+            if args[i][:1] in ['', '.']:
+                stack, remain, count = [], [], []
+                while step < len(code):
+                    op = ord(code[step])
+                    step = step + 1
+                    if op >= dis.HAVE_ARGUMENT:
+                        opname = dis.opname[op]
+                        value = ord(code[step]) + ord(code[step + 1]) * 256
+                        step = step + 2
+                        if opname in ['UNPACK_TUPLE', 'UNPACK_SEQUENCE']:
+                            remain.append(value)
+                            count.append(value)
+                        elif opname == 'STORE_FAST':
+                            stack.append(names[value])
+                            remain[-1] = remain[-1] - 1
+                            while remain[-1] == 0:
+                                remain.pop()
+                                size = count.pop()
+                                stack[-size:] = [stack[-size:]]
+                                if not remain: break
+                                remain[-1] = remain[-1] - 1
+                            if not remain: break
+                args[i] = stack[0]
+
+    varargs = None
+    if co.co_flags & CO_VARARGS:
+        varargs = co.co_varnames[nargs]
+        nargs = nargs + 1
+    varkw = None
+    if co.co_flags & CO_VARKEYWORDS:
+        varkw = co.co_varnames[nargs]
+    return args, varargs, varkw
+
+def getargspec(func):
+    """Get the names and default values of a function's arguments.
+
+    A tuple of four things is returned: (args, varargs, varkw, defaults).
+    'args' is a list of the argument names (it may contain nested lists).
+    'varargs' and 'varkw' are the names of the * and ** arguments or None.
+    'defaults' is an n-tuple of the default values of the last n arguments."""
+    if ismethod(func):
+        func = func.im_func
+    if not isfunction(func): raise TypeError, 'arg is not a Python function'
+    args, varargs, varkw = getargs(func.func_code)
+    return args, varargs, varkw, func.func_defaults
+
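+# Usage sketch for getargspec (g is a hypothetical function):
+#
+#     >>> def g(a, b=1, *args, **kw): pass
+#     >>> getargspec(g)
+#     (['a', 'b'], 'args', 'kw', (1,))
+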
+def getargvalues(frame):
+    """Get information about arguments passed into a particular frame.
+
+    A tuple of four things is returned: (args, varargs, varkw, locals).
+    'args' is a list of the argument names (it may contain nested lists).
+    'varargs' and 'varkw' are the names of the * and ** arguments or None.
+    'locals' is the locals dictionary of the given frame."""
+    args, varargs, varkw = getargs(frame.f_code)
+    return args, varargs, varkw, frame.f_locals
+
+def joinseq(seq):
+    if len(seq) == 1:
+        return '(' + seq[0] + ',)'
+    else:
+        return '(' + string.join(seq, ', ') + ')'
+
+def strseq(object, convert, join=joinseq):
+    """Recursively walk a sequence, stringifying each element."""
+    if type(object) in [types.ListType, types.TupleType]:
+        return join(map(lambda o, c=convert, j=join: strseq(o, c, j), object))
+    else:
+        return convert(object)
+
+def formatargspec(args, varargs=None, varkw=None, defaults=None,
+                  formatarg=str,
+                  formatvarargs=lambda name: '*' + name,
+                  formatvarkw=lambda name: '**' + name,
+                  formatvalue=lambda value: '=' + repr(value),
+                  join=joinseq):
+    """Format an argument spec from the 4 values returned by getargspec.
+
+    The first four arguments are (args, varargs, varkw, defaults).  The
+    other four arguments are the corresponding optional formatting functions
+    that are called to turn names and values into strings.  The ninth
+    argument is an optional function to format the sequence of arguments."""
+    specs = []
+    if defaults:
+        firstdefault = len(args) - len(defaults)
+    for i in range(len(args)):
+        spec = strseq(args[i], formatarg, join)
+        if defaults and i >= firstdefault:
+            spec = spec + formatvalue(defaults[i - firstdefault])
+        specs.append(spec)
+    if varargs:
+        specs.append(formatvarargs(varargs))
+    if varkw:
+        specs.append(formatvarkw(varkw))
+    return '(' + string.join(specs, ', ') + ')'
+
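+# formatargspec turns the getargspec tuple back into a readable signature
+# (sketch, reusing a hypothetical g):
+#
+#     >>> def g(a, b=1, *args, **kw): pass
+#     >>> formatargspec(*getargspec(g))
+#     '(a, b=1, *args, **kw)'
+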
+def formatargvalues(args, varargs, varkw, locals,
+                    formatarg=str,
+                    formatvarargs=lambda name: '*' + name,
+                    formatvarkw=lambda name: '**' + name,
+                    formatvalue=lambda value: '=' + repr(value),
+                    join=joinseq):
+    """Format an argument spec from the 4 values returned by getargvalues.
+
+    The first four arguments are (args, varargs, varkw, locals).  The
+    next four arguments are the corresponding optional formatting functions
+    that are called to turn names and values into strings.  The ninth
+    argument is an optional function to format the sequence of arguments."""
+    def convert(name, locals=locals,
+                formatarg=formatarg, formatvalue=formatvalue):
+        return formatarg(name) + formatvalue(locals[name])
+    specs = []
+    for i in range(len(args)):
+        specs.append(strseq(args[i], convert, join))
+    if varargs:
+        specs.append(formatvarargs(varargs) + formatvalue(locals[varargs]))
+    if varkw:
+        specs.append(formatvarkw(varkw) + formatvalue(locals[varkw]))
+    return '(' + string.join(specs, ', ') + ')'
+
+# -------------------------------------------------- stack frame extraction
+def getframeinfo(frame, context=1):
+    """Get information about a frame or traceback object.
+
+    A tuple of five things is returned: the filename, the line number of
+    the current line, the function name, a list of lines of context from
+    the source code, and the index of the current line within that list.
+    The optional second argument specifies the number of lines of context
+    to return, which are centered around the current line."""
+    raise NotImplementedError
+#    if istraceback(frame):
+#        frame = frame.tb_frame
+#    if not isframe(frame):
+#        raise TypeError, 'arg is not a frame or traceback object'
+#
+#    filename = getsourcefile(frame)
+#    lineno = getlineno(frame)
+#    if context > 0:
+#        start = lineno - 1 - context//2
+#        try:
+#            lines, lnum = findsource(frame)
+#        except IOError:
+#            lines = index = None
+#        else:
+#            start = max(start, 1)
+#            start = min(start, len(lines) - context)
+#            lines = lines[start:start+context]
+#            index = lineno - 1 - start
+#    else:
+#        lines = index = None
+#
+#    return (filename, lineno, frame.f_code.co_name, lines, index)
+
+def getlineno(frame):
+    """Get the line number from a frame object, allowing for optimization."""
+    # Written by Marc-André Lemburg; revised by Jim Hugunin and Fredrik Lundh.
+    lineno = frame.f_lineno
+    code = frame.f_code
+    if hasattr(code, 'co_lnotab'):
+        table = code.co_lnotab
+        lineno = code.co_firstlineno
+        addr = 0
+        for i in range(0, len(table), 2):
+            addr = addr + ord(table[i])
+            if addr > frame.f_lasti: break
+            lineno = lineno + ord(table[i + 1])
+    return lineno
+
+def getouterframes(frame, context=1):
+    """Get a list of records for a frame and all higher (calling) frames.
+
+    Each record contains a frame object, filename, line number, function
+    name, a list of lines of context, and index within the context."""
+    framelist = []
+    while frame:
+        framelist.append((frame,) + getframeinfo(frame, context))
+        frame = frame.f_back
+    return framelist
+
+def getinnerframes(tb, context=1):
+    """Get a list of records for a traceback's frame and all lower frames.
+
+    Each record contains a frame object, filename, line number, function
+    name, a list of lines of context, and index within the context."""
+    framelist = []
+    while tb:
+        framelist.append((tb.tb_frame,) + getframeinfo(tb, context))
+        tb = tb.tb_next
+    return framelist
+
+def currentframe():
+    """Return the frame object for the caller's stack frame."""
+    try:
+        raise 'catch me'
+    except:
+        return sys.exc_traceback.tb_frame.f_back #@UndefinedVariable
+
+if hasattr(sys, '_getframe'): currentframe = sys._getframe
+
+def stack(context=1):
+    """Return a list of records for the stack above the caller's frame."""
+    return getouterframes(currentframe().f_back, context)
+
+def trace(context=1):
+    """Return a list of records for the stack below the current exception."""
+    return getinnerframes(sys.exc_traceback, context) #@UndefinedVariable
diff --git a/python/helpers/pydev/_pydev_imps/_pydev_select.py b/python/helpers/pydev/_pydev_imps/_pydev_select.py
new file mode 100644
index 0000000..b8dad03
--- /dev/null
+++ b/python/helpers/pydev/_pydev_imps/_pydev_select.py
@@ -0,0 +1 @@
+from select import *
\ No newline at end of file
diff --git a/python/helpers/pydev/_pydev_imps/_pydev_socket.py b/python/helpers/pydev/_pydev_imps/_pydev_socket.py
new file mode 100644
index 0000000..9e96e80
--- /dev/null
+++ b/python/helpers/pydev/_pydev_imps/_pydev_socket.py
@@ -0,0 +1 @@
+from socket import *
\ No newline at end of file
diff --git a/python/helpers/pydev/_pydev_imps/_pydev_thread.py b/python/helpers/pydev/_pydev_imps/_pydev_thread.py
new file mode 100644
index 0000000..4d2fd5d
--- /dev/null
+++ b/python/helpers/pydev/_pydev_imps/_pydev_thread.py
@@ -0,0 +1,4 @@
+try:
+    from thread import *
+except:
+    from _thread import * #Py3k
diff --git a/python/helpers/pydev/_pydev_imps/_pydev_time.py b/python/helpers/pydev/_pydev_imps/_pydev_time.py
new file mode 100644
index 0000000..72705db
--- /dev/null
+++ b/python/helpers/pydev/_pydev_imps/_pydev_time.py
@@ -0,0 +1 @@
+from time import *
diff --git a/python/helpers/pydev/_pydev_imps/_pydev_xmlrpclib.py b/python/helpers/pydev/_pydev_imps/_pydev_xmlrpclib.py
new file mode 100644
index 0000000..5f6e2b7
--- /dev/null
+++ b/python/helpers/pydev/_pydev_imps/_pydev_xmlrpclib.py
@@ -0,0 +1,1493 @@
+# Just a copy of the version in Python 2.5, to be used if it's not available in Jython 2.1
+import sys
+
+#
+# XML-RPC CLIENT LIBRARY
+#
+# an XML-RPC client interface for Python.
+#
+# the marshalling and response parser code can also be used to
+# implement XML-RPC servers.
+#
+# Notes:
+# this version is designed to work with Python 2.1 or newer.
+#
+# History:
+# 1999-01-14 fl  Created
+# 1999-01-15 fl  Changed dateTime to use localtime
+# 1999-01-16 fl  Added Binary/base64 element, default to RPC2 service
+# 1999-01-19 fl  Fixed array data element (from Skip Montanaro)
+# 1999-01-21 fl  Fixed dateTime constructor, etc.
+# 1999-02-02 fl  Added fault handling, handle empty sequences, etc.
+# 1999-02-10 fl  Fixed problem with empty responses (from Skip Montanaro)
+# 1999-06-20 fl  Speed improvements, pluggable parsers/transports (0.9.8)
+# 2000-11-28 fl  Changed boolean to check the truth value of its argument
+# 2001-02-24 fl  Added encoding/Unicode/SafeTransport patches
+# 2001-02-26 fl  Added compare support to wrappers (0.9.9/1.0b1)
+# 2001-03-28 fl  Make sure response tuple is a singleton
+# 2001-03-29 fl  Don't require empty params element (from Nicholas Riley)
+# 2001-06-10 fl  Folded in _xmlrpclib accelerator support (1.0b2)
+# 2001-08-20 fl  Base xmlrpclib.Error on built-in Exception (from Paul Prescod)
+# 2001-09-03 fl  Allow Transport subclass to override getparser
+# 2001-09-10 fl  Lazy import of urllib, cgi, xmllib (20x import speedup)
+# 2001-10-01 fl  Remove containers from memo cache when done with them
+# 2001-10-01 fl  Use faster escape method (80% dumps speedup)
+# 2001-10-02 fl  More dumps microtuning
+# 2001-10-04 fl  Make sure import expat gets a parser (from Guido van Rossum)
+# 2001-10-10 sm  Allow long ints to be passed as ints if they don't overflow
+# 2001-10-17 sm  Test for int and long overflow (allows use on 64-bit systems)
+# 2001-11-12 fl  Use repr() to marshal doubles (from Paul Felix)
+# 2002-03-17 fl  Avoid buffered read when possible (from James Rucker)
+# 2002-04-07 fl  Added pythondoc comments
+# 2002-04-16 fl  Added __str__ methods to datetime/binary wrappers
+# 2002-05-15 fl  Added error constants (from Andrew Kuchling)
+# 2002-06-27 fl  Merged with Python CVS version
+# 2002-10-22 fl  Added basic authentication (based on code from Phillip Eby)
+# 2003-01-22 sm  Add support for the bool type
+# 2003-02-27 gvr Remove apply calls
+# 2003-04-24 sm  Use cStringIO if available
+# 2003-04-25 ak  Add support for nil
+# 2003-06-15 gn  Add support for time.struct_time
+# 2003-07-12 gp  Correct marshalling of Faults
+# 2003-10-31 mvl Add multicall support
+# 2004-08-20 mvl Bump minimum supported Python version to 2.1
+#
+# Copyright (c) 1999-2002 by Secret Labs AB.
+# Copyright (c) 1999-2002 by Fredrik Lundh.
+#
+# info@pythonware.com
+# http://www.pythonware.com
+#
+# --------------------------------------------------------------------
+# The XML-RPC client interface is
+#
+# Copyright (c) 1999-2002 by Secret Labs AB
+# Copyright (c) 1999-2002 by Fredrik Lundh
+#
+# By obtaining, using, and/or copying this software and/or its
+# associated documentation, you agree that you have read, understood,
+# and will comply with the following terms and conditions:
+#
+# Permission to use, copy, modify, and distribute this software and
+# its associated documentation for any purpose and without fee is
+# hereby granted, provided that the above copyright notice appears in
+# all copies, and that both that copyright notice and this permission
+# notice appear in supporting documentation, and that the name of
+# Secret Labs AB or the author not be used in advertising or publicity
+# pertaining to distribution of the software without specific, written
+# prior permission.
+#
+# SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD
+# TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT-
+# ABILITY AND FITNESS.  IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR
+# BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY
+# DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
+# WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
+# ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
+# OF THIS SOFTWARE.
+# --------------------------------------------------------------------
+
+#
+# things to look into some day:
+
+# TODO: sort out True/False/boolean issues for Python 2.3
+
+"""
+An XML-RPC client interface for Python.
+
+The marshalling and response parser code can also be used to
+implement XML-RPC servers.
+
+Exported exceptions:
+
+  Error          Base class for client errors
+  ProtocolError  Indicates an HTTP protocol error
+  ResponseError  Indicates a broken response package
+  Fault          Indicates an XML-RPC fault package
+
+Exported classes:
+
+  ServerProxy    Represents a logical connection to an XML-RPC server
+
+  MultiCall      Executor of boxcarred xmlrpc requests
+  Boolean        boolean wrapper to generate a "boolean" XML-RPC value
+  DateTime       dateTime wrapper for an ISO 8601 string or time tuple or
+                 localtime integer value to generate a "dateTime.iso8601"
+                 XML-RPC value
+  Binary         binary data wrapper
+
+  SlowParser     Slow but safe standard parser (based on xmllib)
+  Marshaller     Generate an XML-RPC params chunk from a Python data structure
+  Unmarshaller   Unmarshal an XML-RPC response from incoming XML event message
+  Transport      Handles an HTTP transaction to an XML-RPC server
+  SafeTransport  Handles an HTTPS transaction to an XML-RPC server
+
+Exported constants:
+
+  True
+  False
+
+Exported functions:
+
+  boolean        Convert any Python value to an XML-RPC boolean
+  getparser      Create instance of the fastest available parser & attach
+                 to an unmarshalling object
+  dumps          Convert an argument tuple or a Fault instance to an XML-RPC
+                 request (or response, if the methodresponse option is used).
+  loads          Convert an XML-RPC packet to unmarshalled data plus a method
+                 name (None if not present).
+"""
+
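+# A minimal end-to-end sketch of the dumps()/loads() pair listed above (both
+# functions are defined further down in this module; 'example.echo' is just a
+# placeholder method name):
+#
+#     >>> payload = dumps((1, 'two'), methodname='example.echo')
+#     >>> loads(payload)
+#     ((1, 'two'), 'example.echo')
+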
+import re, string, time, operator
+
+from types import *
+
+# --------------------------------------------------------------------
+# Internal stuff
+
+try:
+    unicode
+except NameError:
+    unicode = None # unicode support not available
+
+try:
+    import datetime
+except ImportError:
+    datetime = None
+
+try:
+    _bool_is_builtin = False.__class__.__name__ == "bool"
+except (NameError, AttributeError):
+    _bool_is_builtin = 0
+
+def _decode(data, encoding, is8bit=re.compile("[\x80-\xff]").search):
+    # decode non-ascii string (if possible)
+    if unicode and encoding and is8bit(data):
+        data = unicode(data, encoding)
+    return data
+
+def escape(s, replace=string.replace):
+    s = replace(s, "&", "&amp;")
+    s = replace(s, "<", "&lt;")
+    return replace(s, ">", "&gt;",)
+
+if unicode:
+    def _stringify(string):
+        # convert to 7-bit ascii if possible
+        try:
+            return string.encode("ascii")
+        except UnicodeError:
+            return string
+else:
+    def _stringify(string):
+        return string
+
+__version__ = "1.0.1"
+
+# xmlrpc integer limits
+try:
+    long 
+except NameError:
+    long = int
+MAXINT = long(2) ** 31 - 1
+MININT = long(-2) ** 31
+
+# --------------------------------------------------------------------
+# Error constants (from Dan Libby's specification at
+# http://xmlrpc-epi.sourceforge.net/specs/rfc.fault_codes.php)
+
+# Ranges of errors
+PARSE_ERROR = -32700
+SERVER_ERROR = -32600
+APPLICATION_ERROR = -32500
+SYSTEM_ERROR = -32400
+TRANSPORT_ERROR = -32300
+
+# Specific errors
+NOT_WELLFORMED_ERROR = -32700
+UNSUPPORTED_ENCODING = -32701
+INVALID_ENCODING_CHAR = -32702
+INVALID_XMLRPC = -32600
+METHOD_NOT_FOUND = -32601
+INVALID_METHOD_PARAMS = -32602
+INTERNAL_ERROR = -32603
+
+# --------------------------------------------------------------------
+# Exceptions
+
+##
+# Base class for all kinds of client-side errors.
+
+class Error(Exception):
+    """Base class for client errors."""
+    def __str__(self):
+        return repr(self)
+
+##
+# Indicates an HTTP-level protocol error.  This is raised by the HTTP
+# transport layer, if the server returns an error code other than 200
+# (OK).
+#
+# @param url The target URL.
+# @param errcode The HTTP error code.
+# @param errmsg The HTTP error message.
+# @param headers The HTTP header dictionary.
+
+class ProtocolError(Error):
+    """Indicates an HTTP protocol error."""
+    def __init__(self, url, errcode, errmsg, headers):
+        Error.__init__(self)
+        self.url = url
+        self.errcode = errcode
+        self.errmsg = errmsg
+        self.headers = headers
+    def __repr__(self):
+        return (
+            "<ProtocolError for %s: %s %s>" % 
+            (self.url, self.errcode, self.errmsg)
+            )
+
+##
+# Indicates a broken XML-RPC response package.  This exception is
+# raised by the unmarshalling layer, if the XML-RPC response is
+# malformed.
+
+class ResponseError(Error):
+    """Indicates a broken response package."""
+    pass
+
+##
+# Indicates an XML-RPC fault response package.  This exception is
+# raised by the unmarshalling layer, if the XML-RPC response contains
+# a fault string.  This exception can also be used as a class, to
+# generate a fault XML-RPC message.
+#
+# @param faultCode The XML-RPC fault code.
+# @param faultString The XML-RPC fault string.
+
+class Fault(Error):
+    """Indicates an XML-RPC fault package."""
+    def __init__(self, faultCode, faultString, **extra):
+        Error.__init__(self)
+        self.faultCode = faultCode
+        self.faultString = faultString
+    def __repr__(self):
+        return (
+            "<Fault %s: %s>" % 
+            (self.faultCode, repr(self.faultString))
+            )
+
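+# Fault is both raised by the unmarshaller and usable directly to build a
+# fault response (sketch):
+#
+#     >>> f = Fault(42, 'out of cheese')
+#     >>> repr(f)
+#     "<Fault 42: 'out of cheese'>"
+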
+# --------------------------------------------------------------------
+# Special values
+
+##
+# Wrapper for XML-RPC boolean values.  Use the xmlrpclib.True and
+# xmlrpclib.False constants, or the xmlrpclib.boolean() function, to
+# generate boolean XML-RPC values.
+#
+# @param value A boolean value.  Any true value is interpreted as True,
+#              all other values are interpreted as False.
+
+if _bool_is_builtin:
+    boolean = Boolean = bool #@UndefinedVariable
+    # to avoid breaking code which references xmlrpclib.{True,False}
+    True, False = True, False
+else:
+    class Boolean:
+        """Boolean-value wrapper.
+
+        Use True or False to generate a "boolean" XML-RPC value.
+        """
+
+        def __init__(self, value=0):
+            self.value = operator.truth(value)
+
+        def encode(self, out):
+            out.write("<value><boolean>%d</boolean></value>\n" % self.value)
+
+        def __cmp__(self, other):
+            if isinstance(other, Boolean):
+                other = other.value
+            return cmp(self.value, other)
+
+        def __repr__(self):
+            if self.value:
+                return "<Boolean True at %x>" % id(self)
+            else:
+                return "<Boolean False at %x>" % id(self)
+
+        def __int__(self):
+            return self.value
+
+        def __nonzero__(self):
+            return self.value
+
+    True, False = Boolean(1), Boolean(0)
+
+    ##
+    # Map true or false value to XML-RPC boolean values.
+    #
+    # @def boolean(value)
+    # @param value A boolean value.  Any true value is mapped to True,
+    #              all other values are mapped to False.
+    # @return xmlrpclib.True or xmlrpclib.False.
+    # @see Boolean
+    # @see True
+    # @see False
+
+    def boolean(value, _truefalse=(False, True)):
+        """Convert any Python value to XML-RPC 'boolean'."""
+        return _truefalse[operator.truth(value)]
+
+##
+# Wrapper for XML-RPC DateTime values.  This converts a time value to
+# the format used by XML-RPC.
+# <p>
+# The value can be given as a string in the format
+# "yyyymmddThh:mm:ss", as a 9-item time tuple (as returned by
+# time.localtime()), or an integer value (as returned by time.time()).
+# The wrapper uses time.localtime() to convert an integer to a time
+# tuple.
+#
+# @param value The time, given as an ISO 8601 string, a time
+#              tuple, or an integer time value.
+
+class DateTime:
+    """DateTime wrapper for an ISO 8601 string or time tuple or
+    localtime integer value to generate 'dateTime.iso8601' XML-RPC
+    value.
+    """
+
+    def __init__(self, value=0):
+        if not isinstance(value, StringType):
+            if datetime and isinstance(value, datetime.datetime):
+                self.value = value.strftime("%Y%m%dT%H:%M:%S")
+                return
+            if datetime and isinstance(value, datetime.date):
+                self.value = value.strftime("%Y%m%dT%H:%M:%S")
+                return
+            if datetime and isinstance(value, datetime.time):
+                today = datetime.datetime.now().strftime("%Y%m%d")
+                self.value = value.strftime(today + "T%H:%M:%S")
+                return
+            if not isinstance(value, (TupleType, time.struct_time)): #@UndefinedVariable
+                if value == 0:
+                    value = time.time()
+                value = time.localtime(value)
+            value = time.strftime("%Y%m%dT%H:%M:%S", value)
+        self.value = value
+
+    def __cmp__(self, other):
+        if isinstance(other, DateTime):
+            other = other.value
+        return cmp(self.value, other)
+
+    ##
+    # Get date/time value.
+    #
+    # @return Date/time value, as an ISO 8601 string.
+
+    def __str__(self):
+        return self.value
+
+    def __repr__(self):
+        return "<DateTime %s at %x>" % (repr(self.value), id(self))
+
+    def decode(self, data):
+        data = str(data)
+        self.value = string.strip(data)
+
+    def encode(self, out):
+        out.write("<value><dateTime.iso8601>")
+        out.write(self.value)
+        out.write("</dateTime.iso8601></value>\n")
+
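+# DateTime accepts an ISO 8601 string, a 9-item time tuple, or a
+# seconds-since-the-epoch value (sketch):
+#
+#     >>> str(DateTime('20010101T12:00:00'))              # string passes through
+#     '20010101T12:00:00'
+#     >>> str(DateTime((2001, 1, 1, 12, 0, 0, 0, 1, 0)))  # time tuple
+#     '20010101T12:00:00'
+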
+def _datetime(data):
+    # decode xml element contents into a DateTime structure.
+    value = DateTime()
+    value.decode(data)
+    return value
+
+def _datetime_type(data):
+    t = time.strptime(data, "%Y%m%dT%H:%M:%S") #@UndefinedVariable
+    return datetime.datetime(*tuple(t)[:6])
+
+##
+# Wrapper for binary data.  This can be used to transport any kind
+# of binary data over XML-RPC, using BASE64 encoding.
+#
+# @param data An 8-bit string containing arbitrary data.
+
+import base64
+try:
+    import cStringIO as StringIO
+except ImportError:
+    import StringIO
+
+class Binary:
+    """Wrapper for binary data."""
+
+    def __init__(self, data=None):
+        self.data = data
+
+    ##
+    # Get buffer contents.
+    #
+    # @return Buffer contents, as an 8-bit string.
+
+    def __str__(self):
+        return self.data or ""
+
+    def __cmp__(self, other):
+        if isinstance(other, Binary):
+            other = other.data
+        return cmp(self.data, other)
+
+    def decode(self, data):
+        self.data = base64.decodestring(data)
+
+    def encode(self, out):
+        out.write("<value><base64>\n")
+        base64.encode(StringIO.StringIO(self.data), out)
+        out.write("</base64></value>\n")
+
+def _binary(data):
+    # decode xml element contents into a Binary structure
+    value = Binary()
+    value.decode(data)
+    return value
+
+WRAPPERS = (DateTime, Binary)
+if not _bool_is_builtin:
+    WRAPPERS = WRAPPERS + (Boolean,)
+
+# --------------------------------------------------------------------
+# XML parsers
+
+try:
+    # optional xmlrpclib accelerator
+    import _xmlrpclib #@UnresolvedImport
+    FastParser = _xmlrpclib.Parser
+    FastUnmarshaller = _xmlrpclib.Unmarshaller
+except (AttributeError, ImportError):
+    FastParser = FastUnmarshaller = None
+
+try:
+    import _xmlrpclib #@UnresolvedImport
+    FastMarshaller = _xmlrpclib.Marshaller
+except (AttributeError, ImportError):
+    FastMarshaller = None
+
+#
+# the SGMLOP parser is about 15x faster than Python's builtin
+# XML parser.  SGMLOP sources can be downloaded from:
+#
+#     http://www.pythonware.com/products/xml/sgmlop.htm
+#
+
+try:
+    import sgmlop
+    if not hasattr(sgmlop, "XMLParser"):
+        raise ImportError()
+except ImportError:
+    SgmlopParser = None # sgmlop accelerator not available
+else:
+    class SgmlopParser:
+        def __init__(self, target):
+
+            # setup callbacks
+            self.finish_starttag = target.start
+            self.finish_endtag = target.end
+            self.handle_data = target.data
+            self.handle_xml = target.xml
+
+            # activate parser
+            self.parser = sgmlop.XMLParser()
+            self.parser.register(self)
+            self.feed = self.parser.feed
+            self.entity = {
+                "amp": "&", "gt": ">", "lt": "<",
+                "apos": "'", "quot": '"'
+                }
+
+        def close(self):
+            try:
+                self.parser.close()
+            finally:
+                self.parser = self.feed = None # nuke circular reference
+
+        def handle_proc(self, tag, attr):
+            m = re.search("encoding\s*=\s*['\"]([^\"']+)[\"']", attr) #@UndefinedVariable
+            if m:
+                self.handle_xml(m.group(1), 1)
+
+        def handle_entityref(self, entity):
+            # <string> entity
+            try:
+                self.handle_data(self.entity[entity])
+            except KeyError:
+                self.handle_data("&%s;" % entity)
+
+try:
+    from xml.parsers import expat
+    if not hasattr(expat, "ParserCreate"):
+        raise ImportError()
+except ImportError:
+    ExpatParser = None # expat not available
+else:
+    class ExpatParser:
+        # fast expat parser for Python 2.0 and later.  this is about
+        # 50% slower than sgmlop, on roundtrip testing
+        def __init__(self, target):
+            self._parser = parser = expat.ParserCreate(None, None)
+            self._target = target
+            parser.StartElementHandler = target.start
+            parser.EndElementHandler = target.end
+            parser.CharacterDataHandler = target.data
+            encoding = None
+            if not parser.returns_unicode:
+                encoding = "utf-8"
+            target.xml(encoding, None)
+
+        def feed(self, data):
+            self._parser.Parse(data, 0)
+
+        def close(self):
+            self._parser.Parse("", 1) # end of data
+            del self._target, self._parser # get rid of circular references
+
+class SlowParser:
+    """Default XML parser (based on xmllib.XMLParser)."""
+    # this is about 10 times slower than sgmlop, on roundtrip
+    # testing.
+    def __init__(self, target):
+        import xmllib # lazy subclassing (!)
+        if xmllib.XMLParser not in SlowParser.__bases__:
+            SlowParser.__bases__ = (xmllib.XMLParser,)
+        self.handle_xml = target.xml
+        self.unknown_starttag = target.start
+        self.handle_data = target.data
+        self.handle_cdata = target.data
+        self.unknown_endtag = target.end
+        try:
+            xmllib.XMLParser.__init__(self, accept_utf8=1)
+        except TypeError:
+            xmllib.XMLParser.__init__(self) # pre-2.0
+
+# --------------------------------------------------------------------
+# XML-RPC marshalling and unmarshalling code
+
+##
+# XML-RPC marshaller.
+#
+# @param encoding Default encoding for 8-bit strings.  The default
+#     value is None (interpreted as UTF-8).
+# @see dumps
+
+class Marshaller:
+    """Generate an XML-RPC params chunk from a Python data structure.
+
+    Create a Marshaller instance for each set of parameters, and use
+    the "dumps" method to convert your data (represented as a tuple)
+    to an XML-RPC params chunk.  To write a fault response, pass a
+    Fault instance instead.  You may prefer to use the "dumps" module
+    function for this purpose.
+    """
+
+    # by the way, if you don't understand what's going on in here,
+    # that's perfectly ok.
+
+    def __init__(self, encoding=None, allow_none=0):
+        self.memo = {}
+        self.data = None
+        self.encoding = encoding
+        self.allow_none = allow_none
+
+    dispatch = {}
+
+    def dumps(self, values):
+        out = []
+        write = out.append
+        dump = self.__dump
+        if isinstance(values, Fault):
+            # fault instance
+            write("<fault>\n")
+            dump({'faultCode': values.faultCode,
+                  'faultString': values.faultString},
+                 write)
+            write("</fault>\n")
+        else:
+            # parameter block
+            # FIXME: the xml-rpc specification allows us to leave out
+            # the entire <params> block if there are no parameters.
+            # however, changing this may break older code (including
+            # old versions of xmlrpclib.py), so this is better left as
+            # is for now.  See @XMLRPC3 for more information. /F
+            write("<params>\n")
+            for v in values:
+                write("<param>\n")
+                dump(v, write)
+                write("</param>\n")
+            write("</params>\n")
+        result = string.join(out, "")
+        return result
+
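+    # Sketch of using Marshaller directly; the module-level dumps() function
+    # defined further down is the usual entry point and wraps this class:
+    #
+    #     >>> m = Marshaller()
+    #     >>> chunk = m.dumps((1, 'two'))
+    #     >>> chunk.startswith('<params>')
+    #     True
+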
+    def __dump(self, value, write):
+        try:
+            f = self.dispatch[type(value)]
+        except KeyError:
+            raise TypeError("cannot marshal %s objects" % type(value))
+        else:
+            f(self, value, write)
+
+    def dump_nil (self, value, write):
+        if not self.allow_none:
+            raise TypeError("cannot marshal None unless allow_none is enabled")
+        write("<value><nil/></value>")
+    dispatch[NoneType] = dump_nil
+
+    def dump_int(self, value, write):
+        # in case ints are > 32 bits
+        if value > MAXINT or value < MININT:
+            raise OverflowError("int exceeds XML-RPC limits")
+        write("<value><int>")
+        write(str(value))
+        write("</int></value>\n")
+    dispatch[IntType] = dump_int
+
+    if _bool_is_builtin:
+        def dump_bool(self, value, write):
+            write("<value><boolean>")
+            write(value and "1" or "0")
+            write("</boolean></value>\n")
+        dispatch[bool] = dump_bool #@UndefinedVariable
+
+    def dump_long(self, value, write):
+        if value > MAXINT or value < MININT:
+            raise OverflowError("long int exceeds XML-RPC limits")
+        write("<value><int>")
+        write(str(int(value)))
+        write("</int></value>\n")
+    dispatch[LongType] = dump_long
+
+    def dump_double(self, value, write):
+        write("<value><double>")
+        write(repr(value))
+        write("</double></value>\n")
+    dispatch[FloatType] = dump_double
+
+    def dump_string(self, value, write, escape=escape):
+        write("<value><string>")
+        write(escape(value))
+        write("</string></value>\n")
+    dispatch[StringType] = dump_string
+
+    if unicode:
+        def dump_unicode(self, value, write, escape=escape):
+            value = value.encode(self.encoding)
+            write("<value><string>")
+            write(escape(value))
+            write("</string></value>\n")
+        dispatch[UnicodeType] = dump_unicode
+
+    def dump_array(self, value, write):
+        i = id(value)
+        if self.memo.has_key(i):
+            raise TypeError("cannot marshal recursive sequences")
+        self.memo[i] = None
+        dump = self.__dump
+        write("<value><array><data>\n")
+        for v in value:
+            dump(v, write)
+        write("</data></array></value>\n")
+        del self.memo[i]
+    dispatch[TupleType] = dump_array
+    dispatch[ListType] = dump_array
+
+    def dump_struct(self, value, write, escape=escape):
+        i = id(value)
+        if self.memo.has_key(i):
+            raise TypeError("cannot marshal recursive dictionaries")
+        self.memo[i] = None
+        dump = self.__dump
+        write("<value><struct>\n")
+        for k, v in value.items():
+            write("<member>\n")
+            if type(k) is not StringType:
+                if unicode and type(k) is UnicodeType:
+                    k = k.encode(self.encoding)
+                else:
+                    raise TypeError("dictionary key must be string")
+            write("<name>%s</name>\n" % escape(k))
+            dump(v, write)
+            write("</member>\n")
+        write("</struct></value>\n")
+        del self.memo[i]
+    dispatch[DictType] = dump_struct
+
+    if datetime:
+        def dump_datetime(self, value, write):
+            write("<value><dateTime.iso8601>")
+            write(value.strftime("%Y%m%dT%H:%M:%S"))
+            write("</dateTime.iso8601></value>\n")
+        dispatch[datetime.datetime] = dump_datetime
+
+        def dump_date(self, value, write):
+            write("<value><dateTime.iso8601>")
+            write(value.strftime("%Y%m%dT00:00:00"))
+            write("</dateTime.iso8601></value>\n")
+        dispatch[datetime.date] = dump_date
+
+        def dump_time(self, value, write):
+            write("<value><dateTime.iso8601>")
+            write(datetime.datetime.now().date().strftime("%Y%m%dT"))
+            write(value.strftime("%H:%M:%S"))
+            write("</dateTime.iso8601></value>\n")
+        dispatch[datetime.time] = dump_time
+
+    def dump_instance(self, value, write):
+        # check for special wrappers
+        if value.__class__ in WRAPPERS:
+            self.write = write
+            value.encode(self)
+            del self.write
+        else:
+            # store instance attributes as a struct (really?)
+            self.dump_struct(value.__dict__, write)
+    dispatch[InstanceType] = dump_instance
+
+##
+# XML-RPC unmarshaller.
+#
+# @see loads
+
+class Unmarshaller:
+    """Unmarshal an XML-RPC response, based on incoming XML event
+    messages (start, data, end).  Call close() to get the resulting
+    data structure.
+
+    Note that this reader is fairly tolerant, and gladly accepts bogus
+    XML-RPC data without complaining (but not bogus XML).
+    """
+
+    # and again, if you don't understand what's going on in here,
+    # that's perfectly ok.
+
+    def __init__(self, use_datetime=0):
+        self._type = None
+        self._stack = []
+        self._marks = []
+        self._data = []
+        self._methodname = None
+        self._encoding = "utf-8"
+        self.append = self._stack.append
+        self._use_datetime = use_datetime
+        if use_datetime and not datetime:
+            raise ValueError("the datetime module is not available")
+
+    def close(self):
+        # return response tuple and target method
+        if self._type is None or self._marks:
+            raise ResponseError()
+        if self._type == "fault":
+            raise Fault(**self._stack[0])
+        return tuple(self._stack)
+
+    def getmethodname(self):
+        return self._methodname
+
+    #
+    # event handlers
+
+    def xml(self, encoding, standalone):
+        self._encoding = encoding
+        # FIXME: assert standalone == 1 ???
+
+    def start(self, tag, attrs):
+        # prepare to handle this element
+        if tag == "array" or tag == "struct":
+            self._marks.append(len(self._stack))
+        self._data = []
+        self._value = (tag == "value")
+
+    def data(self, text):
+        self._data.append(text)
+
+    def end(self, tag, join=string.join):
+        # call the appropriate end tag handler
+        try:
+            f = self.dispatch[tag]
+        except KeyError:
+            pass # unknown tag ?
+        else:
+            return f(self, join(self._data, ""))
+
+    #
+    # accelerator support
+
+    def end_dispatch(self, tag, data):
+        # dispatch data
+        try:
+            f = self.dispatch[tag]
+        except KeyError:
+            pass # unknown tag ?
+        else:
+            return f(self, data)
+
+    #
+    # element decoders
+
+    dispatch = {}
+
+    def end_nil (self, data):
+        self.append(None)
+        self._value = 0
+    dispatch["nil"] = end_nil
+
+    def end_boolean(self, data):
+        if data == "0":
+            self.append(False)
+        elif data == "1":
+            self.append(True)
+        else:
+            raise TypeError("bad boolean value")
+        self._value = 0
+    dispatch["boolean"] = end_boolean
+
+    def end_int(self, data):
+        self.append(int(data))
+        self._value = 0
+    dispatch["i4"] = end_int
+    dispatch["int"] = end_int
+
+    def end_double(self, data):
+        self.append(float(data))
+        self._value = 0
+    dispatch["double"] = end_double
+
+    def end_string(self, data):
+        if self._encoding:
+            data = _decode(data, self._encoding)
+        self.append(_stringify(data))
+        self._value = 0
+    dispatch["string"] = end_string
+    dispatch["name"] = end_string # struct keys are always strings
+
+    def end_array(self, data):
+        mark = self._marks.pop()
+        # map arrays to Python lists
+        self._stack[mark:] = [self._stack[mark:]]
+        self._value = 0
+    dispatch["array"] = end_array
+
+    def end_struct(self, data):
+        mark = self._marks.pop()
+        # map structs to Python dictionaries
+        dict = {}
+        items = self._stack[mark:]
+        for i in range(0, len(items), 2):
+            dict[_stringify(items[i])] = items[i + 1]
+        self._stack[mark:] = [dict]
+        self._value = 0
+    dispatch["struct"] = end_struct
+
+    def end_base64(self, data):
+        value = Binary()
+        value.decode(data)
+        self.append(value)
+        self._value = 0
+    dispatch["base64"] = end_base64
+
+    def end_dateTime(self, data):
+        value = DateTime()
+        value.decode(data)
+        if self._use_datetime:
+            value = _datetime_type(data)
+        self.append(value)
+    dispatch["dateTime.iso8601"] = end_dateTime
+
+    def end_value(self, data):
+        # if we stumble upon a value element with no internal
+        # elements, treat it as a string element
+        if self._value:
+            self.end_string(data)
+    dispatch["value"] = end_value
+
+    def end_params(self, data):
+        self._type = "params"
+    dispatch["params"] = end_params
+
+    def end_fault(self, data):
+        self._type = "fault"
+    dispatch["fault"] = end_fault
+
+    def end_methodName(self, data):
+        if self._encoding:
+            data = _decode(data, self._encoding)
+        self._methodname = data
+        self._type = "methodName" # no params
+    dispatch["methodName"] = end_methodName
+
+## Multicall support
+#
+
+class _MultiCallMethod:
+    # some lesser magic to store calls made to a MultiCall object
+    # for batch execution
+    def __init__(self, call_list, name):
+        self.__call_list = call_list
+        self.__name = name
+    def __getattr__(self, name):
+        return _MultiCallMethod(self.__call_list, "%s.%s" % (self.__name, name))
+    def __call__(self, *args):
+        self.__call_list.append((self.__name, args))
+
+class MultiCallIterator:
+    """Iterates over the results of a multicall. Exceptions are
+    thrown in response to xmlrpc faults."""
+
+    def __init__(self, results):
+        self.results = results
+
+    def __getitem__(self, i):
+        item = self.results[i]
+        if type(item) == type({}):
+            raise Fault(item['faultCode'], item['faultString'])
+        elif type(item) == type([]):
+            return item[0]
+        else:
+            raise ValueError("unexpected type in multicall result")
+
+class MultiCall:
+    """server -> a object used to boxcar method calls
+
+    server should be a ServerProxy object.
+
+    Methods can be added to the MultiCall using normal
+    method call syntax e.g.:
+
+    multicall = MultiCall(server_proxy)
+    multicall.add(2,3)
+    multicall.get_address("Guido")
+
+    To execute the multicall, call the MultiCall object e.g.:
+
+    add_result, address = multicall()
+    """
+
+    def __init__(self, server):
+        self.__server = server
+        self.__call_list = []
+
+    def __repr__(self):
+        return "<MultiCall at %x>" % id(self)
+
+    __str__ = __repr__
+
+    def __getattr__(self, name):
+        return _MultiCallMethod(self.__call_list, name)
+
+    def __call__(self):
+        marshalled_list = []
+        for name, args in self.__call_list:
+            marshalled_list.append({'methodName' : name, 'params' : args})
+
+        return MultiCallIterator(self.__server.system.multicall(marshalled_list))
+
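A minimal usage sketch for MultiCall (the endpoint URL and the add/get_address methods are placeholders; a failed sub-call surfaces as a Fault when its result is read):

    server = ServerProxy("http://localhost:8000")   # placeholder endpoint
    multicall = MultiCall(server)
    multicall.add(2, 3)                    # queued locally, nothing sent yet
    multicall.get_address("Guido")         # queued locally, nothing sent yet
    try:
        add_result, address = multicall()  # one system.multicall round trip
        sys.stdout.write('%s %s\n' % (add_result, address))
    except Fault:
        import traceback; traceback.print_exc()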
+# --------------------------------------------------------------------
+# convenience functions
+
+##
+# Create a parser object, and connect it to an unmarshalling instance.
+# This function picks the fastest available XML parser.
+#
+# return A (parser, unmarshaller) tuple.
+
+def getparser(use_datetime=0):
+    """getparser() -> parser, unmarshaller
+
+    Create an instance of the fastest available parser, and attach it
+    to an unmarshalling object.  Return both objects.
+    """
+    if use_datetime and not datetime:
+        raise ValueError("the datetime module is not available")
+    if FastParser and FastUnmarshaller:
+        if use_datetime:
+            mkdatetime = _datetime_type
+        else:
+            mkdatetime = _datetime
+        target = FastUnmarshaller(True, False, _binary, mkdatetime, Fault)
+        parser = FastParser(target)
+    else:
+        target = Unmarshaller(use_datetime=use_datetime)
+        if FastParser:
+            parser = FastParser(target)
+        elif SgmlopParser:
+            parser = SgmlopParser(target)
+        elif ExpatParser:
+            parser = ExpatParser(target)
+        else:
+            parser = SlowParser(target)
+    return parser, target
+
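For reference, a sketch of driving the (parser, unmarshaller) pair returned by getparser() by hand; this is essentially what loads() does further below:

    def parse_packet(xml_text):
        p, u = getparser()
        p.feed(xml_text)     # may be called repeatedly with partial chunks
        p.close()
        return u.close(), u.getmethodname()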
+##
+# Convert a Python tuple or a Fault instance to an XML-RPC packet.
+#
+# @def dumps(params, **options)
+# @param params A tuple or Fault instance.
+# @keyparam methodname If given, create a methodCall request for
+#     this method name.
+# @keyparam methodresponse If given, create a methodResponse packet.
+#     If used with a tuple, the tuple must be a singleton (that is,
+#     it must contain exactly one element).
+# @keyparam encoding The packet encoding.
+# @return A string containing marshalled data.
+
+def dumps(params, methodname=None, methodresponse=None, encoding=None,
+          allow_none=0):
+    """data [,options] -> marshalled data
+
+    Convert an argument tuple or a Fault instance to an XML-RPC
+    request (or response, if the methodresponse option is used).
+
+    In addition to the data object, the following options can be given
+    as keyword arguments:
+
+        methodname: the method name for a methodCall packet
+
+        methodresponse: true to create a methodResponse packet.
+        If this option is used with a tuple, the tuple must be
+        a singleton (i.e. it can contain only one element).
+
+        encoding: the packet encoding (default is UTF-8)
+
+    All 8-bit strings in the data structure are assumed to use the
+    packet encoding.  Unicode strings are automatically converted,
+    where necessary.
+    """
+
+    assert isinstance(params, TupleType) or isinstance(params, Fault), \
+           "argument must be tuple or Fault instance"
+
+    if isinstance(params, Fault):
+        methodresponse = 1
+    elif methodresponse and isinstance(params, TupleType):
+        assert len(params) == 1, "response tuple must be a singleton"
+
+    if not encoding:
+        encoding = "utf-8"
+
+    if FastMarshaller:
+        m = FastMarshaller(encoding)
+    else:
+        m = Marshaller(encoding, allow_none)
+
+    data = m.dumps(params)
+
+    if encoding != "utf-8":
+        xmlheader = "<?xml version='1.0' encoding='%s'?>\n" % str(encoding)
+    else:
+        xmlheader = "<?xml version='1.0'?>\n" # utf-8 is default
+
+    # standard XML-RPC wrappings
+    if methodname:
+        # a method call
+        if not isinstance(methodname, StringType):
+            methodname = methodname.encode(encoding)
+        data = (
+            xmlheader,
+            "<methodCall>\n"
+            "<methodName>", methodname, "</methodName>\n",
+            data,
+            "</methodCall>\n"
+            )
+    elif methodresponse:
+        # a method response, or a fault structure
+        data = (
+            xmlheader,
+            "<methodResponse>\n",
+            data,
+            "</methodResponse>\n"
+            )
+    else:
+        return data # return as is
+    return string.join(data, "")
+
+##
+# Convert an XML-RPC packet to a Python object.  If the XML-RPC packet
+# represents a fault condition, this function raises a Fault exception.
+#
+# @param data An XML-RPC packet, given as an 8-bit string.
+# @return A tuple containing the unpacked data, and the method name
+#     (None if not present).
+# @see Fault
+
+def loads(data, use_datetime=0):
+    """data -> unmarshalled data, method name
+
+    Convert an XML-RPC packet to unmarshalled data plus a method
+    name (None if not present).
+
+    If the XML-RPC packet represents a fault condition, this function
+    raises a Fault exception.
+    """
+    p, u = getparser(use_datetime=use_datetime)
+    p.feed(data)
+    p.close()
+    return u.close(), u.getmethodname()
+
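A small round-trip sketch tying dumps() and loads() together (the method name is arbitrary):

    packet = dumps((1, "two", [3.0]), methodname="examples.echo")
    params, methodname = loads(packet)
    assert methodname == "examples.echo"
    assert params == (1, "two", [3.0])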
+
+# --------------------------------------------------------------------
+# request dispatcher
+
+class _Method:
+    # some magic to bind an XML-RPC method to an RPC server.
+    # supports "nested" methods (e.g. examples.getStateName)
+    def __init__(self, send, name):
+        self.__send = send
+        self.__name = name
+    def __getattr__(self, name):
+        return _Method(self.__send, "%s.%s" % (self.__name, name))
+    def __call__(self, *args):
+        return self.__send(self.__name, args)
+
+##
+# Standard transport class for XML-RPC over HTTP.
+# <p>
+# You can create custom transports by subclassing this class, and
+# overriding selected methods.
+
+class Transport:
+    """Handles an HTTP transaction to an XML-RPC server."""
+
+    # client identifier (may be overridden)
+    user_agent = "xmlrpclib.py/%s (by www.pythonware.com)" % __version__
+
+    def __init__(self, use_datetime=0):
+        self._use_datetime = use_datetime
+
+    ##
+    # Send a complete request, and parse the response.
+    #
+    # @param host Target host.
+    # @param handler Target RPC handler.
+    # @param request_body XML-RPC request body.
+    # @param verbose Debugging flag.
+    # @return Parsed response.
+
+    def request(self, host, handler, request_body, verbose=0):
+        # issue XML-RPC request
+
+        h = self.make_connection(host)
+        if verbose:
+            h.set_debuglevel(1)
+
+        self.send_request(h, handler, request_body)
+        self.send_host(h, host)
+        self.send_user_agent(h)
+        self.send_content(h, request_body)
+
+        errcode, errmsg, headers = h.getreply()
+
+        if errcode != 200:
+            raise ProtocolError(
+                host + handler,
+                errcode, errmsg,
+                headers
+                )
+
+        self.verbose = verbose
+
+        try:
+            sock = h._conn.sock
+        except AttributeError:
+            sock = None
+
+        return self._parse_response(h.getfile(), sock)
+
+    ##
+    # Create parser.
+    #
+    # @return A 2-tuple containing a parser and an unmarshaller.
+
+    def getparser(self):
+        # get parser and unmarshaller
+        return getparser(use_datetime=self._use_datetime)
+
+    ##
+    # Get authorization info from host parameter
+    # Host may be a string, or a (host, x509-dict) tuple; if a string,
+    # it is checked for a "user:pw@host" format, and a "Basic
+    # Authentication" header is added if appropriate.
+    #
+    # @param host Host descriptor (URL or (URL, x509 info) tuple).
+    # @return A 3-tuple containing (actual host, extra headers,
+    #     x509 info).  The header and x509 fields may be None.
+
+    def get_host_info(self, host):
+
+        x509 = {}
+        if isinstance(host, TupleType):
+            host, x509 = host
+
+        import urllib
+        auth, host = urllib.splituser(host)
+
+        if auth:
+            import base64
+            auth = base64.encodestring(urllib.unquote(auth))
+            auth = string.join(string.split(auth), "") # get rid of whitespace
+            extra_headers = [
+                ("Authorization", "Basic " + auth)
+                ]
+        else:
+            extra_headers = None
+
+        return host, extra_headers, x509
+
+    ##
+    # Connect to server.
+    #
+    # @param host Target host.
+    # @return A connection handle.
+
+    def make_connection(self, host):
+        # create a HTTP connection object from a host descriptor
+        import httplib
+        host, extra_headers, x509 = self.get_host_info(host)
+        return httplib.HTTP(host)
+
+    ##
+    # Send request header.
+    #
+    # @param connection Connection handle.
+    # @param handler Target RPC handler.
+    # @param request_body XML-RPC body.
+
+    def send_request(self, connection, handler, request_body):
+        connection.putrequest("POST", handler)
+
+    ##
+    # Send host name.
+    #
+    # @param connection Connection handle.
+    # @param host Host name.
+
+    def send_host(self, connection, host):
+        host, extra_headers, x509 = self.get_host_info(host)
+        connection.putheader("Host", host)
+        if extra_headers:
+            if isinstance(extra_headers, DictType):
+                extra_headers = extra_headers.items()
+            for key, value in extra_headers:
+                connection.putheader(key, value)
+
+    ##
+    # Send user-agent identifier.
+    #
+    # @param connection Connection handle.
+
+    def send_user_agent(self, connection):
+        connection.putheader("User-Agent", self.user_agent)
+
+    ##
+    # Send request body.
+    #
+    # @param connection Connection handle.
+    # @param request_body XML-RPC request body.
+
+    def send_content(self, connection, request_body):
+        connection.putheader("Content-Type", "text/xml")
+        connection.putheader("Content-Length", str(len(request_body)))
+        connection.endheaders()
+        if request_body:
+            connection.send(request_body)
+
+    ##
+    # Parse response.
+    #
+    # @param file Stream.
+    # @return Response tuple and target method.
+
+    def parse_response(self, file):
+        # compatibility interface
+        return self._parse_response(file, None)
+
+    ##
+    # Parse response (alternate interface).  This is similar to the
+    # parse_response method, but also provides direct access to the
+    # underlying socket object (where available).
+    #
+    # @param file Stream.
+    # @param sock Socket handle (or None, if the socket object
+    #    could not be accessed).
+    # @return Response tuple and target method.
+
+    def _parse_response(self, file, sock):
+        # read response from input file/socket, and parse it
+
+        p, u = self.getparser()
+
+        while 1:
+            if sock:
+                response = sock.recv(1024)
+            else:
+                response = file.read(1024)
+            if not response:
+                break
+            if self.verbose:
+                sys.stdout.write("body: %s\n" % repr(response))
+            p.feed(response)
+
+        file.close()
+        p.close()
+
+        return u.close()
+
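A sketch of the subclassing approach described above: only the overridden pieces change, everything else (Basic auth from "user:pw@host", content headers, response parsing) is inherited. The user-agent string and extra header name are made up for the example:

    class LoggingTransport(Transport):
        # changing the class attribute is enough to alter the client identifier
        user_agent = "my-xmlrpc-client/1.0"

        def send_content(self, connection, request_body):
            # add one extra header, then defer to the standard implementation
            connection.putheader("X-Example", "1")
            Transport.send_content(self, connection, request_body)

    # used via: ServerProxy("http://localhost:8000/RPC2", transport=LoggingTransport())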
+##
+# Standard transport class for XML-RPC over HTTPS.
+
+class SafeTransport(Transport):
+    """Handles an HTTPS transaction to an XML-RPC server."""
+
+    # FIXME: mostly untested
+
+    def make_connection(self, host):
+        # create a HTTPS connection object from a host descriptor
+        # host may be a string, or a (host, x509-dict) tuple
+        import httplib
+        host, extra_headers, x509 = self.get_host_info(host)
+        try:
+            HTTPS = httplib.HTTPS
+        except AttributeError:
+            raise NotImplementedError(
+                "your version of httplib doesn't support HTTPS"
+                )
+        else:
+            return HTTPS(host, None, **(x509 or {}))
+
+##
+# Standard server proxy.  This class establishes a virtual connection
+# to an XML-RPC server.
+# <p>
+# This class is available as ServerProxy and Server.  New code should
+# use ServerProxy, to avoid confusion.
+#
+# @def ServerProxy(uri, **options)
+# @param uri The connection point on the server.
+# @keyparam transport A transport factory, compatible with the
+#    standard transport class.
+# @keyparam encoding The default encoding used for 8-bit strings
+#    (default is UTF-8).
+# @keyparam verbose Use a true value to enable debugging output
+#    (printed to standard output).
+# @see Transport
+
+class ServerProxy:
+    """uri [,options] -> a logical connection to an XML-RPC server
+
+    uri is the connection point on the server, given as
+    scheme://host/target.
+
+    The standard implementation always supports the "http" scheme.  If
+    SSL socket support is available (Python 2.0), it also supports
+    "https".
+
+    If the target part and the slash preceding it are both omitted,
+    "/RPC2" is assumed.
+
+    The following options can be given as keyword arguments:
+
+        transport: a transport factory
+        encoding: the request encoding (default is UTF-8)
+
+    All 8-bit strings passed to the server proxy are assumed to use
+    the given encoding.
+    """
+
+    def __init__(self, uri, transport=None, encoding=None, verbose=0,
+                 allow_none=0, use_datetime=0):
+        # establish a "logical" server connection
+
+        # get the url
+        import urllib
+        type, uri = urllib.splittype(uri)
+        if type not in ("http", "https"):
+            raise IOError("unsupported XML-RPC protocol")
+        self.__host, self.__handler = urllib.splithost(uri)
+        if not self.__handler:
+            self.__handler = "/RPC2"
+
+        if transport is None:
+            if type == "https":
+                transport = SafeTransport(use_datetime=use_datetime)
+            else:
+                transport = Transport(use_datetime=use_datetime)
+        self.__transport = transport
+
+        self.__encoding = encoding
+        self.__verbose = verbose
+        self.__allow_none = allow_none
+
+    def __request(self, methodname, params):
+        # call a method on the remote server
+
+        request = dumps(params, methodname, encoding=self.__encoding,
+                        allow_none=self.__allow_none)
+
+        response = self.__transport.request(
+            self.__host,
+            self.__handler,
+            request,
+            verbose=self.__verbose
+            )
+
+        if len(response) == 1:
+            response = response[0]
+
+        return response
+
+    def __repr__(self):
+        return (
+            "<ServerProxy for %s%s>" % 
+            (self.__host, self.__handler)
+            )
+
+    __str__ = __repr__
+
+    def __getattr__(self, name):
+        # magic method dispatcher
+        return _Method(self.__request, name)
+
+    # note: to call a remote object with a non-standard name, use
+    # result = getattr(server, "strange-python-name")(args)
+
+# compatibility
+
+Server = ServerProxy
+
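For illustration, the keyword options accepted by ServerProxy (the URL is a placeholder; "user:pw@" triggers the Basic auth handling in Transport.get_host_info):

    proxy = ServerProxy("http://user:pw@localhost:8000/RPC2",
                        allow_none=1, use_datetime=1, verbose=0)
    result = proxy.math.pow(2, 10)   # sent as a methodCall for "math.pow"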
+# --------------------------------------------------------------------
+# test code
+
+if __name__ == "__main__":
+
+    # simple test program (from the XML-RPC specification)
+
+    # server = ServerProxy("http://localhost:8000") # local server
+    server = ServerProxy("http://time.xmlrpc.com/RPC2")
+
+    sys.stdout.write('%s\n' % server)
+
+    try:
+        sys.stdout.write('%s\n' % (server.currentTime.getCurrentTime(),))
+    except Error:
+        import traceback;traceback.print_exc()
+
+    multi = MultiCall(server)
+    multi.currentTime.getCurrentTime()
+    multi.currentTime.getCurrentTime()
+    try:
+        for response in multi():
+            sys.stdout.write('%s\n' % (response,))
+    except Error:
+        import traceback;traceback.print_exc()
diff --git a/python/helpers/pydev/_pydev_jy_imports_tipper.py b/python/helpers/pydev/_pydev_jy_imports_tipper.py
index 43e4d0b..1691e3e 100644
--- a/python/helpers/pydev/_pydev_jy_imports_tipper.py
+++ b/python/helpers/pydev/_pydev_jy_imports_tipper.py
@@ -1,7 +1,5 @@
 import StringIO
 import traceback
-from java.lang import StringBuffer #@UnresolvedImport
-from java.lang import String #@UnresolvedImport
 import java.lang #@UnresolvedImport
 import sys
 from _pydev_tipper_common import DoFind
@@ -14,12 +12,16 @@
     import __builtin__
     setattr(__builtin__, 'True', 1)
     setattr(__builtin__, 'False', 0)
-    
-    
+
+
 from org.python.core import PyReflectedFunction #@UnresolvedImport
 
 from org.python import core #@UnresolvedImport
-from org.python.core import PyClass #@UnresolvedImport
+
+try:
+    xrange
+except:
+    xrange = range
 
 
 #completion types.
@@ -48,11 +50,11 @@
             name = 'org.python.core.PyString'
         elif name == '__builtin__.dict':
             name = 'org.python.core.PyDictionary'
-            
+
     mod = _imp(name)
     parent = mod
     foundAs = ''
-    
+
     if hasattr(mod, '__file__'):
         f = mod.__file__
 
@@ -68,29 +70,29 @@
         except AttributeError:
             if old_comp != comp:
                 raise
-        
+
         if hasattr(mod, '__file__'):
             f = mod.__file__
         else:
             if len(foundAs) > 0:
                 foundAs = foundAs + '.'
             foundAs = foundAs + comp
-            
+
         old_comp = comp
-        
+
     return f, mod, parent, foundAs
 
 def formatParamClassName(paramClassName):
     if paramClassName.startswith('['):
         if paramClassName == '[C':
             paramClassName = 'char[]'
-        
+
         elif paramClassName == '[B':
             paramClassName = 'byte[]'
-        
+
         elif paramClassName == '[I':
             paramClassName = 'int[]'
-            
+
         elif paramClassName.startswith('[L') and paramClassName.endswith(';'):
             paramClassName = paramClassName[2:-1]
             paramClassName += '[]'
@@ -101,17 +103,17 @@
     data = data.replace('\n', '')
     if data.endswith('.'):
         data = data.rstrip('.')
-    
+
     f, mod, parent, foundAs = Find(data)
     tips = GenerateImportsTipForModule(mod)
     return f, tips
-    
+
 
 #=======================================================================================================================
 # Info
 #=======================================================================================================================
 class Info:
-    
+
     def __init__(self, name, **kwargs):
         self.name = name
         self.doc = kwargs.get('doc', None)
@@ -119,47 +121,47 @@
         self.varargs = kwargs.get('varargs', None) #string
         self.kwargs = kwargs.get('kwargs', None) #string
         self.ret = kwargs.get('ret', None) #string
-        
+
     def basicAsStr(self):
         '''@returns this class information as a string (just basic format)
         '''
-        
+
         s = 'function:%s args=%s, varargs=%s, kwargs=%s, docs:%s' % \
             (str(self.name), str(self.args), str(self.varargs), str(self.kwargs), str(self.doc))
         return s
-        
+
 
     def getAsDoc(self):
         s = str(self.name)
         if self.doc:
             s += '\n@doc %s\n' % str(self.doc)
-            
+
         if self.args:
             s += '\n@params '
             for arg in self.args:
                 s += str(formatParamClassName(arg))
                 s += '  '
-        
+
         if self.varargs:
             s += '\n@varargs '
             s += str(self.varargs)
-            
+
         if self.kwargs:
             s += '\n@kwargs '
             s += str(self.kwargs)
-            
+
         if self.ret:
             s += '\n@return '
             s += str(formatParamClassName(str(self.ret)))
-            
+
         return str(s)
-        
+
 def isclass(cls):
     return isinstance(cls, core.PyClass)
 
 def ismethod(func):
     '''this function should return the information gathered on a function
-    
+
     @param func: this is the function we want to get info on
     @return a tuple where:
         0 = indicates whether the parameter passed is a method or not
@@ -167,24 +169,24 @@
             this is a list because when we have methods from java with the same name and different signatures,
             we actually have many methods, each with its own set of arguments
     '''
-    
+
     try:
         if isinstance(func, core.PyFunction):
             #ok, this is from python, created by jython
             #print_ '    PyFunction'
-            
+
             def getargs(func_code):
                 """Get information about the arguments accepted by a code object.
-            
+
                 Three things are returned: (args, varargs, varkw), where 'args' is
                 a list of argument names (possibly containing nested lists), and
                 'varargs' and 'varkw' are the names of the * and ** arguments or None."""
-            
+
                 nargs = func_code.co_argcount
                 names = func_code.co_varnames
                 args = list(names[:nargs])
                 step = 0
-            
+
                 varargs = None
                 if func_code.co_flags & func_code.CO_VARARGS:
                     varargs = func_code.co_varnames[nargs]
@@ -193,35 +195,35 @@
                 if func_code.co_flags & func_code.CO_VARKEYWORDS:
                     varkw = func_code.co_varnames[nargs]
                 return args, varargs, varkw
-            
+
             args = getargs(func.func_code)
             return 1, [Info(func.func_name, args=args[0], varargs=args[1], kwargs=args[2], doc=func.func_doc)]
-            
+
         if isinstance(func, core.PyMethod):
             #this is something from java itself, and jython just wrapped it...
-            
+
             #things to play in func:
             #['__call__', '__class__', '__cmp__', '__delattr__', '__dir__', '__doc__', '__findattr__', '__name__', '_doget', 'im_class',
             #'im_func', 'im_self', 'toString']
             #print_ '    PyMethod'
             #that's the PyReflectedFunction... keep going to get it
             func = func.im_func
-    
+
         if isinstance(func, PyReflectedFunction):
             #this is something from java itself, and jython just wrapped it...
-            
+
             #print_ '    PyReflectedFunction'
-            
+
             infos = []
-            for i in range(len(func.argslist)):
+            for i in xrange(len(func.argslist)):
                 #things to play in func.argslist[i]:
-                    
+
                 #'PyArgsCall', 'PyArgsKeywordsCall', 'REPLACE', 'StandardCall', 'args', 'compare', 'compareTo', 'data', 'declaringClass'
                 #'flags', 'isStatic', 'matches', 'precedence']
-                
+
                 #print_ '        ', func.argslist[i].data.__class__
                 #func.argslist[i].data.__class__ == java.lang.reflect.Method
-                
+
                 if func.argslist[i]:
                     met = func.argslist[i].data
                     name = met.getName()
@@ -230,9 +232,9 @@
                     except AttributeError:
                         ret = ''
                     parameterTypes = met.getParameterTypes()
-                    
+
                     args = []
-                    for j in range(len(parameterTypes)):
+                    for j in xrange(len(parameterTypes)):
                         paramTypesClass = parameterTypes[j]
                         try:
                             try:
@@ -246,7 +248,7 @@
                             except:
                                 paramClassName = repr(paramTypesClass) #just in case something else happens... it will at least be visible
                         #if the parameter equals [C, it means it it a char array, so, let's change it
-    
+
                         a = formatParamClassName(paramClassName)
                         #a = a.replace('[]','Array')
                         #a = a.replace('Object', 'obj')
@@ -255,18 +257,18 @@
                         #a = a.replace('Char', 'c')
                         #a = a.replace('Double', 'd')
                         args.append(a) #so we don't leave invalid code
-    
-                    
+
+
                     info = Info(name, args=args, ret=ret)
                     #print_ info.basicAsStr()
                     infos.append(info)
-    
+
             return 1, infos
-    except Exception, e:
+    except Exception:
         s = StringIO.StringIO()
         traceback.print_exc(file=s)
         return 1, [Info(str('ERROR'), doc=s.getvalue())]
-        
+
     return 0, None
 
 def ismodule(mod):
@@ -274,7 +276,7 @@
     if not hasattr(mod, 'getClass') and not hasattr(mod, '__class__') \
        and hasattr(mod, '__name__'):
             return 1
-           
+
     return isinstance(mod, core.PyModule)
 
 
@@ -293,11 +295,11 @@
             except TypeError:
                 #may happen on jython when getting the java.lang.Class class
                 c = obj.getSuperclass(obj)
-                
+
             while c != None:
                 classes.append(c)
                 c = c.getSuperclass()
-            
+
             #get info about interfaces
             interfs = []
             for obj in classes:
@@ -306,57 +308,57 @@
                 except TypeError:
                     interfs.extend(obj.getInterfaces(obj))
             classes.extend(interfs)
-                
+
             #now is the time when we actually get info on the declared methods and fields
             for obj in classes:
                 try:
                     declaredMethods = obj.getDeclaredMethods()
                 except TypeError:
                     declaredMethods = obj.getDeclaredMethods(obj)
-                    
+
                 try:
                     declaredFields = obj.getDeclaredFields()
                 except TypeError:
                     declaredFields = obj.getDeclaredFields(obj)
-                    
-                for i in range(len(declaredMethods)):
+
+                for i in xrange(len(declaredMethods)):
                     name = declaredMethods[i].getName()
                     ret.append(name)
                     found.put(name, 1)
-                    
-                for i in range(len(declaredFields)):
+
+                for i in xrange(len(declaredFields)):
                     name = declaredFields[i].getName()
                     ret.append(name)
                     found.put(name, 1)
-                    
-                    
-        elif isclass(obj.__class__): 
+
+
+        elif isclass(obj.__class__):
             d = dir(obj.__class__)
             for name in d:
                 ret.append(name)
                 found.put(name, 1)
-            
+
 
     #this simple dir does not always get all the info, that's why we have the part before
-    #(e.g.: if we do a dir on String, some methods that are from other interfaces such as 
+    #(e.g.: if we do a dir on String, some methods that are from other interfaces such as
     #charAt don't appear)
     d = dir(original)
     for name in d:
         if found.get(name) != 1:
             ret.append(name)
-            
+
     return ret
 
 
 def formatArg(arg):
     '''formats an argument to be shown
     '''
-    
+
     s = str(arg)
     dot = s.rfind('.')
     if dot >= 0:
         s = s[dot + 1:]
-    
+
     s = s.replace(';', '')
     s = s.replace('[]', 'Array')
     if len(s) > 0:
@@ -364,13 +366,13 @@
         s = c + s[1:]
 
     return s
-    
-    
-    
+
+
+
 def Search(data):
     '''@return file, line, col
     '''
-    
+
     data = data.replace('\n', '')
     if data.endswith('.'):
         data = data.rstrip('.')
@@ -379,8 +381,8 @@
         return DoFind(f, mod), foundAs
     except:
         return DoFind(f, parent), foundAs
-    
-    
+
+
 def GenerateImportsTipForModule(obj_to_complete, dirComps=None, getattr=getattr, filter=lambda name:True):
     '''
         @param obj_to_complete: the object from where we should get the completions
@@ -391,18 +393,18 @@
             name, doc, args, type (from the TYPE_* constants)
     '''
     ret = []
-    
+
     if dirComps is None:
         dirComps = dirObj(obj_to_complete)
-    
+
     for d in dirComps:
 
         if d is None:
             continue
-            
+
         if not filter(d):
             continue
-            
+
         args = ''
         doc = ''
         retType = TYPE_BUILTIN
@@ -421,7 +423,7 @@
             #note: this only happens when we add things to the sys.path at runtime, if they are added to the classpath
             #before the run, everything goes fine.
             #
-            #The code below ilustrates what I mean... 
+            #The code below illustrates what I mean...
             #
             #import sys
             #sys.path.insert(1, r"C:\bin\eclipse310\plugins\org.junit_3.8.1\junit.jar" )
@@ -429,7 +431,7 @@
             #import junit.framework
             #print_ dir(junit.framework) #shows the TestCase class here
             #
-            #import junit.framework.TestCase 
+            #import junit.framework.TestCase
             #
             #raises the error:
             #Traceback (innermost last):
@@ -458,19 +460,19 @@
                 except TypeError:
                     traceback.print_exc()
                     args = '()'
-    
+
                 retType = TYPE_FUNCTION
-                
+
             elif isclass(obj):
                 retType = TYPE_CLASS
-                
+
             elif ismodule(obj):
                 retType = TYPE_IMPORT
-        
+
         #add token and doc to return - assure only strings.
         ret.append((d, doc, args, retType))
-        
-            
+
+
     return ret
 
 
diff --git a/python/helpers/pydev/_pydev_thread.py b/python/helpers/pydev/_pydev_thread.py
deleted file mode 100644
index 3971c79..0000000
--- a/python/helpers/pydev/_pydev_thread.py
+++ /dev/null
@@ -1 +0,0 @@
-from thread import *
\ No newline at end of file
diff --git a/python/helpers/pydev/_pydev_threading.py b/python/helpers/pydev/_pydev_threading.py
index 52d48c9..d7bfadf 100644
--- a/python/helpers/pydev/_pydev_threading.py
+++ b/python/helpers/pydev/_pydev_threading.py
@@ -2,14 +2,10 @@
 
 import sys as _sys
 
-try:
-    import _pydev_thread as thread
-except ImportError:
-    import thread
-
+from _pydev_imps import _pydev_thread as thread
 import warnings
 
-from _pydev_time import time as _time, sleep as _sleep
+from _pydev_imps._pydev_time import time as _time, sleep as _sleep
 from traceback import format_exc as _format_exc
 
 # Note regarding PEP 8 compliant aliases
@@ -854,7 +850,7 @@
 # module, or from the python fallback
 
 try:
-    from _pydev_thread import _local as local
+    from _pydev_imps._pydev_thread import _local as local
 except ImportError:
     from _threading_local import local
 
diff --git a/python/helpers/pydev/_pydev_tipper_common.py b/python/helpers/pydev/_pydev_tipper_common.py
index f8c46d2..8e6267f 100644
--- a/python/helpers/pydev/_pydev_tipper_common.py
+++ b/python/helpers/pydev/_pydev_tipper_common.py
@@ -2,7 +2,7 @@
     import inspect
 except:
     try:
-        import _pydev_inspect as inspect # for older versions
+        from _pydev_imps import _pydev_inspect as inspect
     except:
         import traceback;traceback.print_exc() #Ok, no inspect available (search will not work)
 
@@ -10,57 +10,58 @@
     import re
 except:
     try:
-        import _pydev_re as re # for older versions @UnresolvedImport
+        import sre as re  # for older versions
     except:
         import traceback;traceback.print_exc() #Ok, no inspect available (search will not work)
 
 
+from pydevd_constants import xrange
 
 def DoFind(f, mod):
     import linecache
     if inspect.ismodule(mod):
         return f, 0, 0
-    
+
     lines = linecache.getlines(f)
-    
+
     if inspect.isclass(mod):
         name = mod.__name__
         pat = re.compile(r'^\s*class\s*' + name + r'\b')
-        for i in range(len(lines)):
-            if pat.match(lines[i]): 
+        for i in xrange(len(lines)):
+            if pat.match(lines[i]):
                 return f, i, 0
-            
+
         return f, 0, 0
 
     if inspect.ismethod(mod):
         mod = mod.im_func
-        
+
     if inspect.isfunction(mod):
         try:
             mod = mod.func_code
         except AttributeError:
             mod = mod.__code__ #python 3k
-            
+
     if inspect.istraceback(mod):
         mod = mod.tb_frame
-        
+
     if inspect.isframe(mod):
         mod = mod.f_code
 
     if inspect.iscode(mod):
         if not hasattr(mod, 'co_filename'):
             return None, 0, 0
-        
+
         if not hasattr(mod, 'co_firstlineno'):
             return mod.co_filename, 0, 0
-        
+
         lnum = mod.co_firstlineno
         pat = re.compile(r'^(\s*def\s)|(.*(?<!\w)lambda(:|\s))|^(\s*@)')
         while lnum > 0:
-            if pat.match(lines[lnum]): 
+            if pat.match(lines[lnum]):
                 break
             lnum -= 1
-            
+
         return f, lnum, 0
 
     raise RuntimeError('Do not know about: ' + f + ' ' + str(mod))
diff --git a/python/helpers/pydev/_pydev_xmlrpc_hook.py b/python/helpers/pydev/_pydev_xmlrpc_hook.py
deleted file mode 100644
index 22d445a..0000000
--- a/python/helpers/pydev/_pydev_xmlrpc_hook.py
+++ /dev/null
@@ -1,74 +0,0 @@
-from pydev_imports import SimpleXMLRPCServer
-from pydev_ipython.inputhook import get_inputhook, set_return_control_callback
-import select
-import sys
-
-select_fn = select.select
-if sys.platform.startswith('java'):
-    select_fn = select.cpython_compatible_select
-
-class InputHookedXMLRPCServer(SimpleXMLRPCServer):
-    ''' An XML-RPC Server that can run hooks while polling for new requests.
-
-        This code was designed to work with IPython's inputhook methods and
-        to allow Debug framework to have a place to run commands during idle
-        too.
-    '''
-    def __init__(self, *args, **kwargs):
-        SimpleXMLRPCServer.__init__(self, *args, **kwargs)
-        # Tell the inputhook mechanisms when control should be returned
-        set_return_control_callback(self.return_control)
-        self.debug_hook = None
-        self.return_control_osc = False
-
-    def return_control(self):
-        ''' A function that the inputhooks can call (via inputhook.stdin_ready()) to find 
-            out if they should cede control and return '''
-        if self.debug_hook:
-            # Some of the input hooks check return control without doing
-            # a single operation, so we don't return True on every
-            # call when the debug hook is in place to allow the GUI to run
-            # XXX: Eventually the inputhook code will have diverged enough
-            # from the IPython source that it will be worthwhile rewriting
-            # it rather than pretending to maintain the old API
-            self.return_control_osc = not self.return_control_osc
-            if self.return_control_osc:
-                return True
-        r, unused_w, unused_e = select_fn([self], [], [], 0)
-        return bool(r)
-
-    def setDebugHook(self, debug_hook):
-        self.debug_hook = debug_hook
-
-    def serve_forever(self):
-        ''' Serve forever, running defined hooks regularly and when idle.
-            Does not support shutdown '''
-        inputhook = get_inputhook()
-        while True:
-            # Block for default 1/2 second when no GUI is in progress
-            timeout = 0.5
-            if self.debug_hook:
-                self.debug_hook()
-                timeout = 0.1
-            if inputhook:
-                try:
-                    inputhook()
-                    # The GUI has given us an opportunity to try receiving, normally
-                    # this happens because the input hook has already polled the
-                    # server has knows something is waiting
-                    timeout = 0.020
-                except:
-                    inputhook = None
-            r, unused_w, unused_e = select_fn([self], [], [], timeout)
-            if self in r:
-                try:
-                    self._handle_request_noblock()
-                except AttributeError:
-                    # Older libraries do not support _handle_request_noblock, so fall
-                    # back to the handle_request version
-                    self.handle_request()
-                # Running the request may have changed the inputhook in use
-                inputhook = get_inputhook()
-
-    def shutdown(self):
-        raise NotImplementedError('InputHookedXMLRPCServer does not support shutdown')
diff --git a/python/helpers/pydev/_pydevd_re.py b/python/helpers/pydev/_pydevd_re.py
deleted file mode 100644
index cd00672..0000000
--- a/python/helpers/pydev/_pydevd_re.py
+++ /dev/null
@@ -1,11 +0,0 @@
-
-__all__ = [ "match", "search", "sub", "subn", "split", "findall",
-    "compile", "purge", "template", "escape", "I", "L", "M", "S", "X",
-    "U", "IGNORECASE", "LOCALE", "MULTILINE", "DOTALL", "VERBOSE",
-    "UNICODE", "error" ]
-
-import sre, sys
-module = sys.modules['re']
-for name in __all__:
-    setattr(module, name, getattr(sre, name))
-
diff --git a/python/helpers/pydev/django_debug.py b/python/helpers/pydev/django_debug.py
index 37ee042..417ff01 100644
--- a/python/helpers/pydev/django_debug.py
+++ b/python/helpers/pydev/django_debug.py
@@ -1,28 +1,19 @@
 import inspect
-from django_frame import DjangoTemplateFrame, get_template_file_name, get_template_line
+from django_frame import DjangoTemplateFrame
 from pydevd_comm import CMD_SET_BREAK
-from pydevd_constants import DJANGO_SUSPEND, GetThreadId
-from pydevd_file_utils import NormFileToServer
-from runfiles import DictContains
+from pydevd_constants import DJANGO_SUSPEND, GetThreadId, DictContains
 from pydevd_breakpoints import LineBreakpoint
 import pydevd_vars
 import traceback
 
 class DjangoLineBreakpoint(LineBreakpoint):
-    def __init__(self, type, file, line, flag, condition, func_name, expression):
+
+    def __init__(self, file, line, condition, func_name, expression):
         self.file = file
-        self.line = line
-        LineBreakpoint.__init__(self, type, flag, condition, func_name, expression)
+        LineBreakpoint.__init__(self, line, condition, func_name, expression)
 
-    def __eq__(self, other):
-        if not isinstance(other, DjangoLineBreakpoint):
-            return False
-        return self.file == other.file and self.line == other.line
-
-    def is_triggered(self, frame):
-        file = get_template_file_name(frame)
-        line = get_template_line(frame)
-        return self.file == file and self.line == line
+    def is_triggered(self, template_frame_file, template_frame_line):
+        return self.file == template_frame_file and self.line == template_frame_line
 
     def __str__(self):
         return "DjangoLineBreakpoint: %s-%d" %(self.file, self.line)
diff --git a/python/helpers/pydev/django_frame.py b/python/helpers/pydev/django_frame.py
index 762df2d..4181572 100644
--- a/python/helpers/pydev/django_frame.py
+++ b/python/helpers/pydev/django_frame.py
@@ -1,11 +1,14 @@
 from pydevd_file_utils import GetFileNameAndBaseFromFile
 import pydev_log
 import traceback
+from pydevd_constants import DictContains
 
 def read_file(filename):
     f = open(filename, "r")
-    s = f.read()
-    f.close()
+    try:
+        s = f.read()
+    finally:
+        f.close()
     return s
 
 
@@ -34,7 +37,9 @@
         if hasattr(node, 'source'):
             return node.source
         else:
-            pydev_log.error_once("WARNING: Template path is not available. Please set TEMPLATE_DEBUG=True in your settings.py to make django template breakpoints working")
+            pydev_log.error_once(
+                "WARNING: Template path is not available. Please set TEMPLATE_DEBUG=True "
+                "in your settings.py to make django template breakpoints working")
             return None
 
     except:
@@ -61,41 +66,50 @@
         return None
 
 
-def get_template_line(frame):
+def get_template_line(frame, template_frame_file):
     source = get_source(frame)
-    file_name = get_template_file_name(frame)
     try:
-        return offset_to_line_number(read_file(file_name), source[1][0])
+        return offset_to_line_number(read_file(template_frame_file), source[1][0])
     except:
         return None
 
 
 class DjangoTemplateFrame:
-    def __init__(self, frame):
-        file_name = get_template_file_name(frame)
+    def __init__(
+        self,
+        frame,
+        template_frame_file=None,
+        template_frame_line=None):
+
+        if template_frame_file is None:
+            template_frame_file = get_template_file_name(frame)
+
         self.back_context = frame.f_locals['context']
-        self.f_code = FCode('Django Template', file_name)
-        self.f_lineno = get_template_line(frame)
+        self.f_code = FCode('Django Template', template_frame_file)
+
+        if template_frame_line is None:
+            template_frame_line = get_template_line(frame, template_frame_file)
+        self.f_lineno = template_frame_line
+
         self.f_back = frame
         self.f_globals = {}
-        self.f_locals = self.collect_context(self.back_context)
+        self.f_locals = self.collect_context()
         self.f_trace = None
 
-    def collect_context(self, context):
+    def collect_context(self):
         res = {}
         try:
-            for d in context.dicts:
-                for k, v in d.items():
-                    res[k] = v
-        except  AttributeError:
+            for d in self.back_context.dicts:
+                res.update(d)
+        except AttributeError:
             pass
         return res
 
     def changeVariable(self, name, value):
         for d in self.back_context.dicts:
-            for k, v in d.items():
-                if k == name:
-                    d[k] = value
+            if DictContains(d, name):
+                d[name] = value
+        self.f_locals[name] = value
 
 
 class FCode:
@@ -106,10 +120,9 @@
 
 def is_django_exception_break_context(frame):
     try:
-        name = frame.f_code.co_name
+        return frame.f_code.co_name in ['_resolve_lookup', 'find_template']
     except:
-        name = None
-    return name in ['_resolve_lookup', 'find_template']
+        return False
 
 
 def just_raised(trace):
diff --git a/python/helpers/pydev/merge_pydev_pycharm.txt b/python/helpers/pydev/merge_pydev_pycharm.txt
new file mode 100644
index 0000000..1cbd356
--- /dev/null
+++ b/python/helpers/pydev/merge_pydev_pycharm.txt
@@ -0,0 +1,138 @@
+Done in the merge (started from the PyCharm version and bringing in things from PyDev):
+
+- Added modules which were unused in PyCharm but are used in PyDev.
+
+- execfile was upgraded to the PyDev version (it had errors with BOM in utf-8).
+
+- pydevd_file_utils: automatically doing normcase.
+
+- pydevd: multiprocessing supporting 2 approaches (use new connection/use same connection).
+
+- pydev_monkey: fixes from PyDev to properly deal with windows command lines.
+
+- Private variables (name-mangled) are now evaluated (so they can be hovered).
+
+- Exceptions raised from lines with a #@IgnoreException are ignored.
+
+- Fixed exception changing variable in django debugging.
+
+- Made debugging with Django breakpoints a bit faster.
+
+- Exceptions separated by caught/uncaught, so it's no longer necessary to check
+    an additional attribute for that.
+
+- When an exception is thrown while evaluating a breakpoint condition, the debugger will stop
+    (can be configured in main_debugger.suspend_on_breakpoint_exception).
+
+- #@DontTrace comments can be used on methods so that they are ignored when stepping
+    in (needs UI for CMD_ENABLE_DONT_TRACE).
+
+- Code which stops tracing inside python properties integrated (CMD_SET_PROPERTY_TRACE).
+
+- Find Referrers integrated.
+
+- Using same facade for IPython integration.
+
+- When the code is interrupted, the buffer in the python side is cleared.
+
+- GEvent debugging: for remote debugging, one has to import pydevd before doing the gevent patching -- even if
+    pydevd.settrace will only be done later.
+
+    Also, the gevent debugging should probably be closer to the stackless debugging,
+    where we actually show the live stackless threads -- so, we should show the live
+    gevent greenlets -- which the current version doesn't do.
+
+
+Things to be fixed in PyCharm:
+--------------------------------
+
+1. CMD_VERSION should now indicate that the client wants breakpoints by ID (and
+    when setting a breakpoint it should pass a breakpoint id, which is later
+    used to remove that breakpoint).
+
+2. Setting breakpoint: the func_name is not being properly passed from PyCharm
+    (and as such, PyCharm debugging is slower than it should be).
+
+    Note that it works passing 'None', but the func_name should be given when possible.
+
+    I.e.:
+
+    class MyClass(object):
+        def __init__(self):
+            print('here') # Break here: '__init__' as func_name
+            print(a)
+
+        def bar(self):
+            print('bar') # Break here: 'bar' as func_name
+
+3. Note (may not need to change anything):
+    Removed support for removing a breakpoint without knowing its type (i.e.:
+    remove breakpoint used to try to remove a breakpoint even if the type was
+    neither python-line nor django-line; now it'll give an exception).
+
+4. break_on_exceptions_thrown_in_same_context / ignore_exceptions_thrown_in_lines_with_ignore_exception
+
+    These are currently set in the bulk operation to add exceptions, but
+    it does make sense to create a separate command for that (but it's only
+    worth doing when/if PyCharm gets a UI to add it).
+
+5. UI to ignore exception from additional places (not only from #@IgnoreException code-comments)
+    i.e.: UI for CMD_IGNORE_THROWN_EXCEPTION_AT.
+
+6. UI to show the current exception (i.e.: deal with CMD_SEND_CURR_EXCEPTION_TRACE and
+    CMD_SEND_CURR_EXCEPTION_TRACE_PROCEEDED in the client side).
+
+7. When an exception is detected on a breakpoint condition evaluation, we'll send
+    a CMD_GET_BREAKPOINT_EXCEPTION (which should be handled by PyCharm to show some
+    UI notification).
+
+8. The CMD_ENABLE_DONT_TRACE must be sent from the UI to skip methods which have
+    a #@DontTrace comment above them.
+
+9. The CMD_SET_PROPERTY_TRACE must be sent from the UI to skip setter/getter/deleter
+    python properties.
+
+10. Integrate find referrers UI in PyCharm. In the backend it uses a CMD_RUN_CUSTOM_OPERATION with:
+    from pydevd_referrers import get_referrer_info
+    get_referrer_info
+
+11. CMD_RELOAD_CODE has to be integrated (when a file changes it should be issued
+    for 'hot' auto-reload of the code -- note that it's not needed if the
+    user already has some sort of auto-reload builtin -- i.e.: django without the noreload option).
+
+12. Console Completions: See: pydev_ipython_console_011.PyDevFrontEnd.getCompletions
+    Now we're completing as they come from the IPython backend (i.e.: not stripping %
+    for magic commands).
+
+13. In PyDev, interrupt can be used to clear the current buffer (whereas in PyCharm it's only
+    possible to activate it to stop the execution of a command) -- note that this is only a
+    client-side limitation.
+
+14. Console GUI event loop can have some UI integrated.
+    Note that the user can enable it manually (i.e.: by writing something like "%gui qt"
+    the qt backend is integrated, but it's possible to call 'enableGui' with the
+    backend to use from PyCharm too -- in PyDev this is an option with the possible backends).
+
+
+Things to be fixed in PyDev:
+--------------------------------
+
+. Provide UI for 'smart step into' (later)
+
+. Check what to do with 'message' from xml (later)
+
+. Deal with sendSignatureCallTrace (later)
+
+. Set IPYTHONENABLE to False/True to use IPython console (later)
+
+
+Manual test:
+---------------
+
+* Support for IPython GUI event loop in console
+* Django template debugging
+* Gevent debugging
+* Smart step into
+* Collection of type information of arguments in debug mode
+* Ability to stop tracing
+* Ability to run debugger and console on remote interpreter
diff --git a/python/helpers/pydev/pycompletion.py b/python/helpers/pydev/pycompletion.py
index e706d54..3369780 100644
--- a/python/helpers/pydev/pycompletion.py
+++ b/python/helpers/pydev/pycompletion.py
@@ -2,12 +2,10 @@
 '''
 @author Radim Kubacki
 '''
-import __builtin__
 import _pydev_imports_tipper
 import traceback
 import StringIO
 import sys
-import time
 import urllib
 import pycompletionserver
 
@@ -24,7 +22,7 @@
     except:
         s = StringIO.StringIO()
         exc_info = sys.exc_info()
-    
+
         traceback.print_exception(exc_info[0], exc_info[1], exc_info[2], limit=None, file=s)
         err = s.getvalue()
         pycompletionserver.dbg('Received error: ' + str(err), pycompletionserver.ERROR)
@@ -38,4 +36,4 @@
     mod_name = sys.argv[1]
 
     print(GetImports(mod_name))
-           
+
diff --git a/python/helpers/pydev/pycompletionserver.py b/python/helpers/pydev/pycompletionserver.py
index 2fdd539..0b11cb6 100644
--- a/python/helpers/pydev/pycompletionserver.py
+++ b/python/helpers/pydev/pycompletionserver.py
@@ -37,10 +37,7 @@
     import _pydev_imports_tipper
 
 
-if pydevd_constants.USE_LIB_COPY:
-    import _pydev_socket as socket
-else:
-    import socket
+from _pydev_imps import _pydev_socket as socket
 
 import sys
 if sys.platform == "darwin":
@@ -65,10 +62,7 @@
 
 import traceback
 
-if pydevd_constants.USE_LIB_COPY:
-    import _pydev_time as time
-else:
-    import time
+from _pydev_imps import _pydev_time as time
 
 try:
     import StringIO
@@ -93,7 +87,7 @@
 #        f = open('c:/temp/test.txt', 'a')
 #        print_ >> f, s
 #        f.close()
-   
+
 import pydev_localhost
 HOST = pydev_localhost.get_localhost() # Symbolic name meaning the local host
 
@@ -203,7 +197,7 @@
 
 
     def connectToServer(self):
-        import socket
+        from _pydev_imps import _pydev_socket as socket
 
         self.socket = s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
         try:
diff --git a/python/helpers/pydev/pydev_console_utils.py b/python/helpers/pydev/pydev_console_utils.py
index 571ae87..bd7b7de 100644
--- a/python/helpers/pydev/pydev_console_utils.py
+++ b/python/helpers/pydev/pydev_console_utils.py
@@ -1,36 +1,12 @@
-from pydev_imports import xmlrpclib
+from pydev_imports import xmlrpclib, _queue, Exec
 import sys
-
-import traceback
-
 from pydevd_constants import USE_LIB_COPY
 from pydevd_constants import IS_JYTHON
-
-try:
-    if USE_LIB_COPY:
-        import _pydev_Queue as _queue
-    else:
-        import Queue as _queue
-except:
-    import queue as _queue
-
-try:
-    from pydevd_exec import Exec
-except:
-    from pydevd_exec2 import Exec
-
-try:
-    if USE_LIB_COPY:
-        import _pydev_thread as thread
-    else:
-        import thread
-except:
-    import _thread as thread
-
+from _pydev_imps import _pydev_thread as thread
 import pydevd_xml
 import pydevd_vars
-
-from pydevd_utils import *
+from pydevd_utils import *  # @UnusedWildImport
+import traceback
 
 #=======================================================================================================================
 # Null
@@ -137,7 +113,7 @@
     def __init__(self, text, is_single_line=True):
         self.text = text
         self.is_single_line = is_single_line
-        
+
     def append(self, code_fragment):
         self.text = self.text + "\n" + code_fragment.text
         if not code_fragment.is_single_line:
@@ -173,9 +149,12 @@
             self.buffer = code_fragment
         else:
             self.buffer.append(code_fragment)
-        
+
         return self.needMoreForCode(self.buffer.text)
 
+    def createStdIn(self):
+        return StdIn(self, self.host, self.client_port)
+
     def addExec(self, code_fragment):
         original_in = sys.stdin
         try:
@@ -194,7 +173,7 @@
 
         more = False
         try:
-            sys.stdin = StdIn(self, self.host, self.client_port)
+            sys.stdin = self.createStdIn()
             try:
                 if help is not None:
                     #This will enable the help() function to work.
@@ -209,8 +188,6 @@
                             self._input_error_printed = True
                             sys.stderr.write('\nError when trying to update pydoc.help.input\n')
                             sys.stderr.write('(help() may not work -- please report this as a bug in the pydev bugtracker).\n\n')
-                            import traceback
-
                             traceback.print_exc()
 
                 try:
@@ -241,8 +218,6 @@
         except SystemExit:
             raise
         except:
-            import traceback;
-
             traceback.print_exc()
 
         return more
@@ -251,7 +226,7 @@
     def doAddExec(self, codeFragment):
         '''
         Subclasses should override.
-        
+
         @return: more (True if more input is needed to complete the statement and False if the statement is complete).
         '''
         raise NotImplementedError()
@@ -260,7 +235,7 @@
     def getNamespace(self):
         '''
         Subclasses should override.
-        
+
         @return: dict with namespace.
         '''
         raise NotImplementedError()
@@ -312,14 +287,14 @@
                     pass
 
             try:
-                #if no attempt succeeded, try to return repr()... 
+                #if no attempt succeeded, try to return repr()...
                 return repr(obj)
             except:
                 try:
-                    #otherwise the class 
+                    #otherwise the class
                     return str(obj.__class__)
                 except:
-                    #if all fails, go to an empty string 
+                    #if all fails, go to an empty string
                     return ''
         except:
             traceback.print_exc()
@@ -353,6 +328,7 @@
 
 
     def interrupt(self):
+        self.buffer = None # Also clear the buffer when it's interrupted.
         try:
             if self.interruptable:
                 if hasattr(thread, 'interrupt_main'): #Jython doesn't have it
@@ -371,11 +347,13 @@
         self.interruptable = True
 
     def get_server(self):
-        if self.host is not None:
+        if getattr(self, 'host', None) is not None:
             return xmlrpclib.Server('http://%s:%s' % (self.host, self.client_port))
         else:
             return None
 
+    server = property(get_server)
+
     def finishExec(self, more):
         self.interruptable = False
 
@@ -409,7 +387,12 @@
         return xml
 
     def changeVariable(self, attr, value):
-        Exec('%s=%s' % (attr, value), self.getNamespace(), self.getNamespace())
+        def do_change_variable():
+            Exec('%s=%s' % (attr, value), self.getNamespace(), self.getNamespace())
+
+        # Important: it has to be really executed in the main thread, so, schedule
+        # it to run in the main thread.
+        self.exec_queue.put(do_change_variable)
 
     def _findFrame(self, thread_id, frame_id):
         '''
@@ -431,39 +414,48 @@
         Used to show console with variables connection.
         Mainly, monkey-patches things in the debugger structure so that the debugger protocol works.
         '''
-        try:
-            # Try to import the packages needed to attach the debugger
-            import pydevd
-            import pydevd_vars
-            import threading
-        except:
-            # This happens on Jython embedded in host eclipse
-            import traceback;traceback.print_exc()
-            return ('pydevd is not available, cannot connect',)
+        def do_connect_to_debugger():
+            try:
+                # Try to import the packages needed to attach the debugger
+                import pydevd
+                if USE_LIB_COPY:
+                    import _pydev_threading as threading
+                else:
+                    import threading
 
-        import pydev_localhost
-        threading.currentThread().__pydevd_id__ = "console_main"
+            except:
+                # This happens on Jython embedded in host eclipse
+                traceback.print_exc()
+                sys.stderr.write('pydevd is not available, cannot connect\n',)
 
-        self.orig_findFrame = pydevd_vars.findFrame
-        pydevd_vars.findFrame = self._findFrame
+            import pydev_localhost
+            threading.currentThread().__pydevd_id__ = "console_main"
 
-        self.debugger = pydevd.PyDB()
-        try:
-            self.debugger.connect(pydev_localhost.get_localhost(), debuggerPort)
-            self.debugger.prepareToRun()
-            import pydevd_tracing
-            pydevd_tracing.SetTrace(None)
-        except:
-            import traceback;traceback.print_exc()
-            return ('Failed to connect to target debugger.')
+            self.orig_findFrame = pydevd_vars.findFrame
+            pydevd_vars.findFrame = self._findFrame
 
-        # Register to process commands when idle
-        self.debugrunning = False
-        try:
-            self.server.setDebugHook(self.debugger.processInternalCommands)
-        except:
-            import traceback;traceback.print_exc()
-            return ('Version of Python does not support debuggable Interactive Console.')
+            self.debugger = pydevd.PyDB()
+            try:
+                self.debugger.connect(pydev_localhost.get_localhost(), debuggerPort)
+                self.debugger.prepareToRun()
+                import pydevd_tracing
+                pydevd_tracing.SetTrace(None)
+            except:
+                traceback.print_exc()
+                sys.stderr.write('Failed to connect to target debugger.\n')
+
+            # Register to process commands when idle
+            self.debugrunning = False
+            try:
+                import pydevconsole
+                pydevconsole.set_debug_hook(self.debugger.processInternalCommands)
+            except:
+                traceback.print_exc()
+                sys.stderr.write('Version of Python does not support debuggable Interactive Console.\n')
+
+        # Important: it has to be really executed in the main thread, so, schedule
+        # it to run in the main thread.
+        self.exec_queue.put(do_connect_to_debugger)
 
         return ('connect complete',)
 
@@ -476,19 +468,24 @@
             As with IPython, enabling multiple GUIs isn't an error, but
             only the last one's main loop runs and it may not work
         '''
-        from pydev_versioncheck import versionok_for_gui
-        if versionok_for_gui():
-            try:
-                from pydev_ipython.inputhook import enable_gui
-                enable_gui(guiname)
-            except:
-                sys.stderr.write("Failed to enable GUI event loop integration for '%s'\n" % guiname)
-                import traceback;traceback.print_exc()
-        elif guiname not in ['none', '', None]:
-            # Only print a warning if the guiname was going to do something
-            sys.stderr.write("PyDev console: Python version does not support GUI event loop integration for '%s'\n" % guiname)
-        # Return value does not matter, so return back what was sent
-        return guiname
+        def do_enable_gui():
+            from pydev_versioncheck import versionok_for_gui
+            if versionok_for_gui():
+                try:
+                    from pydev_ipython.inputhook import enable_gui
+                    enable_gui(guiname)
+                except:
+                    sys.stderr.write("Failed to enable GUI event loop integration for '%s'\n" % guiname)
+                    traceback.print_exc()
+            elif guiname not in ['none', '', None]:
+                # Only print a warning if the guiname was going to do something
+                sys.stderr.write("PyDev console: Python version does not support GUI event loop integration for '%s'\n" % guiname)
+            # Return value does not matter, so return back what was sent
+            return guiname
+
+        # Important: it has to be really enabled in the main thread, so, schedule
+        # it to run in the main thread.
+        self.exec_queue.put(do_enable_gui)
 
 #=======================================================================================================================
 # FakeFrame
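
The changes to changeVariable, connectToDebugger and enableGui above stop executing their bodies on the calling (XML-RPC) thread and instead put a closure on self.exec_queue so the work happens on the interpreter's main thread. A minimal stand-alone sketch of that scheduling pattern (the names are illustrative, not pydev's API, and it assumes the main loop drains the queue between inputs):

import threading
import time
try:
    import queue             # Python 3
except ImportError:
    import Queue as queue    # Python 2

exec_queue = queue.Queue()

def do_change_variable():
    # Placeholder for work that must happen on the main thread.
    print('callback ran on: %s' % threading.current_thread().name)

def request_from_worker():
    # Called from any thread: only enqueue, never execute directly.
    exec_queue.put(do_change_variable)

def drain_exec_queue():
    # Called periodically by the console's main loop.
    while True:
        try:
            callback = exec_queue.get_nowait()
        except queue.Empty:
            return
        callback()

threading.Thread(target=request_from_worker).start()
time.sleep(0.1)
drain_exec_queue()
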
diff --git a/python/helpers/pydev/pydev_coverage.py b/python/helpers/pydev/pydev_coverage.py
new file mode 100644
index 0000000..1690f8c
--- /dev/null
+++ b/python/helpers/pydev/pydev_coverage.py
@@ -0,0 +1,54 @@
+def execute():
+    import os
+    import sys
+    
+    files = None
+    if 'combine' not in sys.argv:
+    
+        if '--pydev-analyze' in sys.argv:
+                
+            #Ok, what we want here is having the files passed through stdin (because
+            #there may be too many files for passing in the command line -- we could
+            #just pass a dir and find the files here, but as that's already
+            #given in the java side, let's just gather that info here).
+            sys.argv.remove('--pydev-analyze')
+            try:
+                s = raw_input()
+            except:
+                s = input()
+            s = s.replace('\r', '')
+            s = s.replace('\n', '')
+            files = s.split('|')
+            files = [v for v in files if len(v) > 0]
+            
+            #Note that in this case we'll already be in the working dir with the coverage files, so, the
+            #coverage file location is not passed.
+            
+        else:
+            #For all commands, the coverage file is configured in pydev, and passed as the first argument
+            #in the command line, so, let's make sure this gets to the coverage module.            
+            os.environ['COVERAGE_FILE'] = sys.argv[1]
+            del sys.argv[1]
+        
+    try:
+        import coverage #@UnresolvedImport
+    except:
+        sys.stderr.write('Error: coverage module could not be imported\n')
+        sys.stderr.write('Please make sure that the coverage module (http://nedbatchelder.com/code/coverage/)\n')
+        sys.stderr.write('is properly installed in your interpreter: %s\n' % (sys.executable,))
+        
+        import traceback;traceback.print_exc()
+        return
+    
+    #print(coverage.__version__) TODO: Check if the version is a version we support (should be at least 3.4) -- note that maybe the attr is not there.
+    from coverage.cmdline import main #@UnresolvedImport
+
+    if files is not None:        
+        sys.argv.append('-r')
+        sys.argv.append('-m')
+        sys.argv += files
+        
+    main()
+
+if __name__ == '__main__':
+    execute()
\ No newline at end of file
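
When --pydev-analyze is passed, the new pydev_coverage.py reads the list of files to report on from stdin as a single '|'-separated line (otherwise the first argument is used as COVERAGE_FILE). A hedged sketch of the caller side, which in reality is driven from the Java/IDE side; the paths are invented:

import subprocess
import sys

files = ['pkg/mod_a.py', 'pkg/mod_b.py']
proc = subprocess.Popen(
    [sys.executable, 'pydev_coverage.py', '--pydev-analyze'],  # assumes the script is in the cwd
    stdin=subprocess.PIPE,
)
# The wrapper reads one line from stdin and splits it on '|'.
proc.communicate(('|'.join(files) + '\n').encode('utf-8'))
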
diff --git a/python/helpers/pydev/pydev_imports.py b/python/helpers/pydev/pydev_imports.py
index 0685875..69804a8 100644
--- a/python/helpers/pydev/pydev_imports.py
+++ b/python/helpers/pydev/pydev_imports.py
@@ -1,40 +1,52 @@
-from pydevd_constants import USE_LIB_COPY
+from pydevd_constants import USE_LIB_COPY, izip
+
+
 try:
     try:
         if USE_LIB_COPY:
-            import _pydev_xmlrpclib as xmlrpclib
+            from _pydev_imps import _pydev_xmlrpclib as xmlrpclib
         else:
             import xmlrpclib
     except ImportError:
         import xmlrpc.client as xmlrpclib
 except ImportError:
-    import _pydev_xmlrpclib as xmlrpclib
+    from _pydev_imps import _pydev_xmlrpclib as xmlrpclib
+
+
 try:
     try:
         if USE_LIB_COPY:
-            from _pydev_SimpleXMLRPCServer import SimpleXMLRPCServer
+            from _pydev_imps._pydev_SimpleXMLRPCServer import SimpleXMLRPCServer
         else:
             from SimpleXMLRPCServer import SimpleXMLRPCServer
     except ImportError:
         from xmlrpc.server import SimpleXMLRPCServer
 except ImportError:
-    from _pydev_SimpleXMLRPCServer import SimpleXMLRPCServer
+    from _pydev_imps._pydev_SimpleXMLRPCServer import SimpleXMLRPCServer
+
+
+
 try:
     from StringIO import StringIO
 except ImportError:
     from io import StringIO
+
+
 try:
     execfile=execfile #Not in Py3k
 except NameError:
-    from _pydev_execfile import execfile
+    from _pydev_imps._pydev_execfile import execfile
+
+
 try:
     if USE_LIB_COPY:
-        import _pydev_Queue as _queue
+        from _pydev_imps import _pydev_Queue as _queue
     else:
         import Queue as _queue
 except:
     import queue as _queue #@UnresolvedImport
 
+
 try:
     from pydevd_exec import Exec
 except:
@@ -80,7 +92,7 @@
             return start
 
         i = 0
-        for start_seg, dest_seg in zip(orig_list, dest_list):
+        for start_seg, dest_seg in izip(orig_list, dest_list):
             if start_seg != os.path.normcase(dest_seg):
                 break
             i += 1
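
pydev_imports centralizes the layered fallback visible above: prefer the vendored copy under _pydev_imps when USE_LIB_COPY is set, else the Python 2 module name, else the Python 3 one. A self-contained illustration of that layering for the queue module (the vendored branch is only exercised when the _pydev_imps package is actually present):

USE_VENDORED_COPY = False   # stand-in for pydevd_constants.USE_LIB_COPY

try:
    if USE_VENDORED_COPY:
        from _pydev_imps import _pydev_Queue as _queue   # vendored copy
    else:
        import Queue as _queue                           # Python 2 name
except ImportError:
    import queue as _queue                               # Python 3 name

q = _queue.Queue()
q.put('ready')
print(q.get())
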
diff --git a/python/helpers/pydev/pydev_ipython/inputhookglut.py b/python/helpers/pydev/pydev_ipython/inputhookglut.py
index f0683ba..e1e67e9 100644
--- a/python/helpers/pydev/pydev_ipython/inputhookglut.py
+++ b/python/helpers/pydev/pydev_ipython/inputhookglut.py
@@ -29,9 +29,8 @@
 #-----------------------------------------------------------------------------
 # Imports
 #-----------------------------------------------------------------------------
-import os
 import sys
-import time
+from _pydev_imps import _pydev_time as time
 import signal
 import OpenGL.GLUT as glut
 import OpenGL.platform as platform
diff --git a/python/helpers/pydev/pydev_ipython/inputhookpyglet.py b/python/helpers/pydev/pydev_ipython/inputhookpyglet.py
index 0cbb87f..64dd2e5 100644
--- a/python/helpers/pydev/pydev_ipython/inputhookpyglet.py
+++ b/python/helpers/pydev/pydev_ipython/inputhookpyglet.py
@@ -20,9 +20,8 @@
 # Imports
 #-----------------------------------------------------------------------------
 
-import os
 import sys
-import time
+from _pydev_imps import _pydev_time as time
 from timeit import default_timer as clock
 import pyglet
 from pydev_ipython.inputhook import stdin_ready
diff --git a/python/helpers/pydev/pydev_ipython/inputhookqt4.py b/python/helpers/pydev/pydev_ipython/inputhookqt4.py
index f4f32a3..27598fa 100644
--- a/python/helpers/pydev/pydev_ipython/inputhookqt4.py
+++ b/python/helpers/pydev/pydev_ipython/inputhookqt4.py
@@ -18,7 +18,13 @@
 
 import os
 import signal
-import threading
+
+from pydevd_constants import USE_LIB_COPY
+if USE_LIB_COPY:
+    import _pydev_threading as threading
+else:
+    import threading
+
 
 from pydev_ipython.qt_for_kernel import QtCore, QtGui
 from pydev_ipython.inputhook import allow_CTRL_C, ignore_CTRL_C, stdin_ready
diff --git a/python/helpers/pydev/pydev_ipython/inputhookwx.py b/python/helpers/pydev/pydev_ipython/inputhookwx.py
index 6640884..19ffdc7 100644
--- a/python/helpers/pydev/pydev_ipython/inputhookwx.py
+++ b/python/helpers/pydev/pydev_ipython/inputhookwx.py
@@ -19,7 +19,7 @@
 
 import sys
 import signal
-import time
+from _pydev_imps import _pydev_time as time
 from timeit import default_timer as clock
 import wx
 
diff --git a/python/helpers/pydev/pydev_ipython_console.py b/python/helpers/pydev/pydev_ipython_console.py
index 859157e..0c51dfe 100644
--- a/python/helpers/pydev/pydev_ipython_console.py
+++ b/python/helpers/pydev/pydev_ipython_console.py
@@ -1,20 +1,15 @@
 import sys
 from pydev_console_utils import BaseInterpreterInterface
-import re
 
 import os
 
 os.environ['TERM'] = 'emacs' #to use proper page_more() for paging
 
 
-#Uncomment to force PyDev standard shell.   
-#raise ImportError()
+# Uncomment to force PyDev standard shell.
+# raise ImportError()
 
-try:
-    #IPython 0.11 broke compatibility...
-    from pydev_ipython_console_011 import PyDevFrontEnd
-except:
-    from pydev_ipython_console_010 import PyDevFrontEnd
+from pydev_ipython_console_011 import PyDevFrontEnd
 
 #=======================================================================================================================
 # InterpreterInterface
@@ -28,7 +23,7 @@
         BaseInterpreterInterface.__init__(self, mainThread)
         self.client_port = client_port
         self.host = host
-        self.interpreter = PyDevFrontEnd()
+        self.interpreter = PyDevFrontEnd(host, client_port)
         self._input_error_printed = False
         self.notification_succeeded = False
         self.notification_tries = 0
@@ -57,58 +52,11 @@
 
 
     def getCompletions(self, text, act_tok):
-        try:
-            ipython_completion = text.startswith('%')
-            if not ipython_completion:
-                s = re.search(r'\bcd\b', text)
-                if s is not None and s.start() == 0:
-                    ipython_completion = True
-
-            if text is None:
-                text = ""
-
-            TYPE_LOCAL = '9'
-            _line, completions = self.interpreter.complete(text)
-
-            ret = []
-            append = ret.append
-            for completion in completions:
-                if completion.startswith('%'):
-                    append((completion[1:], '', '%', TYPE_LOCAL))
-                else:
-                    append((completion, '', '', TYPE_LOCAL))
-
-            if ipython_completion:
-                return ret
-
-            #Otherwise, use the default PyDev completer (to get nice icons)
-            from _pydev_completer import Completer
-
-            completer = Completer(self.getNamespace(), None)
-            completions = completer.complete(act_tok)
-            cset = set()
-            for c in completions:
-                cset.add(c[0])
-            for c in ret:
-                if c[0] not in cset:
-                    completions.append(c)
-
-            return completions
-
-        except:
-            import traceback
-
-            traceback.print_exc()
-            return []
+        return self.interpreter.getCompletions(text, act_tok)
 
     def close(self):
         sys.exit(0)
 
-    def ipython_editor(self, file, line):
-        server = self.get_server()
-
-        if server is not None:
-            return server.IPythonEditor(os.path.realpath(file), line)
 
     def notify_about_magic(self):
         if not self.notification_succeeded:
diff --git a/python/helpers/pydev/pydev_ipython_console_010.py b/python/helpers/pydev/pydev_ipython_console_010.py
deleted file mode 100644
index e093fef..0000000
--- a/python/helpers/pydev/pydev_ipython_console_010.py
+++ /dev/null
@@ -1,129 +0,0 @@
-from IPython.frontend.prefilterfrontend import PrefilterFrontEnd
-from pydev_console_utils import Null
-import sys
-original_stdout = sys.stdout
-original_stderr = sys.stderr
-
-
-#=======================================================================================================================
-# PyDevFrontEnd
-#=======================================================================================================================
-class PyDevFrontEnd(PrefilterFrontEnd):
-
-
-    def __init__(self, *args, **kwargs):        
-        PrefilterFrontEnd.__init__(self, *args, **kwargs)
-        #Disable the output trap: we want all that happens to go to the output directly
-        self.shell.output_trap = Null()
-        self._curr_exec_lines = []
-        self._continuation_prompt = ''
-        
-        
-    def capture_output(self):
-        pass
-    
-    
-    def release_output(self):
-        pass
-    
-    
-    def continuation_prompt(self):
-        return self._continuation_prompt
-    
-    
-    def write(self, txt, refresh=True):
-        original_stdout.write(txt)
-        
-
-    def new_prompt(self, prompt):
-        self.input_buffer = ''
-        #The java side takes care of this part.
-        #self.write(prompt)
-        
-        
-    def show_traceback(self):
-        import traceback;traceback.print_exc()
-        
-        
-    def write_out(self, txt, *args, **kwargs):
-        original_stdout.write(txt)
-    
-    
-    def write_err(self, txt, *args, **kwargs):
-        original_stderr.write(txt)
-        
-        
-    def getNamespace(self):
-        return self.shell.user_ns
-
-
-    def is_complete(self, string):
-        #Based on IPython 0.10.1
-
-        if string in ('', '\n'):
-            # Prefiltering, eg through ipython0, may return an empty
-            # string although some operations have been accomplished. We
-            # thus want to consider an empty string as a complete
-            # statement.
-            return True
-        else:
-            try:
-                # Add line returns here, to make sure that the statement is
-                # complete (except if '\' was used).
-                # This should probably be done in a different place (like
-                # maybe 'prefilter_input' method? For now, this works.
-                clean_string = string.rstrip('\n')
-                if not clean_string.endswith('\\'): clean_string += '\n\n'
-                is_complete = codeop.compile_command(clean_string,
-                                                     "<string>", "exec")
-            except Exception:
-                # XXX: Hack: return True so that the
-                # code gets executed and the error captured.
-                is_complete = True
-            return is_complete
-    
-    
-    def addExec(self, line):
-        if self._curr_exec_lines:
-            if not line:
-                self._curr_exec_lines.append(line)
-
-                #Would be the line below, but we've set the continuation_prompt to ''.
-                #buf = self.continuation_prompt() + ('\n' + self.continuation_prompt()).join(self._curr_exec_lines)
-                buf = '\n'.join(self._curr_exec_lines)
-
-                self.input_buffer = buf + '\n'
-                if self._on_enter():
-                    del self._curr_exec_lines[:]
-                    return False #execute complete (no more)
-
-                return True #needs more
-            else:
-                self._curr_exec_lines.append(line)
-                return True #needs more
-
-        else:
-
-            self.input_buffer = line
-            if not self._on_enter():
-                #Did not execute
-                self._curr_exec_lines.append(line)
-                return True #needs more
-
-            return False #execute complete (no more)
-
-    def update(self, globals, locals):
-        locals['_oh'] = self.shell.user_ns['_oh']
-        locals['_ip'] = self.shell.user_ns['_ip']
-        self.shell.user_global_ns = globals
-        self.shell.user_ns = locals
-
-    def is_automagic(self):
-        if self.ipython0.rc.automagic:
-            return True
-        else:
-            return False
-
-    def get_greeting_msg(self):
-        return 'PyDev console: using IPython 0.10\n'
-
diff --git a/python/helpers/pydev/pydev_ipython_console_011.py b/python/helpers/pydev/pydev_ipython_console_011.py
index 198d40f..54458e7 100644
--- a/python/helpers/pydev/pydev_ipython_console_011.py
+++ b/python/helpers/pydev/pydev_ipython_console_011.py
@@ -1,23 +1,295 @@
+# TODO that would make IPython integration better
+# - show output other times then when enter was pressed
+# - support proper exit to allow IPython to cleanup (e.g. temp files created with %edit)
+# - support Ctrl-D (Ctrl-Z on Windows)
+# - use IPython (numbered) prompts in PyDev
+# - better integration of IPython and PyDev completions
+# - some of the semantics on handling the code completion are not correct:
+#   eg: Start a line with % and then type c should give %cd as a completion but it doesn't
+#       however type %c and request completions and %cd is given as an option
+#   eg: Completing a magic when user typed it without the leading % causes the % to be inserted
+#       to the left of what should be the first colon.
+"""Interface to TerminalInteractiveShell for PyDev Interactive Console frontend
+   for IPython 0.11 to 1.0+.
+"""
+
+from __future__ import print_function
+
+import os
+import codeop
+
+from IPython.core.error import UsageError
+from IPython.core.inputsplitter import IPythonInputSplitter
+from IPython.core.completer import IPCompleter
+from IPython.core.interactiveshell import InteractiveShell, InteractiveShellABC
+from IPython.core.usage import default_banner_parts
+from IPython.utils.strdispatch import StrDispatch
+import IPython.core.release as IPythonRelease
 try:
     from IPython.terminal.interactiveshell import TerminalInteractiveShell
 except ImportError:
+    # Versions of IPython [0.11,1.0) had an extra hierarchy level
     from IPython.frontend.terminal.interactiveshell import TerminalInteractiveShell
-from IPython.utils import io
-import sys
-import codeop, re
-original_stdout = sys.stdout
-original_stderr = sys.stderr
+from IPython.utils.traitlets import CBool, Unicode
 from IPython.core import release
 
+from pydev_imports import xmlrpclib
 
-#=======================================================================================================================
-# _showtraceback
-#=======================================================================================================================
-def _showtraceback(*args, **kwargs):
-    import traceback;traceback.print_exc()
-    
-    
-    
+pydev_banner_parts = [
+    '\n',
+    'PyDev -- Python IDE for Eclipse\n',  # TODO can we get a version number in here?
+    'For help on using PyDev\'s Console see http://pydev.org/manual_adv_interactive_console.html\n',
+]
+
+default_pydev_banner_parts = default_banner_parts + pydev_banner_parts
+
+default_pydev_banner = ''.join(default_pydev_banner_parts)
+
+def show_in_pager(self, strng):
+    """ Run a string through pager """
+    # On PyDev we just output the string, there are scroll bars in the console
+# to handle "paging". This is the same behaviour as when TERM==dumb (see
+    # page.py)
+    print(strng)
+
+def create_editor_hook(pydev_host, pydev_client_port):
+    def call_editor(self, filename, line=0, wait=True):
+        """ Open an editor in PyDev """
+        if line is None:
+            line = 0
+
+        # Make sure to send an absolute path because unlike most editor hooks
+        # we don't launch a process. This is more like what happens in the zmqshell
+        filename = os.path.abspath(filename)
+
+        # Tell PyDev to open the editor
+        server = xmlrpclib.Server('http://%s:%s' % (pydev_host, pydev_client_port))
+        server.IPythonEditor(filename, str(line))
+
+        if wait:
+            try:
+                raw_input("Press Enter when done editing:")
+            except NameError:
+                input("Press Enter when done editing:")
+    return call_editor
+
+
+
+class PyDevIPCompleter(IPCompleter):
+
+    def __init__(self, *args, **kwargs):
+        """ Create a Completer that reuses the advanced completion support of PyDev
+            in addition to the completion support provided by IPython """
+        IPCompleter.__init__(self, *args, **kwargs)
+        # Use PyDev for python matches, see getCompletions below
+        self.matchers.remove(self.python_matches)
+
+class PyDevTerminalInteractiveShell(TerminalInteractiveShell):
+    banner1 = Unicode(default_pydev_banner, config=True,
+        help="""The part of the banner to be printed before the profile"""
+    )
+
+    # TODO term_title: (can PyDev's title be changed???, see terminal.py for where to inject code, in particular set_term_title as used by %cd)
+    # for now, just disable term_title
+    term_title = CBool(False)
+
+    # Note in version 0.11 there is no guard in the IPython code about displaying a
+    # warning, so with 0.11 you get:
+    #  WARNING: Readline services not available or not loaded.
+    #  WARNING: The auto-indent feature requires the readline library
+    # Disable readline, readline type code is all handled by PyDev (on Java side)
+    readline_use = CBool(False)
+    # autoindent has no meaning in PyDev (PyDev always handles that on the Java side),
+    # and attempting to enable it will print a warning in the absence of readline.
+    autoindent = CBool(False)
+    # Force console to not give warning about color scheme choice and default to NoColor.
+    # TODO It would be nice to enable colors in PyDev but:
+    # - The PyDev Console (Eclipse Console) does not support the full range of colors, so the
+    #   effect isn't as nice anyway at the command line
+    # - If done, the color scheme should default to LightBG, but actually be dependent on
+    #   any settings the user has (such as if a dark theme is in use, then Linux is probably
+    #   a better theme).
+    colors_force = CBool(True)
+    colors = Unicode("NoColor")
+
+    # In the PyDev Console, GUI control is done via hookable XML-RPC server
+    @staticmethod
+    def enable_gui(gui=None, app=None):
+        """Switch amongst GUI input hooks by name.
+        """
+        # Deferred import
+        from pydev_ipython.inputhook import enable_gui as real_enable_gui
+        try:
+            return real_enable_gui(gui, app)
+        except ValueError as e:
+            raise UsageError("%s" % e)
+
+    #-------------------------------------------------------------------------
+    # Things related to hooks
+    #-------------------------------------------------------------------------
+
+    def init_hooks(self):
+        super(PyDevTerminalInteractiveShell, self).init_hooks()
+        self.set_hook('show_in_pager', show_in_pager)
+
+    #-------------------------------------------------------------------------
+    # Things related to exceptions
+    #-------------------------------------------------------------------------
+
+    def showtraceback(self, exc_tuple=None, filename=None, tb_offset=None,
+                  exception_only=False):
+        # IPython does a lot of clever stuff with Exceptions. However mostly
+        # it is related to IPython running in a terminal instead of an IDE.
+        # (e.g. it prints out snippets of code around the stack trace)
+        # PyDev does a lot of clever stuff too, so leave exception handling
+        # with default print_exc that PyDev can parse and do its clever stuff
+        # with (e.g. it puts links back to the original source code)
+        import traceback;traceback.print_exc()
+
+
+    #-------------------------------------------------------------------------
+    # Things related to text completion
+    #-------------------------------------------------------------------------
+
+    # The way to construct an IPCompleter changed in most versions,
+    # so we have a custom, per version implementation of the construction
+
+    def _new_completer_011(self):
+        return PyDevIPCompleter(self,
+                             self.user_ns,
+                             self.user_global_ns,
+                             self.readline_omit__names,
+                             self.alias_manager.alias_table,
+                             self.has_readline)
+
+
+    def _new_completer_012(self):
+        completer = PyDevIPCompleter(shell=self,
+                             namespace=self.user_ns,
+                             global_namespace=self.user_global_ns,
+                             alias_table=self.alias_manager.alias_table,
+                             use_readline=self.has_readline,
+                             config=self.config,
+                             )
+        self.configurables.append(completer)
+        return completer
+
+
+    def _new_completer_100(self):
+        completer = PyDevIPCompleter(shell=self,
+                             namespace=self.user_ns,
+                             global_namespace=self.user_global_ns,
+                             alias_table=self.alias_manager.alias_table,
+                             use_readline=self.has_readline,
+                             parent=self,
+                             )
+        self.configurables.append(completer)
+        return completer
+
+    def _new_completer_200(self):
+        # As of writing this, IPython 2.0.0 is in dev mode so subject to change
+        completer = PyDevIPCompleter(shell=self,
+                             namespace=self.user_ns,
+                             global_namespace=self.user_global_ns,
+                             use_readline=self.has_readline,
+                             parent=self,
+                             )
+        self.configurables.append(completer)
+        return completer
+
+
+
+    def init_completer(self):
+        """Initialize the completion machinery.
+
+        This creates a completer that provides the completions that are
+        IPython specific. We use this to supplement PyDev's core code
+        completions.
+        """
+        # PyDev uses its own completer and custom hooks so that it uses
+        # most completions from PyDev's core completer which provides
+        # extra information.
+        # See getCompletions for where the two sets of results are merged
+
+        from IPython.core.completerlib import magic_run_completer, cd_completer
+        try:
+            from IPython.core.completerlib import reset_completer
+        except ImportError:
+            # reset_completer was added for rel-0.13
+            reset_completer = None
+
+        if IPythonRelease._version_major >= 2:
+            self.Completer = self._new_completer_200()
+        elif IPythonRelease._version_major >= 1:
+            self.Completer = self._new_completer_100()
+        elif IPythonRelease._version_minor >= 12:
+            self.Completer = self._new_completer_012()
+        else:
+            self.Completer = self._new_completer_011()
+
+        # Add custom completers to the basic ones built into IPCompleter
+        sdisp = self.strdispatchers.get('complete_command', StrDispatch())
+        self.strdispatchers['complete_command'] = sdisp
+        self.Completer.custom_completers = sdisp
+
+        self.set_hook('complete_command', magic_run_completer, str_key='%run')
+        self.set_hook('complete_command', cd_completer, str_key='%cd')
+        if reset_completer:
+            self.set_hook('complete_command', reset_completer, str_key='%reset')
+
+        # Only configure readline if we truly are using readline.  IPython can
+        # do tab-completion over the network, in GUIs, etc, where readline
+        # itself may be absent
+        if self.has_readline:
+            self.set_readline_completer()
+
+
+    #-------------------------------------------------------------------------
+    # Things related to aliases
+    #-------------------------------------------------------------------------
+
+    def init_alias(self):
+        # InteractiveShell defines aliases we want, but TerminalInteractiveShell defines
+        # ones we don't. So don't use super and instead go right to InteractiveShell
+        InteractiveShell.init_alias(self)
+
+    #-------------------------------------------------------------------------
+    # Things related to exiting
+    #-------------------------------------------------------------------------
+    def ask_exit(self):
+        """ Ask the shell to exit. Can be overridden and used as a callback. """
+        # TODO PyDev's console does not have support from the Python side to exit
+        # the console. If user forces the exit (with sys.exit()) then the console
+        # simply reports errors. e.g.:
+        # >>> import sys
+        # >>> sys.exit()
+        # Failed to create input stream: Connection refused
+        # >>>
+        # Console already exited with value: 0 while waiting for an answer.
+        # Error stream:
+        # Output stream:
+        # >>>
+        #
+        # Alternatively if you use the non-IPython shell this is what happens
+        # >>> exit()
+        # <type 'exceptions.SystemExit'>:None
+        # >>>
+        # <type 'exceptions.SystemExit'>:None
+        # >>>
+        #
+        super(PyDevTerminalInteractiveShell, self).ask_exit()
+        print('To exit the PyDev Console, terminate the console within Eclipse.')
+
+    #-------------------------------------------------------------------------
+    # Things related to magics
+    #-------------------------------------------------------------------------
+
+    def init_magics(self):
+        super(PyDevTerminalInteractiveShell, self).init_magics()
+        # TODO Any additional magics for PyDev?
+
+InteractiveShellABC.register(PyDevTerminalInteractiveShell)  # @UndefinedVariable
+
 #=======================================================================================================================
 # PyDevFrontEnd
 #=======================================================================================================================
@@ -25,49 +297,23 @@
 
     version = release.__version__
 
-
-    def __init__(self, *args, **kwargs):        
-        #Initialization based on: from IPython.testing.globalipapp import start_ipython
-        
-        self._curr_exec_line = 0
-        # Store certain global objects that IPython modifies
-        _displayhook = sys.displayhook
-        _excepthook = sys.excepthook
-
-        class ClosablePyDevTerminalInteractiveShell(TerminalInteractiveShell):
-            '''Override ask_exit() method for correct exit, exit(), etc. handling.'''
-            def ask_exit(self):
-                sys.exit()
+    def __init__(self, pydev_host, pydev_client_port, *args, **kwarg):
 
         # Create and initialize our IPython instance.
-        shell = ClosablePyDevTerminalInteractiveShell.instance()
+        self.ipython = PyDevTerminalInteractiveShell.instance()
 
-        shell.showtraceback = _showtraceback
-        # IPython is ready, now clean up some global state...
-        
-        # Deactivate the various python system hooks added by ipython for
-        # interactive convenience so we don't confuse the doctest system
-        sys.displayhook = _displayhook
-        sys.excepthook = _excepthook
-    
-        # So that ipython magics and aliases can be doctested (they work by making
-        # a call into a global _ip object).  Also make the top-level get_ipython
-        # now return this without recursively calling here again.
-        get_ipython = shell.get_ipython
-        try:
-            import __builtin__
-        except:
-            import builtins as __builtin__
-        __builtin__._ip = shell
-        __builtin__.get_ipython = get_ipython
+        # Back channel to PyDev to open editors (in the future other
+        # info may go back this way. This is the same channel that is
+        # used to get stdin, see StdIn in pydev_console_utils)
+        self.ipython.set_hook('editor', create_editor_hook(pydev_host, pydev_client_port))
 
-        # We want to print to stdout/err as usual.
-        io.stdout = original_stdout
-        io.stderr = original_stderr
+        # Display the IPython banner, this has version info and
+        # help info
+        self.ipython.show_banner()
 
 
+        self._curr_exec_line = 0
         self._curr_exec_lines = []
-        self.ipython = shell
 
 
     def update(self, globals, locals):
@@ -95,7 +341,7 @@
 
     def is_complete(self, string):
         #Based on IPython 0.10.1
-         
+
         if string in ('', '\n'):
             # Prefiltering, eg through ipython0, may return an empty
             # string although some operations have been accomplished. We
@@ -109,20 +355,64 @@
                 # This should probably be done in a different place (like
                 # maybe 'prefilter_input' method? For now, this works.
                 clean_string = string.rstrip('\n')
-                if not clean_string.endswith('\\'): clean_string += '\n\n' 
-                is_complete = codeop.compile_command(clean_string,
-                            "<string>", "exec")
+                if not clean_string.endswith('\\'):
+                    clean_string += '\n\n'
+
+                is_complete = codeop.compile_command(
+                    clean_string,
+                    "<string>",
+                    "exec"
+                )
             except Exception:
                 # XXX: Hack: return True so that the
                 # code gets executed and the error captured.
                 is_complete = True
             return is_complete
-        
-        
+
+
+    def getCompletions(self, text, act_tok):
+        # Get completions from IPython and from PyDev and merge the results
+        # IPython only gives context free list of completions, while PyDev
+        # gives detailed information about completions.
+        try:
+            TYPE_IPYTHON = '11'
+            TYPE_IPYTHON_MAGIC = '12'
+            _line, ipython_completions = self.complete(text)
+
+            from _pydev_completer import Completer
+            completer = Completer(self.getNamespace(), None)
+            ret = completer.complete(act_tok)
+            append = ret.append
+            ip = self.ipython
+            pydev_completions = set([f[0] for f in ret])
+            for ipython_completion in ipython_completions:
+
+                #PyCharm was not expecting completions with '%'...
+                #Could be fixed in the backend, but it's probably better
+                #fixing it at PyCharm.
+                #if ipython_completion.startswith('%'):
+                #    ipython_completion = ipython_completion[1:]
+
+                if ipython_completion not in pydev_completions:
+                    pydev_completions.add(ipython_completion)
+                    inf = ip.object_inspect(ipython_completion)
+                    if inf['type_name'] == 'Magic function':
+                        pydev_type = TYPE_IPYTHON_MAGIC
+                    else:
+                        pydev_type = TYPE_IPYTHON
+                    pydev_doc = inf['docstring']
+                    if pydev_doc is None:
+                        pydev_doc = ''
+                    append((ipython_completion, pydev_doc, '', pydev_type))
+            return ret
+        except:
+            import traceback;traceback.print_exc()
+            return []
+
+
     def getNamespace(self):
         return self.ipython.user_ns
 
-    
     def addExec(self, line):
         if self._curr_exec_lines:
             self._curr_exec_lines.append(line)
@@ -155,5 +445,21 @@
         return self.ipython.automagic
 
     def get_greeting_msg(self):
-        return 'PyDev console: using IPython %s' % self.version
+        return 'PyDev console: using IPython %s\n' % self.version
 
+
+# If we have succeeded in importing this module, then monkey patch inputhook
+# in IPython to redirect to PyDev's version. This is essential to make
+# %gui in 0.11 work (0.12+ fixes it by calling self.enable_gui, which is implemented
+# above, instead of inputhook.enable_gui).
+# See testGui (test_pydev_ipython_011.TestRunningCode) which fails on 0.11 without
+# this patch
+import IPython.lib.inputhook
+import pydev_ipython.inputhook
+IPython.lib.inputhook.enable_gui = pydev_ipython.inputhook.enable_gui
+# In addition to enable_gui, make all publics in pydev_ipython.inputhook replace
+# the IPython versions. This enables the examples in IPython's examples/lib/gui-*
+# to operate properly because those examples don't use %gui magic and instead
+# rely on using the inputhooks directly.
+for name in pydev_ipython.inputhook.__all__:
+    setattr(IPython.lib.inputhook, name, getattr(pydev_ipython.inputhook, name))
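
getCompletions above merges PyDev's detailed completions with the names IPython knows about, deduplicating on the completion text and tagging IPython-only entries with the coarse '11'/'12' type codes. A simplified stand-alone version of that merge; the sample data is invented and the tuple layout mirrors the (name, doc, args, type) shape used above:

TYPE_IPYTHON = '11'

def merge_completions(detailed, extra_names):
    # Detailed tuples win; names only the second source knows about are
    # appended with the coarse IPython type tag and empty doc/args fields.
    seen = set(entry[0] for entry in detailed)
    merged = list(detailed)
    for name in extra_names:
        if name not in seen:
            seen.add(name)
            merged.append((name, '', '', TYPE_IPYTHON))
    return merged

pydev_side = [('os', 'OS routines', '', '9')]
ipython_side = ['os', '%cd', '%run']
print(merge_completions(pydev_side, ipython_side))
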
diff --git a/python/helpers/pydev/pydev_localhost.py b/python/helpers/pydev/pydev_localhost.py
index 4e7a4d9..13c4d02 100644
--- a/python/helpers/pydev/pydev_localhost.py
+++ b/python/helpers/pydev/pydev_localhost.py
@@ -1,21 +1,18 @@
-import pydevd_constants
-if pydevd_constants.USE_LIB_COPY:
-    import _pydev_socket as socket
-else:
-    import socket
+
+from _pydev_imps import _pydev_socket as socket
 
 _cache = None
 def get_localhost():
     '''
     Should return 127.0.0.1 in ipv4 and ::1 in ipv6
-    
+
     localhost is not used because on windows vista/windows 7, there can be issues where the resolving doesn't work
-    properly and takes a lot of time (had this issue on the pyunit server). 
-    
+    properly and takes a lot of time (had this issue on the pyunit server).
+
     Using the IP directly solves the problem.
     '''
     #TODO: Needs better investigation!
-    
+
     global _cache
     if _cache is None:
         try:
diff --git a/python/helpers/pydev/pydev_monkey.py b/python/helpers/pydev/pydev_monkey.py
index ed6fea5..2b12ed2 100644
--- a/python/helpers/pydev/pydev_monkey.py
+++ b/python/helpers/pydev/pydev_monkey.py
@@ -1,10 +1,11 @@
 import os
-import shlex
 import sys
 import pydev_log
 import traceback
 
-helpers = os.path.dirname(__file__).replace('\\', '/')
+pydev_src_dir = os.path.dirname(__file__)
+
+from pydevd_constants import xrange
 
 def is_python(path):
     if path.endswith("'") or path.endswith('"'):
@@ -38,7 +39,9 @@
 
                 if port is not None:
                     new_args.extend(args)
-                    new_args[indC + 1] = "import sys; sys.path.append('%s'); import pydevd; pydevd.settrace(host='%s', port=%s, suspend=False); %s"%(helpers, host, port, args[indC + 1])
+                    new_args[indC + 1] = ("import sys; sys.path.append(r'%s'); import pydevd; "
+                        "pydevd.settrace(host='%s', port=%s, suspend=False, trace_only_current_thread=False, patch_multiprocessing=True); %s") % (
+                        pydev_src_dir, host, port, args[indC + 1])
                     return new_args
             else:
                 new_args.append(args[0])
@@ -52,14 +55,14 @@
                 new_args.append(args[i])
             else:
                 break
-            i+=1
+            i += 1
 
         if args[i].endswith('pydevd.py'): #no need to add pydevd twice
             return args
 
         for x in sys.original_argv:
             if sys.platform == "win32" and not x.endswith('"'):
-                arg = '"%s"'%x
+                arg = '"%s"' % x
             else:
                 arg = x
             new_args.append(arg)
@@ -68,7 +71,7 @@
 
         while i < len(args):
             new_args.append(args[i])
-            i+=1
+            i += 1
 
         return new_args
     except:
@@ -82,26 +85,101 @@
         if x.startswith('"') and x.endswith('"'):
             quoted_args.append(x)
         else:
+            x = x.replace('"', '\\"')
             quoted_args.append('"%s"' % x)
 
     return ' '.join(quoted_args)
 
-def remove_quotes(str):
-    if str.startswith('"') and str.endswith('"'):
-        return str[1:-1]
-    else:
-        return str
 
-def str_to_args(str):
-    return [remove_quotes(x) for x in shlex.split(str)]
+def str_to_args_windows(args):
+    # see http://msdn.microsoft.com/en-us/library/a1y7w461.aspx
+    result = []
+
+    DEFAULT = 0
+    ARG = 1
+    IN_DOUBLE_QUOTE = 2
+
+    state = DEFAULT
+    backslashes = 0
+    buf = ''
+
+    args_len = len(args)
+    for i in xrange(args_len):
+        ch = args[i]
+        if (ch == '\\'):
+            backslashes+=1
+            continue
+        elif (backslashes != 0):
+            if ch == '"':
+                while backslashes >= 2:
+                    backslashes -= 2
+                    buf += '\\'
+                if (backslashes == 1):
+                    if (state == DEFAULT):
+                        state = ARG
+
+                    buf += '"'
+                    backslashes = 0
+                    continue
+                # else fall through to switch
+            else:
+                # false alarm, treat passed backslashes literally...
+                if (state == DEFAULT):
+                    state = ARG
+
+                while backslashes > 0:
+                    backslashes-=1
+                    buf += '\\'
+                # fall through to switch
+        if ch in (' ', '\t'):
+            if (state == DEFAULT):
+                # skip
+                continue
+            elif (state == ARG):
+                state = DEFAULT
+                result.append(buf)
+                buf = ''
+                continue
+
+        if state in (DEFAULT, ARG):
+            if ch == '"':
+                state = IN_DOUBLE_QUOTE
+            else:
+                state = ARG
+                buf += ch
+
+        elif state == IN_DOUBLE_QUOTE:
+            if ch == '"':
+                if (i + 1 < args_len and args[i + 1] == '"'):
+                    # Undocumented feature in Windows:
+                    # Two consecutive double quotes inside a double-quoted argument are interpreted as
+                    # a single double quote.
+                    buf += '"'
+                    i+=1
+                elif len(buf) == 0:
+                    # empty string on Windows platform. Account for bug in constructor of JDK's java.lang.ProcessImpl.
+                    result.append("\"\"")
+                    state = DEFAULT
+                else:
+                    state = ARG
+            else:
+                buf += ch
+
+        else:
+            raise RuntimeError('Illegal condition')
+
+    if len(buf) > 0 or state != DEFAULT:
+        result.append(buf)
+
+    return result
+
 
 def patch_arg_str_win(arg_str):
-    new_arg_str = arg_str.replace('\\', '/')
-    args = str_to_args(new_arg_str)
+    args = str_to_args_windows(arg_str)
     if not is_python(args[0]):
         return arg_str
     arg_str = args_to_str(patch_args(args))
-    pydev_log.debug("New args: %s"% arg_str)
+    pydev_log.debug("New args: %s" % arg_str)
     return arg_str
 
 def monkey_patch_module(module, funcname, create_func):
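
str_to_args_windows above replaces the shlex-based splitting with the backslash and double-quote rules documented on MSDN. A usage sketch, assuming the full set of pydev helpers is importable; the expected result follows from those rules:

from pydev_monkey import str_to_args_windows

cmd = r'C:\Python27\python.exe -c "print(\"hi there\")" plain_arg'
print(str_to_args_windows(cmd))
# -> ['C:\\Python27\\python.exe', '-c', 'print("hi there")', 'plain_arg']
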
@@ -120,7 +198,8 @@
     import pydev_log
 
     pydev_log.error_once(
-        "New process is launching. Breakpoints won't work.\n To debug that process please enable 'Attach to subprocess automatically while debugging' option in the debugger settings.\n")
+        "pydev debugger: New process is launching (breakpoints won't work in the new process).\n"
+        "pydev debugger: To debug that process please enable 'Attach to subprocess automatically while debugging?' option in the debugger settings.\n")
 
 
 def create_warn_multiproc(original_name):
@@ -308,3 +387,110 @@
         except ImportError:
             import _winapi as _subprocess
         monkey_patch_module(_subprocess, 'CreateProcess', create_CreateProcessWarnMultiproc)
+
+
+
+class _NewThreadStartupWithTrace:
+
+    def __init__(self, original_func):
+        self.original_func = original_func
+
+    def __call__(self, *args, **kwargs):
+        from pydevd_comm import GetGlobalDebugger
+        global_debugger = GetGlobalDebugger()
+        if global_debugger is not None:
+            global_debugger.SetTrace(global_debugger.trace_dispatch)
+
+        return self.original_func(*args, **kwargs)
+
+class _NewThreadStartupWithoutTrace:
+
+    def __init__(self, original_func):
+        self.original_func = original_func
+
+    def __call__(self, *args, **kwargs):
+        return self.original_func(*args, **kwargs)
+
+_UseNewThreadStartup = _NewThreadStartupWithTrace
+
+def _get_threading_modules():
+    threading_modules = []
+    from _pydev_imps import _pydev_thread
+    threading_modules.append(_pydev_thread)
+    try:
+        import thread as _thread
+        threading_modules.append(_thread)
+    except:
+        import _thread
+        threading_modules.append(_thread)
+    return threading_modules
+
+threading_modules = _get_threading_modules()
+
+
+
+def patch_thread_module(thread):
+
+    if getattr(thread, '_original_start_new_thread', None) is None:
+        _original_start_new_thread = thread._original_start_new_thread = thread.start_new_thread
+    else:
+        _original_start_new_thread = thread._original_start_new_thread
+
+
+    class ClassWithPydevStartNewThread:
+
+        def pydev_start_new_thread(self, function, args, kwargs={}):
+            '''
+            We need to replace the original thread.start_new_thread with this function so that threads started
+            through it and not through the threading module are properly traced.
+            '''
+            return _original_start_new_thread(_UseNewThreadStartup(function), args, kwargs)
+
+    # This is a hack for the situation where the thread.start_new_thread is declared inside a class, such as the one below
+    # class F(object):
+    #    start_new_thread = thread.start_new_thread
+    #
+    #    def start_it(self):
+    #        self.start_new_thread(self.function, args, kwargs)
+    # So, if it's an already bound method, calling self.start_new_thread won't really receive a different 'self' -- it
+    # does work in the default case because in builtins self isn't passed either.
+    pydev_start_new_thread = ClassWithPydevStartNewThread().pydev_start_new_thread
+
+    try:
+        # We need to replace the original thread.start_new_thread with this function so that threads started through
+        # it and not through the threading module are properly traced.
+        thread.start_new_thread = pydev_start_new_thread
+        thread.start_new = pydev_start_new_thread
+    except:
+        pass
+
+def patch_thread_modules():
+    for t in threading_modules:
+        patch_thread_module(t)
+
+def undo_patch_thread_modules():
+    for t in threading_modules:
+        try:
+            t.start_new_thread = t._original_start_new_thread
+        except:
+            pass
+
+        try:
+            t.start_new = t._original_start_new_thread
+        except:
+            pass
+
+def disable_trace_thread_modules():
+    '''
+    Can be used to temporarily stop tracing threads created with thread.start_new_thread.
+    '''
+    global _UseNewThreadStartup
+    _UseNewThreadStartup = _NewThreadStartupWithoutTrace
+
+
+def enable_trace_thread_modules():
+    '''
+    Can be used to start tracing threads created with thread.start_new_thread again.
+    '''
+    global _UseNewThreadStartup
+    _UseNewThreadStartup = _NewThreadStartupWithTrace
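
patch_thread_module above swaps thread.start_new_thread for a wrapper so that threads created outside the threading module still get the debugger's trace function installed. A minimal sketch of that wrapping pattern, using sys.settrace with a no-op trace function instead of pydevd's global debugger:

import sys
import time
try:
    import thread               # Python 2
except ImportError:
    import _thread as thread    # Python 3

def _noop_trace(frame, event, arg):
    # Stand-in for the debugger's real trace function.
    return _noop_trace

class _StartupWithTrace:
    def __init__(self, original_func):
        self.original_func = original_func

    def __call__(self, *args, **kwargs):
        sys.settrace(_noop_trace)   # runs inside the freshly started thread
        return self.original_func(*args, **kwargs)

_original_start_new_thread = thread.start_new_thread

def traced_start_new_thread(function, args, kwargs={}):
    return _original_start_new_thread(_StartupWithTrace(function), args, kwargs)

thread.start_new_thread = traced_start_new_thread

def target(msg):
    print(msg)

thread.start_new_thread(target, ('traced thread ran',))
time.sleep(0.2)
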
diff --git a/python/helpers/pydev/pydev_override.py b/python/helpers/pydev/pydev_override.py
new file mode 100644
index 0000000..bb0c504
--- /dev/null
+++ b/python/helpers/pydev/pydev_override.py
@@ -0,0 +1,49 @@
+def overrides(method):
+    '''
+    Initially meant to be used as
+    
+    class B:
+        @overrides(A.m1)
+        def m1(self):
+            pass
+            
+    but as we want to be compatible with Jython 2.1 where decorators have an uglier syntax (needing an assign
+    after the method), it should now be used without being a decorator as below (in which case we don't even check
+    for anything, just that the parent name was actually properly loaded).
+    
+    i.e.:
+    
+    class B:
+        overrides(A.m1)
+        def m1(self):
+            pass
+    '''
+    return
+
+#    def wrapper(func):
+#        if func.__name__ != method.__name__:
+#            msg = "Wrong @override: %r expected, but overwriting %r."
+#            msg = msg % (func.__name__, method.__name__)
+#            raise AssertionError(msg)
+#
+#        if func.__doc__ is None:
+#            func.__doc__ = method.__doc__
+#
+#        return func
+#
+#    return wrapper
+
+def implements(method):
+    return
+#    def wrapper(func):
+#        if func.__name__ != method.__name__:
+#            msg = "Wrong @implements: %r expected, but implementing %r."
+#            msg = msg % (func.__name__, method.__name__)
+#            raise AssertionError(msg)
+#
+#        if func.__doc__ is None:
+#            func.__doc__ = method.__doc__
+#
+#        return func
+#
+#    return wrapper
\ No newline at end of file
diff --git a/python/helpers/pydev/pydev_pysrc.py b/python/helpers/pydev/pydev_pysrc.py
new file mode 100644
index 0000000..b9ed49e
--- /dev/null
+++ b/python/helpers/pydev/pydev_pysrc.py
@@ -0,0 +1 @@
+'''An empty file in pysrc that can be imported (from sitecustomize) to find the location of pysrc'''
\ No newline at end of file
diff --git a/python/helpers/pydev/pydev_runfiles.py b/python/helpers/pydev/pydev_runfiles.py
new file mode 100644
index 0000000..bb704db
--- /dev/null
+++ b/python/helpers/pydev/pydev_runfiles.py
@@ -0,0 +1,831 @@
+from __future__ import nested_scopes
+
+import fnmatch
+import os.path
+from pydev_runfiles_coverage import StartCoverageSupport
+import pydev_runfiles_unittest
+from pydevd_constants import * #@UnusedWildImport
+import re
+import time
+import unittest
+
+
+#=======================================================================================================================
+# Configuration
+#=======================================================================================================================
+class Configuration:
+
+    def __init__(
+        self,
+        files_or_dirs='',
+        verbosity=2,
+        include_tests=None,
+        tests=None,
+        port=None,
+        files_to_tests=None,
+        jobs=1,
+        split_jobs='tests',
+        coverage_output_dir=None,
+        coverage_include=None,
+        coverage_output_file=None,
+        exclude_files=None,
+        exclude_tests=None,
+        include_files=None,
+        django=False,
+        ):
+        self.files_or_dirs = files_or_dirs
+        self.verbosity = verbosity
+        self.include_tests = include_tests
+        self.tests = tests
+        self.port = port
+        self.files_to_tests = files_to_tests
+        self.jobs = jobs
+        self.split_jobs = split_jobs
+        self.django = django
+
+        if include_tests:
+            assert isinstance(include_tests, (list, tuple))
+
+        if exclude_files:
+            assert isinstance(exclude_files, (list, tuple))
+
+        if exclude_tests:
+            assert isinstance(exclude_tests, (list, tuple))
+
+        self.exclude_files = exclude_files
+        self.include_files = include_files
+        self.exclude_tests = exclude_tests
+
+        self.coverage_output_dir = coverage_output_dir
+        self.coverage_include = coverage_include
+        self.coverage_output_file = coverage_output_file
+
+    def __str__(self):
+        return '''Configuration
+ - files_or_dirs: %s
+ - verbosity: %s
+ - tests: %s
+ - port: %s
+ - files_to_tests: %s
+ - jobs: %s
+ - split_jobs: %s
+
+ - include_files: %s
+ - include_tests: %s
+
+ - exclude_files: %s
+ - exclude_tests: %s
+
+ - coverage_output_dir: %s
+ - coverage_include_dir: %s
+ - coverage_output_file: %s
+
+ - django: %s
+''' % (
+        self.files_or_dirs,
+        self.verbosity,
+        self.tests,
+        self.port,
+        self.files_to_tests,
+        self.jobs,
+        self.split_jobs,
+
+        self.include_files,
+        self.include_tests,
+
+        self.exclude_files,
+        self.exclude_tests,
+
+        self.coverage_output_dir,
+        self.coverage_include,
+        self.coverage_output_file,
+
+        self.django,
+    )
+
+
+#=======================================================================================================================
+# parse_cmdline
+#=======================================================================================================================
+def parse_cmdline(argv=None):
+    """
+    Parses the command line and returns a Configuration with the test directories, verbosity, test filters and related options
+
+    usage:
+        runfiles.py  -v|--verbosity <level>  -t|--tests <Test.test1,Test2>  dirs|files
+
+    Multiprocessing options:
+    --jobs=number (the number of jobs to be used to run the tests)
+    --split_jobs='module'|'tests'
+        if == 'module', a given job will always receive all the tests from a module
+        if == 'tests', the tests will be split independently of their originating module (default)
+
+    --exclude_files  = comma-separated list of patterns with files to exclude (fnmatch style)
+    --include_files = comma-separated list of patterns with files to include (fnmatch style)
+    --exclude_tests = comma-separated list of patterns with test names to exclude (fnmatch style)
+
+    Note: if --tests is given, --exclude_files, --include_files and --exclude_tests are ignored!
+    """
+    if argv is None:
+        argv = sys.argv
+
+    verbosity = 2
+    include_tests = None
+    tests = None
+    port = None
+    jobs = 1
+    split_jobs = 'tests'
+    files_to_tests = {}
+    coverage_output_dir = None
+    coverage_include = None
+    exclude_files = None
+    exclude_tests = None
+    include_files = None
+    django = False
+
+    from _pydev_getopt import gnu_getopt
+    optlist, dirs = gnu_getopt(
+        argv[1:], "",
+        [
+            "verbosity=",
+            "tests=",
+
+            "port=",
+            "config_file=",
+
+            "jobs=",
+            "split_jobs=",
+
+            "include_tests=",
+            "include_files=",
+
+            "exclude_files=",
+            "exclude_tests=",
+
+            "coverage_output_dir=",
+            "coverage_include=",
+
+            "django="
+        ]
+    )
+
+    for opt, value in optlist:
+        if opt in ("-v", "--verbosity"):
+            verbosity = value
+
+        elif opt in ("-p", "--port"):
+            port = int(value)
+
+        elif opt in ("-j", "--jobs"):
+            jobs = int(value)
+
+        elif opt in ("-s", "--split_jobs"):
+            split_jobs = value
+            if split_jobs not in ('module', 'tests'):
+                raise AssertionError('Expected split to be either "module" or "tests". Was: %s' % (split_jobs,))
+
+        elif opt in ("-d", "--coverage_output_dir",):
+            coverage_output_dir = value.strip()
+
+        elif opt in ("-i", "--coverage_include",):
+            coverage_include = value.strip()
+
+        elif opt in ("-I", "--include_tests"):
+            include_tests = value.split(',')
+
+        elif opt in ("-E", "--exclude_files"):
+            exclude_files = value.split(',')
+
+        elif opt in ("-F", "--include_files"):
+            include_files = value.split(',')
+
+        elif opt in ("-e", "--exclude_tests"):
+            exclude_tests = value.split(',')
+
+        elif opt in ("-t", "--tests"):
+            tests = value.split(',')
+
+        elif opt in ("--django",):
+            django = value.strip() in ['true', 'True', '1']
+
+        elif opt in ("-c", "--config_file"):
+            config_file = value.strip()
+            if os.path.exists(config_file):
+                f = open(config_file, 'rU')
+                try:
+                    config_file_contents = f.read()
+                finally:
+                    f.close()
+
+                if config_file_contents:
+                    config_file_contents = config_file_contents.strip()
+
+                if config_file_contents:
+                    for line in config_file_contents.splitlines():
+                        file_and_test = line.split('|')
+                        if len(file_and_test) == 2:
+                            file, test = file_and_test
+                            if DictContains(files_to_tests, file):
+                                files_to_tests[file].append(test)
+                            else:
+                                files_to_tests[file] = [test]
+
+            else:
+                sys.stderr.write('Could not find config file: %s\n' % (config_file,))
+
+    if type([]) != type(dirs):
+        dirs = [dirs]
+
+    ret_dirs = []
+    for d in dirs:
+        if '|' in d:
+            #paths may come from the ide separated by |
+            ret_dirs.extend(d.split('|'))
+        else:
+            ret_dirs.append(d)
+
+    verbosity = int(verbosity)
+
+    if tests:
+        if verbosity > 4:
+            sys.stdout.write('--tests provided. Ignoring --exclude_files, --exclude_tests and --include_files\n')
+        exclude_files = exclude_tests = include_files = None
+
+    config = Configuration(
+        ret_dirs,
+        verbosity,
+        include_tests,
+        tests,
+        port,
+        files_to_tests,
+        jobs,
+        split_jobs,
+        coverage_output_dir,
+        coverage_include,
+        exclude_files=exclude_files,
+        exclude_tests=exclude_tests,
+        include_files=include_files,
+        django=django,
+    )
+
+    if verbosity > 5:
+        sys.stdout.write(str(config) + '\n')
+    return config
+
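+# Illustrative sketch (not part of the original pydev sources; the path and patterns below are
+# made up, only meant to show the expected shapes): an argv such as
+#     ['runfiles.py', '--verbosity=3', '--jobs=2', '--split_jobs=tests', '--include_files=test_*.py', '/hypothetical/project/tests']
+# is parsed by parse_cmdline above into a Configuration with verbosity=3, jobs=2,
+# split_jobs='tests', include_files=['test_*.py'] and files_or_dirs=['/hypothetical/project/tests'].
+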
+
+#=======================================================================================================================
+# PydevTestRunner
+#=======================================================================================================================
+class PydevTestRunner(object):
+    """ finds and runs a file or directory of files as a unit test """
+
+    __py_extensions = ["*.py", "*.pyw"]
+    __exclude_files = ["__init__.*"]
+
+    #Just to check that only these attributes will be written to instances of this class
+    __slots__ = [
+        'verbosity',  #Always used
+
+        'files_to_tests',  #If this one is given, the ones below are not used
+
+        'files_or_dirs',  #Files or directories received in the command line
+        'include_tests',  #The filter used to collect the tests
+        'tests',  #Strings with the tests to be run
+
+        'jobs',  #Integer with the number of jobs that should be used to run the test cases
+        'split_jobs',  #String with 'tests' or 'module' (how should the jobs be split)
+
+        'configuration',
+        'coverage',
+    ]
+
+    def __init__(self, configuration):
+        self.verbosity = configuration.verbosity
+
+        self.jobs = configuration.jobs
+        self.split_jobs = configuration.split_jobs
+
+        files_to_tests = configuration.files_to_tests
+        if files_to_tests:
+            self.files_to_tests = files_to_tests
+            self.files_or_dirs = list(files_to_tests.keys())
+            self.tests = None
+        else:
+            self.files_to_tests = {}
+            self.files_or_dirs = configuration.files_or_dirs
+            self.tests = configuration.tests
+
+        self.configuration = configuration
+        self.__adjust_path()
+
+
+    def __adjust_path(self):
+        """ add the current file or directory to the python path """
+        path_to_append = None
+        for n in xrange(len(self.files_or_dirs)):
+            dir_name = self.__unixify(self.files_or_dirs[n])
+            if os.path.isdir(dir_name):
+                if not dir_name.endswith("/"):
+                    self.files_or_dirs[n] = dir_name + "/"
+                path_to_append = os.path.normpath(dir_name)
+            elif os.path.isfile(dir_name):
+                path_to_append = os.path.dirname(dir_name)
+            else:
+                if not os.path.exists(dir_name):
+                    block_line = '*' * 120
+                    sys.stderr.write('\n%s\n* PyDev test runner error: %s does not exist.\n%s\n' % (block_line, dir_name, block_line))
+                    return
+                msg = ("unknown type. \n%s\nshould be file or a directory.\n" % (dir_name))
+                raise RuntimeError(msg)
+        if path_to_append is not None:
+            #Add it as the last one (so, first things are resolved against the default dirs and
+            #if none resolves, then we try a relative import).
+            sys.path.append(path_to_append)
+
+    def __is_valid_py_file(self, fname):
+        """ tests that a particular file contains the proper file extension
+            and is not in the list of files to exclude """
+        is_valid_fname = 0
+        for invalid_fname in self.__class__.__exclude_files:
+            is_valid_fname += int(not fnmatch.fnmatch(fname, invalid_fname))
+        is_valid_ext = 0
+        for ext in self.__class__.__py_extensions:
+            is_valid_ext += int(fnmatch.fnmatch(fname, ext))
+        return is_valid_fname > 0 and is_valid_ext > 0
+
+    def __unixify(self, s):
+        """ stupid windows. converts the backslash to forwardslash for consistency """
+        return os.path.normpath(s).replace(os.sep, "/")
+
+    def __importify(self, s, dir=False):
+        """ turns directory separators into dots and removes the ".py*" extension
+            so the string can be used as import statement """
+        if not dir:
+            dirname, fname = os.path.split(s)
+
+            if fname.count('.') > 1:
+                #if there's a file named xxx.xx.py, it is not a valid module, so, let's not load it...
+                return
+
+            imp_stmt_pieces = [dirname.replace("\\", "/").replace("/", "."), os.path.splitext(fname)[0]]
+
+            if len(imp_stmt_pieces[0]) == 0:
+                imp_stmt_pieces = imp_stmt_pieces[1:]
+
+            return ".".join(imp_stmt_pieces)
+
+        else:  #handle dir
+            return s.replace("\\", "/").replace("/", ".")
+
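+    # Illustrative sketch (hypothetical paths, not from the original sources) of what
+    # __importify above produces:
+    #     self.__importify('tests/test_math.py')     -> 'tests.test_math'
+    #     self.__importify('tests/data.backup.py')   -> None  (more than one '.' in the file name)
+    #     self.__importify('tests/sub', dir=True)    -> 'tests.sub'
+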
+    def __add_files(self, pyfiles, root, files):
+        """ if files match, appends them to pyfiles. used by os.path.walk fcn """
+        for fname in files:
+            if self.__is_valid_py_file(fname):
+                name_without_base_dir = self.__unixify(os.path.join(root, fname))
+                pyfiles.append(name_without_base_dir)
+
+
+    def find_import_files(self):
+        """ return a list of files to import """
+        if self.files_to_tests:
+            pyfiles = self.files_to_tests.keys()
+        else:
+            pyfiles = []
+
+            for base_dir in self.files_or_dirs:
+                if os.path.isdir(base_dir):
+                    if hasattr(os, 'walk'):
+                        for root, dirs, files in os.walk(base_dir):
+
+                            #Note: handling directories that should be excluded from the search because
+                            #they don't have __init__.py
+                            exclude = {}
+                            for d in dirs:
+                                for init in ['__init__.py', '__init__.pyo', '__init__.pyc', '__init__.pyw']:
+                                    if os.path.exists(os.path.join(root, d, init).replace('\\', '/')):
+                                        break
+                                else:
+                                    exclude[d] = 1
+
+                            if exclude:
+                                new = []
+                                for d in dirs:
+                                    if d not in exclude:
+                                        new.append(d)
+
+                                dirs[:] = new
+
+                            self.__add_files(pyfiles, root, files)
+                    else:
+                        # jython2.1 is too old for os.walk!
+                        os.path.walk(base_dir, self.__add_files, pyfiles)
+
+                elif os.path.isfile(base_dir):
+                    pyfiles.append(base_dir)
+
+        if self.configuration.exclude_files or self.configuration.include_files:
+            ret = []
+            for f in pyfiles:
+                add = True
+                basename = os.path.basename(f)
+                if self.configuration.include_files:
+                    add = False
+
+                    for pat in self.configuration.include_files:
+                        if fnmatch.fnmatchcase(basename, pat):
+                            add = True
+                            break
+
+                if not add:
+                    if self.verbosity > 3:
+                        sys.stdout.write('Skipped file: %s (did not match any include_files pattern: %s)\n' % (f, self.configuration.include_files))
+
+                elif self.configuration.exclude_files:
+                    for pat in self.configuration.exclude_files:
+                        if fnmatch.fnmatchcase(basename, pat):
+                            if self.verbosity > 3:
+                                sys.stdout.write('Skipped file: %s (matched exclude_files pattern: %s)\n' % (f, pat))
+
+                            elif self.verbosity > 2:
+                                sys.stdout.write('Skipped file: %s\n' % (f,))
+
+                            add = False
+                            break
+
+                if add:
+                    if self.verbosity > 3:
+                        sys.stdout.write('Adding file: %s for test discovery.\n' % (f,))
+                    ret.append(f)
+
+            pyfiles = ret
+
+
+        return pyfiles
+
+    def __get_module_from_str(self, modname, print_exception, pyfile):
+        """ Import the module in the given import path.
+            * Returns the "final" module, so importing "coilib40.subject.visu"
+            returns the "visu" module, not the "coilib40" as returned by __import__ """
+        try:
+            mod = __import__(modname)
+            for part in modname.split('.')[1:]:
+                mod = getattr(mod, part)
+            return mod
+        except:
+            if print_exception:
+                import pydev_runfiles_xml_rpc
+                import pydevd_io
+                buf_err = pydevd_io.StartRedirect(keep_original_redirection=True, std='stderr')
+                buf_out = pydevd_io.StartRedirect(keep_original_redirection=True, std='stdout')
+                try:
+                    import traceback;traceback.print_exc()
+                    sys.stderr.write('ERROR: Module: %s could not be imported (file: %s).\n' % (modname, pyfile))
+                finally:
+                    pydevd_io.EndRedirect('stderr')
+                    pydevd_io.EndRedirect('stdout')
+
+                pydev_runfiles_xml_rpc.notifyTest(
+                    'error', buf_out.getvalue(), buf_err.getvalue(), pyfile, modname, 0)
+
+            return None
+
+    def find_modules_from_files(self, pyfiles):
+        """ returns a list of modules given a list of files """
+        #let's make sure that the paths we want are in the pythonpath...
+        imports = [(s, self.__importify(s)) for s in pyfiles]
+
+        system_paths = []
+        for s in sys.path:
+            system_paths.append(self.__importify(s, True))
+
+
+        ret = []
+        for pyfile, imp in imports:
+            if imp is None:
+                continue  #can happen if a file is not a valid module
+            choices = []
+            for s in system_paths:
+                if imp.startswith(s):
+                    add = imp[len(s) + 1:]
+                    if add:
+                        choices.append(add)
+                    #sys.stdout.write(' ' + add + ' ')
+
+            if not choices:
+                sys.stdout.write('PYTHONPATH not found for file: %s\n' % imp)
+            else:
+                for i, import_str in enumerate(choices):
+                    print_exception = i == len(choices) - 1
+                    mod = self.__get_module_from_str(import_str, print_exception, pyfile)
+                    if mod is not None:
+                        ret.append((pyfile, mod, import_str))
+                        break
+
+
+        return ret
+
+    #===================================================================================================================
+    # GetTestCaseNames
+    #===================================================================================================================
+    class GetTestCaseNames:
+        """Yes, we need a class for that (cannot use outer context on jython 2.1)"""
+
+        def __init__(self, accepted_classes, accepted_methods):
+            self.accepted_classes = accepted_classes
+            self.accepted_methods = accepted_methods
+
+        def __call__(self, testCaseClass):
+            """Return a sorted sequence of method names found within testCaseClass"""
+            testFnNames = []
+            className = testCaseClass.__name__
+
+            if DictContains(self.accepted_classes, className):
+                for attrname in dir(testCaseClass):
+                    #If a class is chosen, we select all the 'test' methods
+                    if attrname.startswith('test') and hasattr(getattr(testCaseClass, attrname), '__call__'):
+                        testFnNames.append(attrname)
+
+            else:
+                for attrname in dir(testCaseClass):
+                    #If we have the class+method name, we must do a full check and have an exact match.
+                    if DictContains(self.accepted_methods, className + '.' + attrname):
+                        if hasattr(getattr(testCaseClass, attrname), '__call__'):
+                            testFnNames.append(attrname)
+
+            #sorted() is not available in jython 2.1
+            testFnNames.sort()
+            return testFnNames
+
+
+    def _decorate_test_suite(self, suite, pyfile, module_name):
+        if isinstance(suite, unittest.TestSuite):
+            add = False
+            suite.__pydev_pyfile__ = pyfile
+            suite.__pydev_module_name__ = module_name
+
+            for t in suite._tests:
+                t.__pydev_pyfile__ = pyfile
+                t.__pydev_module_name__ = module_name
+                if self._decorate_test_suite(t, pyfile, module_name):
+                    add = True
+
+            return add
+
+        elif isinstance(suite, unittest.TestCase):
+            return True
+
+        else:
+            return False
+
+
+
+    def find_tests_from_modules(self, file_and_modules_and_module_name):
+        """ returns the unittests given a list of modules """
+        #Use our own suite!
+        unittest.TestLoader.suiteClass = pydev_runfiles_unittest.PydevTestSuite
+        loader = unittest.TestLoader()
+
+        ret = []
+        if self.files_to_tests:
+            for pyfile, m, module_name in file_and_modules_and_module_name:
+                accepted_classes = {}
+                accepted_methods = {}
+                tests = self.files_to_tests[pyfile]
+                for t in tests:
+                    accepted_methods[t] = t
+
+                loader.getTestCaseNames = self.GetTestCaseNames(accepted_classes, accepted_methods)
+
+                suite = loader.loadTestsFromModule(m)
+                if self._decorate_test_suite(suite, pyfile, module_name):
+                    ret.append(suite)
+            return ret
+
+
+        if self.tests:
+            accepted_classes = {}
+            accepted_methods = {}
+
+            for t in self.tests:
+                splitted = t.split('.')
+                if len(splitted) == 1:
+                    accepted_classes[t] = t
+
+                elif len(splitted) == 2:
+                    accepted_methods[t] = t
+
+            loader.getTestCaseNames = self.GetTestCaseNames(accepted_classes, accepted_methods)
+
+
+        for pyfile, m, module_name in file_and_modules_and_module_name:
+            suite = loader.loadTestsFromModule(m)
+            if self._decorate_test_suite(suite, pyfile, module_name):
+                ret.append(suite)
+
+        return ret
+
+
+    def filter_tests(self, test_objs, internal_call=False):
+        """ based on a filter name, only return those tests that have
+            the test case names that match """
+        if not internal_call:
+            if not self.configuration.include_tests and not self.tests and not self.configuration.exclude_tests:
+                #No need to filter if we have nothing to filter!
+                return test_objs
+
+            if self.verbosity > 1:
+                if self.configuration.include_tests:
+                    sys.stdout.write('Tests to include: %s\n' % (self.configuration.include_tests,))
+
+                if self.tests:
+                    sys.stdout.write('Tests to run: %s\n' % (self.tests,))
+
+                if self.configuration.exclude_tests:
+                    sys.stdout.write('Tests to exclude: %s\n' % (self.configuration.exclude_tests,))
+
+        test_suite = []
+        for test_obj in test_objs:
+
+            if isinstance(test_obj, unittest.TestSuite):
+                #Note: keep the suites as they are and just 'fix' the tests (so, don't use the iter_tests).
+                if test_obj._tests:
+                    test_obj._tests = self.filter_tests(test_obj._tests, True)
+                    if test_obj._tests:  #Only add the suite if we still have tests there.
+                        test_suite.append(test_obj)
+
+            elif isinstance(test_obj, unittest.TestCase):
+                try:
+                    testMethodName = test_obj._TestCase__testMethodName
+                except AttributeError:
+                    #changed in python 2.5
+                    testMethodName = test_obj._testMethodName
+
+                add = True
+                if self.configuration.exclude_tests:
+                    for pat in self.configuration.exclude_tests:
+                        if fnmatch.fnmatchcase(testMethodName, pat):
+                            if self.verbosity > 3:
+                                sys.stdout.write('Skipped test: %s (matched exclude_tests pattern: %s)\n' % (testMethodName, pat))
+
+                            elif self.verbosity > 2:
+                                sys.stdout.write('Skipped test: %s\n' % (testMethodName,))
+
+                            add = False
+                            break
+
+                if add:
+                    if self.__match_tests(self.tests, test_obj, testMethodName):
+                        include = True
+                        if self.configuration.include_tests:
+                            include = False
+                            for pat in self.configuration.include_tests:
+                                if fnmatch.fnmatchcase(testMethodName, pat):
+                                    include = True
+                                    break
+                        if include:
+                            test_suite.append(test_obj)
+                        else:
+                            if self.verbosity > 3:
+                                sys.stdout.write('Skipped test: %s (did not match any include_tests pattern: %s)\n' % (testMethodName, self.configuration.include_tests,))
+        return test_suite
+
+
+    def iter_tests(self, test_objs):
+        #Note: not using yield because of Jython 2.1.
+        tests = []
+        for test_obj in test_objs:
+            if isinstance(test_obj, unittest.TestSuite):
+                tests.extend(self.iter_tests(test_obj._tests))
+
+            elif isinstance(test_obj, unittest.TestCase):
+                tests.append(test_obj)
+        return tests
+
+
+    def list_test_names(self, test_objs):
+        names = []
+        for tc in self.iter_tests(test_objs):
+            try:
+                testMethodName = tc._TestCase__testMethodName
+            except AttributeError:
+                #changed in python 2.5
+                testMethodName = tc._testMethodName
+            names.append(testMethodName)
+        return names
+
+
+    def __match_tests(self, tests, test_case, test_method_name):
+        if not tests:
+            return 1
+
+        for t in tests:
+            class_and_method = t.split('.')
+            if len(class_and_method) == 1:
+                #only class name
+                if class_and_method[0] == test_case.__class__.__name__:
+                    return 1
+
+            elif len(class_and_method) == 2:
+                if class_and_method[0] == test_case.__class__.__name__ and class_and_method[1] == test_method_name:
+                    return 1
+
+        return 0
+
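+    # Illustrative sketch (hypothetical test names): with tests == ['MathTest', 'OtherTest.test_foo'],
+    # __match_tests above returns 1 for any method of a test case whose class is named 'MathTest' and
+    # for 'OtherTest.test_foo' specifically, but 0 for e.g. 'OtherTest.test_bar'.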
+
+    def __match(self, filter_list, name):
+        """ returns whether a test name matches the test filter """
+        if filter_list is None:
+            return 1
+        for f in filter_list:
+            if re.match(f, name):
+                return 1
+        return 0
+
+
+    def run_tests(self, handle_coverage=True):
+        """ runs all tests """
+        sys.stdout.write("Finding files... ")
+        files = self.find_import_files()
+        if self.verbosity > 3:
+            sys.stdout.write('%s ... done.\n' % (self.files_or_dirs))
+        else:
+            sys.stdout.write('done.\n')
+        sys.stdout.write("Importing test modules ... ")
+
+
+        if handle_coverage:
+            coverage_files, coverage = StartCoverageSupport(self.configuration)
+
+        file_and_modules_and_module_name = self.find_modules_from_files(files)
+        sys.stdout.write("done.\n")
+
+        all_tests = self.find_tests_from_modules(file_and_modules_and_module_name)
+        all_tests = self.filter_tests(all_tests)
+
+        test_suite = pydev_runfiles_unittest.PydevTestSuite(all_tests)
+        import pydev_runfiles_xml_rpc
+        pydev_runfiles_xml_rpc.notifyTestsCollected(test_suite.countTestCases())
+
+        start_time = time.time()
+
+        def run_tests():
+            executed_in_parallel = False
+            if self.jobs > 1:
+                import pydev_runfiles_parallel
+
+                #What may happen is that the number of jobs needed is lower than the number of jobs requested
+                #(e.g.: 2 jobs were requested for running 1 test) -- in which case ExecuteTestsInParallel will
+                #return False and won't run any tests.
+                executed_in_parallel = pydev_runfiles_parallel.ExecuteTestsInParallel(
+                    all_tests, self.jobs, self.split_jobs, self.verbosity, coverage_files, self.configuration.coverage_include)
+
+            if not executed_in_parallel:
+                #If in coverage, we don't need to pass anything here (coverage is already enabled for this execution).
+                runner = pydev_runfiles_unittest.PydevTextTestRunner(stream=sys.stdout, descriptions=1, verbosity=self.verbosity)
+                sys.stdout.write('\n')
+                runner.run(test_suite)
+
+        if self.configuration.django:
+            MyDjangoTestSuiteRunner(run_tests).run_tests([])
+        else:
+            run_tests()
+
+        if handle_coverage:
+            coverage.stop()
+            coverage.save()
+
+        total_time = 'Finished in: %.2f secs.' % (time.time() - start_time,)
+        pydev_runfiles_xml_rpc.notifyTestRunFinished(total_time)
+
+
+try:
+    from django.test.simple import DjangoTestSuiteRunner
+except:
+    class DjangoTestSuiteRunner:
+        def __init__(self):
+            pass
+
+        def run_tests(self, *args, **kwargs):
+            raise AssertionError("Unable to run suite with DjangoTestSuiteRunner because it couldn't be imported.")
+
+class MyDjangoTestSuiteRunner(DjangoTestSuiteRunner):
+
+    def __init__(self, on_run_suite):
+        DjangoTestSuiteRunner.__init__(self)
+        self.on_run_suite = on_run_suite
+
+    def build_suite(self, *args, **kwargs):
+        pass
+
+    def suite_result(self, *args, **kwargs):
+        pass
+
+    def run_suite(self, *args, **kwargs):
+        self.on_run_suite()
+
+
+#=======================================================================================================================
+# main
+#=======================================================================================================================
+def main(configuration):
+    PydevTestRunner(configuration).run_tests()
diff --git a/python/helpers/pydev/pydev_runfiles_coverage.py b/python/helpers/pydev/pydev_runfiles_coverage.py
new file mode 100644
index 0000000..55bec06
--- /dev/null
+++ b/python/helpers/pydev/pydev_runfiles_coverage.py
@@ -0,0 +1,76 @@
+import os.path
+import sys
+from pydevd_constants import Null
+
+
+#=======================================================================================================================
+# GetCoverageFiles
+#=======================================================================================================================
+def GetCoverageFiles(coverage_output_dir, number_of_files):
+    base_dir = coverage_output_dir
+    ret = []
+    i = 0
+    while len(ret) < number_of_files:
+        while True:
+            f = os.path.join(base_dir, '.coverage.%s' % i)
+            i += 1
+            if not os.path.exists(f):
+                ret.append(f)
+                break #Break only the inner while.
+    return ret
+
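+# Illustrative sketch (assuming a hypothetical, initially empty directory '/tmp/cov'):
+# GetCoverageFiles('/tmp/cov', 3) would return
+#     ['/tmp/cov/.coverage.0', '/tmp/cov/.coverage.1', '/tmp/cov/.coverage.2']
+# skipping any index for which a file already exists on disk.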
+
+#=======================================================================================================================
+# StartCoverageSupport
+#=======================================================================================================================
+def StartCoverageSupport(configuration):
+    return StartCoverageSupportFromParams(
+        configuration.coverage_output_dir, 
+        configuration.coverage_output_file, 
+        configuration.jobs, 
+        configuration.coverage_include, 
+    )
+    
+
+#=======================================================================================================================
+# StartCoverageSupportFromParams
+#=======================================================================================================================
+def StartCoverageSupportFromParams(coverage_output_dir, coverage_output_file, jobs, coverage_include):
+    coverage_files = []
+    coverage_instance = Null()
+    if coverage_output_dir or coverage_output_file:
+        try:
+            import coverage #@UnresolvedImport
+        except:
+            sys.stderr.write('Error: coverage module could not be imported\n')
+            sys.stderr.write('Please make sure that the coverage module (http://nedbatchelder.com/code/coverage/)\n')
+            sys.stderr.write('is properly installed in your interpreter: %s\n' % (sys.executable,))
+            
+            import traceback;traceback.print_exc()
+        else:
+            if coverage_output_dir:
+                if not os.path.exists(coverage_output_dir):
+                    sys.stderr.write('Error: directory for coverage output (%s) does not exist.\n' % (coverage_output_dir,))
+                    
+                elif not os.path.isdir(coverage_output_dir):
+                    sys.stderr.write('Error: expected (%s) to be a directory.\n' % (coverage_output_dir,))
+                    
+                else:
+                    n = jobs
+                    if n <= 0:
+                        n += 1
+                    n += 1 #Add 1 more for the current process (which will do the initial import).
+                    coverage_files = GetCoverageFiles(coverage_output_dir, n)
+                    os.environ['COVERAGE_FILE'] = coverage_files.pop(0)
+                    
+                    coverage_instance = coverage.coverage(source=[coverage_include])
+                    coverage_instance.start()
+                    
+            elif coverage_output_file:
+                #Client of parallel run.
+                os.environ['COVERAGE_FILE'] = coverage_output_file
+                coverage_instance = coverage.coverage(source=[coverage_include])
+                coverage_instance.start()
+                
+    return coverage_files, coverage_instance
+
diff --git a/python/helpers/pydev/pydev_runfiles_nose.py b/python/helpers/pydev/pydev_runfiles_nose.py
new file mode 100644
index 0000000..422d2a6
--- /dev/null
+++ b/python/helpers/pydev/pydev_runfiles_nose.py
@@ -0,0 +1,180 @@
+from nose.plugins.multiprocess import MultiProcessTestRunner  # @UnresolvedImport
+from nose.plugins.base import Plugin  # @UnresolvedImport
+import sys
+import pydev_runfiles_xml_rpc
+import time
+from pydev_runfiles_coverage import StartCoverageSupport
+
+#=======================================================================================================================
+# PydevPlugin
+#=======================================================================================================================
+class PydevPlugin(Plugin):
+
+    def __init__(self, configuration):
+        self.configuration = configuration
+        Plugin.__init__(self)
+
+
+    def begin(self):
+        # Called before any test is run (it's always called, with multiprocess or not)
+        self.start_time = time.time()
+        self.coverage_files, self.coverage = StartCoverageSupport(self.configuration)
+
+
+    def finalize(self, result):
+        # Called after all tests are run (it's always called, with multiprocess or not)
+        self.coverage.stop()
+        self.coverage.save()
+
+        pydev_runfiles_xml_rpc.notifyTestRunFinished('Finished in: %.2f secs.' % (time.time() - self.start_time,))
+
+
+
+    #===================================================================================================================
+    # Methods below are not called with multiprocess (so, we monkey-patch MultiProcessTestRunner.consolidate
+    # so that they're called, but unfortunately we lose some info -- i.e.: the time for each test in this
+    # process).
+    #===================================================================================================================
+
+
+    def reportCond(self, cond, test, captured_output, error=''):
+        '''
+        @param cond: fail, error, ok
+        '''
+
+        # test.address() is something as:
+        # ('D:\\workspaces\\temp\\test_workspace\\pytesting1\\src\\mod1\\hello.py', 'mod1.hello', 'TestCase.testMet1')
+        #
+        # and we must pass: location, test
+        #    E.g.: ['D:\\src\\mod1\\hello.py', 'TestCase.testMet1']
+        try:
+            if hasattr(test, 'address'):
+                address = test.address()
+                address = address[0], address[2]
+            else:
+                # multiprocess
+                try:
+                    address = test[0], test[1]
+                except TypeError:
+                    # It may be an error at setup, in which case it's not really a test, but a Context object.
+                    f = test.context.__file__
+                    if f.endswith('.pyc'):
+                        f = f[:-1]
+                    address = f, '?'
+        except:
+            sys.stderr.write("PyDev: Internal pydev error getting test address. Please report at the pydev bug tracker\n")
+            import traceback;traceback.print_exc()
+            sys.stderr.write("\n\n\n")
+            address = '?', '?'
+
+        error_contents = self.getIoFromError(error)
+        try:
+            time_str = '%.2f' % (time.time() - test._pydev_start_time)
+        except:
+            time_str = '?'
+
+        pydev_runfiles_xml_rpc.notifyTest(cond, captured_output, error_contents, address[0], address[1], time_str)
+
+
+    def startTest(self, test):
+        test._pydev_start_time = time.time()
+        if hasattr(test, 'address'):
+            address = test.address()
+            file, test = address[0], address[2]
+        else:
+            # multiprocess
+            file, test = test
+        pydev_runfiles_xml_rpc.notifyStartTest(file, test)
+
+
+    def getIoFromError(self, err):
+        if type(err) == type(()):
+            if len(err) != 3:
+                if len(err) == 2:
+                    return err[1]  # multiprocess
+            try:
+                from StringIO import StringIO
+            except:
+                from io import StringIO
+            s = StringIO()
+            etype, value, tb = err
+            import traceback;traceback.print_exception(etype, value, tb, file=s)
+            return s.getvalue()
+        return err
+
+
+    def getCapturedOutput(self, test):
+        if hasattr(test, 'capturedOutput') and test.capturedOutput:
+            return test.capturedOutput
+        return ''
+
+
+    def addError(self, test, err):
+        self.reportCond(
+            'error',
+            test,
+            self.getCapturedOutput(test),
+            err,
+        )
+
+
+    def addFailure(self, test, err):
+        self.reportCond(
+            'fail',
+            test,
+            self.getCapturedOutput(test),
+            err,
+        )
+
+
+    def addSuccess(self, test):
+        self.reportCond(
+            'ok',
+            test,
+            self.getCapturedOutput(test),
+            '',
+        )
+
+
+PYDEV_NOSE_PLUGIN_SINGLETON = None
+def StartPydevNosePluginSingleton(configuration):
+    global PYDEV_NOSE_PLUGIN_SINGLETON
+    PYDEV_NOSE_PLUGIN_SINGLETON = PydevPlugin(configuration)
+    return PYDEV_NOSE_PLUGIN_SINGLETON
+
+
+
+
+original = MultiProcessTestRunner.consolidate
+#=======================================================================================================================
+# NewConsolidate
+#=======================================================================================================================
+def NewConsolidate(self, result, batch_result):
+    '''
+    Used so that it can work with the multiprocess plugin.
+    Monkeypatched because nose seems a bit unsupported at this time (ideally
+    the plugin would have this support by default).
+    '''
+    ret = original(self, result, batch_result)
+
+    parent_frame = sys._getframe().f_back
+    # addr is something as D:\pytesting1\src\mod1\hello.py:TestCase.testMet4
+    # so, convert it to what reportCond expects
+    addr = parent_frame.f_locals['addr']
+    i = addr.rindex(':')
+    addr = [addr[:i], addr[i + 1:]]
+
+    output, testsRun, failures, errors, errorClasses = batch_result
+    if failures or errors:
+        for failure in failures:
+            PYDEV_NOSE_PLUGIN_SINGLETON.reportCond('fail', addr, output, failure)
+
+        for error in errors:
+            PYDEV_NOSE_PLUGIN_SINGLETON.reportCond('error', addr, output, error)
+    else:
+        PYDEV_NOSE_PLUGIN_SINGLETON.reportCond('ok', addr, output)
+
+
+    return ret
+
+MultiProcessTestRunner.consolidate = NewConsolidate
diff --git a/python/helpers/pydev/pydev_runfiles_parallel.py b/python/helpers/pydev/pydev_runfiles_parallel.py
new file mode 100644
index 0000000..e14f36d
--- /dev/null
+++ b/python/helpers/pydev/pydev_runfiles_parallel.py
@@ -0,0 +1,298 @@
+import unittest
+try:
+    import Queue
+except:
+    import queue as Queue #@UnresolvedImport
+from pydevd_constants import * #@UnusedWildImport
+import pydev_runfiles_xml_rpc
+import time
+import os
+
+#=======================================================================================================================
+# FlattenTestSuite
+#=======================================================================================================================
+def FlattenTestSuite(test_suite, ret):
+    if isinstance(test_suite, unittest.TestSuite):
+        for t in test_suite._tests:
+            FlattenTestSuite(t, ret)
+            
+    elif isinstance(test_suite, unittest.TestCase):
+        ret.append(test_suite)
+
+
+#=======================================================================================================================
+# ExecuteTestsInParallel
+#=======================================================================================================================
+def ExecuteTestsInParallel(tests, jobs, split, verbosity, coverage_files, coverage_include):
+    '''
+    @param tests: list(PydevTestSuite)
+        A list with the suites to be run
+        
+    @param split: str
+        Either 'module' or 'tests' (whether each job receives whole modules or individually split tests)
+        
+    @param coverage_files: list(file)
+        A list with the files that should be used for giving coverage information (if empty, coverage information 
+        should not be gathered). 
+        
+    @param coverage_include: str
+        The pattern that should be included in the coverage.
+        
+    @return: bool
+        Returns True if the tests were actually executed in parallel. If the tests were not executed because only 1
+        should be used (e.g.: 2 jobs were requested for running 1 test), False will be returned and no tests will be
+        run.
+        
+        It may also return False if in debug mode (in which case, multi-processes are not accepted) 
+    '''
+    try:
+        from pydevd_comm import GetGlobalDebugger
+        if GetGlobalDebugger() is not None:
+            return False
+    except:
+        pass #Ignore any error here.
+    
+    #This queue will receive the tests to be run. Each entry in the queue is a list with the tests to be run together.
+    #When split == 'tests', each list will have a single element; when split == 'module', each list will have all the
+    #tests from a given module.
+    tests_queue = []
+    
+    queue_elements = []
+    if split == 'module':
+        module_to_tests = {}
+        for test in tests:
+            lst = []
+            FlattenTestSuite(test, lst)
+            for test in lst:
+                key = (test.__pydev_pyfile__, test.__pydev_module_name__)
+                module_to_tests.setdefault(key, []).append(test)
+        
+        for key, tests in module_to_tests.items():
+            queue_elements.append(tests)
+            
+        if len(queue_elements) < jobs:
+            #Don't create jobs we will never use.
+            jobs = len(queue_elements)
+    
+    elif split == 'tests':
+        for test in tests:
+            lst = []
+            FlattenTestSuite(test, lst)
+            for test in lst:
+                queue_elements.append([test])
+                
+        if len(queue_elements) < jobs:
+            #Don't create jobs we will never use.
+            jobs = len(queue_elements)
+    
+    else:
+        raise AssertionError('Do not know how to handle: %s' % (split,))
+    
+    for test_cases in queue_elements:
+        test_queue_elements = []
+        for test_case in test_cases:
+            try:
+                test_name = test_case.__class__.__name__+"."+test_case._testMethodName
+            except AttributeError:
+                #Support for jython 2.1 (__testMethodName is pseudo-private in the test case)
+                test_name = test_case.__class__.__name__+"."+test_case._TestCase__testMethodName
+
+            test_queue_elements.append(test_case.__pydev_pyfile__+'|'+test_name)
+        
+        tests_queue.append(test_queue_elements)
+        
+    if jobs < 2:
+        return False
+        
+    sys.stdout.write('Running tests in parallel with: %s jobs.\n' %(jobs,))
+
+    
+    queue = Queue.Queue()
+    for item in tests_queue:
+        queue.put(item, block=False)
+
+    
+    providers = []
+    clients = []
+    for i in range(jobs):
+        test_cases_provider = CommunicationThread(queue)
+        providers.append(test_cases_provider)
+        
+        test_cases_provider.start()
+        port = test_cases_provider.port
+        
+        if coverage_files:
+            clients.append(ClientThread(i, port, verbosity, coverage_files.pop(0), coverage_include))
+        else:
+            clients.append(ClientThread(i, port, verbosity))
+        
+    for client in clients:
+        client.start()
+
+    client_alive = True
+    while client_alive:    
+        client_alive = False
+        for client in clients:
+            #Wait for all the clients to exit.
+            if not client.finished:
+                client_alive = True
+                time.sleep(.2)
+                break
+    
+    for provider in providers:
+        provider.shutdown()
+        
+    return True
+    
+    
+    
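+# Illustrative sketch (hypothetical file and test names) of the work items ExecuteTestsInParallel
+# puts on the queue above: with split == 'tests' each entry is a single-element list such as
+#     ['/hypothetical/tests/test_math.py|MathTest.test_add']
+# while with split == 'module' all the tests of a module are grouped into one list, e.g.
+#     ['/hypothetical/tests/test_math.py|MathTest.test_add',
+#      '/hypothetical/tests/test_math.py|MathTest.test_sub']
+# Each job (a client process launched by ClientThread below) repeatedly asks its CommunicationThread
+# for one such entry via GetTestsToRun until the queue is exhausted.
+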
+#=======================================================================================================================
+# CommunicationThread
+#=======================================================================================================================
+class CommunicationThread(threading.Thread):
+    
+    def __init__(self, tests_queue):
+        threading.Thread.__init__(self)
+        self.setDaemon(True)
+        self.queue = tests_queue
+        self.finished = False
+        from pydev_imports import SimpleXMLRPCServer
+        
+        
+        # This is a hack to patch slow socket.getfqdn calls that
+        # BaseHTTPServer (and its subclasses) make.
+        # See: http://bugs.python.org/issue6085
+        # See: http://www.answermysearches.com/xmlrpc-server-slow-in-python-how-to-fix/2140/
+        try:
+            import BaseHTTPServer
+            def _bare_address_string(self):
+                host, port = self.client_address[:2]
+                return '%s' % host
+            BaseHTTPServer.BaseHTTPRequestHandler.address_string = _bare_address_string
+            
+        except:
+            pass
+        # End hack.
+
+
+        # Create server
+        
+        import pydev_localhost
+        server = SimpleXMLRPCServer((pydev_localhost.get_localhost(), 0), logRequests=False)
+        server.register_function(self.GetTestsToRun)
+        server.register_function(self.notifyStartTest)
+        server.register_function(self.notifyTest)
+        server.register_function(self.notifyCommands)
+        self.port = server.socket.getsockname()[1]
+        self.server = server
+        
+        
+    def GetTestsToRun(self, job_id):
+        '''
+        @param job_id:
+        
+        @return: list(str)
+            Each entry is a string in the format: filename|Test.testName 
+        '''
+        try:
+            ret = self.queue.get(block=False)
+            return ret
+        except: #Any exception getting from the queue (empty or not) means we finished our work on providing the tests.
+            self.finished = True
+            return []
+
+
+    def notifyCommands(self, job_id, commands):
+        #Batch notification.
+        for command in commands:
+            getattr(self, command[0])(job_id, *command[1], **command[2])
+            
+        return True
+
+    def notifyStartTest(self, job_id, *args, **kwargs):
+        pydev_runfiles_xml_rpc.notifyStartTest(*args, **kwargs)
+        return True
+    
+    
+    def notifyTest(self, job_id, *args, **kwargs):
+        pydev_runfiles_xml_rpc.notifyTest(*args, **kwargs)
+        return True
+    
+    def shutdown(self):
+        if hasattr(self.server, 'shutdown'):
+            self.server.shutdown()
+        else:
+            self._shutdown = True
+    
+    def run(self):
+        if hasattr(self.server, 'shutdown'):
+            self.server.serve_forever()
+        else:
+            self._shutdown = False
+            while not self._shutdown:
+                self.server.handle_request()
+        
+    
+    
+#=======================================================================================================================
+# Client
+#=======================================================================================================================
+class ClientThread(threading.Thread):
+    
+    def __init__(self, job_id, port, verbosity, coverage_output_file=None, coverage_include=None):
+        threading.Thread.__init__(self)
+        self.setDaemon(True)
+        self.port = port
+        self.job_id = job_id
+        self.verbosity = verbosity
+        self.finished = False
+        self.coverage_output_file = coverage_output_file
+        self.coverage_include = coverage_include
+
+
+    def _reader_thread(self, pipe, target):
+        while True:
+            target.write(pipe.read(1))
+            
+        
+    def run(self):
+        try:
+            import pydev_runfiles_parallel_client
+            #TODO: Support Jython:
+            #
+            #For jython, instead of using sys.executable, we should use:
+            #r'D:\bin\jdk_1_5_09\bin\java.exe',
+            #'-classpath',
+            #'D:/bin/jython-2.2.1/jython.jar',
+            #'org.python.util.jython',
+                
+            args = [
+                sys.executable, 
+                pydev_runfiles_parallel_client.__file__, 
+                str(self.job_id), 
+                str(self.port), 
+                str(self.verbosity), 
+            ]
+            
+            if self.coverage_output_file and self.coverage_include:
+                args.append(self.coverage_output_file)
+                args.append(self.coverage_include)
+            
+            import subprocess
+            if False:
+                proc = subprocess.Popen(args, env=os.environ, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+                
+                stdout_thread = threading.Thread(target=self._reader_thread,args=(proc.stdout, sys.stdout))
+                stdout_thread.setDaemon(True)
+                stdout_thread.start()
+    
+                stderr_thread = threading.Thread(target=self._reader_thread,args=(proc.stderr, sys.stderr))
+                stderr_thread.setDaemon(True)
+                stderr_thread.start()
+            else:
+                proc = subprocess.Popen(args, env=os.environ, shell=False)
+                proc.wait()
+
+        finally:
+            self.finished = True
+
diff --git a/python/helpers/pydev/pydev_runfiles_parallel_client.py b/python/helpers/pydev/pydev_runfiles_parallel_client.py
new file mode 100644
index 0000000..7e5187e
--- /dev/null
+++ b/python/helpers/pydev/pydev_runfiles_parallel_client.py
@@ -0,0 +1,214 @@
+from pydevd_constants import * #@UnusedWildImport
+from pydev_imports import xmlrpclib, _queue
+Queue = _queue.Queue
+import traceback
+from pydev_runfiles_coverage import StartCoverageSupportFromParams
+
+
+
+#=======================================================================================================================
+# ParallelNotification
+#=======================================================================================================================
+class ParallelNotification(object):
+
+    def __init__(self, method, args, kwargs):
+        self.method = method
+        self.args = args
+        self.kwargs = kwargs
+
+    def ToTuple(self):
+        return self.method, self.args, self.kwargs
+
+
+#=======================================================================================================================
+# KillServer
+#=======================================================================================================================
+class KillServer(object):
+    pass
+
+
+
+#=======================================================================================================================
+# ServerComm
+#=======================================================================================================================
+class ServerComm(threading.Thread):
+
+
+
+    def __init__(self, job_id, server):
+        self.notifications_queue = Queue()
+        threading.Thread.__init__(self)
+        self.setDaemon(False) #Wait for all the notifications to be passed before exiting!
+        assert job_id is not None
+        assert port is not None #Note: relies on the module-level 'port' global set in the __main__ block below.
+        self.job_id = job_id
+
+        self.finished = False
+        self.server = server
+
+
+    def run(self):
+        while True:
+            kill_found = False
+            commands = []
+            command = self.notifications_queue.get(block=True)
+            if isinstance(command, KillServer):
+                kill_found = True
+            else:
+                assert isinstance(command, ParallelNotification)
+                commands.append(command.ToTuple())
+
+            try:
+                while True:
+                    command = self.notifications_queue.get(block=False) #No block to create a batch.
+                    if isinstance(command, KillServer):
+                        kill_found = True
+                    else:
+                        assert isinstance(command, ParallelNotification)
+                        commands.append(command.ToTuple())
+            except:
+                pass #That's OK, we're getting it until it becomes empty so that we notify multiple at once.
+
+
+            if commands:
+                try:
+                    #Batch notification.
+                    self.server.lock.acquire()
+                    try:
+                        self.server.notifyCommands(self.job_id, commands)
+                    finally:
+                        self.server.lock.release()
+                except:
+                    traceback.print_exc()
+
+            if kill_found:
+                self.finished = True
+                return
+
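+    # Illustrative sketch (hypothetical values) of one batch sent through notifyCommands above:
+    #     [('notifyStartTest', ('/hypothetical/test_a.py', 'ATest.test_x'), {}),
+    #      ('notifyTest', ('ok', '', '', '/hypothetical/test_a.py', 'ATest.test_x', '0.01'), {})]
+    # i.e. each element is the (method, args, kwargs) tuple produced by ParallelNotification.ToTuple(),
+    # which CommunicationThread.notifyCommands replays on the server side via getattr.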
+
+
+#=======================================================================================================================
+# ServerFacade
+#=======================================================================================================================
+class ServerFacade(object):
+
+
+    def __init__(self, notifications_queue):
+        self.notifications_queue = notifications_queue
+
+
+    def notifyTestsCollected(self, *args, **kwargs):
+        pass #This notification won't be passed
+
+
+    def notifyTestRunFinished(self, *args, **kwargs):
+        pass #This notification won't be passed
+
+
+    def notifyStartTest(self, *args, **kwargs):
+        self.notifications_queue.put_nowait(ParallelNotification('notifyStartTest', args, kwargs))
+
+
+    def notifyTest(self, *args, **kwargs):
+        self.notifications_queue.put_nowait(ParallelNotification('notifyTest', args, kwargs))
+
+
+
+#=======================================================================================================================
+# run_client
+#=======================================================================================================================
+def run_client(job_id, port, verbosity, coverage_output_file, coverage_include):
+    job_id = int(job_id)
+
+    import pydev_localhost
+    server = xmlrpclib.Server('http://%s:%s' % (pydev_localhost.get_localhost(), port))
+    server.lock = threading.Lock()
+
+
+    server_comm = ServerComm(job_id, server)
+    server_comm.start()
+
+    try:
+        server_facade = ServerFacade(server_comm.notifications_queue)
+        import pydev_runfiles
+        import pydev_runfiles_xml_rpc
+        pydev_runfiles_xml_rpc.SetServer(server_facade)
+
+        #Starts as None; coverage is only started when the 1st test is received (because a server may be initiated and
+        #terminated before receiving any test -- which would mean a different process got all the tests to run).
+        coverage = None
+
+        try:
+            tests_to_run = [1]
+            while tests_to_run:
+                #Investigate: is it dangerous to use the same xmlrpclib server from different threads?
+                #It seems it should be safe, as it creates a new connection for each request...
+                server.lock.acquire()
+                try:
+                    tests_to_run = server.GetTestsToRun(job_id)
+                finally:
+                    server.lock.release()
+
+                if not tests_to_run:
+                    break
+
+                if coverage is None:
+                    _coverage_files, coverage = StartCoverageSupportFromParams(
+                        None, coverage_output_file, 1, coverage_include)
+
+
+                files_to_tests = {}
+                for test in tests_to_run:
+                    filename_and_test = test.split('|')
+                    if len(filename_and_test) == 2:
+                        files_to_tests.setdefault(filename_and_test[0], []).append(filename_and_test[1])
+
+                configuration = pydev_runfiles.Configuration(
+                    '',
+                    verbosity,
+                    None,
+                    None,
+                    None,
+                    files_to_tests,
+                    1, #Always single job here
+                    None,
+
+                    #The coverage is handled in this loop.
+                    coverage_output_file=None,
+                    coverage_include=None,
+                )
+                test_runner = pydev_runfiles.PydevTestRunner(configuration)
+                sys.stdout.flush()
+                test_runner.run_tests(handle_coverage=False)
+        finally:
+            if coverage is not None:
+                coverage.stop()
+                coverage.save()
+
+
+    except:
+        traceback.print_exc()
+    server_comm.notifications_queue.put_nowait(KillServer())
+
+
+
+#=======================================================================================================================
+# main
+#=======================================================================================================================
+if __name__ == '__main__':
+    if len(sys.argv) - 1 == 3:
+        job_id, port, verbosity = sys.argv[1:]
+        coverage_output_file, coverage_include = None, None
+
+    elif len(sys.argv) - 1 == 5:
+        job_id, port, verbosity, coverage_output_file, coverage_include = sys.argv[1:]
+
+    else:
+        raise AssertionError('Could not find out how to handle the parameters: ' + str(sys.argv[1:]))
+
+    job_id = int(job_id)
+    port = int(port)
+    verbosity = int(verbosity)
+    run_client(job_id, port, verbosity, coverage_output_file, coverage_include)
+
+
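For reference, the GetTestsToRun entries handled by run_client above are 'file|test' strings, which the loop groups per file. A small worked example of that mapping (paths are placeholders):

    tests_to_run = ['/tmp/test_math.py|TestMath.test_add', '/tmp/test_math.py|TestMath.test_sub']
    files_to_tests = {}
    for test in tests_to_run:
        filename_and_test = test.split('|')
        if len(filename_and_test) == 2:
            files_to_tests.setdefault(filename_and_test[0], []).append(filename_and_test[1])

    assert files_to_tests == {'/tmp/test_math.py': ['TestMath.test_add', 'TestMath.test_sub']}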
diff --git a/python/helpers/pydev/pydev_runfiles_pytest2.py b/python/helpers/pydev/pydev_runfiles_pytest2.py
new file mode 100644
index 0000000..e40d60f
--- /dev/null
+++ b/python/helpers/pydev/pydev_runfiles_pytest2.py
@@ -0,0 +1,230 @@
+import pickle
+import zlib
+import base64
+import os
+import py
+from py._code import code  # @UnresolvedImport
+import pydev_runfiles_xml_rpc
+from pydevd_file_utils import _NormFile
+import pytest
+import sys
+import time
+
+
+
+#===================================================================================================
+# Load filters with tests we should skip
+#===================================================================================================
+py_test_accept_filter = None
+
+def _load_filters():
+    global py_test_accept_filter
+    if py_test_accept_filter is None:
+        py_test_accept_filter = os.environ.get('PYDEV_PYTEST_SKIP')
+        if py_test_accept_filter:
+            py_test_accept_filter = pickle.loads(zlib.decompress(base64.b64decode(py_test_accept_filter)))
+        else:
+            py_test_accept_filter = {}
+
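The PYDEV_PYTEST_SKIP payload decoded above could be produced on the launching side roughly as follows. This is a sketch inferred from the decoding code; the dict maps a (normalized) file path to the list of accepted tests, mirroring how py_test_accept_filter is used further down. The path and test name are placeholders:

    import base64
    import os
    import pickle
    import zlib

    accept_filter = {'/tmp/test_sample.py': ['TestSample.test_ok']}
    payload = base64.b64encode(zlib.compress(pickle.dumps(accept_filter)))
    if not isinstance(payload, str):  # Python 3: b64encode returns bytes
        payload = payload.decode('ascii')
    os.environ['PYDEV_PYTEST_SKIP'] = payload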
+
+def connect_to_server_for_communication_to_xml_rpc_on_xdist():
+    main_pid = os.environ.get('PYDEV_MAIN_PID')
+    if main_pid and main_pid != str(os.getpid()):
+        port = os.environ.get('PYDEV_PYTEST_SERVER')
+        if not port:
+            sys.stderr.write('Error: no PYDEV_PYTEST_SERVER environment variable defined.\n')
+        else:
+            pydev_runfiles_xml_rpc.InitializeServer(int(port), daemon=True)
+
+
+#===================================================================================================
+# Mocking to get clickable file representations
+#===================================================================================================
+def _MockFileRepresentation():
+    code.ReprFileLocation._original_toterminal = code.ReprFileLocation.toterminal
+
+    def toterminal(self, tw):
+        # filename and lineno output for each entry,
+        # using an output format that most editors understand
+        msg = self.message
+        i = msg.find("\n")
+        if i != -1:
+            msg = msg[:i]
+
+        tw.line('File "%s", line %s\n%s' %(os.path.abspath(self.path), self.lineno, msg))
+
+    code.ReprFileLocation.toterminal = toterminal
+
+
+def _UninstallMockFileRepresentation():
+    code.ReprFileLocation.toterminal = code.ReprFileLocation._original_toterminal #@UndefinedVariable
+
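The two helpers above monkeypatch py's ReprFileLocation.toterminal and restore it on unconfigure. The same install/uninstall pattern, shown on a toy class (illustrative only, not part of the plugin):

    class _ToyRepr(object):
        def toterminal(self, tw):
            tw.line('original output')

    _ToyRepr._original_toterminal = _ToyRepr.toterminal

    def _patched_toterminal(self, tw):
        tw.line('File "x.py", line 1')  # editor-clickable location format

    _ToyRepr.toterminal = _patched_toterminal
    # ...later, restore the original behaviour:
    _ToyRepr.toterminal = _ToyRepr._original_toterminal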
+
+class State:
+    numcollected = 0
+    start_time = time.time()
+
+
+def pytest_configure(*args, **kwargs):
+    _MockFileRepresentation()
+
+
+def pytest_collectreport(report):
+
+    i = 0
+    for x in report.result:
+        if isinstance(x, pytest.Item):
+            try:
+                # Call our setup (which may do a skip, in which
+                # case we won't count it).
+                pytest_runtest_setup(x)
+                i += 1
+            except:
+                continue
+    State.numcollected += i
+
+
+def pytest_collection_modifyitems():
+    connect_to_server_for_communication_to_xml_rpc_on_xdist()
+    pydev_runfiles_xml_rpc.notifyTestsCollected(State.numcollected)
+    State.numcollected = 0
+
+
+def pytest_unconfigure(*args, **kwargs):
+    _UninstallMockFileRepresentation()
+    pydev_runfiles_xml_rpc.notifyTestRunFinished('Finished in: %.2f secs.' % (time.time() - State.start_time,))
+
+
+def pytest_runtest_setup(item):
+    filename = item.fspath.strpath
+    test = item.location[2]
+    State.start_test_time = time.time()
+
+    pydev_runfiles_xml_rpc.notifyStartTest(filename, test)
+
+
+def report_test(cond, filename, test, captured_output, error_contents, delta):
+    '''
+    @param filename: 'D:\\src\\mod1\\hello.py'
+    @param test: 'TestCase.testMet1'
+    @param cond: fail, error, ok
+    '''
+    time_str = '%.2f' % (delta,)
+    pydev_runfiles_xml_rpc.notifyTest(cond, captured_output, error_contents, filename, test, time_str)
+
+
+def pytest_runtest_makereport(item, call):
+    report_when = call.when
+    report_duration = call.stop-call.start
+    excinfo = call.excinfo
+
+    if not call.excinfo:
+        report_outcome = "passed"
+        report_longrepr = None
+    else:
+        excinfo = call.excinfo
+        if not isinstance(excinfo, py.code.ExceptionInfo):
+            report_outcome = "failed"
+            report_longrepr = excinfo
+
+        elif excinfo.errisinstance(py.test.skip.Exception):
+            report_outcome = "skipped"
+            r = excinfo._getreprcrash()
+            report_longrepr = None #(str(r.path), r.lineno, r.message)
+
+        else:
+            report_outcome = "failed"
+            if call.when == "call":
+                report_longrepr = item.repr_failure(excinfo)
+
+            else: # exception in setup or teardown
+                report_longrepr = item._repr_failure_py(excinfo, style=item.config.option.tbstyle)
+
+    filename = item.fspath.strpath
+    test = item.location[2]
+
+    status = 'ok'
+    captured_output = ''
+    error_contents = ''
+
+    if report_outcome in ('passed', 'skipped'):
+        #passed or skipped: no need to report if in setup or teardown (only on the actual test if it passed).
+        if report_when in ('setup', 'teardown'):
+            return
+
+    else:
+        #pytest reports only passed, skipped and failed (no error), so, let's consider it an error if the failure was not in the call itself.
+        if report_when == 'setup':
+            if status == 'ok':
+                status = 'error'
+
+        elif report_when == 'teardown':
+            if status == 'ok':
+                status = 'error'
+
+        else:
+            #any error in the call (not in setup or teardown) is considered a regular failure.
+            status = 'fail'
+
+
+    if call.excinfo:
+        rep = report_longrepr
+        if hasattr(rep, 'reprcrash'):
+            reprcrash = rep.reprcrash
+            error_contents += str(reprcrash)
+            error_contents += '\n'
+
+        if hasattr(rep, 'reprtraceback'):
+            error_contents += str(rep.reprtraceback)
+
+        if hasattr(rep, 'sections'):
+            for name, content, sep in rep.sections:
+                error_contents += sep * 40
+                error_contents += name
+                error_contents += sep * 40
+                error_contents += '\n'
+                error_contents += content
+                error_contents += '\n'
+
+    if status != 'skip': #I.e.: don't even report skips...
+        report_test(status, filename, test, captured_output, error_contents, report_duration)
+
+
+
+@pytest.mark.tryfirst
+def pytest_runtest_setup(item):
+    '''
+    Skips tests. With xdist this will run on a secondary process.
+    '''
+    _load_filters()
+    if not py_test_accept_filter:
+        return #Keep on going (nothing to filter)
+
+    f = _NormFile(str(item.parent.fspath))
+    name = item.name
+
+    if f not in py_test_accept_filter:
+        pytest.skip() # Skip the file
+
+    accept_tests = py_test_accept_filter[f]
+
+    if item.cls is not None:
+        class_name = item.cls.__name__
+    else:
+        class_name = None
+    for test in accept_tests:
+        if test == name:
+            #Direct match of the test (just go on with the default loading)
+            return
+
+        if class_name is not None:
+            if test == class_name + '.' + name:
+                return
+
+            if class_name == test:
+                return
+
+    # If we had a match it'd have returned already.
+    pytest.skip() # Skip the test
+
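The filter above accepts a test when its name matches directly, when 'ClassName.name' matches, or when the whole class is listed. A standalone restatement of that rule (not part of the plugin):

    def _accepts(accept_tests, name, class_name):
        for test in accept_tests:
            if test == name:
                return True  # direct match of the test
            if class_name is not None:
                if test == class_name + '.' + name or test == class_name:
                    return True  # qualified match, or the whole class is accepted
        return False

    assert _accepts(['TestSample.test_ok'], 'test_ok', 'TestSample')
    assert _accepts(['TestSample'], 'test_other', 'TestSample')
    assert not _accepts(['TestSample.test_ok'], 'test_other', 'TestSample')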
+
diff --git a/python/helpers/pydev/pydev_runfiles_unittest.py b/python/helpers/pydev/pydev_runfiles_unittest.py
new file mode 100644
index 0000000..78dfa52
--- /dev/null
+++ b/python/helpers/pydev/pydev_runfiles_unittest.py
@@ -0,0 +1,174 @@
+try:
+    import unittest2 as python_unittest
+except:
+    import unittest as python_unittest
+    
+import pydev_runfiles_xml_rpc
+import time
+import pydevd_io
+import traceback
+from pydevd_constants import * #@UnusedWildImport
+
+    
+#=======================================================================================================================
+# PydevTextTestRunner
+#=======================================================================================================================
+class PydevTextTestRunner(python_unittest.TextTestRunner):
+    
+    def _makeResult(self):
+        return PydevTestResult(self.stream, self.descriptions, self.verbosity)
+
+
+_PythonTextTestResult = python_unittest.TextTestRunner()._makeResult().__class__
+
+#=======================================================================================================================
+# PydevTestResult
+#=======================================================================================================================
+class PydevTestResult(_PythonTextTestResult):
+    
+
+    def startTest(self, test):
+        _PythonTextTestResult.startTest(self, test)
+        self.buf = pydevd_io.StartRedirect(keep_original_redirection=True, std='both')
+        self.start_time = time.time()
+        self._current_errors_stack = []
+        self._current_failures_stack = []
+        
+        try:
+            test_name = test.__class__.__name__+"."+test._testMethodName
+        except AttributeError:
+            #Support for jython 2.1 (__testMethodName is pseudo-private in the test case)
+            test_name = test.__class__.__name__+"."+test._TestCase__testMethodName
+            
+        pydev_runfiles_xml_rpc.notifyStartTest(
+            test.__pydev_pyfile__, test_name)
+
+
+
+
+    def getTestName(self, test):
+        try:
+            try:
+                test_name = test.__class__.__name__ + "." + test._testMethodName
+            except AttributeError:
+                #Support for jython 2.1 (__testMethodName is pseudo-private in the test case)
+                try:
+                    test_name = test.__class__.__name__ + "." + test._TestCase__testMethodName
+                #Support for class/module exceptions (test is instance of _ErrorHolder)
+                except:
+                    test_name = test.description.split()[1][1:-1] + ' <' + test.description.split()[0] + '>'
+        except:
+            traceback.print_exc()
+            return '<unable to get test name>'
+        return test_name
+
+
+    def stopTest(self, test):
+        end_time = time.time()
+        pydevd_io.EndRedirect(std='both')
+        
+        _PythonTextTestResult.stopTest(self, test)
+        
+        captured_output = self.buf.getvalue()
+        del self.buf
+        error_contents = ''
+        test_name = self.getTestName(test)
+            
+        
+        diff_time = '%.2f' % (end_time - self.start_time)
+        if not self._current_errors_stack and not self._current_failures_stack:
+            pydev_runfiles_xml_rpc.notifyTest(
+                'ok', captured_output, error_contents, test.__pydev_pyfile__, test_name, diff_time)
+        else:
+            self._reportErrors(self._current_errors_stack, self._current_failures_stack, captured_output, test_name)
+            
+            
+    def _reportErrors(self, errors, failures, captured_output, test_name, diff_time=''):
+        error_contents = []
+        for test, s in errors+failures:
+            if type(s) == type((1,)): #If it's a tuple (for jython 2.1)
+                sio = StringIO()
+                traceback.print_exception(s[0], s[1], s[2], file=sio)
+                s = sio.getvalue()
+            error_contents.append(s)
+        
+        sep = '\n'+self.separator1
+        error_contents = sep.join(error_contents)
+        
+        if errors and not failures:
+            try:
+                pydev_runfiles_xml_rpc.notifyTest(
+                    'error', captured_output, error_contents, test.__pydev_pyfile__, test_name, diff_time)
+            except:
+                file_start = error_contents.find('File "')
+                file_end = error_contents.find('", ', file_start)
+                if file_start != -1 and file_end != -1:
+                    file = error_contents[file_start+6:file_end]
+                else:
+                    file = '<unable to get file>'
+                pydev_runfiles_xml_rpc.notifyTest(
+                    'error', captured_output, error_contents, file, test_name, diff_time)
+            
+        elif failures and not errors:
+            pydev_runfiles_xml_rpc.notifyTest(
+                'fail', captured_output, error_contents, test.__pydev_pyfile__, test_name, diff_time)
+        
+        else: #Ok, we got both, errors and failures. Let's mark it as an error in the end.
+            pydev_runfiles_xml_rpc.notifyTest(
+                'error', captured_output, error_contents, test.__pydev_pyfile__, test_name, diff_time)
+                
+
+
+    def addError(self, test, err):
+        _PythonTextTestResult.addError(self, test, err)
+        #Support for class/module exceptions (test is instance of _ErrorHolder)
+        if not hasattr(self, '_current_errors_stack') or test.__class__.__name__ == '_ErrorHolder':
+            #Not in start...end, so, report error now (i.e.: django pre/post-setup)
+            self._reportErrors([self.errors[-1]], [], '', self.getTestName(test))
+        else:
+            self._current_errors_stack.append(self.errors[-1])
+
+
+    def addFailure(self, test, err):
+        _PythonTextTestResult.addFailure(self, test, err)
+        if not hasattr(self, '_current_failures_stack'):
+            #Not in start...end, so, report error now (i.e.: django pre/post-setup)
+            self._reportErrors([], [self.failures[-1]], '', self.getTestName(test))
+        else:
+            self._current_failures_stack.append(self.failures[-1])
+
+
+try:
+    #Version 2.7 onwards has a different structure... Let's not make any changes in it for now
+    #(waiting for bug: http://bugs.python.org/issue11798)
+    try:
+        from unittest2 import suite
+    except ImportError:
+        from unittest import suite
+    #===================================================================================================================
+    # PydevTestSuite
+    #===================================================================================================================
+    class PydevTestSuite(python_unittest.TestSuite):
+        pass
+    
+    
+except ImportError:
+    
+    #===================================================================================================================
+    # PydevTestSuite
+    #===================================================================================================================
+    class PydevTestSuite(python_unittest.TestSuite):
+    
+    
+        def run(self, result):
+            for index, test in enumerate(self._tests):
+                if result.shouldStop:
+                    break
+                test(result)
+    
+                # Let the memory be released! 
+                self._tests[index] = None
+    
+            return result
+    
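PydevTextTestRunner only overrides _makeResult, so the stock TextTestRunner machinery drives a custom result class (and _PythonTextTestResult above is recovered from TextTestRunner()._makeResult().__class__). A minimal sketch of the same hook with plain unittest, using illustrative names and no pydev notifications:

    import unittest

    class _VerboseResult(unittest.TextTestResult):
        def startTest(self, test):
            unittest.TextTestResult.startTest(self, test)
            # hook point comparable to the notifyStartTest call in PydevTestResult

    class _VerboseRunner(unittest.TextTestRunner):
        def _makeResult(self):
            return _VerboseResult(self.stream, self.descriptions, self.verbosity)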
+
diff --git a/python/helpers/pydev/pydev_runfiles_xml_rpc.py b/python/helpers/pydev/pydev_runfiles_xml_rpc.py
new file mode 100644
index 0000000..bcaa38a
--- /dev/null
+++ b/python/helpers/pydev/pydev_runfiles_xml_rpc.py
@@ -0,0 +1,269 @@
+import traceback
+import warnings
+
+from _pydev_filesystem_encoding import getfilesystemencoding
+from pydev_imports import xmlrpclib, _queue
+Queue = _queue.Queue
+from pydevd_constants import *
+
+#This may happen in IronPython (in Python it shouldn't happen as there are
+#'fast' replacements that are used in xmlrpclib.py)
+warnings.filterwarnings(
+    'ignore', 'The xmllib module is obsolete.*', DeprecationWarning)
+
+
+file_system_encoding = getfilesystemencoding()
+
+#=======================================================================================================================
+# _ServerHolder
+#=======================================================================================================================
+class _ServerHolder:
+    '''
+    Helper so that we don't have to use a global here.
+    '''
+    SERVER = None
+
+
+#=======================================================================================================================
+# SetServer
+#=======================================================================================================================
+def SetServer(server):
+    _ServerHolder.SERVER = server
+
+
+
+#=======================================================================================================================
+# ParallelNotification
+#=======================================================================================================================
+class ParallelNotification(object):
+
+    def __init__(self, method, args):
+        self.method = method
+        self.args = args
+
+    def ToTuple(self):
+        return self.method, self.args
+
+
+
+#=======================================================================================================================
+# KillServer
+#=======================================================================================================================
+class KillServer(object):
+    pass
+
+
+#=======================================================================================================================
+# ServerFacade
+#=======================================================================================================================
+class ServerFacade(object):
+
+
+    def __init__(self, notifications_queue):
+        self.notifications_queue = notifications_queue
+
+
+    def notifyTestsCollected(self, *args):
+        self.notifications_queue.put_nowait(ParallelNotification('notifyTestsCollected', args))
+
+    def notifyConnected(self, *args):
+        self.notifications_queue.put_nowait(ParallelNotification('notifyConnected', args))
+
+
+    def notifyTestRunFinished(self, *args):
+        self.notifications_queue.put_nowait(ParallelNotification('notifyTestRunFinished', args))
+
+
+    def notifyStartTest(self, *args):
+        self.notifications_queue.put_nowait(ParallelNotification('notifyStartTest', args))
+
+
+    def notifyTest(self, *args):
+        self.notifications_queue.put_nowait(ParallelNotification('notifyTest', args))
+
+
+
+
+
+#=======================================================================================================================
+# ServerComm
+#=======================================================================================================================
+class ServerComm(threading.Thread):
+
+
+
+    def __init__(self, notifications_queue, port, daemon=False):
+        threading.Thread.__init__(self)
+        self.setDaemon(daemon) # If False, wait for all the notifications to be passed before exiting!
+        self.finished = False
+        self.notifications_queue = notifications_queue
+
+        import pydev_localhost
+
+        # It is necessary to specify an encoding that matches
+        # the encoding of all byte strings passed into an
+        # XMLRPC call: "All 8-bit strings in the data structure are assumed to use the
+        # packet encoding.  Unicode strings are automatically converted,
+        # where necessary."
+        # Byte strings most likely come from file names.
+        encoding = file_system_encoding
+        if encoding == "mbcs":
+            # Windows symbolic name for the system encoding CP_ACP.
+            # We need to convert it into an encoding that is recognized by Java.
+            # Unfortunately this is not always possible. You could use
+            # GetCPInfoEx and get a name similar to "windows-1251". Then
+            # you need a table to translate on a best effort basis. Much too complicated.
+            # ISO-8859-1 is good enough.
+            encoding = "ISO-8859-1"
+
+        self.server = xmlrpclib.Server('http://%s:%s' % (pydev_localhost.get_localhost(), port),
+                                       encoding=encoding)
+
+
+    def run(self):
+        while True:
+            kill_found = False
+            commands = []
+            command = self.notifications_queue.get(block=True)
+            if isinstance(command, KillServer):
+                kill_found = True
+            else:
+                assert isinstance(command, ParallelNotification)
+                commands.append(command.ToTuple())
+
+            try:
+                while True:
+                    command = self.notifications_queue.get(block=False) #No block to create a batch.
+                    if isinstance(command, KillServer):
+                        kill_found = True
+                    else:
+                        assert isinstance(command, ParallelNotification)
+                        commands.append(command.ToTuple())
+            except:
+                pass #That's OK: we keep getting items until the queue becomes empty so that we can notify multiple at once.
+
+
+            if commands:
+                try:
+                    self.server.notifyCommands(commands)
+                except:
+                    traceback.print_exc()
+
+            if kill_found:
+                self.finished = True
+                return
+
+
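As the comment in __init__ explains, the proxy's encoding must match the encoding of the byte strings sent over XML-RPC, and the Windows 'mbcs' alias is mapped to ISO-8859-1. A standalone sketch of that selection (host and port are placeholders; creating the proxy does not perform any request):

    import sys

    try:
        import xmlrpclib  # Python 2
    except ImportError:
        import xmlrpc.client as xmlrpclib  # Python 3

    encoding = sys.getfilesystemencoding()
    if encoding == 'mbcs':
        encoding = 'ISO-8859-1'  # stand-in for the Windows ANSI code page that Java recognizes

    proxy = xmlrpclib.ServerProxy('http://127.0.0.1:8000', encoding=encoding)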
+
+#=======================================================================================================================
+# InitializeServer
+#=======================================================================================================================
+def InitializeServer(port, daemon=False):
+    if _ServerHolder.SERVER is None:
+        if port is not None:
+            notifications_queue = Queue()
+            _ServerHolder.SERVER = ServerFacade(notifications_queue)
+            _ServerHolder.SERVER_COMM = ServerComm(notifications_queue, port, daemon)
+            _ServerHolder.SERVER_COMM.start()
+        else:
+            #Create a null server, so that we keep the interface even without any connection.
+            _ServerHolder.SERVER = Null()
+            _ServerHolder.SERVER_COMM = Null()
+
+    try:
+        _ServerHolder.SERVER.notifyConnected()
+    except:
+        traceback.print_exc()
+
+
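Putting the public helpers of this module together, a typical (hypothetical) reporting sequence looks like the following, assuming a pydev test-results server is already listening on the chosen port. The port, path and test name are placeholders:

    import pydev_runfiles_xml_rpc

    pydev_runfiles_xml_rpc.InitializeServer(56783)
    pydev_runfiles_xml_rpc.notifyTestsCollected(1)
    pydev_runfiles_xml_rpc.notifyStartTest('/tmp/test_sample.py', 'TestSample.test_ok')
    pydev_runfiles_xml_rpc.notifyTest('ok', '', '', '/tmp/test_sample.py', 'TestSample.test_ok', '0.01')
    pydev_runfiles_xml_rpc.notifyTestRunFinished('Finished in: 0.01 secs.')
    pydev_runfiles_xml_rpc.forceServerKill()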
+
+#=======================================================================================================================
+# notifyTest
+#=======================================================================================================================
+def notifyTestsCollected(tests_count):
+    assert tests_count is not None
+    try:
+        _ServerHolder.SERVER.notifyTestsCollected(tests_count)
+    except:
+        traceback.print_exc()
+
+
+#=======================================================================================================================
+# notifyStartTest
+#=======================================================================================================================
+def notifyStartTest(file, test):
+    '''
+    @param file: the tests file (c:/temp/test.py)
+    @param test: the test ran (i.e.: TestCase.test1)
+    '''
+    assert file is not None
+    if test is None:
+        test = '' #Could happen if we have an import error importing module.
+
+    try:
+        _ServerHolder.SERVER.notifyStartTest(file, test)
+    except:
+        traceback.print_exc()
+
+
+def _encode_if_needed(obj):
+    if not IS_PY3K:
+        if isinstance(obj, str):
+            try:
+                return xmlrpclib.Binary(obj.encode('ISO-8859-1', 'xmlcharrefreplace'))
+            except:
+                return xmlrpclib.Binary(obj)
+
+        elif isinstance(obj, unicode):
+            return xmlrpclib.Binary(obj.encode('ISO-8859-1', 'xmlcharrefreplace'))
+
+    else:
+        if isinstance(obj, str):
+            return obj.encode('ISO-8859-1', 'xmlcharrefreplace')
+
+    return obj
+
+
+#=======================================================================================================================
+# notifyTest
+#=======================================================================================================================
+def notifyTest(cond, captured_output, error_contents, file, test, time):
+    '''
+    @param cond: ok, fail, error
+    @param captured_output: output captured from stdout
+    @param error_contents: output captured from stderr
+    @param file: the tests file (c:/temp/test.py)
+    @param test: the test ran (i.e.: TestCase.test1)
+    @param time: float with the number of seconds elapsed
+    '''
+    assert cond is not None
+    assert captured_output is not None
+    assert error_contents is not None
+    assert file is not None
+    if test is None:
+        test = '' #Could happen if we have an import error importing module.
+    assert time is not None
+    try:
+        captured_output = _encode_if_needed(captured_output)
+        error_contents = _encode_if_needed(error_contents)
+
+        _ServerHolder.SERVER.notifyTest(cond, captured_output, error_contents, file, test, time)
+    except:
+        traceback.print_exc()
+
+#=======================================================================================================================
+# notifyTestRunFinished
+#=======================================================================================================================
+def notifyTestRunFinished(total_time):
+    assert total_time is not None
+    try:
+        _ServerHolder.SERVER.notifyTestRunFinished(total_time)
+    except:
+        traceback.print_exc()
+
+
+#=======================================================================================================================
+# forceServerKill
+#=======================================================================================================================
+def forceServerKill():
+    _ServerHolder.SERVER_COMM.notifications_queue.put_nowait(KillServer())
diff --git a/python/helpers/pydev/pydev_sitecustomize/__not_in_default_pythonpath.txt b/python/helpers/pydev/pydev_sitecustomize/__not_in_default_pythonpath.txt
new file mode 100644
index 0000000..29cdc5b
--- /dev/null
+++ b/python/helpers/pydev/pydev_sitecustomize/__not_in_default_pythonpath.txt
@@ -0,0 +1 @@
+(no __init__.py file)
\ No newline at end of file
diff --git a/python/helpers/pydev/pydev_sitecustomize/sitecustomize.py b/python/helpers/pydev/pydev_sitecustomize/sitecustomize.py
new file mode 100644
index 0000000..78b9c79
--- /dev/null
+++ b/python/helpers/pydev/pydev_sitecustomize/sitecustomize.py
@@ -0,0 +1,192 @@
+'''
+    This module will:
+    - change the input() and raw_input() commands to change \r\n or \r into \n
+    - execute the user site customize -- if available
+    - change raw_input() and input() to also remove any trailing \r
+    
+    Up to PyDev 3.4 it also set the default encoding, but that was removed because of differences when
+    running from a shell (i.e.: now we just set PYTHONIOENCODING for that -- which is properly
+    treated on Py 2.7 onwards).
+'''
+DEBUG = 0 #0 or 1 because of jython
+
+import sys
+encoding = None
+
+IS_PYTHON_3K = 0
+
+try:
+    if sys.version_info[0] == 3:
+        IS_PYTHON_3K = 1
+        
+except:
+    #That's OK, not all versions of python have sys.version_info
+    if DEBUG:
+        import traceback;traceback.print_exc() #@Reimport
+        
+#-----------------------------------------------------------------------------------------------------------------------
+#Line buffering
+if IS_PYTHON_3K:
+    #Python 3 has a bug (http://bugs.python.org/issue4705) in which -u doesn't properly make output/input unbuffered
+    #so, we need to enable that ourselves here.
+    try:
+        sys.stdout._line_buffering = True
+    except:
+        pass
+    try:
+        sys.stderr._line_buffering = True
+    except:
+        pass
+    try:
+        sys.stdin._line_buffering = True
+    except:
+        pass
+    
+    
+try:
+    import org.python.core.PyDictionary #@UnresolvedImport @UnusedImport -- just to check if it could be valid
+    def DictContains(d, key):
+        return d.has_key(key)
+except:
+    try:
+        #Py3k does not have has_key anymore, and older versions don't have __contains__
+        DictContains = dict.__contains__
+    except:
+        try:
+            DictContains = dict.has_key
+        except NameError:
+            def DictContains(d, key):
+                return d.has_key(key)
+
+
+#----------------------------------------------------------------------------------------------------------------------- 
+#now that we've finished the needed pydev sitecustomize, let's run the default one (if available)
+
+#Ok, some weirdness going on in Python 3k: when removing this module from sys.modules to import the 'real'
+#sitecustomize, all the variables in this scope become None (as if it was garbage-collected), so, the reference
+#below is kept to create a cyclic reference so that it never dies.
+__pydev_sitecustomize_module__ = sys.modules.get('sitecustomize') #A ref to this module
+
+
+#remove the pydev site customize (and the pythonpath for it)
+paths_removed = []
+try:
+    for c in sys.path[:]:
+        #Pydev controls the whole classpath in Jython already, so, we don't want a duplicate for
+        #what we've already added there (this is needed to support Jython 2.5b1 onwards -- otherwise, as
+        #we added the sitecustomize to the pythonpath and to the classpath, we'd have to remove it from the
+        #classpath too -- and I don't think there's a way to do that... or not?)
+        if c.find('pydev_sitecustomize') != -1 or c == '__classpath__' or c == '__pyclasspath__' or \
+            c == '__classpath__/' or c == '__pyclasspath__/' or  c == '__classpath__\\' or c == '__pyclasspath__\\':
+            sys.path.remove(c)
+            if c.find('pydev_sitecustomize') == -1:
+                #We'll re-add any paths removed except the pydev_sitecustomize we added from pydev.
+                paths_removed.append(c)
+            
+    if DictContains(sys.modules, 'sitecustomize'):
+        del sys.modules['sitecustomize'] #this module
+except:
+    #print the error... should never happen (so, always show, and not only on debug)!
+    import traceback;traceback.print_exc() #@Reimport
+else:
+    #Now, execute the default sitecustomize
+    try:
+        import sitecustomize #@UnusedImport
+        sitecustomize.__pydev_sitecustomize_module__ = __pydev_sitecustomize_module__
+    except:
+        pass
+    
+    if not DictContains(sys.modules, 'sitecustomize'):
+        #If there was no sitecustomize, re-add the pydev sitecustomize (pypy gives a KeyError if it's not there)
+        sys.modules['sitecustomize'] = __pydev_sitecustomize_module__
+    
+    try:
+        if paths_removed:
+            if sys is None:
+                import sys
+            if sys is not None:
+                #And after executing the default sitecustomize, restore the paths (if we hadn't removed them before,
+                #the import of sitecustomize would recurse).
+                sys.path.extend(paths_removed)
+    except:
+        #print the error... should never happen (so, always show, and not only on debug)!
+        import traceback;traceback.print_exc() #@Reimport
+
+
+
+
+if not IS_PYTHON_3K:
+    try:
+        #Redefine input and raw_input only after the original sitecustomize was executed
+        #(because otherwise, the original raw_input and input would still not be defined)
+        import __builtin__
+        original_raw_input = __builtin__.raw_input
+        original_input = __builtin__.input
+        
+        
+        def raw_input(prompt=''):
+            #the original raw_input would only remove a trailing \n, so, at
+            #this point if we had a \r\n the \r would remain (which is valid for eclipse)
+            #so, let's remove the remaining \r which python didn't expect.
+            ret = original_raw_input(prompt)
+                
+            if ret.endswith('\r'):
+                return ret[:-1]
+                
+            return ret
+        raw_input.__doc__ = original_raw_input.__doc__
+    
+        def input(prompt=''):
+            #input must also be rebound to use the new raw_input defined above
+            return eval(raw_input(prompt))
+        input.__doc__ = original_input.__doc__
+        
+        
+        __builtin__.raw_input = raw_input
+        __builtin__.input = input
+    
+    except:
+        #Don't report errors at this stage
+        if DEBUG:
+            import traceback;traceback.print_exc() #@Reimport
+    
+else:
+    try:
+        import builtins #Python 3.0 does not have the __builtin__ module @UnresolvedImport
+        original_input = builtins.input
+        def input(prompt=''):
+            #the original input would only remove a trailing \n, so, at
+            #this point if we had a \r\n the \r would remain (which is valid for eclipse)
+            #so, let's remove the remaining \r which python didn't expect.
+            ret = original_input(prompt)
+                
+            if ret.endswith('\r'):
+                return ret[:-1]
+                
+            return ret
+        input.__doc__ = original_input.__doc__
+        builtins.input = input
+    except:
+        #Don't report errors at this stage
+        if DEBUG:
+            import traceback;traceback.print_exc() #@Reimport
+    
+
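Both branches above wrap the original input function and strip the single trailing \r that some consoles leave behind. The same wrap-and-strip pattern on a fake reader (standalone illustration, not part of sitecustomize):

    def _strip_trailing_cr(read_func):
        def wrapper(prompt=''):
            ret = read_func(prompt)
            if ret.endswith('\r'):
                return ret[:-1]
            return ret
        return wrapper

    fake_input = _strip_trailing_cr(lambda prompt='': 'abc\r')
    assert fake_input() == 'abc'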
+
+try:
+    #The original getpass doesn't work from the eclipse console, so, let's put a replacement
+    #here (note that it'll not go into echo mode in the console, so, what the user writes
+    #will actually be seen)
+    import getpass #@UnresolvedImport
+    if IS_PYTHON_3K:
+        def pydev_getpass(msg='Password: '):
+            return input(msg)
+    else:
+        def pydev_getpass(msg='Password: '):
+            return raw_input(msg)
+    
+    getpass.getpass = pydev_getpass
+except:
+    #Don't report errors at this stage
+    if DEBUG:
+        import traceback;traceback.print_exc() #@Reimport
diff --git a/python/helpers/pydev/pydev_umd.py b/python/helpers/pydev/pydev_umd.py
new file mode 100644
index 0000000..0bfeda7
--- /dev/null
+++ b/python/helpers/pydev/pydev_umd.py
@@ -0,0 +1,172 @@
+"""
+The UserModuleDeleter and runfile methods are copied from
+Spyder and carry their own license agreement.
+http://code.google.com/p/spyderlib/source/browse/spyderlib/widgets/externalshell/sitecustomize.py
+
+Spyder License Agreement (MIT License)
+--------------------------------------
+
+Copyright (c) 2009-2012 Pierre Raybaut
+
+Permission is hereby granted, free of charge, to any person
+obtaining a copy of this software and associated documentation
+files (the "Software"), to deal in the Software without
+restriction, including without limitation the rights to use,
+copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the
+Software is furnished to do so, subject to the following
+conditions:
+
+The above copyright notice and this permission notice shall be
+included in all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+OTHER DEALINGS IN THE SOFTWARE.
+"""
+
+import sys
+import os
+
+# The following classes and functions are mainly intended to be used from
+# an interactive Python session
+class UserModuleDeleter:
+    """
+    User Module Deleter (UMD) aims at deleting user modules
+    to force Python to deeply reload them during import
+
+    pathlist [list]: blacklist in terms of module path
+    namelist [list]: blacklist in terms of module name
+    """
+    def __init__(self, namelist=None, pathlist=None):
+        if namelist is None:
+            namelist = []
+        self.namelist = namelist
+        if pathlist is None:
+            pathlist = []
+        self.pathlist = pathlist
+        try:
+            # blacklist all files in org.python.pydev/pysrc
+            import pydev_pysrc, inspect
+            self.pathlist.append(os.path.dirname(pydev_pysrc.__file__))
+        except:
+            pass
+        self.previous_modules = list(sys.modules.keys())
+
+    def is_module_blacklisted(self, modname, modpath):
+        for path in [sys.prefix] + self.pathlist:
+            if modpath.startswith(path):
+                return True
+        else:
+            return set(modname.split('.')) & set(self.namelist)
+
+    def run(self, verbose=False):
+        """
+        Del user modules to force Python to deeply reload them
+
+        Do not del modules which are considered as system modules, i.e.
+        modules installed in subdirectories of the Python interpreter's binary.
+        Do not del C modules.
+        """
+        log = []
+        modules_copy = dict(sys.modules)
+        for modname, module in modules_copy.items():
+            if modname == 'aaaaa':
+                print(modname, module)
+                print(self.previous_modules)
+            if modname not in self.previous_modules:
+                modpath = getattr(module, '__file__', None)
+                if modpath is None:
+                    # *module* is a C module that is statically linked into the
+                    # interpreter. There is no way to know its path, so we
+                    # choose to ignore it.
+                    continue
+                if not self.is_module_blacklisted(modname, modpath):
+                    log.append(modname)
+                    del sys.modules[modname]
+        if verbose and log:
+            print("\x1b[4;33m%s\x1b[24m%s\x1b[0m" % ("UMD has deleted",
+                                                     ": " + ", ".join(log)))
+
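A hedged usage sketch for UserModuleDeleter (it assumes this file is importable as pydev_umd, per the path above; the blacklisted package name is a placeholder):

    from pydev_umd import UserModuleDeleter

    umd = UserModuleDeleter(namelist=['some_blacklisted_pkg'])
    # ...user code imports project modules here...
    umd.run(verbose=True)  # deletes modules imported since the baseline, except blacklisted ones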
+__umd__ = None
+
+_get_globals_callback = None
+def _set_globals_function(get_globals):
+    global _get_globals_callback
+    _get_globals_callback = get_globals
+def _get_globals():
+    """Return current Python interpreter globals namespace"""
+    if _get_globals_callback is not None:
+        return _get_globals_callback()
+    else:
+        try:
+            from __main__ import __dict__ as namespace
+        except ImportError:
+            try:
+                # The import fails on IronPython
+                import __main__
+                namespace = __main__.__dict__
+            except:
+                namespace = {} #Fall back to an empty namespace
+        shell = namespace.get('__ipythonshell__')
+        if shell is not None and hasattr(shell, 'user_ns'):
+            # IPython 0.12+ kernel
+            return shell.user_ns
+        else:
+            # Python interpreter
+            return namespace
+        return namespace
+
+
+def runfile(filename, args=None, wdir=None, namespace=None):
+    """
+    Run filename
+    args: command line arguments (string)
+    wdir: working directory
+    """
+    try:
+        if hasattr(filename, 'decode'):
+            filename = filename.decode('utf-8')
+    except (UnicodeError, TypeError):
+        pass
+    global __umd__
+    if os.environ.get("PYDEV_UMD_ENABLED", "").lower() == "true":
+        if __umd__ is None:
+            namelist = os.environ.get("PYDEV_UMD_NAMELIST", None)
+            if namelist is not None:
+                namelist = namelist.split(',')
+            __umd__ = UserModuleDeleter(namelist=namelist)
+        else:
+            verbose = os.environ.get("PYDEV_UMD_VERBOSE", "").lower() == "true"
+            __umd__.run(verbose=verbose)
+    if args is not None and not isinstance(args, basestring):
+        raise TypeError("expected a character buffer object")
+    if namespace is None:
+        namespace = _get_globals()
+    if '__file__' in namespace:
+        old_file = namespace['__file__']
+    else:
+        old_file = None
+    namespace['__file__'] = filename
+    sys.argv = [filename]
+    if args is not None:
+        for arg in args.split():
+            sys.argv.append(arg)
+    if wdir is not None:
+        try:
+            if hasattr(wdir, 'decode'):
+                wdir = wdir.decode('utf-8')
+        except (UnicodeError, TypeError):
+            pass
+        os.chdir(wdir)
+    execfile(filename, namespace)
+    sys.argv = ['']
+    if old_file is None:
+        del namespace['__file__']
+    else:
+        namespace['__file__'] = old_file
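The UMD behaviour of runfile is driven by environment variables. A hedged example of wiring them up (Python 2 semantics, since runfile relies on basestring and execfile; the paths and package names are placeholders):

    import os

    os.environ['PYDEV_UMD_ENABLED'] = 'true'
    os.environ['PYDEV_UMD_NAMELIST'] = 'my_pkg,other_pkg'
    os.environ['PYDEV_UMD_VERBOSE'] = 'true'

    from pydev_umd import runfile
    runfile('/tmp/script.py', args='--fast', wdir='/tmp')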
diff --git a/python/helpers/pydev/pydevconsole.py b/python/helpers/pydev/pydevconsole.py
index 6c0640f..2f07a82 100644
--- a/python/helpers/pydev/pydevconsole.py
+++ b/python/helpers/pydev/pydevconsole.py
@@ -10,7 +10,6 @@
 import sys
 
 from pydevd_constants import USE_LIB_COPY
-from pydevd_constants import IS_JYTHON
 
 if USE_LIB_COPY:
     import _pydev_threading as threading
@@ -23,15 +22,7 @@
 
 import pydevd_vars
 
-from pydev_imports import Exec
-
-try:
-    if USE_LIB_COPY:
-        import _pydev_Queue as _queue
-    else:
-        import Queue as _queue
-except:
-    import queue as _queue
+from pydev_imports import Exec, _queue
 
 try:
     import __builtin__
@@ -47,7 +38,7 @@
     setattr(__builtin__, 'True', 1) #Python 3.0 does not accept __builtin__.True = 1 in its syntax
     setattr(__builtin__, 'False', 0)
 
-from pydev_console_utils import BaseInterpreterInterface
+from pydev_console_utils import BaseInterpreterInterface, BaseStdIn
 from pydev_console_utils import CodeFragment
 
 IS_PYTHON_3K = False
@@ -59,17 +50,6 @@
     #That's OK, not all versions of python have sys.version_info
     pass
 
-try:
-    try:
-        if USE_LIB_COPY:
-            import _pydev_xmlrpclib as xmlrpclib
-        else:
-            import xmlrpclib
-    except ImportError:
-        import xmlrpc.client as xmlrpclib
-except ImportError:
-    import _pydev_xmlrpclib as xmlrpclib
-
 
 class Command:
     def __init__(self, interpreter, code_fragment):
@@ -145,23 +125,71 @@
 
             traceback.print_exc()
             return []
-        
+
     def close(self):
         sys.exit(0)
 
     def get_greeting_msg(self):
-        return 'PyDev console: starting.'
+        return 'PyDev console: starting.\n'
+
+
+class _ProcessExecQueueHelper:
+    _debug_hook = None
+    _return_control_osc = False
+
+def set_debug_hook(debug_hook):
+    _ProcessExecQueueHelper._debug_hook = debug_hook
 
 
 def process_exec_queue(interpreter):
+
+    from pydev_ipython.inputhook import get_inputhook, set_return_control_callback
+
+    def return_control():
+        ''' A function that the inputhooks can call (via inputhook.stdin_ready()) to find
+            out if they should cede control and return '''
+        if _ProcessExecQueueHelper._debug_hook:
+            # Some of the input hooks check return control without doing
+            # a single operation, so we don't return True on every
+            # call when the debug hook is in place to allow the GUI to run
+            # XXX: Eventually the inputhook code will have diverged enough
+            # from the IPython source that it will be worthwhile rewriting
+            # it rather than pretending to maintain the old API
+            _ProcessExecQueueHelper._return_control_osc = not _ProcessExecQueueHelper._return_control_osc
+            if _ProcessExecQueueHelper._return_control_osc:
+                return True
+
+        if not interpreter.exec_queue.empty():
+            return True
+        return False
+
+    set_return_control_callback(return_control)
+
     while 1:
+        # Running the request may have changed the inputhook in use
+        inputhook = get_inputhook()
+
+        if _ProcessExecQueueHelper._debug_hook:
+            _ProcessExecQueueHelper._debug_hook()
+
+        if inputhook:
+            try:
+                # Note: it'll block here until return_control returns True.
+                inputhook()
+            except:
+                import traceback;traceback.print_exc()
         try:
             try:
-                codeFragment = interpreter.exec_queue.get(block=True, timeout=0.05)
+                code_fragment = interpreter.exec_queue.get(block=True, timeout=1/20.) # 20 calls/second
             except _queue.Empty:
                 continue
 
-            more = interpreter.addExec(codeFragment)
+            if callable(code_fragment):
+                # It can be a callable (i.e.: something that must run in the main
+                # thread can be put in the queue for later execution).
+                code_fragment()
+            else:
+                more = interpreter.addExec(code_fragment)
         except KeyboardInterrupt:
             interpreter.buffer = None
             continue
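To make the cooperation explicit: an inputhook is expected to pump the GUI event loop until the return-control callback reports pending work. A toy hook illustrating that contract; the readiness check is passed in explicitly here so the sketch stays standalone (in the real code it would come from pydev_ipython.inputhook):

    import time

    def toy_inputhook(stdin_ready):
        # Pump a (hypothetical) GUI event loop until the console has work queued.
        while not stdin_ready():
            time.sleep(0.01)  # stand-in for one GUI event-loop iteration
        return 0

    # Example: cede control immediately because work is pending.
    toy_inputhook(lambda: True)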
@@ -221,16 +249,6 @@
     return "PyCharm"
 
 
-def ipython_editor(interpreter):
-    def editor(file, line):
-        if file is None:
-            file = ""
-        if line is None:
-            line = "-1"
-        interpreter.ipython_editor(file, line)
-
-    return editor
-
 #=======================================================================================================================
 # StartServer
 #=======================================================================================================================
@@ -238,11 +256,8 @@
     if port == 0:
         host = ''
 
-    try:
-        from _pydev_xmlrpc_hook import InputHookedXMLRPCServer as XMLRPCServer  #@UnusedImport
-    except:
-        #I.e.: supporting the internal Jython version in PyDev to create a Jython interactive console inside Eclipse.
-        from pydev_imports import SimpleXMLRPCServer as XMLRPCServer  #@Reimport
+    #I.e.: supporting the internal Jython version in PyDev to create a Jython interactive console inside Eclipse.
+    from pydev_imports import SimpleXMLRPCServer as XMLRPCServer  #@Reimport
 
     try:
         server = XMLRPCServer((host, port), logRequests=False, allow_none=True)
@@ -262,12 +277,10 @@
     server.register_function(interpreter.interrupt)
     server.register_function(handshake)
     server.register_function(interpreter.connectToDebugger)
+    server.register_function(interpreter.hello)
 
-    if IPYTHON:
-        try:
-            interpreter.interpreter.ipython.hooks.editor = ipython_editor(interpreter)
-        except:
-            pass
+    # Functions for GUI main loop integration
+    server.register_function(interpreter.enableGui)
 
     if port == 0:
         (h, port) = server.socket.getsockname()
@@ -279,7 +292,6 @@
     sys.stderr.write(interpreter.get_greeting_msg())
     sys.stderr.flush()
 
-    interpreter.server = server
     server.serve_forever()
 
     return server
@@ -318,10 +330,9 @@
 
     return interpreterInterface.getCompletions(text, token)
 
-def get_frame():
-    interpreterInterface = get_interpreter()
-
-    return interpreterInterface.getFrame()
+#===============================================================================
+# Debugger integration
+#===============================================================================
 
 def exec_code(code, globals, locals):
     interpreterInterface = get_interpreter()
@@ -338,20 +349,6 @@
     return False
 
 
-def read_line(s):
-    ret = ''
-
-    while True:
-        c = s.recv(1)
-
-        if c == '\n' or c == '':
-            break
-        else:
-            ret += c
-
-    return ret
-
-# Debugger integration
 
 class ConsoleWriter(InteractiveInterpreter):
     skip = 0
@@ -408,7 +405,7 @@
         sys.stderr.write(''.join(lines))
 
 def consoleExec(thread_id, frame_id, expression):
-    """returns 'False' in case expression is partialy correct
+    """returns 'False' in case expression is partially correct
     """
     frame = pydevd_vars.findFrame(thread_id, frame_id)
 
@@ -452,9 +449,8 @@
 #=======================================================================================================================
 # main
 #=======================================================================================================================
-
-
 if __name__ == '__main__':
+    sys.stdin = BaseStdIn()
     port, client_port = sys.argv[1:3]
     import pydev_localhost
 
diff --git a/python/helpers/pydev/pydevd.py b/python/helpers/pydev/pydevd.py
index 2077903..1733c26 100644
--- a/python/helpers/pydev/pydevd.py
+++ b/python/helpers/pydev/pydevd.py
@@ -1,4 +1,6 @@
 #IMPORTANT: pydevd_constants must be the 1st thing defined because it'll keep a reference to the original sys._getframe
+from __future__ import nested_scopes # Jython 2.1 support
+
 import traceback
 
 from django_debug import DjangoLineBreakpoint
@@ -35,7 +37,7 @@
                          CMD_ADD_DJANGO_EXCEPTION_BREAK, \
                          CMD_REMOVE_DJANGO_EXCEPTION_BREAK, \
                          CMD_SMART_STEP_INTO,\
-    InternalChangeVariable, \
+                         InternalChangeVariable, \
                          InternalGetCompletions, \
                          InternalEvaluateExpression, \
                          InternalConsoleExec, \
@@ -55,23 +57,35 @@
                          PydevdLog, \
                          StartClient, \
                          StartServer, \
-                         InternalSetNextStatementThread, ReloadCodeCommand
+                         InternalSetNextStatementThread, \
+                         ReloadCodeCommand, \
+                         CMD_SET_PY_EXCEPTION, \
+                         CMD_IGNORE_THROWN_EXCEPTION_AT,\
+                         InternalGetBreakpointException, \
+                         InternalSendCurrExceptionTrace,\
+                         InternalSendCurrExceptionTraceProceeded,\
+                         CMD_ENABLE_DONT_TRACE, \
+                         CMD_GET_FILE_CONTENTS,\
+                         CMD_SET_PROPERTY_TRACE, CMD_RUN_CUSTOM_OPERATION,\
+                         InternalRunCustomOperation, CMD_EVALUATE_CONSOLE_EXPRESSION, InternalEvaluateConsoleExpression,\
+                         InternalConsoleGetCompletions
+
 from pydevd_file_utils import NormFileToServer, GetFilenameAndBase
 import pydevd_file_utils
 import pydevd_vars
 import pydevd_vm_type
 import pydevd_tracing
 import pydevd_io
-import pydev_monkey
 from pydevd_additional_thread_info import PyDBAdditionalThreadInfo
 from pydevd_custom_frames import CustomFramesContainer, CustomFramesContainerInit
+import pydevd_dont_trace
+import pydevd_traceproperty
 
+from _pydev_imps import _pydev_time as time
 
 if USE_LIB_COPY:
-    import _pydev_time as time
     import _pydev_threading as threading
 else:
-    import time
     import threading
 
 import os
@@ -80,10 +94,13 @@
 threadingEnumerate = threading.enumerate
 threadingCurrentThread = threading.currentThread
 
+try:
+    'dummy'.encode('utf-8') # Added because otherwise Jython 2.2.1 wasn't finding the encoding (if it wasn't loaded in the main thread).
+except:
+    pass
 
 DONT_TRACE = {
               # commonly used things from the stdlib that we don't want to trace
-              'threading.py':1,
               'Queue.py':1,
               'queue.py':1,
               'socket.py':1,
@@ -92,12 +109,19 @@
               'threading.py':1,
 
               #things from pydev that we don't want to trace
+              '_pydev_execfile.py':1,
+              '_pydev_jython_execfile.py':1,
+              '_pydev_threading':1,
+              'django_debug.py':1,
+              'django_frame.py':1,
+              'pydev_log.py':1,
               'pydevd.py':1 ,
               'pydevd_additional_thread_info.py':1,
-              'pydevd_custom_frames.py':1,
               'pydevd_comm.py':1,
               'pydevd_console.py':1 ,
               'pydevd_constants.py':1,
+              'pydevd_custom_frames.py':1,
+              'pydevd_dont_trace.py':1,
               'pydevd_exec.py':1,
               'pydevd_exec2.py':1,
               'pydevd_file_utils.py':1,
@@ -105,17 +129,18 @@
               'pydevd_import_class.py':1 ,
               'pydevd_io.py':1 ,
               'pydevd_psyco_stub.py':1,
+              'pydevd_referrers.py':1 ,
               'pydevd_reload.py':1 ,
               'pydevd_resolver.py':1 ,
+              'pydevd_save_locals.py':1 ,
+              'pydevd_signature.py':1,
               'pydevd_stackless.py':1 ,
               'pydevd_traceproperty.py':1,
               'pydevd_tracing.py':1 ,
-              'pydevd_signature.py':1,
               'pydevd_utils.py':1,
               'pydevd_vars.py':1,
               'pydevd_vm_type.py':1,
-              '_pydev_execfile.py':1,
-              '_pydev_jython_execfile.py':1
+              'pydevd_xml.py':1,
             }
 
 if IS_PY3K:
@@ -135,17 +160,28 @@
 from _pydev_filesystem_encoding import getfilesystemencoding
 file_system_encoding = getfilesystemencoding()
 
-def isThreadAlive(t):
-    try:
-        # If thread is not started yet we treat it as alive.
-        # It is required to debug threads started by start_new_thread in Python 3.4
-        if hasattr(t, '_is_stopped'):
-            alive = not t._is_stopped
-        else:
-            alive = not t.__stopped
-    except:
-        alive = t.isAlive()
-    return alive
+
+# Hack for https://sw-brainwy.rhcloud.com/tracker/PyDev/363 (i.e.: calling isAlive() can throw AssertionError under some circumstances)
+# It is required to debug threads started by start_new_thread in Python 3.4
+_temp = threading.Thread()
+if hasattr(_temp, '_is_stopped'): # Python 3.4 has this
+    def isThreadAlive(t):
+        try:
+            return not t._is_stopped
+        except:
+            return t.isAlive()
+    
+elif hasattr(_temp, '_Thread__stopped'): # Python 2.7 has this
+    def isThreadAlive(t):
+        try:
+            return not t._Thread__stopped
+        except:
+            return t.isAlive()
+    
+else: # Haven't checked all other versions, so, let's use the regular isAlive call in this case.
+    def isThreadAlive(t):
+        return t.isAlive()
+del _temp
 
 #=======================================================================================================================
 # PyDBCommandThread
@@ -159,7 +195,7 @@
         self.setName('pydevd.CommandThread')
 
     def OnRun(self):
-        for i in range(1, 10):
+        for i in xrange(1, 10):
             time.sleep(0.5) #this one will only start later on (because otherwise we may not have any non-daemon threads)
             if self.killReceived:
                 return
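
# The hunk above switches range() to xrange(), which does not exist on Python 3, so an
# xrange alias is assumed to be provided elsewhere in pydevd. A typical compatibility
# shim (illustrative only, not necessarily pydevd's exact code) looks like this:
try:
    xrange            # probe the name (present on Python 2)
except NameError:
    xrange = range    # Python 3: range is already lazy, so it is a drop-in replacement
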
@@ -187,7 +223,7 @@
     for t in threads:
         if hasattr(t, 'doKillPydevThread'):
             t.doKillPydevThread()
-    
+
 
 #=======================================================================================================================
 # PyDBCheckAliveThread
@@ -220,62 +256,6 @@
     def doKillPydevThread(self):
         pass
 
-if USE_LIB_COPY:
-    import _pydev_thread as thread
-else:
-    try:
-        import thread
-    except ImportError:
-        import _thread as thread #Py3K changed it.
-
-_original_start_new_thread = thread.start_new_thread
-
-if getattr(thread, '_original_start_new_thread', None) is None:
-    thread._original_start_new_thread = thread.start_new_thread
-
-#=======================================================================================================================
-# NewThreadStartup
-#=======================================================================================================================
-class NewThreadStartup:
-
-    def __init__(self, original_func, args, kwargs):
-        self.original_func = original_func
-        self.args = args
-        self.kwargs = kwargs
-
-    def __call__(self):
-        global_debugger = GetGlobalDebugger()
-        global_debugger.SetTrace(global_debugger.trace_dispatch)
-        self.original_func(*self.args, **self.kwargs)
-
-thread.NewThreadStartup = NewThreadStartup
-
-#=======================================================================================================================
-# pydev_start_new_thread
-#=======================================================================================================================
-def _pydev_start_new_thread(function, args, kwargs={}):
-    '''
-    We need to replace the original thread.start_new_thread with this function so that threads started through
-    it and not through the threading module are properly traced.
-    '''
-    if USE_LIB_COPY:
-        import _pydev_thread as thread
-    else:
-        try:
-            import thread
-        except ImportError:
-            import _thread as thread #Py3K changed it.
-
-    return thread._original_start_new_thread(thread.NewThreadStartup(function, args, kwargs), ())
-
-class PydevStartNewThread(object):
-    def __get__(self, obj, type=None):
-        return self
-
-    def __call__(self, function, args, kwargs={}):
-        return _pydev_start_new_thread(function, args, kwargs)
-
-pydev_start_new_thread = PydevStartNewThread()
 
 
 #=======================================================================================================================
@@ -304,10 +284,18 @@
         self.quitting = None
         self.cmdFactory = NetCommandFactory()
         self._cmd_queue = {}  # the hash of Queues. Key is thread id, value is thread
+
         self.breakpoints = {}
         self.django_breakpoints = {}
-        self.exception_set = {}
-        self.always_exception_set = set()
+
+        self.file_to_id_to_line_breakpoint = {}
+        self.file_to_id_to_django_breakpoint = {}
+
+        # Note: breakpoints dict should not be mutated: a copy should be created
+        # and later it should be assigned back (to prevent concurrency issues).
+        self.break_on_uncaught_exceptions = {}
+        self.break_on_caught_exceptions = {}
+
         self.django_exception_break = {}
         self.readyToRun = False
         self._main_lock = threading.Lock()
@@ -316,15 +304,31 @@
         CustomFramesContainer._py_db_command_thread_event = self._py_db_command_thread_event
         self._finishDebuggingSession = False
         self._terminationEventSent = False
-        self.force_post_mortem_stop = 0
         self.signature_factory = None
         self.SetTrace = pydevd_tracing.SetTrace
+        self.break_on_exceptions_thrown_in_same_context = False
+        self.ignore_exceptions_thrown_in_lines_with_ignore_exception = True
+
+        # Suspend debugger even if breakpoint condition raises an exception
+        SUSPEND_ON_BREAKPOINT_EXCEPTION = True
+        self.suspend_on_breakpoint_exception = SUSPEND_ON_BREAKPOINT_EXCEPTION
+
+        # By default user can step into properties getter/setter/deleter methods
+        self.disable_property_trace = False
+        self.disable_property_getter_trace = False
+        self.disable_property_setter_trace = False
+        self.disable_property_deleter_trace = False
 
         #this is a dict of thread ids pointing to thread ids. Whenever a command is passed to the java end that
         #acknowledges that a thread was created, the thread id should be passed here -- and if at some time we do not
         #find that thread alive anymore, we must remove it from this list and make the java side know that the thread
         #was killed.
         self._running_thread_ids = {}
+        self._set_breakpoints_with_id = False
+
+        # This attribute holds the file-> lines which have an @IgnoreException.
+        self.filename_to_lines_where_exceptions_are_ignored = {}
+
 
     def haveAliveThreads(self):
         for t in threadingEnumerate():
@@ -398,12 +402,12 @@
         global bufferStdErrToServer
 
         if bufferStdOutToServer:
-                initStdoutRedirect()
-                self.checkOutput(sys.stdoutBuf, 1) #@UndefinedVariable
+            initStdoutRedirect()
+            self.checkOutput(sys.stdoutBuf, 1) #@UndefinedVariable
 
         if bufferStdErrToServer:
-                initStderrRedirect()
-                self.checkOutput(sys.stderrBuf, 2) #@UndefinedVariable
+            initStderrRedirect()
+            self.checkOutput(sys.stderrBuf, 2) #@UndefinedVariable
 
     def checkOutput(self, out, outCtx):
         '''Checks the output to see if we have to send some buffered output to the debug server
@@ -521,6 +525,58 @@
             additionalInfo = None
 
 
+    def consolidate_breakpoints(self, file, id_to_breakpoint, breakpoints):
+        break_dict = {}
+        for breakpoint_id, pybreakpoint in DictIterItems(id_to_breakpoint):
+            break_dict[pybreakpoint.line] = pybreakpoint
+
+        breakpoints[file] = break_dict
+
+
+    def add_break_on_exception(
+        self,
+        exception,
+        notify_always,
+        notify_on_terminate,
+        notify_on_first_raise_only,
+        ):
+        eb = ExceptionBreakpoint(
+            exception,
+            notify_always,
+            notify_on_terminate,
+            notify_on_first_raise_only,
+        )
+
+        if eb.notify_on_terminate:
+            cp = self.break_on_uncaught_exceptions.copy()
+            cp[exception] = eb
+            if DebugInfoHolder.DEBUG_TRACE_BREAKPOINTS > 0:
+                pydev_log.error("Exceptions to hook on terminate: %s\n" % (cp,))
+            self.break_on_uncaught_exceptions = cp
+
+        if eb.notify_always:
+            cp = self.break_on_caught_exceptions.copy()
+            cp[exception] = eb
+            if DebugInfoHolder.DEBUG_TRACE_BREAKPOINTS > 0:
+                pydev_log.error("Exceptions to hook always: %s\n" % (cp,))
+            self.break_on_caught_exceptions = cp
+
+        return eb
+
+    def update_after_exceptions_added(self, added):
+        updated_on_caught = False
+        updated_on_uncaught = False
+
+        for eb in added:
+            if not updated_on_uncaught and eb.notify_on_terminate:
+                updated_on_uncaught = True
+                update_exception_hook(self)
+
+            if not updated_on_caught and eb.notify_always:
+                updated_on_caught = True
+                self.setTracingForUntracedContexts()
+
+
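# add_break_on_exception above never mutates the break_on_*_exceptions dicts in place:
# it copies, updates the copy, and rebinds the attribute, so a tracing thread that is
# still iterating the old dict never observes a half-applied change. A minimal
# standalone sketch of that copy-and-swap pattern (names are illustrative, not pydevd's):
class _ExceptionRegistry(object):

    def __init__(self):
        self.break_on_caught = {}  # read by other threads, so never mutated in place

    def add(self, exception_qname, breakpoint):
        cp = self.break_on_caught.copy()  # private copy owned by the updating thread
        cp[exception_qname] = breakpoint
        self.break_on_caught = cp         # single attribute rebind publishes the new dict

if __name__ == '__main__':
    registry = _ExceptionRegistry()
    registry.add('ValueError', object())
    print(sorted(registry.break_on_caught))  # ['ValueError']
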
     def processNetCommand(self, cmd_id, seq, text):
         '''Processes a command received from the Java side
 
@@ -536,6 +592,7 @@
         it may be worth refactoring it (actually, reordering the ifs so that the ones used mostly come before
         probably will give better performance).
         '''
+        #print ID_TO_MEANING[str(cmd_id)], repr(text)
 
         self._main_lock.acquire()
         try:
@@ -546,9 +603,28 @@
 
                 elif cmd_id == CMD_VERSION:
                     # response is version number
-                    local_version, pycharm_os = text.split('\t', 1)
+                    # ide_os should be 'WINDOWS' or 'UNIX'.
+                    ide_os = 'WINDOWS'
 
-                    pydevd_file_utils.set_pycharm_os(pycharm_os)
+                    # Breakpoints can be grouped by 'LINE' or by 'ID'.
+                    breakpoints_by = 'LINE'
+
+                    splitted = text.split('\t')
+                    if len(splitted) == 1:
+                        _local_version = splitted[0]
+
+                    elif len(splitted) == 2:
+                        _local_version, ide_os = splitted
+
+                    elif len(splitted) == 3:
+                        _local_version, ide_os, breakpoints_by = splitted
+
+                    if breakpoints_by == 'ID':
+                        self._set_breakpoints_with_id = True
+                    else:
+                        self._set_breakpoints_with_id = False
+
+                    pydevd_file_utils.set_ide_os(ide_os)
 
                     cmd = self.cmdFactory.makeVersionMessage(seq)
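
# Standalone illustration of the CMD_VERSION payload handled above: one to three
# tab-separated fields -- the version, optionally the IDE OS ('WINDOWS'/'UNIX') and
# optionally how breakpoints are keyed ('LINE'/'ID'). The helper name is made up.
def _parse_version_payload(text):
    ide_os = 'WINDOWS'
    breakpoints_by = 'LINE'
    splitted = text.split('\t')
    if len(splitted) == 2:
        _version, ide_os = splitted
    elif len(splitted) == 3:
        _version, ide_os, breakpoints_by = splitted
    return ide_os, breakpoints_by == 'ID'

if __name__ == '__main__':
    print(_parse_version_payload('1.1'))             # ('WINDOWS', False)
    print(_parse_version_payload('1.1\tUNIX\tID'))   # ('UNIX', True)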
 
@@ -684,26 +760,42 @@
 
                 elif cmd_id == CMD_SET_BREAK:
                     # func name: 'None': match anything. Empty: match global, specified: only method context.
-
                     # command to add some breakpoint.
                     # text is file\tline. Add to breakpoints dictionary
-                    type, file, line, condition, expression = text.split('\t', 4)
+                    if self._set_breakpoints_with_id:
+                        breakpoint_id, type, file, line, func_name, condition, expression = text.split('\t', 6)
 
-                    if not IS_PY3K:  # In Python 3, the frame object will have unicode for the file, whereas on python 2 it has a byte-array encoded with the filesystem encoding.
-                        file = file.encode(file_system_encoding)
-
-                    if condition.startswith('**FUNC**'):
-                        func_name, condition = condition.split('\t', 1)
+                        breakpoint_id = int(breakpoint_id)
+                        line = int(line)
 
                         # We must restore new lines and tabs as done in
                         # AbstractDebugTarget.breakpointAdded
                         condition = condition.replace("@_@NEW_LINE_CHAR@_@", '\n').\
                             replace("@_@TAB_CHAR@_@", '\t').strip()
 
-                        func_name = func_name[8:]
+                        expression = expression.replace("@_@NEW_LINE_CHAR@_@", '\n').\
+                            replace("@_@TAB_CHAR@_@", '\t').strip()
                     else:
-                        func_name = 'None'  # Match anything if not specified.
+                        #Note: this else should be removed after PyCharm migrates to setting
+                        #breakpoints by id (and ideally also provides func_name).
+                        type, file, line, condition, expression = text.split('\t', 4)
+                        # If we don't have an id given for each breakpoint, consider
+                        # the id to be the line.
+                        breakpoint_id = line = int(line)
+                        if condition.startswith('**FUNC**'):
+                            func_name, condition = condition.split('\t', 1)
 
+                            # We must restore new lines and tabs as done in
+                            # AbstractDebugTarget.breakpointAdded
+                            condition = condition.replace("@_@NEW_LINE_CHAR@_@", '\n').\
+                                replace("@_@TAB_CHAR@_@", '\t').strip()
+
+                            func_name = func_name[8:]
+                        else:
+                            func_name = 'None'  # Match anything if not specified.
+
+                    if not IS_PY3K:  # In Python 3, the frame object will have unicode for the file, whereas on python 2 it has a byte-array encoded with the filesystem encoding.
+                        file = file.encode(file_system_encoding)
 
                     file = NormFileToServer(file)
 
@@ -712,7 +804,6 @@
                             ' to file that does not exist: %s (will have no effect)\n' % (file,))
                         sys.stderr.flush()
 
-                    line = int(line)
 
                     if len(condition) <= 0 or condition is None or condition == "None":
                         condition = None
@@ -721,57 +812,68 @@
                         expression = None
 
                     if type == 'python-line':
-                        breakpoint = LineBreakpoint(type, True, condition, func_name, expression)
-                        breakpoint.add(self.breakpoints, file, line, func_name)
+                        breakpoint = LineBreakpoint(line, condition, func_name, expression)
+                        breakpoints = self.breakpoints
+                        file_to_id_to_breakpoint = self.file_to_id_to_line_breakpoint
                     elif type == 'django-line':
-                        breakpoint = DjangoLineBreakpoint(type, file, line, True, condition, func_name, expression)
-                        breakpoint.add(self.django_breakpoints, file, line, func_name)
+                        breakpoint = DjangoLineBreakpoint(file, line, condition, func_name, expression)
+                        breakpoints = self.django_breakpoints
+                        file_to_id_to_breakpoint = self.file_to_id_to_django_breakpoint
                     else:
                         raise NameError(type)
 
+                    if DebugInfoHolder.DEBUG_TRACE_BREAKPOINTS > 0:
+                        pydev_log.debug('Added breakpoint:%s - line:%s - func_name:%s\n' % (file, line, func_name.encode('utf-8')))
+                        sys.stderr.flush()
+
+                    if DictContains(file_to_id_to_breakpoint, file):
+                        id_to_pybreakpoint = file_to_id_to_breakpoint[file]
+                    else:
+                        id_to_pybreakpoint = file_to_id_to_breakpoint[file] = {}
+
+                    id_to_pybreakpoint[breakpoint_id] = breakpoint
+                    self.consolidate_breakpoints(file, id_to_pybreakpoint, breakpoints)
+
                     self.setTracingForUntracedContexts()
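
# Standalone illustration of the two CMD_SET_BREAK payload shapes handled above: the
# id-based form 'id\ttype\tfile\tline\tfunc_name\tcondition\texpression' and the legacy
# line-based form 'type\tfile\tline\tcondition\texpression'. Newlines and tabs inside
# the condition/expression travel escaped as @_@NEW_LINE_CHAR@_@ / @_@TAB_CHAR@_@.
# The helper names are made up for illustration; they are not pydevd's API.
def _unescape(s):
    return s.replace("@_@NEW_LINE_CHAR@_@", '\n').replace("@_@TAB_CHAR@_@", '\t').strip()

def _parse_set_break(text, with_id):
    if with_id:
        bp_id, bp_type, filename, line, func_name, condition, expression = text.split('\t', 6)
        return int(bp_id), bp_type, filename, int(line), func_name, _unescape(condition), _unescape(expression)
    bp_type, filename, line, condition, expression = text.split('\t', 4)
    # Without an explicit id the line doubles as the id and the func_name defaults to 'None'.
    return int(line), bp_type, filename, int(line), 'None', condition, expression

if __name__ == '__main__':
    print(_parse_set_break('3\tpython-line\tfoo.py\t12\tNone\tx > 1\tNone', True))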
 
                 elif cmd_id == CMD_REMOVE_BREAK:
                     #command to remove some breakpoint
-                    #text is file\tline. Remove from breakpoints dictionary
-                    type, file, line = text.split('\t', 2)
+                    #text is type\tfile\tid. Remove from breakpoints dictionary
+                    breakpoint_type, file, breakpoint_id = text.split('\t', 2)
+
+                    if not IS_PY3K:  # In Python 3, the frame object will have unicode for the file, whereas on python 2 it has a byte-array encoded with the filesystem encoding.
+                        file = file.encode(file_system_encoding)
+
                     file = NormFileToServer(file)
+
                     try:
-                        line = int(line)
+                        breakpoint_id = int(breakpoint_id)
                     except ValueError:
-                        pass
+                        pydev_log.error('Error removing breakpoint. Expected breakpoint_id to be an int. Found: %s' % (breakpoint_id,))
 
                     else:
-                        found = False
+                        if breakpoint_type == 'python-line':
+                            breakpoints = self.breakpoints
+                            file_to_id_to_breakpoint = self.file_to_id_to_line_breakpoint
+                        elif breakpoint_type == 'django-line':
+                            breakpoints = self.django_breakpoints
+                            file_to_id_to_breakpoint = self.file_to_id_to_django_breakpoint
+                        else:
+                            raise NameError(breakpoint_type)
+
                         try:
-                            if type == 'django-line':
-                                del self.django_breakpoints[file][line]
-                            elif type == 'python-line':
-                                del self.breakpoints[file][line] #remove the breakpoint in that line
-                            else:
-                                try:
-                                    del self.django_breakpoints[file][line]
-                                    found = True
-                                except:
-                                    pass
-                                try:
-                                    del self.breakpoints[file][line] #remove the breakpoint in that line
-                                    found = True
-                                except:
-                                    pass
-
+                            id_to_pybreakpoint = file_to_id_to_breakpoint[file]
                             if DebugInfoHolder.DEBUG_TRACE_BREAKPOINTS > 0:
-                                sys.stderr.write('Removed breakpoint:%s - %s\n' % (file, line))
-                                sys.stderr.flush()
+                                existing = id_to_pybreakpoint[breakpoint_id]
+                                sys.stderr.write('Removed breakpoint:%s - line:%s - func_name:%s (id: %s)\n' % (
+                                    file, existing.line, existing.func_name.encode('utf-8'), breakpoint_id))
+
+                            del id_to_pybreakpoint[breakpoint_id]
+                            self.consolidate_breakpoints(file, id_to_pybreakpoint, breakpoints)
                         except KeyError:
-                            found = False
+                            pydev_log.error("Error removing breakpoint: breakpoint id not found: file: %s, id: %s. Available ids: %s\n" % (
+                                file, breakpoint_id, DictKeys(id_to_pybreakpoint)))
 
-                        if not found:
-                            #ok, it's not there...
-                            if DebugInfoHolder.DEBUG_TRACE_BREAKPOINTS > 0:
-                                #Sometimes, when adding a breakpoint, it adds a remove command before (don't really know why)
-                                sys.stderr.write("breakpoint not found: %s - %s\n" % (file, line))
-                                sys.stderr.flush()
 
                 elif cmd_id == CMD_EVALUATE_EXPRESSION or cmd_id == CMD_EXEC_EXPRESSION:
                     #command to evaluate the given expression
@@ -790,29 +892,118 @@
                     int_cmd = InternalConsoleExec(seq, thread_id, frame_id, expression)
                     self.postInternalCommand(int_cmd, thread_id)
 
+                elif cmd_id == CMD_SET_PY_EXCEPTION:
+                    # Command which receives set of exceptions on which user wants to break the debugger
+                    # text is: break_on_uncaught;break_on_caught;break_in_same_context;ignore_lines_with_ignore_exception;TypeError;ImportError;zipimport.ZipImportError;
+                    # This API is optional and works 'in bulk' -- it's possible
+                    # to get finer-grained control with CMD_ADD_EXCEPTION_BREAK/CMD_REMOVE_EXCEPTION_BREAK
+                    # which allows setting caught/uncaught per exception.
+                    #
+                    splitted = text.split(';')
+                    self.break_on_uncaught_exceptions = {}
+                    self.break_on_caught_exceptions = {}
+                    added = []
+                    if len(splitted) >= 4:
+                        if splitted[0] == 'true':
+                            break_on_uncaught = True
+                        else:
+                            break_on_uncaught = False
+
+                        if splitted[1] == 'true':
+                            break_on_caught = True
+                        else:
+                            break_on_caught = False
+
+                        if splitted[2] == 'true':
+                            self.break_on_exceptions_thrown_in_same_context = True
+                        else:
+                            self.break_on_exceptions_thrown_in_same_context = False
+
+                        if splitted[3] == 'true':
+                            self.ignore_exceptions_thrown_in_lines_with_ignore_exception = True
+                        else:
+                            self.ignore_exceptions_thrown_in_lines_with_ignore_exception = False
+
+                        for exception_type in splitted[4:]:
+                            exception_type = exception_type.strip()
+                            if not exception_type:
+                                continue
+
+                            exception_breakpoint = self.add_break_on_exception(
+                                exception_type,
+                                notify_always=break_on_caught,
+                                notify_on_terminate=break_on_uncaught,
+                                notify_on_first_raise_only=False,
+                            )
+                            added.append(exception_breakpoint)
+
+                        self.update_after_exceptions_added(added)
+
+                    else:
+                        sys.stderr.write("Error when setting exception list. Received: %s\n" % (text,))
+
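# Standalone illustration of the CMD_SET_PY_EXCEPTION payload parsed above: four
# ';'-separated true/false flags followed by the exception names to break on, e.g.
# 'true;false;false;true;ValueError;zipimport.ZipImportError'. The helper is only a sketch.
def _parse_set_py_exception(text):
    splitted = text.split(';')
    flags = [s == 'true' for s in splitted[:4]]
    exceptions = [s.strip() for s in splitted[4:] if s.strip()]
    return flags, exceptions

if __name__ == '__main__':
    print(_parse_set_py_exception('true;false;false;true;ValueError;zipimport.ZipImportError'))
    # ([True, False, False, True], ['ValueError', 'zipimport.ZipImportError'])
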
+                elif cmd_id == CMD_GET_FILE_CONTENTS:
+
+                    if not IS_PY3K:  # In Python 3, the frame object will have unicode for the file, whereas on python 2 it has a byte-array encoded with the filesystem encoding.
+                        text = text.encode(file_system_encoding)
+
+                    if os.path.exists(text):
+                        f = open(text, 'r')
+                        try:
+                            source = f.read()
+                        finally:
+                            f.close()
+                        cmd = self.cmdFactory.makeGetFileContents(seq, source)
+
+                elif cmd_id == CMD_SET_PROPERTY_TRACE:
+                    # Command which receives whether to trace property getter/setter/deleter
+                    # text is: feature_state(true/false);disable_getter(true/false);disable_setter(true/false);disable_deleter(true/false)
+                    if text != "":
+                        splitted = text.split(';')
+                        if len(splitted) >= 4:
+                            if self.disable_property_trace is False and splitted[0] == 'true':
+                                # Replacing property by custom property only when the debugger starts
+                                pydevd_traceproperty.replace_builtin_property()
+                                self.disable_property_trace = True
+                            # Enable/Disable tracing of the property getter
+                            if splitted[1] == 'true':
+                                self.disable_property_getter_trace = True
+                            else:
+                                self.disable_property_getter_trace = False
+                            # Enable/Disable tracing of the property setter
+                            if splitted[2] == 'true':
+                                self.disable_property_setter_trace = True
+                            else:
+                                self.disable_property_setter_trace = False
+                            # Enable/Disable tracing of the property deleter
+                            if splitted[3] == 'true':
+                                self.disable_property_deleter_trace = True
+                            else:
+                                self.disable_property_deleter_trace = False
+                    else:
+                        # User hasn't configured any settings for property tracing
+                        pass
+
                 elif cmd_id == CMD_ADD_EXCEPTION_BREAK:
                     exception, notify_always, notify_on_terminate = text.split('\t', 2)
-
-                    eb = ExceptionBreakpoint(exception, notify_always, notify_on_terminate)
-
-                    self.exception_set[exception] = eb
-
-                    if eb.notify_on_terminate:
-                        update_exception_hook(self)
-                    if DebugInfoHolder.DEBUG_TRACE_BREAKPOINTS > 0:
-                        pydev_log.error("Exceptions to hook on terminate: %s\n" % (self.exception_set,))
-
-                    if eb.notify_always:
-                        self.always_exception_set.add(exception)
-                        if DebugInfoHolder.DEBUG_TRACE_BREAKPOINTS > 0:
-                            pydev_log.error("Exceptions to hook always: %s\n" % (self.always_exception_set,))
-                        self.setTracingForUntracedContexts()
+                    exception_breakpoint = self.add_break_on_exception(
+                        exception,
+                        notify_always=int(notify_always) > 0,
+                        notify_on_terminate=int(notify_on_terminate) == 1,
+                        notify_on_first_raise_only=int(notify_always) == 2
+                    )
+                    self.update_after_exceptions_added([exception_breakpoint])
 
                 elif cmd_id == CMD_REMOVE_EXCEPTION_BREAK:
                     exception = text
                     try:
-                        del self.exception_set[exception]
-                        self.always_exception_set.remove(exception)
+                        cp = self.break_on_uncaught_exceptions.copy()
+                        DictPop(cp, exception, None)
+                        self.break_on_uncaught_exceptions = cp
+
+                        cp = self.break_on_caught_exceptions.copy()
+                        DictPop(cp, exception, None)
+                        self.break_on_caught_exceptions = cp
                     except:
                         pydev_log.debug("Error while removing exception %s"%sys.exc_info()[0]);
                     update_exception_hook(self)
@@ -840,6 +1031,77 @@
                     except :
                         pass
 
+                elif cmd_id == CMD_EVALUATE_CONSOLE_EXPRESSION:
+                    # Command which takes care for the debug console communication
+                    if text != "":
+                        thread_id, frame_id, console_command = text.split('\t', 2)
+                        console_command, line = console_command.split('\t')
+                        if console_command == 'EVALUATE':
+                            int_cmd = InternalEvaluateConsoleExpression(seq, thread_id, frame_id, line)
+                        elif console_command == 'GET_COMPLETIONS':
+                            int_cmd = InternalConsoleGetCompletions(seq, thread_id, frame_id, line)
+                        self.postInternalCommand(int_cmd, thread_id)
+
+                elif cmd_id == CMD_RUN_CUSTOM_OPERATION:
+                    # Command which runs a custom operation
+                    if text != "":
+                        try:
+                            location, custom = text.split('||', 1)
+                        except:
+                            sys.stderr.write('Custom operation now needs a || separator. Found: %s\n' % (text,))
+                            raise
+
+                        thread_id, frame_id, scopeattrs = location.split('\t', 2)
+
+                        if scopeattrs.find('\t') != -1:  # there are attributes beyond scope
+                            scope, attrs = scopeattrs.split('\t', 1)
+                        else:
+                            scope, attrs = (scopeattrs, None)
+
+                        # : style: EXECFILE or EXEC
+                        # : encoded_code_or_file: file to execute or code
+                        # : fnname: name of function to be executed in the resulting namespace
+                        style, encoded_code_or_file, fnname = custom.split('\t', 3)
+                        int_cmd = InternalRunCustomOperation(seq, thread_id, frame_id, scope, attrs,
+                                                             style, encoded_code_or_file, fnname)
+                        self.postInternalCommand(int_cmd, thread_id)
+
+                elif cmd_id == CMD_IGNORE_THROWN_EXCEPTION_AT:
+                    if text:
+                        replace = 'REPLACE:'  # Not all 3.x versions support u'REPLACE:', so, doing workaround.
+                        if not IS_PY3K:
+                            replace = unicode(replace)
+
+                        if text.startswith(replace):
+                            text = text[8:]
+                            self.filename_to_lines_where_exceptions_are_ignored.clear()
+
+                        if text:
+                            for line in text.split('||'):  # Can be bulk-created (one in each line)
+                                filename, line_number = line.split('|')
+                                if not IS_PY3K:
+                                    filename = filename.encode(file_system_encoding)
+
+                                filename = NormFileToServer(filename)
+
+                                if os.path.exists(filename):
+                                    lines_ignored = self.filename_to_lines_where_exceptions_are_ignored.get(filename)
+                                    if lines_ignored is None:
+                                        lines_ignored = self.filename_to_lines_where_exceptions_are_ignored[filename] = {}
+                                    lines_ignored[int(line_number)] = 1
+                                else:
+                                    sys.stderr.write('pydev debugger: warning: trying to ignore exception thrown'\
+                                        ' on file that does not exist: %s (will have no effect)\n' % (filename,))
+
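# Standalone illustration of the CMD_IGNORE_THROWN_EXCEPTION_AT payload handled above:
# an optional 'REPLACE:' prefix (clear the current table first) followed by
# 'filename|line' entries separated by '||'. The helper name is made up.
def _parse_ignore_thrown(text, current=None):
    table = dict(current or {})
    if text.startswith('REPLACE:'):
        text = text[len('REPLACE:'):]
        table = {}
    for entry in text.split('||'):
        if entry:
            filename, line_number = entry.split('|')
            table.setdefault(filename, {})[int(line_number)] = 1
    return table

if __name__ == '__main__':
    print(_parse_ignore_thrown('REPLACE:foo.py|10||foo.py|20||bar.py|3'))
    # {'foo.py': {10: 1, 20: 1}, 'bar.py': {3: 1}}
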
+                elif cmd_id == CMD_ENABLE_DONT_TRACE:
+                    if text:
+                        true_str = 'true'  # Not all 3.x versions support u'true', so, doing workaround.
+                        if not IS_PY3K:
+                            true_str = unicode(true_str)
+
+                        mode = text.strip() == true_str
+                        pydevd_dont_trace.trace_filter(mode)
+
                 else:
                     #I have no idea what this is all about
                     cmd = self.cmdFactory.makeErrorMessage(seq, "unexpected command " + str(cmd_id))
@@ -881,6 +1143,44 @@
         thread.additionalInfo.pydev_state = STATE_SUSPEND
         thread.stop_reason = stop_reason
 
+        # If a conditional breakpoint raised an exception during evaluation, send the details to the Java side
+        if stop_reason == CMD_SET_BREAK and self.suspend_on_breakpoint_exception:
+            self.sendBreakpointConditionException(thread)
+
+
+    def sendBreakpointConditionException(self, thread):
+        """If a conditional breakpoint raises an exception during evaluation,
+        send the exception details to the Java side.
+        """
+        thread_id = GetThreadId(thread)
+        conditional_breakpoint_exception_tuple = thread.additionalInfo.conditional_breakpoint_exception
+        # conditional_breakpoint_exception_tuple - should contain 2 values (exception_type, stacktrace)
+        if conditional_breakpoint_exception_tuple and len(conditional_breakpoint_exception_tuple) == 2:
+            exc_type, stacktrace = conditional_breakpoint_exception_tuple
+            int_cmd = InternalGetBreakpointException(thread_id, exc_type, stacktrace)
+            # Reset the conditional_breakpoint_exception details to None
+            thread.additionalInfo.conditional_breakpoint_exception = None
+            self.postInternalCommand(int_cmd, thread_id)
+
+
+    def sendCaughtExceptionStack(self, thread, arg, curr_frame_id):
+        """Sends details on the exception which was caught (and where we stopped) to the java side.
+
+        arg is: exception type, description, traceback object
+        """
+        thread_id = GetThreadId(thread)
+        int_cmd = InternalSendCurrExceptionTrace(thread_id, arg, curr_frame_id)
+        self.postInternalCommand(int_cmd, thread_id)
+
+
+    def sendCaughtExceptionStackProceeded(self, thread):
+        """Sends that some thread was resumed and is no longer showing an exception trace.
+        """
+        thread_id = GetThreadId(thread)
+        int_cmd = InternalSendCurrExceptionTraceProceeded(thread_id)
+        self.postInternalCommand(int_cmd, thread_id)
+        self.processInternalCommands()
+
 
     def doWaitSuspend(self, thread, frame, event, arg): #@UnusedVariable
         """ busy waits until the thread state changes to RUN
@@ -898,7 +1198,7 @@
         try:
             from_this_thread = []
 
-            for frame_id, custom_frame in CustomFramesContainer.custom_frames.items():
+            for frame_id, custom_frame in DictIterItems(CustomFramesContainer.custom_frames):
                 if custom_frame.thread_id == thread.ident:
                     # print >> sys.stderr, 'Frame created: ', frame_id
                     self.writer.addCommand(self.cmdFactory.makeCustomFrameCreatedMessage(frame_id, custom_frame.name))
@@ -991,7 +1291,6 @@
 
     def handle_post_mortem_stop(self, additionalInfo, t):
         pydev_log.debug("We are stopping in post-mortem\n")
-        self.force_post_mortem_stop -= 1
         frame, frames_byid = additionalInfo.pydev_force_stop_at_exception
         thread_id = GetThreadId(t)
         pydevd_vars.addAdditionalFrameById(thread_id, frames_byid)
@@ -1060,9 +1359,9 @@
             if additionalInfo.is_tracing:
                 f = frame
                 while f is not None:
-                    fname, bs = GetFilenameAndBase(f)
-                    if bs == 'pydevd_frame.py':
-                        if 'trace_dispatch' == f.f_code.co_name:
+                    if 'trace_dispatch' == f.f_code.co_name:
+                        _fname, bs = GetFilenameAndBase(f)
+                        if bs == 'pydevd_frame.py':
                             return None  #we don't want to trace code invoked from pydevd_frame.trace_dispatch
                     f = f.f_back
 
@@ -1071,9 +1370,6 @@
                 self.processThreadNotAlive(GetThreadId(t))
                 return None  # suspend tracing
 
-            if is_file_to_ignore:
-                return None
-
             # each new frame...
             return additionalInfo.CreateDbFrame((self, filename, additionalInfo, t, frame)).trace_dispatch(frame, event, arg)
 
@@ -1120,18 +1416,18 @@
 
     def update_trace(self, frame, dispatch_func, overwrite_prev):
         if frame.f_trace is None:
-          frame.f_trace = dispatch_func
+            frame.f_trace = dispatch_func
         else:
-          if overwrite_prev:
-              frame.f_trace = dispatch_func
-          else:
-              try:
-                  #If it's the trace_exception, go back to the frame trace dispatch!
-                  if frame.f_trace.im_func.__name__ == 'trace_exception':
-                      frame.f_trace = frame.f_trace.im_self.trace_dispatch
-              except AttributeError:
-                  pass
-              frame = frame.f_back
+            if overwrite_prev:
+                frame.f_trace = dispatch_func
+            else:
+                try:
+                    #If it's the trace_exception, go back to the frame trace dispatch!
+                    if frame.f_trace.im_func.__name__ == 'trace_exception':
+                        frame.f_trace = frame.f_trace.im_self.trace_dispatch
+                except AttributeError:
+                    pass
+                frame = frame.f_back
         del frame
 
     def prepareToRun(self):
@@ -1150,6 +1446,7 @@
         PyDBCommandThread(self).start()
         PyDBCheckAliveThread(self).start()
 
+
     def patch_threads(self):
         try:
             # not available in jython!
@@ -1157,11 +1454,8 @@
         except:
             pass
 
-        try:
-            thread.start_new_thread = pydev_start_new_thread
-            thread.start_new = pydev_start_new_thread
-        except:
-            pass
+        from pydev_monkey import patch_thread_modules
+        patch_thread_modules()
 
 
     def run(self, file, globals=None, locals=None, set_trace=True):
@@ -1185,7 +1479,7 @@
             sys.modules['__main__'] = m
             if hasattr(sys.modules['pydevd'], '__loader__'):
                 setattr(m, '__loader__', getattr(sys.modules['pydevd'], '__loader__'))
-                
+
             m.__file__ = file
             globals = m.__dict__
             try:
@@ -1246,7 +1540,8 @@
     setup['server'] = False
     setup['port'] = 0
     setup['file'] = ''
-    setup['multiproc'] = False
+    setup['multiproc'] = False  # Used by PyCharm (reuses connection: ssh tunneling)
+    setup['multiprocess'] = False  # Used by PyDev (creates a new connection to the IDE)
     setup['save-signatures'] = False
     i = 0
     del argv[0]
@@ -1279,6 +1574,9 @@
         elif (argv[i] == '--multiproc'):
             del argv[i]
             setup['multiproc'] = True
+        elif (argv[i] == '--multiprocess'):
+            del argv[i]
+            setup['multiprocess'] = True
         elif (argv[i] == '--save-signatures'):
             del argv[i]
             setup['save-signatures'] = True
@@ -1423,7 +1721,7 @@
 
         CustomFramesContainer.custom_frames_lock.acquire()
         try:
-            for _frameId, custom_frame in CustomFramesContainer.custom_frames.items():
+            for _frameId, custom_frame in DictIterItems(CustomFramesContainer.custom_frames):
                 debugger.SetTraceForFrameAndParents(custom_frame.frame, False)
         finally:
             CustomFramesContainer.custom_frames_lock.release()
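
# The patch swaps direct dict.items() iteration for helpers such as DictIterItems,
# DictContains, DictPop and DictKeys (presumably coming from pydevd_constants). They
# are assumed to be thin Python 2/3 compatibility wrappers along these lines (sketch
# only, not the exact pydevd code):
import sys

if sys.version_info[0] >= 3:
    def DictIterItems(d):
        return d.items()        # a view is fine for iteration on Python 3
else:
    def DictIterItems(d):
        return d.iteritems()    # avoids building an intermediate list on Python 2

def DictContains(d, key):
    return key in d

def DictPop(d, key, default=None):
    return d.pop(key, default)

def DictKeys(d):
    return list(d.keys())
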
@@ -1492,24 +1790,21 @@
             threading.settrace(None) # for all future threads
         except:
             pass
-        
-        try:
-            thread.start_new_thread = _original_start_new_thread
-            thread.start_new = _original_start_new_thread
-        except:
-            pass
-    
+
+        from pydev_monkey import undo_patch_thread_modules
+        undo_patch_thread_modules()
+
         debugger = GetGlobalDebugger()
-        
+
         if debugger:
             debugger.trace_dispatch = None
-    
+
             debugger.SetTraceForFrameAndParents(GetFrame(), False)
-        
+
             debugger.exiting()
-        
-            killAllPydevThreads()  
-        
+
+            killAllPydevThreads()
+
         connected = False
 
 class Dispatcher(object):
@@ -1544,21 +1839,28 @@
             self.killReceived = True
 
 
+DISPATCH_APPROACH_NEW_CONNECTION = 1 # Used by PyDev
+DISPATCH_APPROACH_EXISTING_CONNECTION = 2 # Used by PyCharm
+DISPATCH_APPROACH = DISPATCH_APPROACH_NEW_CONNECTION
+
 def dispatch():
-    argv = sys.original_argv[:]
-    setup = processCommandLine(argv)
+    setup = SetupHolder.setup
     host = setup['client']
     port = setup['port']
-    dispatcher = Dispatcher()
-    try:
-        dispatcher.connect(host, port)
-        port = dispatcher.port
-    finally:
-        dispatcher.close()
+    if DISPATCH_APPROACH == DISPATCH_APPROACH_EXISTING_CONNECTION:
+        dispatcher = Dispatcher()
+        try:
+            dispatcher.connect(host, port)
+            port = dispatcher.port
+        finally:
+            dispatcher.close()
     return host, port
 
 
 def settrace_forked():
+    '''
+    When creating a fork from a process in the debugger, we need to reset the whole debugger environment!
+    '''
     host, port = dispatch()
 
     import pydevd_tracing
@@ -1578,6 +1880,15 @@
             overwrite_prev_trace=True,
             patch_multiprocessing=True,
             )
+
+#=======================================================================================================================
+# SetupHolder
+#=======================================================================================================================
+class SetupHolder:
+
+    setup = None
+
+
 #=======================================================================================================================
 # main
 #=======================================================================================================================
@@ -1586,6 +1897,7 @@
     try:
         sys.original_argv = sys.argv[:]
         setup = processCommandLine(sys.argv)
+        SetupHolder.setup = setup
     except ValueError:
         traceback.print_exc()
         usage(1)
@@ -1611,62 +1923,73 @@
     f = setup['file']
     fix_app_engine_debug = False
 
-    if setup['multiproc']:
-        pydev_log.debug("Started in multiproc mode\n")
 
-        dispatcher = Dispatcher()
-        try:
-            dispatcher.connect(host, port)
-            if dispatcher.port is not None:
-                port = dispatcher.port
-                pydev_log.debug("Received port %d\n" %port)
-                pydev_log.info("pydev debugger: process %d is connecting\n"% os.getpid())
-
-                try:
-                    pydev_monkey.patch_new_process_functions()
-                except:
-                    pydev_log.error("Error patching process functions\n")
-                    traceback.print_exc()
-            else:
-                pydev_log.error("pydev debugger: couldn't get port for new debug process\n")
-        finally:
-            dispatcher.close()
+    try:
+        import pydev_monkey
+    except:
+        pass #Not usable on jython 2.1
     else:
-        pydev_log.info("pydev debugger: starting\n")
+        if setup['multiprocess']: # PyDev
+            pydev_monkey.patch_new_process_functions()
 
-        try:
-            pydev_monkey.patch_new_process_functions_with_warning()
-        except:
-            pydev_log.error("Error patching process functions\n")
-            traceback.print_exc()
+        elif setup['multiproc']: # PyCharm
+            pydev_log.debug("Started in multiproc mode\n")
+            # Note: we're not inside a method, so there's no need for 'global'
+            DISPATCH_APPROACH = DISPATCH_APPROACH_EXISTING_CONNECTION
 
-        # Only do this patching if we're not running with multiprocess turned on.
-        if f.find('dev_appserver.py') != -1:
-            if os.path.basename(f).startswith('dev_appserver.py'):
-                appserver_dir = os.path.dirname(f)
-                version_file = os.path.join(appserver_dir, 'VERSION')
-                if os.path.exists(version_file):
+            dispatcher = Dispatcher()
+            try:
+                dispatcher.connect(host, port)
+                if dispatcher.port is not None:
+                    port = dispatcher.port
+                    pydev_log.debug("Received port %d\n" %port)
+                    pydev_log.info("pydev debugger: process %d is connecting\n"% os.getpid())
+
                     try:
-                        stream = open(version_file, 'r')
-                        try:
-                            for line in stream.read().splitlines():
-                                line = line.strip()
-                                if line.startswith('release:'):
-                                    line = line[8:].strip()
-                                    version = line.replace('"', '')
-                                    version = version.split('.')
-                                    if int(version[0]) > 1:
-                                        fix_app_engine_debug = True
-
-                                    elif int(version[0]) == 1:
-                                        if int(version[1]) >= 7:
-                                            # Only fix from 1.7 onwards
-                                            fix_app_engine_debug = True
-                                    break
-                        finally:
-                            stream.close()
+                        pydev_monkey.patch_new_process_functions()
                     except:
+                        pydev_log.error("Error patching process functions\n")
                         traceback.print_exc()
+                else:
+                    pydev_log.error("pydev debugger: couldn't get port for new debug process\n")
+            finally:
+                dispatcher.close()
+        else:
+            pydev_log.info("pydev debugger: starting\n")
+
+            try:
+                pydev_monkey.patch_new_process_functions_with_warning()
+            except:
+                pydev_log.error("Error patching process functions\n")
+                traceback.print_exc()
+
+            # Only do this patching if we're not running with multiprocess turned on.
+            if f.find('dev_appserver.py') != -1:
+                if os.path.basename(f).startswith('dev_appserver.py'):
+                    appserver_dir = os.path.dirname(f)
+                    version_file = os.path.join(appserver_dir, 'VERSION')
+                    if os.path.exists(version_file):
+                        try:
+                            stream = open(version_file, 'r')
+                            try:
+                                for line in stream.read().splitlines():
+                                    line = line.strip()
+                                    if line.startswith('release:'):
+                                        line = line[8:].strip()
+                                        version = line.replace('"', '')
+                                        version = version.split('.')
+                                        if int(version[0]) > 1:
+                                            fix_app_engine_debug = True
+
+                                        elif int(version[0]) == 1:
+                                            if int(version[1]) >= 7:
+                                                # Only fix from 1.7 onwards
+                                                fix_app_engine_debug = True
+                                        break
+                            finally:
+                                stream.close()
+                        except:
+                            traceback.print_exc()
 
     try:
         # In the default run (i.e.: run directly on debug mode), we try to patch stackless as soon as possible
@@ -1718,16 +2041,21 @@
             import pydevd_psyco_stub
             sys.modules['psyco'] = pydevd_psyco_stub
 
-    debugger = PyDB()
+        debugger = PyDB()
 
-    if setup['save-signatures']:
-        if pydevd_vm_type.GetVmType() == pydevd_vm_type.PydevdVmType.JYTHON:
-            sys.stderr.write("Collecting run-time type information is not supported for Jython\n")
-        else:
-            debugger.signature_factory = SignatureFactory()
+        if setup['save-signatures']:
+            if pydevd_vm_type.GetVmType() == pydevd_vm_type.PydevdVmType.JYTHON:
+                sys.stderr.write("Collecting run-time type information is not supported for Jython\n")
+            else:
+                debugger.signature_factory = SignatureFactory()
 
-    debugger.connect(host, port)
+        try:
+            debugger.connect(host, port)
+        except:
+            sys.stderr.write("Could not connect to %s: %s\n" % (host, port))
+            traceback.print_exc()
+            sys.exit(1)
 
-    connected = True #Mark that we're connected when started from inside ide.
+        connected = True  # Mark that we're connected when started from inside ide.
 
-    debugger.run(setup['file'], None, None)
+        debugger.run(setup['file'], None, None)
diff --git a/python/helpers/pydev/pydevd_additional_thread_info.py b/python/helpers/pydev/pydevd_additional_thread_info.py
index 1b0fc2c..fa906ad 100644
--- a/python/helpers/pydev/pydevd_additional_thread_info.py
+++ b/python/helpers/pydev/pydevd_additional_thread_info.py
@@ -12,7 +12,7 @@
 #=======================================================================================================================
 class AbstractPyDBAdditionalThreadInfo:
     def __init__(self):
-        self.pydev_state = STATE_RUN 
+        self.pydev_state = STATE_RUN
         self.pydev_step_stop = None
         self.pydev_step_cmd = None
         self.pydev_notify_kill = False
@@ -20,51 +20,52 @@
         self.pydev_smart_step_stop = None
         self.pydev_django_resolve_frame = None
         self.is_tracing = False
+        self.conditional_breakpoint_exception = None
 
-        
+
     def IterFrames(self):
         raise NotImplementedError()
-    
+
     def CreateDbFrame(self, args):
         #args = mainDebugger, filename, base, additionalInfo, t, frame
         raise NotImplementedError()
-    
+
     def __str__(self):
         return 'State:%s Stop:%s Cmd: %s Kill:%s' % (self.pydev_state, self.pydev_step_stop, self.pydev_step_cmd, self.pydev_notify_kill)
 
-    
+
 #=======================================================================================================================
 # PyDBAdditionalThreadInfoWithCurrentFramesSupport
 #=======================================================================================================================
 class PyDBAdditionalThreadInfoWithCurrentFramesSupport(AbstractPyDBAdditionalThreadInfo):
-    
+
     def IterFrames(self):
         #sys._current_frames(): dictionary with thread id -> topmost frame
         return sys._current_frames().values() #return a copy... don't know if it's changed if we did get an iterator
 
     #just create the db frame directly
     CreateDbFrame = PyDBFrame
-    
+
 #=======================================================================================================================
 # PyDBAdditionalThreadInfoWithoutCurrentFramesSupport
 #=======================================================================================================================
 class PyDBAdditionalThreadInfoWithoutCurrentFramesSupport(AbstractPyDBAdditionalThreadInfo):
-    
+
     def __init__(self):
         AbstractPyDBAdditionalThreadInfo.__init__(self)
-        #That's where the last frame entered is kept. That's needed so that we're able to 
+        #That's where the last frame entered is kept. That's needed so that we're able to
         #trace contexts that were previously untraced and are currently active. So, the bad thing
         #is that the frame may be kept alive longer than it would if we go up on the frame stack,
         #and is only disposed when some other frame is removed.
-        #A better way would be if we could get the topmost frame for each thread, but that's 
+        #A better way would be if we could get the topmost frame for each thread, but that's
         #not possible (until python 2.5 -- which is the PyDBAdditionalThreadInfoWithCurrentFramesSupport version)
         #Or if the user compiled threadframe (from http://www.majid.info/mylos/stories/2004/06/10/threadframe.html)
-        
+
         #NOT RLock!! (could deadlock if it was)
         self.lock = threading.Lock()
         self._acquire_lock = self.lock.acquire
         self._release_lock = self.lock.release
-        
+
         #collection with the refs
         d = {}
         self.pydev_existing_frames = d
@@ -72,8 +73,8 @@
             self._iter_frames = d.iterkeys
         except AttributeError:
             self._iter_frames = d.keys
-            
-        
+
+
     def _OnDbFrameCollected(self, ref):
         '''
             Callback to be called when a given reference is garbage-collected.
@@ -83,8 +84,8 @@
             del self.pydev_existing_frames[ref]
         finally:
             self._release_lock()
-        
-    
+
+
     def _AddDbFrame(self, db_frame):
         self._acquire_lock()
         try:
@@ -94,8 +95,8 @@
             self.pydev_existing_frames[r] = r
         finally:
             self._release_lock()
-    
-        
+
+
     def CreateDbFrame(self, args):
         #the frame must be cached as a weak-ref (we return the actual db frame -- which will be kept
         #alive until its trace_dispatch method is not referenced anymore).
@@ -106,14 +107,14 @@
         db_frame.frame = args[-1]
         self._AddDbFrame(db_frame)
         return db_frame
-    
-    
+
+
     def IterFrames(self):
         #We cannot use yield (because of the lock)
         self._acquire_lock()
         try:
             ret = []
-            
+
             for weak_db_frame in self._iter_frames():
                 try:
                     ret.append(weak_db_frame().frame)
diff --git a/python/helpers/pydev/pydevd_breakpoints.py b/python/helpers/pydev/pydevd_breakpoints.py
index beebebf..82a230d 100644
--- a/python/helpers/pydev/pydevd_breakpoints.py
+++ b/python/helpers/pydev/pydevd_breakpoints.py
@@ -2,14 +2,12 @@
 import pydevd_tracing
 import sys
 import pydev_log
+import pydevd_import_class
 
 _original_excepthook = None
 _handle_exceptions = None
 
 
-NOTIFY_ALWAYS="NOTIFY_ALWAYS"
-NOTIFY_ON_TERMINATE="NOTIFY_ON_TERMINATE"
-
 if USE_LIB_COPY:
     import _pydev_threading as threading
 else:
@@ -20,52 +18,39 @@
 from pydevd_comm import GetGlobalDebugger
 
 class ExceptionBreakpoint:
-    def __init__(self, qname, notify_always, notify_on_terminate):
-        exctype = get_class(qname)
+
+    def __init__(
+        self,
+        qname,
+        notify_always,
+        notify_on_terminate,
+        notify_on_first_raise_only,
+        ):
+        exctype = _get_class(qname)
         self.qname = qname
         if exctype is not None:
             self.name = exctype.__name__
         else:
             self.name = None
 
-        self.notify_on_terminate = int(notify_on_terminate) == 1
-        self.notify_always = int(notify_always) > 0
-        self.notify_on_first_raise_only = int(notify_always) == 2
+        self.notify_on_terminate = notify_on_terminate
+        self.notify_always = notify_always
+        self.notify_on_first_raise_only = notify_on_first_raise_only
 
         self.type = exctype
-        self.notify = {NOTIFY_ALWAYS: self.notify_always, NOTIFY_ON_TERMINATE: self.notify_on_terminate}
 
 
     def __str__(self):
         return self.qname
 
 class LineBreakpoint:
-    def __init__(self, type, flag, condition, func_name, expression):
-        self.type = type
+
+    def __init__(self, line, condition, func_name, expression):
+        self.line = line
         self.condition = condition
         self.func_name = func_name
         self.expression = expression
 
-    def get_break_dict(self, breakpoints, file):
-        if DictContains(breakpoints, file):
-            breakDict = breakpoints[file]
-        else:
-            breakDict = {}
-        breakpoints[file] = breakDict
-        return breakDict
-
-    def trace(self, file, line, func_name):
-        if DebugInfoHolder.DEBUG_TRACE_BREAKPOINTS > 0:
-            pydev_log.debug('Added breakpoint:%s - line:%s - func_name:%s\n' % (file, line, func_name))
-            sys.stderr.flush()
-
-    def add(self, breakpoints, file, line, func_name):
-      self.trace(file, line, func_name)
-
-      breakDict = self.get_break_dict(breakpoints, file)
-
-      breakDict[line] = self
-
 def get_exception_full_qname(exctype):
     if not exctype:
         return None
@@ -77,41 +62,41 @@
     return exctype.__name__
 
 
-def get_exception_breakpoint(exctype, exceptions, notify_class):
-    name = get_exception_full_qname(exctype)
+def get_exception_breakpoint(exctype, exceptions):
+    exception_full_qname = get_exception_full_qname(exctype)
+
     exc = None
     if exceptions is not None:
-        for k, e in exceptions.items():
-          if e.notify[notify_class]:
-            if name == k:
-                return e
-            if (e.type is not None and issubclass(exctype, e.type)):
-                if exc is None or issubclass(e.type, exc.type):
-                    exc = e
+        try:
+            return exceptions[exception_full_qname]
+        except KeyError:
+            for exception_breakpoint in DictIterValues(exceptions):
+                if exception_breakpoint.type is not None and issubclass(exctype, exception_breakpoint.type):
+                    if exc is None or issubclass(exception_breakpoint.type, exc.type):
+                        exc = exception_breakpoint
     return exc
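
# The lookup above first tries the exception's exact qualified name and then falls back
# to the most specific registered base class (so a breakpoint on LookupError also matches
# KeyError unless KeyError has its own entry). A self-contained sketch of that idea,
# using plain classes instead of pydevd's ExceptionBreakpoint objects:
def _match_exception(exctype, registered):
    # registered: dict of qualified name -> exception class
    qname = '%s.%s' % (exctype.__module__, exctype.__name__)
    if qname in registered:
        return registered[qname]
    best = None
    for candidate in registered.values():
        if issubclass(exctype, candidate):
            if best is None or issubclass(candidate, best):
                best = candidate    # keep the most derived matching base class
    return best

if __name__ == '__main__':
    table = {'builtins.LookupError': LookupError, 'builtins.Exception': Exception}
    print(_match_exception(KeyError, table))    # <class 'LookupError'>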
 
 #=======================================================================================================================
-# excepthook
+# _excepthook
 #=======================================================================================================================
-def excepthook(exctype, value, tb):
+def _excepthook(exctype, value, tb):
     global _handle_exceptions
-    if _handle_exceptions is not None:
-        exception_breakpoint = get_exception_breakpoint(exctype, _handle_exceptions, NOTIFY_ON_TERMINATE)
+    if _handle_exceptions:
+        exception_breakpoint = get_exception_breakpoint(exctype, _handle_exceptions)
     else:
         exception_breakpoint = None
 
-    if exception_breakpoint is None:
-        return _original_excepthook(exctype, value, tb)
-
     #Always call the original excepthook before going on to call the debugger post mortem to show it.
     _original_excepthook(exctype, value, tb)
 
+    if not exception_breakpoint:
+        return
+
     if tb is None:  #sometimes it can be None, e.g. with GTK
-      return
+        return
 
     frames = []
 
-    traceback = tb
     while tb:
         frames.append(tb.tb_frame)
         tb = tb.tb_next
@@ -122,9 +107,7 @@
     thread.additionalInfo.exception = (exctype, value, tb)
     thread.additionalInfo.pydev_force_stop_at_exception = (frame, frames_byid)
     thread.additionalInfo.message = exception_breakpoint.qname
-    #sys.exc_info = lambda : (exctype, value, traceback)
     debugger = GetGlobalDebugger()
-    debugger.force_post_mortem_stop += 1
 
     pydevd_tracing.SetTrace(None) #no tracing from here
 
@@ -133,38 +116,27 @@
     debugger.handle_post_mortem_stop(thread.additionalInfo, thread)
 
 #=======================================================================================================================
-# set_pm_excepthook
+# _set_pm_excepthook
 #=======================================================================================================================
-def set_pm_excepthook(handle_exceptions_arg=None):
+def _set_pm_excepthook(handle_exceptions_dict=None):
     '''
     Should be called to register the excepthook to be used.
 
-    It's only useful for uncaucht exceptions. I.e.: exceptions that go up to the excepthook.
+    It's only useful for uncaught exceptions. I.e.: exceptions that go up to the excepthook.
 
-    Can receive a parameter to stop only on some exceptions.
-
-    E.g.:
-        register_excepthook((IndexError, ValueError))
-
-        or
-
-        register_excepthook(IndexError)
-
-        if passed without a parameter, will break on any exception
-
-    @param handle_exceptions: exception or tuple(exceptions)
+    @param handle_exceptions: dict(exception -> ExceptionBreakpoint)
         The exceptions that should be handled.
     '''
     global _handle_exceptions
     global _original_excepthook
-    if sys.excepthook != excepthook:
-        #Only keep the original if it's not our own excepthook (if called many times).
+    if sys.excepthook != _excepthook:
+        #Only keep the original if it's not our own _excepthook (if called many times).
         _original_excepthook = sys.excepthook
 
-    _handle_exceptions = handle_exceptions_arg
-    sys.excepthook = excepthook
+    _handle_exceptions = handle_exceptions_dict
+    sys.excepthook = _excepthook
 
-def restore_pm_excepthook():
+def _restore_pm_excepthook():
     global _original_excepthook
     if _original_excepthook:
         sys.excepthook = _original_excepthook
@@ -172,27 +144,16 @@
 
 
 def update_exception_hook(dbg):
-    if dbg.exception_set:
-        set_pm_excepthook(dict(dbg.exception_set))
+    if dbg.break_on_uncaught_exceptions:
+        _set_pm_excepthook(dbg.break_on_uncaught_exceptions)
     else:
-        restore_pm_excepthook()
+        _restore_pm_excepthook()
 
-def get_class( kls ):
+def _get_class( kls ):
     if IS_PY24 and "BaseException" == kls:
         kls = "Exception"
-    parts = kls.split('.')
-    module = ".".join(parts[:-1])
-    if module == "":
-        if IS_PY3K:
-            module = "builtins"
-        else:
-            module = "__builtin__"
+
     try:
-        m = __import__( module )
-        for comp in parts[-1:]:
-            if m is None:
-                return None
-            m = getattr(m, comp, None)
-        return m
-    except ImportError:
-        return None
\ No newline at end of file
+        return eval(kls)
+    except:
+        return pydevd_import_class.ImportName(kls)
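
The lookup in get_exception_breakpoint above tries an exact qualified-name match first and only then falls back to the most derived registered base class of the raised exception. A minimal standalone sketch of that resolution strategy (the Bp class and resolve helper are hypothetical illustrations, not pydevd names):

    class Bp(object):
        """Hypothetical stand-in for ExceptionBreakpoint: just a qname and a type."""
        def __init__(self, qname, exctype):
            self.qname = qname
            self.type = exctype

    def resolve(exctype, registered):
        # registered: dict mapping qualified exception name -> Bp
        qname = '%s.%s' % (exctype.__module__, exctype.__name__)
        try:
            return registered[qname]  # exact match wins
        except KeyError:
            best = None
            for bp in registered.values():
                # otherwise keep the most derived registered base class of exctype
                if bp.type is not None and issubclass(exctype, bp.type):
                    if best is None or issubclass(bp.type, best.type):
                        best = bp
            return best

    registered = {
        'builtins.LookupError': Bp('builtins.LookupError', LookupError),
        'builtins.Exception': Bp('builtins.Exception', Exception),
    }
    # KeyError itself is not registered, so the closest registered base wins.
    assert resolve(KeyError, registered).qname == 'builtins.LookupError'
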
diff --git a/python/helpers/pydev/pydevd_comm.py b/python/helpers/pydev/pydevd_comm.py
index b4cf585..c7f39a1 100644
--- a/python/helpers/pydev/pydevd_comm.py
+++ b/python/helpers/pydev/pydevd_comm.py
@@ -61,32 +61,14 @@
 
 import sys
 
-if USE_LIB_COPY:
-    import _pydev_time as time
-    import _pydev_threading as threading
-    try:
-        import _pydev_thread as thread
-    except ImportError:
-        import _thread as thread #Py3K changed it.
-    import _pydev_Queue as _queue
-    from _pydev_socket import socket
-    from _pydev_socket import AF_INET, SOCK_STREAM
-    from _pydev_socket import SHUT_RD, SHUT_WR
-else:
-    import time
-    import threading
-    try:
-        import thread
-    except ImportError:
-        import _thread as thread #Py3K changed it.
+from _pydev_imps import _pydev_time as time
 
-    try:
-        import Queue as _queue
-    except ImportError:
-        import queue as _queue
-    from socket import socket
-    from socket import AF_INET, SOCK_STREAM
-    from socket import SHUT_RD, SHUT_WR
+if USE_LIB_COPY:
+    import _pydev_threading as threading
+else:
+    import threading
+from _pydev_imps._pydev_socket import socket, AF_INET, SOCK_STREAM, SHUT_RD, SHUT_WR
+from pydev_imports import _queue
 
 try:
     from urllib import quote, quote_plus, unquote, unquote_plus
@@ -103,6 +85,8 @@
 import _pydev_completer
 
 from pydevd_tracing import GetExceptionTracebackStr
+import pydevd_console
+from pydev_monkey import disable_trace_thread_modules, enable_trace_thread_modules
 
 
 
@@ -126,6 +110,8 @@
 CMD_RUN_TO_LINE = 118
 CMD_RELOAD_CODE = 119
 CMD_GET_COMPLETIONS = 120
+
+# Note: renumbered (conflicted on merge)
 CMD_CONSOLE_EXEC = 121
 CMD_ADD_EXCEPTION_BREAK = 122
 CMD_REMOVE_EXCEPTION_BREAK = 123
@@ -136,6 +122,24 @@
 CMD_SMART_STEP_INTO = 128
 CMD_EXIT = 129
 CMD_SIGNATURE_CALL_TRACE = 130
+
+
+
+CMD_SET_PY_EXCEPTION = 131
+CMD_GET_FILE_CONTENTS = 132
+CMD_SET_PROPERTY_TRACE = 133
+# Pydev debug console commands
+CMD_EVALUATE_CONSOLE_EXPRESSION = 134
+CMD_RUN_CUSTOM_OPERATION = 135
+CMD_GET_BREAKPOINT_EXCEPTION = 136
+CMD_STEP_CAUGHT_EXCEPTION = 137
+CMD_SEND_CURR_EXCEPTION_TRACE = 138
+CMD_SEND_CURR_EXCEPTION_TRACE_PROCEEDED = 139
+CMD_IGNORE_THROWN_EXCEPTION_AT = 140
+CMD_ENABLE_DONT_TRACE = 141
+
+
+
 CMD_VERSION = 501
 CMD_RETURN = 502
 CMD_ERROR = 901
@@ -171,6 +175,19 @@
     '128':'CMD_SMART_STEP_INTO',
     '129': 'CMD_EXIT',
     '130': 'CMD_SIGNATURE_CALL_TRACE',
+
+    '131': 'CMD_SET_PY_EXCEPTION',
+    '132': 'CMD_GET_FILE_CONTENTS',
+    '133': 'CMD_SET_PROPERTY_TRACE',
+    '134': 'CMD_EVALUATE_CONSOLE_EXPRESSION',
+    '135': 'CMD_RUN_CUSTOM_OPERATION',
+    '136': 'CMD_GET_BREAKPOINT_EXCEPTION',
+    '137': 'CMD_STEP_CAUGHT_EXCEPTION',
+    '138': 'CMD_SEND_CURR_EXCEPTION_TRACE',
+    '139': 'CMD_SEND_CURR_EXCEPTION_TRACE_PROCEEDED',
+    '140': 'CMD_IGNORE_THROWN_EXCEPTION_AT',
+    '141': 'CMD_ENABLE_DONT_TRACE',
+
     '501':'CMD_VERSION',
     '502':'CMD_RETURN',
     '901':'CMD_ERROR',
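
The ids added above extend ID_TO_MEANING so that commands can be logged by name. A small self-contained sketch that maps an id from a received line back to its name; the tab-separated "<id>\t<sequence>\t<payload>" layout is assumed here from the way the payloads above are assembled, so treat it as an illustration rather than a protocol reference:

    # A few of the ids listed above, copied locally so the sketch runs standalone.
    LOCAL_ID_TO_MEANING = {
        '134': 'CMD_EVALUATE_CONSOLE_EXPRESSION',
        '138': 'CMD_SEND_CURR_EXCEPTION_TRACE',
        '141': 'CMD_ENABLE_DONT_TRACE',
    }

    def describe_command(line):
        # Assumed layout: "<id>\t<sequence>\t<payload>"; the payload may contain more tabs.
        cmd_id, seq, payload = line.split('\t', 2)
        name = LOCAL_ID_TO_MEANING.get(cmd_id, 'UNKNOWN_' + cmd_id)
        return '%s (seq=%s): %r' % (name, seq, payload)

    print(describe_command('138\t0\t<frame_id>\tValueError\tboom\t<xml>...</xml>'))
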
@@ -274,7 +291,7 @@
         #We must close the socket so that it doesn't stay halted there.
         self.killReceived = True
         try:
-            self.sock.shutdown(SHUT_RD) #shotdown the socket for read
+            self.sock.shutdown(SHUT_RD) #shutdown the socket for read
         except:
             #just ignore that
             pass
@@ -564,68 +581,69 @@
         except:
             return self.makeErrorMessage(0, GetExceptionTracebackStr())
 
-    def makeThreadSuspendMessage(self, thread_id, frame, stop_reason, message):
-
+    def makeThreadSuspendStr(self, thread_id, frame, stop_reason, message):
         """ <xml>
             <thread id="id" stop_reason="reason">
                     <frame id="id" name="functionName " file="file" line="line">
                     <var ... (variable data, retrieved on demand) ... />
                 </frame>
             </thread>
-           """
+        """
+        cmdTextList = ["<xml>"]
+
+        if message:
+            message = pydevd_vars.makeValidXmlValue(str(message))
+
+        cmdTextList.append('<thread id="%s" stop_reason="%s" message="%s">' % (thread_id, stop_reason, message))
+
+        curFrame = frame
         try:
-            cmdTextList = ["<xml>"]
+            while curFrame:
+                #print cmdText
+                myId = str(id(curFrame))
+                #print "id is ", myId
 
-            if message:
-                message = pydevd_vars.makeValidXmlValue(str(message))
+                if curFrame.f_code is None:
+                    break #Iron Python sometimes does not have it!
 
-            cmdTextList.append('<thread id="%s" stop_reason="%s" message="%s">' % (thread_id, stop_reason, message))
+                myName = curFrame.f_code.co_name #method name (if in method) or ? if global
+                if myName is None:
+                    break #Iron Python sometimes does not have it!
 
-            curFrame = frame
-            try:
-                while curFrame:
-                    #print cmdText
-                    myId = str(id(curFrame))
-                    #print "id is ", myId
+                #print "name is ", myName
 
-                    if curFrame.f_code is None:
-                        break #Iron Python sometimes does not have it!
+                filename, base = pydevd_file_utils.GetFilenameAndBase(curFrame)
 
-                    myName = curFrame.f_code.co_name #method name (if in method) or ? if global
-                    if myName is None:
-                        break #Iron Python sometimes does not have it!
+                myFile = pydevd_file_utils.NormFileToClient(filename)
+                if file_system_encoding.lower() != "utf-8" and hasattr(myFile, "decode"):
+                    # myFile is a byte string encoded using the file system encoding
+                    # convert it to utf8
+                    myFile = myFile.decode(file_system_encoding).encode("utf-8")
 
-                    #print "name is ", myName
+                #print "file is ", myFile
+                #myFile = inspect.getsourcefile(curFrame) or inspect.getfile(frame)
 
-                    filename, base = pydevd_file_utils.GetFilenameAndBase(curFrame)
+                myLine = str(curFrame.f_lineno)
+                #print "line is ", myLine
 
-                    myFile = pydevd_file_utils.NormFileToClient(filename)
-                    if file_system_encoding.lower() != "utf-8" and hasattr(myFile, "decode"):
-                        # myFile is a byte string encoded using the file system encoding
-                        # convert it to utf8
-                        myFile = myFile.decode(file_system_encoding).encode("utf-8")
+                #the variables are all gotten 'on-demand'
+                #variables = pydevd_vars.frameVarsToXML(curFrame.f_locals)
 
-                    #print "file is ", myFile
-                    #myFile = inspect.getsourcefile(curFrame) or inspect.getfile(frame)
+                variables = ''
+                cmdTextList.append('<frame id="%s" name="%s" ' % (myId , pydevd_vars.makeValidXmlValue(myName)))
+                cmdTextList.append('file="%s" line="%s">"' % (quote(myFile, '/>_= \t'), myLine))
+                cmdTextList.append(variables)
+                cmdTextList.append("</frame>")
+                curFrame = curFrame.f_back
+        except:
+            traceback.print_exc()
 
-                    myLine = str(curFrame.f_lineno)
-                    #print "line is ", myLine
+        cmdTextList.append("</thread></xml>")
+        return ''.join(cmdTextList)
 
-                    #the variables are all gotten 'on-demand'
-                    #variables = pydevd_vars.frameVarsToXML(curFrame.f_locals)
-
-                    variables = ''
-                    cmdTextList.append('<frame id="%s" name="%s" ' % (myId , pydevd_vars.makeValidXmlValue(myName)))
-                    cmdTextList.append('file="%s" line="%s">"' % (quote(myFile, '/>_= \t'), myLine))
-                    cmdTextList.append(variables)
-                    cmdTextList.append("</frame>")
-                    curFrame = curFrame.f_back
-            except :
-                traceback.print_exc()
-
-            cmdTextList.append("</thread></xml>")
-            cmdText = ''.join(cmdTextList)
-            return NetCommand(CMD_THREAD_SUSPEND, 0, cmdText)
+    def makeThreadSuspendMessage(self, thread_id, frame, stop_reason, message):
+        try:
+            return NetCommand(CMD_THREAD_SUSPEND, 0, self.makeThreadSuspendStr(thread_id, frame, stop_reason, message))
         except:
             return self.makeErrorMessage(0, GetExceptionTracebackStr())
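
The docstring of makeThreadSuspendStr above describes the suspend notification as frame elements nested inside a thread element. A short sketch that parses a hand-written message of that shape with the standard library (the sample ids, file and line are made up):

    import xml.etree.ElementTree as ET

    sample = ('<xml><thread id="pid_1_id_2" stop_reason="105" message="">'
              '<frame id="140000001" name="run" file="/tmp/app.py" line="42"></frame>'
              '</thread></xml>')

    root = ET.fromstring(sample)
    thread = root.find('thread')
    print(thread.get('id'), thread.get('stop_reason'))
    for frame in thread.findall('frame'):
        print('  %s:%s in %s()' % (frame.get('file'), frame.get('line'), frame.get('name')))
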
 
@@ -660,6 +678,51 @@
         except Exception:
             return self.makeErrorMessage(seq, GetExceptionTracebackStr())
 
+    def makeGetFileContents(self, seq, payload):
+        try:
+            return NetCommand(CMD_GET_FILE_CONTENTS, seq, payload)
+        except Exception:
+            return self.makeErrorMessage(seq, GetExceptionTracebackStr())
+
+    def makeSendBreakpointExceptionMessage(self, seq, payload):
+        try:
+            return NetCommand(CMD_GET_BREAKPOINT_EXCEPTION, seq, payload)
+        except Exception:
+            return self.makeErrorMessage(seq, GetExceptionTracebackStr())
+
+    def makeSendCurrExceptionTraceMessage(self, seq, thread_id, curr_frame_id, exc_type, exc_desc, trace_obj):
+        try:
+            while trace_obj.tb_next is not None:
+                trace_obj = trace_obj.tb_next
+
+            exc_type = pydevd_vars.makeValidXmlValue(str(exc_type)).replace('\t', '  ') or 'exception: type unknown'
+            exc_desc = pydevd_vars.makeValidXmlValue(str(exc_desc)).replace('\t', '  ') or 'exception: no description'
+
+            payload = str(curr_frame_id) + '\t' + exc_type + "\t" + exc_desc + "\t" + \
+                self.makeThreadSuspendStr(thread_id, trace_obj.tb_frame, CMD_SEND_CURR_EXCEPTION_TRACE, '')
+
+            return NetCommand(CMD_SEND_CURR_EXCEPTION_TRACE, seq, payload)
+        except Exception:
+            return self.makeErrorMessage(seq, GetExceptionTracebackStr())
+
+    def makeSendCurrExceptionTraceProceededMessage(self, seq, thread_id):
+        try:
+            return NetCommand(CMD_SEND_CURR_EXCEPTION_TRACE_PROCEEDED, 0, str(thread_id))
+        except:
+            return self.makeErrorMessage(0, GetExceptionTracebackStr())
+
+    def makeSendConsoleMessage(self, seq, payload):
+        try:
+            return NetCommand(CMD_EVALUATE_CONSOLE_EXPRESSION, seq, payload)
+        except Exception:
+            return self.makeErrorMessage(seq, GetExceptionTracebackStr())
+
+    def makeCustomOperationMessage(self, seq, payload):
+        try:
+            return NetCommand(CMD_RUN_CUSTOM_OPERATION, seq, payload)
+        except Exception:
+            return self.makeErrorMessage(seq, GetExceptionTracebackStr())
+
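
makeSendCurrExceptionTraceMessage above joins the current frame id, the exception type, its description and the thread-suspend XML with tab characters. A sketch of how a receiver could take that payload apart (the sample values are made up); splitting on the first three tabs keeps the trailing XML intact:

    def split_curr_exception_trace_payload(payload):
        # Layout built above: curr_frame_id \t exc_type \t exc_desc \t suspend_xml
        frame_id, exc_type, exc_desc, suspend_xml = payload.split('\t', 3)
        return frame_id, exc_type, exc_desc, suspend_xml

    parts = split_curr_exception_trace_payload(
        '139700\tValueError\tboom\t<xml><thread id="pid_1_id_2"></thread></xml>')
    print(parts[1], '-', parts[2])
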
     def makeLoadSourceMessage(self, seq, source, dbg=None):
         try:
             net = NetCommand(CMD_LOAD_SOURCE, seq, '%s' % source)
@@ -698,7 +761,7 @@
     def canBeExecutedBy(self, thread_id):
         '''By default, it must be in the same thread to be executed
         '''
-        return self.thread_id == thread_id
+        return self.thread_id == thread_id or self.thread_id.endswith('|' + thread_id)
 
     def doIt(self, dbg):
         raise NotImplementedError("you have to override doIt")
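
The relaxed canBeExecutedBy check above also accepts composite thread ids whose last '|'-separated segment is the target thread. A self-contained sketch of that comparison (the composite id format shown is only an assumption for illustration):

    def can_be_executed_by(cmd_thread_id, thread_id):
        # Exact match, or a composite id such as 'parent|tid_1' ending in the thread id.
        return cmd_thread_id == thread_id or cmd_thread_id.endswith('|' + thread_id)

    assert can_be_executed_by('tid_1', 'tid_1')
    assert can_be_executed_by('parent|tid_1', 'tid_1')
    assert not can_be_executed_by('tid_2', 'tid_1')
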
@@ -929,7 +992,7 @@
         try:
             result = pydevd_vars.evaluateExpression(self.thread_id, self.frame_id, self.expression, self.doExec)
             xml = "<xml>"
-            xml += pydevd_vars.varToXML(result, "", self.doTrim)
+            xml += pydevd_vars.varToXML(result, self.expression, self.doTrim)
             xml += "</xml>"
             cmd = dbg.cmdFactory.makeEvaluateExpressionMessage(self.sequence, xml)
             dbg.writer.addCommand(cmd)
@@ -961,7 +1024,6 @@
                 frame = pydevd_vars.findFrame(self.thread_id, self.frame_id)
                 if frame is not None:
 
-
                     msg = _pydev_completer.GenerateCompletionsAsXML(frame, self.act_tok)
 
                     cmd = dbg.cmdFactory.makeGetCompletionsMessage(self.sequence, msg)
@@ -981,6 +1043,182 @@
             cmd = dbg.cmdFactory.makeErrorMessage(self.sequence, "Error evaluating expression " + exc)
             dbg.writer.addCommand(cmd)
 
+#=======================================================================================================================
+# InternalGetBreakpointException
+#=======================================================================================================================
+class InternalGetBreakpointException(InternalThreadCommand):
+    """ Send details of exception raised while evaluating conditional breakpoint """
+    def __init__(self, thread_id, exc_type, stacktrace):
+        self.sequence = 0
+        self.thread_id = thread_id
+        self.stacktrace = stacktrace
+        self.exc_type = exc_type
+
+    def doIt(self, dbg):
+        try:
+            callstack = "<xml>"
+
+            makeValid = pydevd_vars.makeValidXmlValue
+
+            for filename, line, methodname, methodobj in self.stacktrace:
+                if file_system_encoding.lower() != "utf-8" and hasattr(filename, "decode"):
+                    # filename is a byte string encoded using the file system encoding
+                    # convert it to utf8
+                    filename = filename.decode(file_system_encoding).encode("utf-8")
+
+                callstack += '<frame thread_id = "%s" file="%s" line="%s" name="%s" obj="%s" />' \
+                                    % (self.thread_id, makeValid(filename), line, makeValid(methodname), makeValid(methodobj))
+            callstack += "</xml>"
+
+            cmd = dbg.cmdFactory.makeSendBreakpointExceptionMessage(self.sequence, self.exc_type + "\t" + callstack)
+            dbg.writer.addCommand(cmd)
+        except:
+            exc = GetExceptionTracebackStr()
+            sys.stderr.write('%s\n' % (exc,))
+            cmd = dbg.cmdFactory.makeErrorMessage(self.sequence, "Error Sending Exception: " + exc)
+            dbg.writer.addCommand(cmd)
+
+
+#=======================================================================================================================
+# InternalSendCurrExceptionTrace
+#=======================================================================================================================
+class InternalSendCurrExceptionTrace(InternalThreadCommand):
+    """ Send details of the exception that was caught and where we've broken in.
+    """
+    def __init__(self, thread_id, arg, curr_frame_id):
+        '''
+        :param arg: exception type, description, traceback object
+        '''
+        self.sequence = 0
+        self.thread_id = thread_id
+        self.curr_frame_id = curr_frame_id
+        self.arg = arg
+
+    def doIt(self, dbg):
+        try:
+            cmd = dbg.cmdFactory.makeSendCurrExceptionTraceMessage(self.sequence, self.thread_id, self.curr_frame_id, *self.arg)
+            del self.arg
+            dbg.writer.addCommand(cmd)
+        except:
+            exc = GetExceptionTracebackStr()
+            sys.stderr.write('%s\n' % (exc,))
+            cmd = dbg.cmdFactory.makeErrorMessage(self.sequence, "Error Sending Current Exception Trace: " + exc)
+            dbg.writer.addCommand(cmd)
+
+#=======================================================================================================================
+# InternalSendCurrExceptionTraceProceeded
+#=======================================================================================================================
+class InternalSendCurrExceptionTraceProceeded(InternalThreadCommand):
+    """ Send details of the exception that was caught and where we've broken in.
+    """
+    def __init__(self, thread_id):
+        self.sequence = 0
+        self.thread_id = thread_id
+
+    def doIt(self, dbg):
+        try:
+            cmd = dbg.cmdFactory.makeSendCurrExceptionTraceProceededMessage(self.sequence, self.thread_id)
+            dbg.writer.addCommand(cmd)
+        except:
+            exc = GetExceptionTracebackStr()
+            sys.stderr.write('%s\n' % (exc,))
+            cmd = dbg.cmdFactory.makeErrorMessage(self.sequence, "Error Sending Current Exception Trace Proceeded: " + exc)
+            dbg.writer.addCommand(cmd)
+
+
+#=======================================================================================================================
+# InternalEvaluateConsoleExpression
+#=======================================================================================================================
+class InternalEvaluateConsoleExpression(InternalThreadCommand):
+    """ Execute the given command in the debug console """
+
+    def __init__(self, seq, thread_id, frame_id, line):
+        self.sequence = seq
+        self.thread_id = thread_id
+        self.frame_id = frame_id
+        self.line = line
+
+    def doIt(self, dbg):
+        """ Create an XML for console output, error and more (true/false)
+        <xml>
+            <output message=output_message></output>
+            <error message=error_message></error>
+            <more>true/false</more>
+        </xml>
+        """
+        try:
+            frame = pydevd_vars.findFrame(self.thread_id, self.frame_id)
+            if frame is not None:
+                console_message = pydevd_console.execute_console_command(frame, self.thread_id, self.frame_id, self.line)
+                cmd = dbg.cmdFactory.makeSendConsoleMessage(self.sequence, console_message.toXML())
+            else:
+                from pydevd_console import ConsoleMessage
+                console_message = ConsoleMessage()
+                console_message.add_console_message(
+                    pydevd_console.CONSOLE_ERROR, 
+                    "Select the valid frame in the debug view (thread: %s, frame: %s invalid)" % (self.thread_id, self.frame_id), 
+                )
+                cmd = dbg.cmdFactory.makeErrorMessage(self.sequence, console_message.toXML())
+        except:
+            exc = GetExceptionTracebackStr()
+            cmd = dbg.cmdFactory.makeErrorMessage(self.sequence, "Error evaluating expression " + exc)
+        dbg.writer.addCommand(cmd)
+
+
+#=======================================================================================================================
+# InternalRunCustomOperation
+#=======================================================================================================================
+class InternalRunCustomOperation(InternalThreadCommand):
+    """ Run a custom command on an expression
+    """
+    def __init__(self, seq, thread_id, frame_id, scope, attrs, style, encoded_code_or_file, fnname):
+        self.sequence = seq
+        self.thread_id = thread_id
+        self.frame_id = frame_id
+        self.scope = scope
+        self.attrs = attrs
+        self.style = style
+        self.code_or_file = unquote_plus(encoded_code_or_file)
+        self.fnname = fnname
+
+    def doIt(self, dbg):
+        try:
+            res = pydevd_vars.customOperation(self.thread_id, self.frame_id, self.scope, self.attrs,
+                                              self.style, self.code_or_file, self.fnname)
+            resEncoded = quote_plus(res)
+            cmd = dbg.cmdFactory.makeCustomOperationMessage(self.sequence, resEncoded)
+            dbg.writer.addCommand(cmd)
+        except:
+            exc = GetExceptionTracebackStr()
+            cmd = dbg.cmdFactory.makeErrorMessage(self.sequence, "Error in running custom operation" + exc)
+            dbg.writer.addCommand(cmd)
+
+
+#=======================================================================================================================
+# InternalConsoleGetCompletions
+#=======================================================================================================================
+class InternalConsoleGetCompletions(InternalThreadCommand):
+    """ Fetch the completions in the debug console
+    """
+    def __init__(self, seq, thread_id, frame_id, act_tok):
+        self.sequence = seq
+        self.thread_id = thread_id
+        self.frame_id = frame_id
+        self.act_tok = act_tok
+
+    def doIt(self, dbg):
+        """ Get completions and write back to the client
+        """
+        try:
+            frame = pydevd_vars.findFrame(self.thread_id, self.frame_id)
+            completions_xml = pydevd_console.get_completions(frame, self.act_tok)
+            cmd = dbg.cmdFactory.makeSendConsoleMessage(self.sequence, completions_xml)
+            dbg.writer.addCommand(cmd)
+        except:
+            exc = GetExceptionTracebackStr()
+            cmd = dbg.cmdFactory.makeErrorMessage(self.sequence, "Error in fetching completions" + exc)
+            dbg.writer.addCommand(cmd)
+
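
Each of the Internal* commands above follows the same contract in doIt: build a message through dbg.cmdFactory and enqueue it with dbg.writer.addCommand. A minimal sketch of that flow with stand-in stubs (every _Fake* name here is hypothetical; the real objects live in the debugger):

    class _FakeWriter(object):
        def __init__(self):
            self.commands = []
        def addCommand(self, cmd):      # same call the commands above make
            self.commands.append(cmd)

    class _FakeFactory(object):
        def makeSendConsoleMessage(self, seq, payload):
            return ('console', seq, payload)

    class _FakeDbg(object):
        def __init__(self):
            self.cmdFactory = _FakeFactory()
            self.writer = _FakeWriter()

    dbg = _FakeDbg()
    dbg.writer.addCommand(dbg.cmdFactory.makeSendConsoleMessage(7, '<xml>...</xml>'))
    print(dbg.writer.commands)
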
 
 #=======================================================================================================================
 # InternalConsoleExec
@@ -996,13 +1234,10 @@
 
     def doIt(self, dbg):
         """ Converts request into python variable """
-        pydev_start_new_thread = None
         try:
             try:
-                pydev_start_new_thread = thread.start_new_thread
-
-                thread.start_new_thread = thread._original_start_new_thread #don't trace new threads created by console command
-                thread.start_new = thread._original_start_new_thread
+                #don't trace new threads created by console command
+                disable_trace_thread_modules()
 
                 result = pydevconsole.consoleExec(self.thread_id, self.frame_id, self.expression)
                 xml = "<xml>"
@@ -1016,8 +1251,8 @@
                 cmd = dbg.cmdFactory.makeErrorMessage(self.sequence, "Error evaluating console expression " + exc)
                 dbg.writer.addCommand(cmd)
         finally:
-            thread.start_new_thread = pydev_start_new_thread
-            thread.start_new = pydev_start_new_thread
+            enable_trace_thread_modules()
+
             sys.stderr.flush()
             sys.stdout.flush()
 
diff --git a/python/helpers/pydev/pydevd_console.py b/python/helpers/pydev/pydevd_console.py
new file mode 100644
index 0000000..52b18bb
--- /dev/null
+++ b/python/helpers/pydev/pydevd_console.py
@@ -0,0 +1,212 @@
+'''A helper file for the pydev debugger (REPL) console
+'''
+from code import InteractiveConsole
+import sys
+import traceback
+
+import _pydev_completer
+from pydevd_tracing import GetExceptionTracebackStr
+from pydevd_vars import makeValidXmlValue
+from pydev_imports import Exec
+from pydevd_io import IOBuf
+from pydev_console_utils import BaseInterpreterInterface, BaseStdIn
+from pydev_override import overrides
+import pydevd_save_locals
+
+CONSOLE_OUTPUT = "output"
+CONSOLE_ERROR = "error"
+
+
+#=======================================================================================================================
+# ConsoleMessage
+#=======================================================================================================================
+class ConsoleMessage:
+    """Console Messages
+    """
+    def __init__(self):
+        self.more = False
+        # List of (message_type, message) tuples, e.g. [('error', 'error message'), ('output', 'output message')]
+        self.console_messages = []
+
+    def add_console_message(self, message_type, message):
+        """add messages in the console_messages list
+        """
+        for m in message.split("\n"):
+            if m.strip():
+                self.console_messages.append((message_type, m))
+
+    def update_more(self, more):
+        """more is set to true if further input is required from the user
+        else more is set to false
+        """
+        self.more = more
+
+    def toXML(self):
+        """Create an XML for console message_list, error and more (true/false)
+        <xml>
+            <message_list>console message_list</message_list>
+            <error>console error</error>
+            <more>true/false</more>
+        </xml>
+        """
+        makeValid = makeValidXmlValue
+
+        xml = '<xml><more>%s</more>' % (self.more)
+
+        for message_type, message in self.console_messages:
+            xml += '<%s message="%s"></%s>' % (message_type, makeValid(message), message_type)
+
+        xml += '</xml>'
+
+        return xml
+
+
+#=======================================================================================================================
+# DebugConsoleStdIn
+#=======================================================================================================================
+class DebugConsoleStdIn(BaseStdIn):
+
+    overrides(BaseStdIn.readline)
+    def readline(self, *args, **kwargs):
+        sys.stderr.write('Warning: Reading from stdin is still not supported in this console.\n')
+        return '\n'
+
+#=======================================================================================================================
+# DebugConsole
+#=======================================================================================================================
+class DebugConsole(InteractiveConsole, BaseInterpreterInterface):
+    """Wrapper around code.InteractiveConsole, in order to send
+    errors and outputs to the debug console
+    """
+
+    overrides(BaseInterpreterInterface.createStdIn)
+    def createStdIn(self):
+        return DebugConsoleStdIn() #For now, raw_input is not supported in this console.
+
+
+    overrides(InteractiveConsole.push)
+    def push(self, line, frame):
+        """Change built-in stdout and stderr methods by the
+        new custom StdMessage.
+        execute the InteractiveConsole.push.
+        Change the stdout and stderr back be the original built-ins
+
+        Return boolean (True if more input is required else False),
+        output_messages and input_messages
+        """
+        more = False
+        original_stdout = sys.stdout
+        original_stderr = sys.stderr
+        try:
+            try:
+                self.frame = frame
+                out = sys.stdout = IOBuf()
+                err = sys.stderr = IOBuf()
+                more = self.addExec(line)
+            except Exception:
+                exc = GetExceptionTracebackStr()
+                err.buflist.append("Internal Error: %s" % (exc,))
+        finally:
+            #Remove frame references.
+            self.frame = None
+            frame = None
+            sys.stdout = original_stdout
+            sys.stderr = original_stderr
+
+        return more, out.buflist, err.buflist
+
+
+    overrides(BaseInterpreterInterface.doAddExec)
+    def doAddExec(self, line):
+        return InteractiveConsole.push(self, line)
+
+
+    overrides(InteractiveConsole.runcode)
+    def runcode(self, code):
+        """Execute a code object.
+
+        When an exception occurs, self.showtraceback() is called to
+        display a traceback.  All exceptions are caught except
+        SystemExit, which is reraised.
+
+        A note about KeyboardInterrupt: this exception may occur
+        elsewhere in this code, and may not always be caught.  The
+        caller should be prepared to deal with it.
+
+        """
+        try:
+            Exec(code, self.frame.f_globals, self.frame.f_locals)
+            pydevd_save_locals.save_locals(self.frame)
+        except SystemExit:
+            raise
+        except:
+            self.showtraceback()
+
+
+#=======================================================================================================================
+# InteractiveConsoleCache
+#=======================================================================================================================
+class InteractiveConsoleCache:
+
+    thread_id = None
+    frame_id = None
+    interactive_console_instance = None
+
+
+#Note: On Jython 2.1 we can't use classmethod or staticmethod, so, just make the functions below free-functions.
+def get_interactive_console(thread_id, frame_id, frame, console_message):
+    """returns the global interactive console.
+    interactive console should have been initialized by this time
+    """
+    if InteractiveConsoleCache.thread_id == thread_id and InteractiveConsoleCache.frame_id == frame_id:
+        return InteractiveConsoleCache.interactive_console_instance
+
+    InteractiveConsoleCache.interactive_console_instance = DebugConsole()
+    InteractiveConsoleCache.thread_id = thread_id
+    InteractiveConsoleCache.frame_id = frame_id
+
+    console_stacktrace = traceback.extract_stack(frame, limit=1)
+    if console_stacktrace:
+        current_context = console_stacktrace[0] # top entry from stacktrace
+        context_message = 'File "%s", line %s, in %s' % (current_context[0], current_context[1], current_context[2])
+        console_message.add_console_message(CONSOLE_OUTPUT, "[Current context]: %s" % (context_message,))
+    return InteractiveConsoleCache.interactive_console_instance
+
+
+def clear_interactive_console():
+    InteractiveConsoleCache.thread_id = None
+    InteractiveConsoleCache.frame_id = None
+    InteractiveConsoleCache.interactive_console_instance = None
+
+
+def execute_console_command(frame, thread_id, frame_id, line):
+    """fetch an interactive console instance from the cache and
+    push the received command to the console.
+
+    create and return an instance of console_message
+    """
+    console_message = ConsoleMessage()
+
+    interpreter = get_interactive_console(thread_id, frame_id, frame, console_message)
+    more, output_messages, error_messages = interpreter.push(line, frame)
+    console_message.update_more(more)
+
+    for message in output_messages:
+        console_message.add_console_message(CONSOLE_OUTPUT, message)
+
+    for message in error_messages:
+        console_message.add_console_message(CONSOLE_ERROR, message)
+
+    return console_message
+
+
+def get_completions(frame, act_tok):
+    """ fetch all completions, create xml for the same
+    return the completions xml
+    """
+    return _pydev_completer.GenerateCompletionsAsXML(frame, act_tok)
+
+
+
+
+
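
A standalone miniature of the XML that ConsoleMessage.toXML above produces, using xml.sax.saxutils.quoteattr in place of makeValidXmlValue so the sketch runs without the pydev helpers on the path:

    from xml.sax.saxutils import quoteattr

    class MiniConsoleMessage(object):
        # Trimmed imitation of ConsoleMessage, just to show the output shape.
        def __init__(self):
            self.more = False
            self.console_messages = []

        def add_console_message(self, message_type, message):
            for m in message.split('\n'):
                if m.strip():
                    self.console_messages.append((message_type, m))

        def toXML(self):
            xml = '<xml><more>%s</more>' % (self.more,)
            for message_type, message in self.console_messages:
                xml += '<%s message=%s></%s>' % (message_type, quoteattr(message), message_type)
            return xml + '</xml>'

    msg = MiniConsoleMessage()
    msg.add_console_message('output', 'hello\nworld')
    msg.add_console_message('error', 'NameError: name "x" is not defined')
    print(msg.toXML())
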
diff --git a/python/helpers/pydev/pydevd_constants.py b/python/helpers/pydev/pydevd_constants.py
index 71fe4ae..74e8974 100644
--- a/python/helpers/pydev/pydevd_constants.py
+++ b/python/helpers/pydev/pydevd_constants.py
@@ -1,7 +1,6 @@
 '''
 This module holds the constants used for specifying the states of the debugger.
 '''
-
 STATE_RUN = 1
 STATE_SUSPEND = 2
 
@@ -17,13 +16,13 @@
     setattr(__builtin__, 'False', 0)
 
 class DebugInfoHolder:
-    #we have to put it here because it can be set through the command line (so, the 
+    #we have to put it here because it can be set through the command line (so, the
     #already imported references would not have it).
     DEBUG_RECORD_SOCKET_READS = False
     DEBUG_TRACE_LEVEL = -1
     DEBUG_TRACE_BREAKPOINTS = -1
 
-#Optimize with psyco? This gave a 50% speedup in the debugger in tests 
+#Optimize with psyco? This gave a 50% speedup in the debugger in tests
 USE_PSYCO_OPTIMIZATION = True
 
 #Hold a reference to the original _getframe (because psyco will change that as soon as it's imported)
@@ -111,11 +110,48 @@
             return default
 
 
+if IS_PY3K:
+    def DictKeys(d):
+        return list(d.keys())
+
+    def DictValues(d):
+        return list(d.values())
+
+    DictIterValues = dict.values
+
+    def DictIterItems(d):
+        return d.items()
+
+    def DictItems(d):
+        return list(d.items())
+
+else:
+    DictKeys = dict.keys
+    try:
+        DictIterValues = dict.itervalues
+    except:
+        DictIterValues = dict.values #Older versions don't have the itervalues
+
+    DictValues = dict.values
+
+    def DictIterItems(d):
+        return d.iteritems()
+
+    def DictItems(d):
+        return d.items()
+
+
 try:
-    xrange
+    xrange = xrange
 except:
     #Python 3k does not have it
     xrange = range
+    
+try:
+    import itertools
+    izip = itertools.izip
+except:
+    izip = zip
 
 try:
     object
@@ -128,10 +164,10 @@
 except:
     def enumerate(lst):
         ret = []
-        i=0
+        i = 0
         for element in lst:
             ret.append((i, element))
-            i+=1
+            i += 1
         return ret
 
 #=======================================================================================================================
@@ -174,8 +210,7 @@
                 except AttributeError:
                     try:
                         #Jython does not have it!
-                        import java.lang.management.ManagementFactory #@UnresolvedImport -- just for jython
-
+                        import java.lang.management.ManagementFactory  #@UnresolvedImport -- just for jython
                         pid = java.lang.management.ManagementFactory.getRuntimeMXBean().getName()
                         pid = pid.replace('@', '_')
                     except:
@@ -262,4 +297,4 @@
 if __name__ == '__main__':
     if Null():
         sys.stdout.write('here\n')
-        
+
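
The Dict* helpers added above give the debugger one spelling for dictionary iteration that avoids building intermediate lists on Python 2 and still works on Python 3. A self-contained re-creation of one of the shims and its use:

    import sys

    # Local copy of the DictIterItems shim so the example runs on its own.
    if sys.version_info[0] >= 3:
        def DictIterItems(d):
            return d.items()
    else:
        def DictIterItems(d):
            return d.iteritems()

    counts = {'threads': 2, 'breakpoints': 5}
    for key, value in DictIterItems(counts):
        print('%s=%s' % (key, value))
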
diff --git a/python/helpers/pydev/pydevd_dont_trace.py b/python/helpers/pydev/pydevd_dont_trace.py
new file mode 100644
index 0000000..2d5ad95
--- /dev/null
+++ b/python/helpers/pydev/pydevd_dont_trace.py
@@ -0,0 +1,127 @@
+'''
+Support for a tag that allows skipping over functions while debugging.
+'''
+import linecache
+import re
+from pydevd_constants import DictContains
+
+# To suppress tracing a method, add the tag @DontTrace
+# to a comment either preceding or on the same line as
+# the method definition
+#
+# E.g.:
+# #@DontTrace
+# def test1():
+#     pass
+#
+#  ... or ...
+#
+# def test2(): #@DontTrace
+#     pass
+DONT_TRACE_TAG = '@DontTrace'
+
+# Regular expression to match a decorator (at the beginning
+# of a line).
+RE_DECORATOR = re.compile(r'^\s*@')
+
+# Mapping from filename to a dict of 0-based line numbers that carry the
+# @DontTrace tag (plus adjacent decorator lines); caches the work done by
+# default_should_trace_hook.
+_filename_to_ignored_lines = {}
+
+def default_should_trace_hook(frame, filename):
+    '''
+    Return True if this frame should be traced, False if tracing should be blocked.
+    '''
+    # First, check whether this code object has a cached value
+    ignored_lines = _filename_to_ignored_lines.get(filename)
+    if ignored_lines is None:
+        # Now, look up that line of code and check for a @DontTrace
+        # preceding or on the same line as the method.
+        # E.g.:
+        # #@DontTrace
+        # def test():
+        #     pass
+        #  ... or ...
+        # def test(): #@DontTrace
+        #     pass
+        ignored_lines = {}
+        lines = linecache.getlines(filename)
+        i_line = 0  # Could use enumerate, but not there on all versions...
+        for line in lines:
+            j = line.find('#')
+            if j >= 0:
+                comment = line[j:]
+                if DONT_TRACE_TAG in comment:
+                    ignored_lines[i_line] = 1
+                    
+                    #Note: when it's found in the comment, mark it up and down for the decorator lines found.
+                    k = i_line - 1
+                    while k >= 0:
+                        if RE_DECORATOR.match(lines[k]):
+                            ignored_lines[k] = 1
+                            k -= 1
+                        else:
+                            break
+                        
+                    k = i_line + 1
+                    while k <= len(lines):
+                        if RE_DECORATOR.match(lines[k]):
+                            ignored_lines[k] = 1
+                            k += 1
+                        else:
+                            break
+                        
+            i_line += 1
+                    
+
+        _filename_to_ignored_lines[filename] = ignored_lines
+
+    func_line = frame.f_code.co_firstlineno - 1 # co_firstlineno is 1-based, so -1 is needed
+    return not (
+        DictContains(ignored_lines, func_line - 1) or #-1 to get line before method 
+        DictContains(ignored_lines, func_line)) #method line
+
+
+should_trace_hook = None
+
+
+def clear_trace_filter_cache():
+    '''
+    Clear the trace filter cache.
+    Call this after reloading.
+    '''
+    global should_trace_hook
+    try:
+        # Need to temporarily disable a hook because otherwise
+        # _filename_to_ignored_lines.clear() will never complete.
+        old_hook = should_trace_hook
+        should_trace_hook = None
+
+        # Clear the linecache
+        linecache.clearcache()
+        _filename_to_ignored_lines.clear()
+
+    finally:
+        should_trace_hook = old_hook
+
+
+def trace_filter(mode):
+    '''
+    Set the trace filter mode.
+
+    mode: Whether to enable the trace hook.
+      True: Trace filtering on (skipping methods tagged @DontTrace)
+      False: Trace filtering off (trace methods tagged @DontTrace)
+      None/default: Toggle trace filtering.
+    '''
+    global should_trace_hook
+    if mode is None:
+        mode = should_trace_hook is None
+
+    if mode:
+        should_trace_hook = default_should_trace_hook
+    else:
+        should_trace_hook = None
+
+    return mode
+
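
default_should_trace_hook above skips a function when the @DontTrace tag appears in a comment on the def line or on the line just above it (decorator lines are handled too), and trace_filter(True) is what turns the hook on. A standalone sketch of the core decision, using local stand-in names rather than the module's own:

    DONT_TRACE_TAG_LOCAL = '@DontTrace'

    source = [
        'def traced():\n',
        '    pass\n',
        '\n',
        '#@DontTrace\n',
        'def skipped():\n',
        '    pass\n',
    ]

    def should_trace(first_line_no):
        # first_line_no is 1-based, like co_firstlineno; check the def line and
        # the line just above it for the tag.
        func_line = first_line_no - 1
        for candidate in (func_line - 1, func_line):
            if 0 <= candidate < len(source):
                j = source[candidate].find('#')
                if j >= 0 and DONT_TRACE_TAG_LOCAL in source[candidate][j:]:
                    return False
        return True

    print(should_trace(1))   # True  -> traced() would be traced
    print(should_trace(5))   # False -> skipped() would not be traced
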
diff --git a/python/helpers/pydev/pydevd_file_utils.py b/python/helpers/pydev/pydevd_file_utils.py
index b4f8d50..c135c4b 100644
--- a/python/helpers/pydev/pydevd_file_utils.py
+++ b/python/helpers/pydev/pydevd_file_utils.py
@@ -3,72 +3,91 @@
         - The case of a file will match the actual file in the filesystem (otherwise breakpoints won't be hit).
         - Providing means for the user to make path conversions when doing a remote debugging session in
           one machine and debugging in another.
-    
+
     To do that, the PATHS_FROM_ECLIPSE_TO_PYTHON constant must be filled with the appropriate paths.
-    
-    @note: 
-        in this context, the server is where your python process is running 
+
+    @note:
+        in this context, the server is where your python process is running
         and the client is where eclipse is running.
-    
-    E.g.: 
+
+    E.g.:
         If the server (your python process) has the structure
-            /user/projects/my_project/src/package/module1.py  
-        
-        and the client has: 
-            c:\my_project\src\package\module1.py  
-            
+            /user/projects/my_project/src/package/module1.py
+
+        and the client has:
+            c:\my_project\src\package\module1.py
+
         the PATHS_FROM_ECLIPSE_TO_PYTHON would have to be:
             PATHS_FROM_ECLIPSE_TO_PYTHON = [(r'c:\my_project\src', r'/user/projects/my_project/src')]
-    
+
     @note: DEBUG_CLIENT_SERVER_TRANSLATION can be set to True to debug the result of those translations
-    
+
     @note: the case of the paths is important! Note that this can be tricky to get right when one machine
     uses a case-independent filesystem and the other uses a case-dependent filesystem (if the system being
-    debugged is case-independent, 'normcase()' should be used on the paths defined in PATHS_FROM_ECLIPSE_TO_PYTHON). 
-    
+    debugged is case-independent, 'normcase()' should be used on the paths defined in PATHS_FROM_ECLIPSE_TO_PYTHON).
+
     @note: all the paths with breakpoints must be translated (otherwise they won't be found in the server)
-    
+
     @note: to enable remote debugging in the target machine (pydev extensions in the eclipse installation)
         import pydevd;pydevd.settrace(host, stdoutToServer, stderrToServer, port, suspend)
-        
+
         see parameter docs on pydevd.py
-        
-    @note: for doing a remote debugging session, all the pydevd_ files must be on the server accessible 
-        through the PYTHONPATH (and the PATHS_FROM_ECLIPSE_TO_PYTHON only needs to be set on the target 
+
+    @note: for doing a remote debugging session, all the pydevd_ files must be on the server accessible
+        through the PYTHONPATH (and the PATHS_FROM_ECLIPSE_TO_PYTHON only needs to be set on the target
         machine for the paths that'll actually have breakpoints).
 '''
 
-
-
-
-from pydevd_constants import * #@UnusedWildImport
 import os.path
 import sys
 import traceback
 
-
-
+os_normcase = os.path.normcase
 basename = os.path.basename
 exists = os.path.exists
 join = os.path.join
 
 try:
-    rPath = os.path.realpath #@UndefinedVariable
+    rPath = os.path.realpath  #@UndefinedVariable
 except:
     # jython does not support os.path.realpath
     # realpath is a no-op on systems without islink support
-    rPath = os.path.abspath 
-  
+    rPath = os.path.abspath
+
 #defined as a list of tuples where the 1st element of the tuple is the path in the client machine
 #and the 2nd element is the path in the server machine.
 #see module docstring for more details.
 PATHS_FROM_ECLIPSE_TO_PYTHON = []
 
-
 #example:
 #PATHS_FROM_ECLIPSE_TO_PYTHON = [
-#(normcase(r'd:\temp\temp_workspace_2\test_python\src\yyy\yyy'),
-# normcase(r'd:\temp\temp_workspace_2\test_python\src\hhh\xxx'))]
+#  (r'd:\temp\temp_workspace_2\test_python\src\yyy\yyy',
+#   r'd:\temp\temp_workspace_2\test_python\src\hhh\xxx')
+#]
+
+
+normcase = os_normcase # May be rebound on set_ide_os
+
+def set_ide_os(os):
+    '''
+    We need to set the IDE OS because the host where the code is running may be
+    different from the client, and we want paths to be translated properly from
+    the client to the server.
+    '''
+    global normcase
+    if os == 'UNIX':
+        normcase = lambda f:f #Change to no-op if the client side is on unix/mac.
+    else:
+        normcase = os_normcase
+
+    # After setting the ide OS, apply the normcase to the existing paths.
+
+    # Note: not using enumerate or a list comprehension because they may not be available in older python versions...
+    i = 0
+    for path in PATHS_FROM_ECLIPSE_TO_PYTHON[:]:
+        PATHS_FROM_ECLIPSE_TO_PYTHON[i] = (normcase(path[0]), normcase(path[1]))
+        i += 1
+
 
 DEBUG_CLIENT_SERVER_TRANSLATION = False
 
@@ -79,14 +98,6 @@
 NORM_FILENAME_TO_CLIENT_CONTAINER = {}
 
 
-pycharm_os = None
-
-def normcase(file):
-    global pycharm_os
-    if pycharm_os == 'UNIX':
-        return file
-    else:
-        return os.path.normcase(file)
 
 
 def _NormFile(filename):
@@ -147,7 +158,7 @@
             return None
     return None
 
-    
+
 #Now, let's do a quick test to see if we're working with a version of python that has no problems
 #related to the names generated...
 try:
@@ -162,11 +173,11 @@
         sys.stderr.write('pydev debugger: Related bug: http://bugs.python.org/issue1666807\n')
         sys.stderr.write('-------------------------------------------------------------------------------\n')
         sys.stderr.flush()
-        
+
         NORM_SEARCH_CACHE = {}
-        
+
         initial_norm_file = _NormFile
-        def _NormFile(filename): #Let's redefine _NormFile to work with paths that may be incorrect
+        def _NormFile(filename):  #Let's redefine _NormFile to work with paths that may be incorrect
             try:
                 return NORM_SEARCH_CACHE[filename]
             except KeyError:
@@ -180,7 +191,7 @@
                     else:
                         sys.stderr.write('pydev debugger: Unable to find real location for: %s\n' % (filename,))
                         ret = filename
-                        
+
                 NORM_SEARCH_CACHE[filename] = ret
                 return ret
 except:
@@ -195,28 +206,28 @@
     for eclipse_prefix, server_prefix in PATHS_FROM_ECLIPSE_TO_PYTHON:
         if eclipse_sep is not None and python_sep is not None:
             break
-        
+
         if eclipse_sep is None:
             for c in eclipse_prefix:
                 if c in ('/', '\\'):
                     eclipse_sep = c
                     break
-                
+
         if python_sep is None:
             for c in server_prefix:
                 if c in ('/', '\\'):
                     python_sep = c
                     break
-        
+
     #If they're the same or one of them cannot be determined, just make it all None.
     if eclipse_sep == python_sep or eclipse_sep is None or python_sep is None:
         eclipse_sep = python_sep = None
-            
-                
-    #only setup translation functions if absolutely needed! 
+
+
+    #only setup translation functions if absolutely needed!
     def NormFileToServer(filename):
         #Eclipse will send the passed filename to be translated to the python process
-        #So, this would be 'NormFileFromEclipseToPython' 
+        #So, this would be 'NormFileFromEclipseToPython'
         try:
             return NORM_FILENAME_TO_SERVER_CONTAINER[filename]
         except KeyError:
@@ -234,17 +245,17 @@
                 if DEBUG_CLIENT_SERVER_TRANSLATION:
                     sys.stderr.write('pydev debugger: to server: unable to find matching prefix for: %s in %s\n' % \
                         (translated, [x[0] for x in PATHS_FROM_ECLIPSE_TO_PYTHON]))
-                    
+
             #Note that when going to the server, we do the replace first and only later do the norm file.
             if eclipse_sep is not None:
                 translated = translated.replace(eclipse_sep, python_sep)
             translated = _NormFile(translated)
-                
+
             NORM_FILENAME_TO_SERVER_CONTAINER[filename] = translated
             return translated
-        
-    
-    def NormFileToClient(filename): 
+
+
+    def NormFileToClient(filename):
         #The result of this method will be passed to eclipse
         #So, this would be 'NormFileFromPythonToEclipse'
         try:
@@ -264,15 +275,15 @@
                 if DEBUG_CLIENT_SERVER_TRANSLATION:
                     sys.stderr.write('pydev debugger: to client: unable to find matching prefix for: %s in %s\n' % \
                         (translated, [x[1] for x in PATHS_FROM_ECLIPSE_TO_PYTHON]))
-                        
+
             if eclipse_sep is not None:
                 translated = translated.replace(python_sep, eclipse_sep)
-            
+
             #The resulting path is not in the python process, so, we cannot do a _NormFile here,
             #only at the beginning of this method.
             NORM_FILENAME_TO_CLIENT_CONTAINER[filename] = translated
             return translated
-        
+
 else:
     #no translation step needed (just inline the calls)
     NormFileToClient = _NormFile
@@ -299,6 +310,3 @@
             f = f[:-1]
     return GetFileNameAndBaseFromFile(f)
 
-def set_pycharm_os(os):
-    global pycharm_os
-    pycharm_os = os
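
set_ide_os above rebinds normcase depending on the client IDE's operating system, since path case must be preserved when the client is on a case-sensitive filesystem. A standalone imitation of that switch (set_ide_os_sketch is a local stand-in, not the module function):

    import os.path

    os_normcase = os.path.normcase
    normcase = os_normcase

    def set_ide_os_sketch(ide_os):
        # 'UNIX' client: keep case as-is; otherwise fall back to os.path.normcase,
        # which folds case on Windows and is a no-op elsewhere.
        global normcase
        if ide_os == 'UNIX':
            normcase = lambda f: f
        else:
            normcase = os_normcase

    set_ide_os_sketch('UNIX')
    print(normcase(r'C:\My_Project\SRC'))   # case preserved for a UNIX client
    set_ide_os_sketch('WINDOWS')
    print(normcase(r'C:\My_Project\SRC'))   # back to os.path.normcase behaviour
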
diff --git a/python/helpers/pydev/pydevd_frame.py b/python/helpers/pydev/pydevd_frame.py
index 05faaeb..374d281 100644
--- a/python/helpers/pydev/pydevd_frame.py
+++ b/python/helpers/pydev/pydevd_frame.py
@@ -1,18 +1,29 @@
-from django_debug import is_django_render_call, get_template_file_name, get_template_line, is_django_suspended, suspend_django, is_django_resolve_call, is_django_context_get_call
-from django_debug import find_django_render_frame
-from django_frame import just_raised
-from django_frame import is_django_exception_break_context
-from django_frame import DjangoTemplateFrame
-from pydevd_comm import * #@UnusedWildImport
-from pydevd_breakpoints import * #@UnusedWildImport
-import traceback #@Reimport
+import linecache
 import os.path
-import sys
+import re
+import traceback  # @Reimport
+
+from django_debug import find_django_render_frame
+from django_debug import is_django_render_call, is_django_suspended, suspend_django, is_django_resolve_call, is_django_context_get_call
+from django_frame import DjangoTemplateFrame
+from django_frame import is_django_exception_break_context
+from django_frame import just_raised, get_template_file_name, get_template_line
 import pydev_log
+from pydevd_breakpoints import get_exception_breakpoint, get_exception_name
+from pydevd_comm import CMD_ADD_DJANGO_EXCEPTION_BREAK, \
+    CMD_STEP_CAUGHT_EXCEPTION, CMD_STEP_RETURN, CMD_STEP_OVER, CMD_SET_BREAK, \
+    CMD_STEP_INTO, CMD_SMART_STEP_INTO, CMD_RUN_TO_LINE, CMD_SET_NEXT_STATEMENT
+from pydevd_constants import *  # @UnusedWildImport
+from pydevd_file_utils import GetFilenameAndBase
 from pydevd_signature import sendSignatureCallTrace
+import pydevd_vars
+import pydevd_dont_trace
 
 basename = os.path.basename
 
+IGNORE_EXCEPTION_TAG = re.compile('[^#]*#.*@IgnoreException')
+
+
 #=======================================================================================================================
 # PyDBFrame
 #=======================================================================================================================
@@ -22,6 +33,13 @@
     is reused for the entire context.
     '''
 
+    #Note: class (and not instance) attributes.
+
+    #Same cache as the one kept in the main debugger, but this one only considers the file
+    #contents, while the one in the main debugger considers the user input (so, the actual
+    #result must be a join of both).
+    filename_to_lines_where_exceptions_are_ignored = {}
+    filename_to_stat_info = {}
+
     def __init__(self, args):
         #args = mainDebugger, filename, base, info, t, frame
         #yeap, much faster than putting in self and then getting it from self later on
@@ -33,84 +51,234 @@
     def doWaitSuspend(self, *args, **kwargs):
         self._args[0].doWaitSuspend(*args, **kwargs)
 
+    def _is_django_render_call(self, frame):
+        try:
+            return self._cached_is_django_render_call
+        except:
+            # Calculate lazily: note that a PyDBFrame always deals with the same
+            # frame over and over, so, we can cache this.
+            # -- although we can't cache things which change over time (such as
+            #    the breakpoints for the file).
+            ret = self._cached_is_django_render_call = is_django_render_call(frame)
+            return ret
+
     def trace_exception(self, frame, event, arg):
         if event == 'exception':
-            (flag, frame) = self.shouldStopOnException(frame, event, arg)
+            flag, frame = self.should_stop_on_exception(frame, event, arg)
 
             if flag:
-              self.handle_exception(frame, event, arg)
-              return self.trace_dispatch
+                self.handle_exception(frame, event, arg)
+                return self.trace_dispatch
 
         return self.trace_exception
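
trace_exception above is a local trace function: Python calls it with (frame, event, arg), and for an 'exception' event arg holds the exception type, value and traceback. A minimal standalone illustration of that mechanism with plain sys.settrace, outside the debugger:

    import sys

    def local_trace(frame, event, arg):
        # For 'exception' events, arg is (exc_type, exc_value, traceback).
        if event == 'exception':
            print('trace saw:', arg[0].__name__)
        return local_trace          # keep tracing this frame

    def global_trace(frame, event, arg):
        return local_trace          # install the local trace for each new frame

    def boom():
        try:
            raise ValueError('x')
        except ValueError:
            pass

    sys.settrace(global_trace)
    boom()                          # prints: trace saw: ValueError
    sys.settrace(None)
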
 
-    def shouldStopOnException(self, frame, event, arg):
-      mainDebugger, filename, info, thread = self._args
-      flag = False
+    def should_stop_on_exception(self, frame, event, arg):
+        mainDebugger, _filename, info, thread = self._args
+        flag = False
 
-      if info.pydev_state != STATE_SUSPEND:  #and breakpoint is not None:
-          (exception, value, trace) = arg
+        if info.pydev_state != STATE_SUSPEND:  #and breakpoint is not None:
+            exception, value, trace = arg
 
-          if trace is not None: #on jython trace is None on the first event
-              exception_breakpoint = get_exception_breakpoint(exception, dict(mainDebugger.exception_set), NOTIFY_ALWAYS)
-              if exception_breakpoint is not None:
-                  if not exception_breakpoint.notify_on_first_raise_only or just_raised(trace):
-                      curr_func_name = frame.f_code.co_name
-                      add_exception_to_frame(frame, (exception, value, trace))
-                      self.setSuspend(thread, CMD_ADD_EXCEPTION_BREAK)
-                      thread.additionalInfo.message = exception_breakpoint.qname
-                      flag = True
-                  else:
-                      flag = False
-              else:
-                  try:
-                      if mainDebugger.django_exception_break and get_exception_name(exception) in ['VariableDoesNotExist', 'TemplateDoesNotExist', 'TemplateSyntaxError'] and just_raised(trace) and is_django_exception_break_context(frame):
-                          render_frame = find_django_render_frame(frame)
-                          if render_frame:
-                              suspend_frame = suspend_django(self, mainDebugger, thread, render_frame, CMD_ADD_DJANGO_EXCEPTION_BREAK)
+            if trace is not None: #on jython trace is None on the first event
+                exception_breakpoint = get_exception_breakpoint(
+                    exception, mainDebugger.break_on_caught_exceptions)
 
-                              if suspend_frame:
-                                  add_exception_to_frame(suspend_frame, (exception, value, trace))
-                                  flag = True
-                                  thread.additionalInfo.message = 'VariableDoesNotExist'
-                                  suspend_frame.f_back = frame
-                                  frame = suspend_frame
-                  except :
-                      flag = False
+                if exception_breakpoint is not None:
+                    if not exception_breakpoint.notify_on_first_raise_only or just_raised(trace):
+                        # print frame.f_code.co_name
+                        add_exception_to_frame(frame, (exception, value, trace))
+                        thread.additionalInfo.message = exception_breakpoint.qname
+                        flag = True
+                    else:
+                        flag = False
+                else:
+                    try:
+                        if mainDebugger.django_exception_break and get_exception_name(exception) in [
+                                'VariableDoesNotExist', 'TemplateDoesNotExist', 'TemplateSyntaxError'] \
+                                and just_raised(trace) and is_django_exception_break_context(frame):
 
-      return (flag, frame)
+                            render_frame = find_django_render_frame(frame)
+                            if render_frame:
+                                suspend_frame = suspend_django(
+                                    self, mainDebugger, thread, render_frame, CMD_ADD_DJANGO_EXCEPTION_BREAK)
+
+                                if suspend_frame:
+                                    add_exception_to_frame(suspend_frame, (exception, value, trace))
+                                    flag = True
+                                    thread.additionalInfo.message = 'VariableDoesNotExist'
+                                    suspend_frame.f_back = frame
+                                    frame = suspend_frame
+                    except:
+                        flag = False
+
+        return flag, frame
 
     def handle_exception(self, frame, event, arg):
-      mainDebugger = self._args[0]
-      thread = self._args[3]
-      self.doWaitSuspend(thread, frame, event, arg)
-      mainDebugger.SetTraceForFrameAndParents(frame)
+        try:
+            # print 'handle_exception', frame.f_lineno, frame.f_code.co_name
+
+            # We have 3 things in arg: exception type, description, traceback object
+            trace_obj = arg[2]
+            mainDebugger = self._args[0]
+
+            if not hasattr(trace_obj, 'tb_next'):
+                return  #Not always there on Jython...
+
+            initial_trace_obj = trace_obj
+            if trace_obj.tb_next is None and trace_obj.tb_frame is frame:
+                #I.e.: tb_next should only be None in the context where it was thrown (trace_obj.tb_frame is frame is just a double check).
+
+                if mainDebugger.break_on_exceptions_thrown_in_same_context:
+                    #Option: Don't break if an exception is caught in the same function from which it is thrown
+                    return
+            else:
+                #Get the trace_obj from where the exception was raised...
+                while trace_obj.tb_next is not None:
+                    trace_obj = trace_obj.tb_next
+
+
+            if mainDebugger.ignore_exceptions_thrown_in_lines_with_ignore_exception:
+                for check_trace_obj in (initial_trace_obj, trace_obj):
+                    filename = GetFilenameAndBase(check_trace_obj.tb_frame)[0]
+
+
+                    filename_to_lines_where_exceptions_are_ignored = self.filename_to_lines_where_exceptions_are_ignored
+
+
+                    lines_ignored = filename_to_lines_where_exceptions_are_ignored.get(filename)
+                    if lines_ignored is None:
+                        lines_ignored = filename_to_lines_where_exceptions_are_ignored[filename] = {}
+
+                    try:
+                        curr_stat = os.stat(filename)
+                        curr_stat = (curr_stat.st_size, curr_stat.st_mtime)
+                    except:
+                        curr_stat = None
+
+                    last_stat = self.filename_to_stat_info.get(filename)
+                    if last_stat != curr_stat:
+                        self.filename_to_stat_info[filename] = curr_stat
+                        lines_ignored.clear()
+                        try:
+                            linecache.checkcache(filename)
+                        except:
+                            #Jython 2.1
+                            linecache.checkcache()
+
+                    from_user_input = mainDebugger.filename_to_lines_where_exceptions_are_ignored.get(filename)
+                    if from_user_input:
+                        merged = {}
+                        merged.update(lines_ignored)
+                        #Override what we have with the related entries that the user entered
+                        merged.update(from_user_input)
+                    else:
+                        merged = lines_ignored
+
+                    exc_lineno = check_trace_obj.tb_lineno
+
+                    # print ('lines ignored', lines_ignored)
+                    # print ('user input', from_user_input)
+                    # print ('merged', merged, 'curr', exc_lineno)
+
+                    if not DictContains(merged, exc_lineno):  #Note: check on merged but update lines_ignored.
+                        try:
+                            line = linecache.getline(filename, exc_lineno, check_trace_obj.tb_frame.f_globals)
+                        except:
+                            #Jython 2.1
+                            line = linecache.getline(filename, exc_lineno)
+
+                        if IGNORE_EXCEPTION_TAG.match(line) is not None:
+                            lines_ignored[exc_lineno] = 1
+                            return
+                        else:
+                            #Put in the cache saying not to ignore
+                            lines_ignored[exc_lineno] = 0
+                    else:
+                        #Ok, dict has it already cached, so, let's check it...
+                        if merged.get(exc_lineno, 0):
+                            return
+
+
+            thread = self._args[3]
+
+            try:
+                frame_id_to_frame = {}
+                frame_id_to_frame[id(frame)] = frame
+                f = trace_obj.tb_frame
+                while f is not None:
+                    frame_id_to_frame[id(f)] = f
+                    f = f.f_back
+                f = None
+
+                thread_id = GetThreadId(thread)
+                pydevd_vars.addAdditionalFrameById(thread_id, frame_id_to_frame)
+                try:
+                    mainDebugger.sendCaughtExceptionStack(thread, arg, id(frame))
+                    self.setSuspend(thread, CMD_STEP_CAUGHT_EXCEPTION)
+                    self.doWaitSuspend(thread, frame, event, arg)
+                    mainDebugger.sendCaughtExceptionStackProceeded(thread)
+
+                finally:
+                    pydevd_vars.removeAdditionalFrameById(thread_id)
+            except:
+                traceback.print_exc()
+
+            mainDebugger.SetTraceForFrameAndParents(frame)
+        finally:
+            #Clear some local variables...
+            trace_obj = None
+            initial_trace_obj = None
+            check_trace_obj = None
+            f = None
+            frame_id_to_frame = None
+            mainDebugger = None
+            thread = None
 
     def trace_dispatch(self, frame, event, arg):
-        mainDebugger, filename, info, thread = self._args
+        main_debugger, filename, info, thread = self._args
         try:
             info.is_tracing = True
 
-            if mainDebugger._finishDebuggingSession:
+            if main_debugger._finishDebuggingSession:
                 return None
 
             if getattr(thread, 'pydev_do_not_trace', None):
                 return None
 
-            if event == 'call':
-                sendSignatureCallTrace(mainDebugger, frame, filename)
+            if event == 'call' and main_debugger.signature_factory:
+                sendSignatureCallTrace(main_debugger, frame, filename)
 
-            if event not in ('line', 'call', 'return'):
-                if event == 'exception':
-                    (flag, frame) = self.shouldStopOnException(frame, event, arg)
+            is_exception_event = event == 'exception'
+            has_exception_breakpoints = main_debugger.break_on_caught_exceptions or main_debugger.django_exception_break
+
+            if is_exception_event:
+                if has_exception_breakpoints:
+                    flag, frame = self.should_stop_on_exception(frame, event, arg)
                     if flag:
                         self.handle_exception(frame, event, arg)
                         return self.trace_dispatch
-                else:
-                #I believe this can only happen in jython on some frontiers on jython and java code, which we don't want to trace.
-                    return None
 
-            if event is not 'exception':
-                breakpoints_for_file = mainDebugger.breakpoints.get(filename)
+            elif event not in ('line', 'call', 'return'):
+                #I believe this can only happen in jython at some boundary between jython and java code, which we don't want to trace.
+                return None
+
+            stop_frame = info.pydev_step_stop
+            step_cmd = info.pydev_step_cmd
+
+            if is_exception_event:
+                breakpoints_for_file = None
+            else:
+                # If we are in single step mode and something causes us to exit the current frame, we need to make sure we break
+                # eventually. Force the step mode to step into and the step stop frame to None.
+                # I.e.: F6 at the end of a function should stop at the next possible position (instead of forcing the user
+                # to make a step in or step over at that location).
+                # Note: this is especially troublesome when we're skipping code with the
+                # @DontTrace comment.
+                if stop_frame is frame and event in ('return', 'exception') and step_cmd in (CMD_STEP_RETURN, CMD_STEP_OVER):
+                    info.pydev_step_cmd = CMD_STEP_INTO
+                    info.pydev_step_stop = None
+
+                breakpoints_for_file = main_debugger.breakpoints.get(filename)
 
                 can_skip = False
 
@@ -118,10 +286,11 @@
                     #we can skip if:
                     #- we have no stop marked
                     #- we should make a step return/step over and we're not in the current frame
-                    can_skip = (info.pydev_step_cmd is None and info.pydev_step_stop is None)\
-                    or (info.pydev_step_cmd in (CMD_STEP_RETURN, CMD_STEP_OVER) and info.pydev_step_stop is not frame)
+                    can_skip = (step_cmd is None and stop_frame is None)\
+                        or (step_cmd in (CMD_STEP_RETURN, CMD_STEP_OVER) and stop_frame is not frame)
 
-                if  mainDebugger.django_breakpoints:
+                check_stop_on_django_render_call = main_debugger.django_breakpoints and self._is_django_render_call(frame)
+                if check_stop_on_django_render_call:
                     can_skip = False
 
                 # Let's check to see if we are in a function that has a breakpoint. If we don't have a breakpoint,
@@ -130,7 +299,7 @@
                 #so, that's why the additional checks are there.
                 if not breakpoints_for_file:
                     if can_skip:
-                        if mainDebugger.always_exception_set or mainDebugger.django_exception_break:
+                        if has_exception_breakpoints:
                             return self.trace_exception
                         else:
                             return None
@@ -143,17 +312,18 @@
                     if curr_func_name in ('?', '<module>'):
                         curr_func_name = ''
 
-                    for breakpoint in breakpoints_for_file.values(): #jython does not support itervalues()
+                    for breakpoint in DictIterValues(breakpoints_for_file): #jython does not support itervalues()
                         #will match either global or some function
                         if breakpoint.func_name in ('None', curr_func_name):
                             break
 
                     else: # if we had some break, it won't get here (so, that's a context that we want to skip)
                         if can_skip:
-                            #print 'skipping', frame.f_lineno, info.pydev_state, info.pydev_step_stop, info.pydev_step_cmd
-                            return None
-            else:
-                breakpoints_for_file = None
+                            if has_exception_breakpoints:
+                                return self.trace_exception
+                            else:
+                                return None
+
 
             #We may have hit a breakpoint or we are already in step mode. Either way, let's check what we should do in this frame
             #print 'NOT skipped', frame.f_lineno, frame.f_code.co_name, event
@@ -163,33 +333,63 @@
 
 
                 flag = False
-                if event == 'call' and info.pydev_state != STATE_SUSPEND and mainDebugger.django_breakpoints \
-                and is_django_render_call(frame):
-                    (flag, frame) = self.shouldStopOnDjangoBreak(frame, event, arg)
+                if event == 'call' and info.pydev_state != STATE_SUSPEND and check_stop_on_django_render_call:
+                    flag, frame = self.should_stop_on_django_breakpoint(frame, event, arg)
 
                 #return is not taken into account for breakpoint hit because we'd have a double-hit in this case
                 #(one for the line and the other for the return).
 
                 if not flag and event != 'return' and info.pydev_state != STATE_SUSPEND and breakpoints_for_file is not None\
-                and DictContains(breakpoints_for_file, line):
+                    and DictContains(breakpoints_for_file, line):
                     #ok, hit breakpoint, now, we have to discover if it is a conditional breakpoint
                     # lets do the conditional stuff here
                     breakpoint = breakpoints_for_file[line]
 
                     stop = True
-                    if info.pydev_step_cmd == CMD_STEP_OVER and info.pydev_step_stop is frame and event in ('line', 'return'):
+                    if step_cmd == CMD_STEP_OVER and stop_frame is frame and event in ('line', 'return'):
                         stop = False #we don't stop on breakpoint if we have to stop by step-over (it will be processed later)
                     else:
-                        if breakpoint.condition is not None:
+                        condition = breakpoint.condition
+                        if condition is not None:
                             try:
-                                val = eval(breakpoint.condition, frame.f_globals, frame.f_locals)
+                                val = eval(condition, frame.f_globals, frame.f_locals)
                                 if not val:
                                     return self.trace_dispatch
 
                             except:
-                                pydev_log.info('Error while evaluating condition \'%s\': %s\n' % (breakpoint.condition, sys.exc_info()[1]))
+                                if type(condition) != type(''):
+                                    if hasattr(condition, 'encode'):
+                                        condition = condition.encode('utf-8')
 
-                                return self.trace_dispatch
+                                msg = 'Error while evaluating expression: %s\n' % (condition,)
+                                sys.stderr.write(msg)
+                                traceback.print_exc()
+                                if not main_debugger.suspend_on_breakpoint_exception:
+                                    return self.trace_dispatch
+                                else:
+                                    stop = True
+                                    try:
+                                        additional_info = None
+                                        try:
+                                            additional_info = thread.additionalInfo
+                                        except AttributeError:
+                                            pass  #that's ok, no info currently set
+
+                                        if additional_info is not None:
+                                            # add exception_type and stacktrace into thread additional info
+                                            etype, value, tb = sys.exc_info()
+                                            try:
+                                                error = ''.join(traceback.format_exception_only(etype, value))
+                                                stack = traceback.extract_stack(f=tb.tb_frame.f_back)
+
+                                                # On self.setSuspend(thread, CMD_SET_BREAK) this info will be
+                                                # sent to the client.
+                                                additional_info.conditional_breakpoint_exception = \
+                                                    ('Condition:\n' + condition + '\n\nError:\n' + error, stack)
+                                            finally:
+                                                etype, value, tb = None, None, None
+                                    except:
+                                        traceback.print_exc()
 
                     if breakpoint.expression is not None:
                         try:
@@ -216,30 +416,45 @@
             #step handling. We stop when we hit the right frame
             try:
                 django_stop = False
-                if info.pydev_step_cmd == CMD_STEP_INTO:
+
+                should_skip = False
+                if pydevd_dont_trace.should_trace_hook is not None:
+                    if not hasattr(self, 'should_skip'):
+                        # I.e.: cache the result on self.should_skip (no need to evaluate the same frame multiple times).
+                        # Note that on a code reload, we won't re-evaluate this because, in practice, the frame.f_code
+                        # which will be handled by this frame is read-only, so we can cache it safely.
+                        should_skip = self.should_skip = not pydevd_dont_trace.should_trace_hook(frame, filename)
+                    else:
+                        should_skip = self.should_skip
+
+                if should_skip:
+                    stop = False
+
+                elif step_cmd == CMD_STEP_INTO:
                     stop = event in ('line', 'return')
+
                     if is_django_suspended(thread):
                         #django_stop = event == 'call' and is_django_render_call(frame)
                         stop = stop and is_django_resolve_call(frame.f_back) and not is_django_context_get_call(frame)
                         if stop:
                             info.pydev_django_resolve_frame = 1 #we remember that we've gone into python code from a django rendering frame
 
-                elif info.pydev_step_cmd == CMD_STEP_OVER:
+                elif step_cmd == CMD_STEP_OVER:
                     if is_django_suspended(thread):
-                        django_stop = event == 'call' and is_django_render_call(frame)
+                        django_stop = event == 'call' and self._is_django_render_call(frame)
 
                         stop = False
                     else:
                         if event == 'return' and info.pydev_django_resolve_frame is not None and is_django_resolve_call(frame.f_back):
                             #we return to Django suspend mode and should not stop before django rendering frame
-                            info.pydev_step_stop = info.pydev_django_resolve_frame
+                            stop_frame = info.pydev_step_stop = info.pydev_django_resolve_frame
                             info.pydev_django_resolve_frame = None
                             thread.additionalInfo.suspend_type = DJANGO_SUSPEND
 
 
-                        stop = info.pydev_step_stop is frame and event in ('line', 'return')
+                        stop = stop_frame is frame and event in ('line', 'return')
 
-                elif info.pydev_step_cmd == CMD_SMART_STEP_INTO:
+                elif step_cmd == CMD_SMART_STEP_INTO:
                     stop = False
                     if info.pydev_smart_step_stop is frame:
                         info.pydev_func_name = None
@@ -253,12 +468,12 @@
                             curr_func_name = ''
 
                         if curr_func_name == info.pydev_func_name:
-                                stop = True
+                            stop = True
 
-                elif info.pydev_step_cmd == CMD_STEP_RETURN:
-                    stop = event == 'return' and info.pydev_step_stop is frame
+                elif step_cmd == CMD_STEP_RETURN:
+                    stop = event == 'return' and stop_frame is frame
 
-                elif info.pydev_step_cmd == CMD_RUN_TO_LINE or info.pydev_step_cmd == CMD_SET_NEXT_STATEMENT:
+                elif step_cmd == CMD_RUN_TO_LINE or step_cmd == CMD_SET_NEXT_STATEMENT:
                     stop = False
 
                     if event == 'line' or event == 'exception':
@@ -286,13 +501,13 @@
                     stop = False
 
                 if django_stop:
-                    frame = suspend_django(self, mainDebugger, thread, frame)
+                    frame = suspend_django(self, main_debugger, thread, frame)
                     if frame:
                         self.doWaitSuspend(thread, frame, event, arg)
                 elif stop:
                     #event is always == line or return at this point
                     if event == 'line':
-                        self.setSuspend(thread, info.pydev_step_cmd)
+                        self.setSuspend(thread, step_cmd)
                         self.doWaitSuspend(thread, frame, event, arg)
                     else: #return event
                         back = frame.f_back
@@ -300,12 +515,18 @@
                             #When we get to the pydevd run function, the debugging has actually finished for the main thread
                             #(note that it can still go on for other threads, but for this one, we just make it finish)
                             #So, just setting it to None should be OK
-                            if basename(back.f_code.co_filename) == 'pydevd.py' and back.f_code.co_name == 'run':
+                            base = basename(back.f_code.co_filename)
+                            if base == 'pydevd.py' and back.f_code.co_name == 'run':
                                 back = None
 
+                            elif base == 'pydevd_traceproperty.py':
+                                # We don't want to trace the return event of pydevd_traceproperty (custom property for debugging)
+                                #if we're in a return, we want it to appear to the user in the previous frame!
+                                return None
+
                         if back is not None:
                             #if we're in a return, we want it to appear to the user in the previous frame!
-                            self.setSuspend(thread, info.pydev_step_cmd)
+                            self.setSuspend(thread, step_cmd)
                             self.doWaitSuspend(thread, back, event, arg)
                         else:
                             #in jython we may not have a back frame
@@ -320,7 +541,7 @@
 
             #if we are quitting, let's stop the tracing
             retVal = None
-            if not mainDebugger.quitting:
+            if not main_debugger.quitting:
                 retVal = self.trace_dispatch
 
             return retVal
@@ -339,24 +560,36 @@
                 sys.exc_clear() #don't keep the traceback
             pass #ok, psyco not available
 
-    def shouldStopOnDjangoBreak(self, frame, event, arg):
-        mainDebugger, filename, info, thread = self._args
+    def should_stop_on_django_breakpoint(self, frame, event, arg):
+        mainDebugger = self._args[0]
+        thread = self._args[3]
         flag = False
-        filename = get_template_file_name(frame)
-        pydev_log.debug("Django is rendering a template: %s\n" % filename)
-        django_breakpoints_for_file = mainDebugger.django_breakpoints.get(filename)
+        template_frame_file = get_template_file_name(frame)
+
+        #pydev_log.debug("Django is rendering a template: %s\n" % template_frame_file)
+
+        django_breakpoints_for_file = mainDebugger.django_breakpoints.get(template_frame_file)
         if django_breakpoints_for_file:
-            pydev_log.debug("Breakpoints for that file: %s\n" % django_breakpoints_for_file)
-            template_line = get_template_line(frame)
-            pydev_log.debug("Tracing template line: %d\n" % template_line)
 
-            if DictContains(django_breakpoints_for_file, template_line):
-                django_breakpoint = django_breakpoints_for_file[template_line]
+            #pydev_log.debug("Breakpoints for that file: %s\n" % django_breakpoints_for_file)
 
-                if django_breakpoint.is_triggered(frame):
-                    pydev_log.debug("Breakpoint is triggered.\n")
+            template_frame_line = get_template_line(frame, template_frame_file)
+
+            #pydev_log.debug("Tracing template line: %d\n" % template_frame_line)
+
+            if DictContains(django_breakpoints_for_file, template_frame_line):
+                django_breakpoint = django_breakpoints_for_file[template_frame_line]
+
+                if django_breakpoint.is_triggered(template_frame_file, template_frame_line):
+
+                    #pydev_log.debug("Breakpoint is triggered.\n")
+
                     flag = True
-                    new_frame = DjangoTemplateFrame(frame)
+                    new_frame = DjangoTemplateFrame(
+                        frame,
+                        template_frame_file=template_frame_file,
+                        template_frame_line=template_frame_line,
+                    )
 
                     if django_breakpoint.condition is not None:
                         try:
@@ -379,7 +612,7 @@
                                 thread.additionalInfo.message = val
                     if flag:
                         frame = suspend_django(self, mainDebugger, thread, frame)
-        return (flag, frame)
+        return flag, frame
 
 def add_exception_to_frame(frame, exception_info):
     frame.f_locals['__exception__'] = exception_info
\ No newline at end of file
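
Note on the conditional-breakpoint handling above: the condition is evaluated with eval() against the hit frame's globals and locals, and when the condition itself raises, the debugger can now either keep running or suspend (suspend_on_breakpoint_exception). A minimal standalone sketch of that evaluation pattern (function and variable names here are illustrative, not pydevd API):

    import sys
    import traceback

    def check_condition(frame, condition, suspend_on_error=True):
        # Evaluate a breakpoint condition in the context of the paused frame.
        # Returns True when the debugger should stop at this breakpoint.
        if condition is None:
            return True
        try:
            return bool(eval(condition, frame.f_globals, frame.f_locals))
        except Exception:
            # Mirror the behaviour in the hunk above: report the failure and
            # either keep running or treat the broken condition as a hit.
            sys.stderr.write('Error while evaluating expression: %s\n' % (condition,))
            traceback.print_exc()
            return suspend_on_error

    some_counter = 10
    frame = sys._getframe()
    print(check_condition(frame, 'some_counter > 5'))    # True
    print(check_condition(frame, 'undefined_name > 5'))  # True via the error path
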
diff --git a/python/helpers/pydev/pydevd_io.py b/python/helpers/pydev/pydevd_io.py
index a83adc8..2e74154 100644
--- a/python/helpers/pydev/pydevd_io.py
+++ b/python/helpers/pydev/pydevd_io.py
@@ -24,10 +24,10 @@
             r.flush()
 
     def __getattr__(self, name):
-            for r in self._redirectTo:
-                if hasattr(r, name):
-                    return r.__getattribute__(name)
-            raise AttributeError(name)
+        for r in self._redirectTo:
+            if hasattr(r, name):
+                return r.__getattribute__(name)
+        raise AttributeError(name)
 
 class IOBuf:
     '''This class works as a replacement for stdio and stderr.
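
The pydevd_io hunk above only fixes the indentation of __getattr__ in the output redirector. For context, the delegation pattern it restores looks roughly like this (a standalone sketch; class and attribute names are illustrative, not the pydevd ones):

    import sys
    try:
        from io import StringIO
    except ImportError:
        from StringIO import StringIO  # Python 2 fallback

    class IORedirector(object):
        # Writes fan out to every registered stream; any other attribute
        # access is delegated to the first stream that provides it.
        def __init__(self, *redirect_to):
            self._redirect_to = redirect_to

        def write(self, s):
            for r in self._redirect_to:
                r.write(s)

        def flush(self):
            for r in self._redirect_to:
                r.flush()

        def __getattr__(self, name):
            for r in self._redirect_to:
                if hasattr(r, name):
                    return getattr(r, name)
            raise AttributeError(name)

    # Usage: mirror stdout into an in-memory buffer as well.
    buf = StringIO()
    sys.stdout = IORedirector(sys.__stdout__, buf)
    print('hello')
    sys.stdout = sys.__stdout__
    print('captured: %r' % (buf.getvalue(),))
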
diff --git a/python/helpers/pydev/pydevd_referrers.py b/python/helpers/pydev/pydevd_referrers.py
new file mode 100644
index 0000000..66b1a0e
--- /dev/null
+++ b/python/helpers/pydev/pydevd_referrers.py
@@ -0,0 +1,238 @@
+from pydevd_constants import DictContains
+import sys
+import pydevd_vars
+from os.path import basename
+import traceback
+try:
+    from urllib import quote, quote_plus, unquote, unquote_plus
+except:
+    from urllib.parse import quote, quote_plus, unquote, unquote_plus  #@Reimport @UnresolvedImport
+
+#===================================================================================================
+# print_var_node
+#===================================================================================================
+def print_var_node(xml_node, stream):
+    name = xml_node.getAttribute('name')
+    value = xml_node.getAttribute('value')
+    val_type = xml_node.getAttribute('type')
+
+    found_as = xml_node.getAttribute('found_as')
+    stream.write('Name: ')
+    stream.write(unquote_plus(name))
+    stream.write(', Value: ')
+    stream.write(unquote_plus(value))
+    stream.write(', Type: ')
+    stream.write(unquote_plus(val_type))
+    if found_as:
+        stream.write(', Found as: %s' % (unquote_plus(found_as),))
+    stream.write('\n')
+
+#===================================================================================================
+# print_referrers
+#===================================================================================================
+def print_referrers(obj, stream=None):
+    if stream is None:
+        stream = sys.stdout
+    result = get_referrer_info(obj)
+    from xml.dom.minidom import parseString
+    dom = parseString(result)
+
+    xml = dom.getElementsByTagName('xml')[0]
+    for node in xml.childNodes:
+        if node.nodeType == node.TEXT_NODE:
+            continue
+
+        if node.localName == 'for':
+            stream.write('Searching references for: ')
+            for child in node.childNodes:
+                if child.nodeType == node.TEXT_NODE:
+                    continue
+                print_var_node(child, stream)
+
+        elif node.localName == 'var':
+            stream.write('Referrer found: ')
+            print_var_node(node, stream)
+
+        else:
+            sys.stderr.write('Unhandled node: %s\n' % (node,))
+
+    return result
+
+
+#===================================================================================================
+# get_referrer_info
+#===================================================================================================
+def get_referrer_info(searched_obj):
+    DEBUG = 0
+    if DEBUG:
+        sys.stderr.write('Getting referrers info.\n')
+    try:
+        try:
+            if searched_obj is None:
+                ret = ['<xml>\n']
+
+                ret.append('<for>\n')
+                ret.append(pydevd_vars.varToXML(
+                    searched_obj,
+                    'Skipping getting referrers for None',
+                    additionalInXml=' id="%s"' % (id(searched_obj),)))
+                ret.append('</for>\n')
+                ret.append('</xml>')
+                ret = ''.join(ret)
+                return ret
+
+            obj_id = id(searched_obj)
+
+            try:
+                if DEBUG:
+                    sys.stderr.write('Getting referrers...\n')
+                import gc
+                referrers = gc.get_referrers(searched_obj)
+            except:
+                traceback.print_exc()
+                ret = ['<xml>\n']
+
+                ret.append('<for>\n')
+                ret.append(pydevd_vars.varToXML(
+                    searched_obj,
+                    'Exception raised while trying to get_referrers.',
+                    additionalInXml=' id="%s"' % (id(searched_obj),)))
+                ret.append('</for>\n')
+                ret.append('</xml>')
+                ret = ''.join(ret)
+                return ret
+
+            if DEBUG:
+                sys.stderr.write('Found %s referrers.\n' % (len(referrers),))
+
+            curr_frame = sys._getframe()
+            frame_type = type(curr_frame)
+
+            #Ignore this frame and any caller frame of this frame
+
+            ignore_frames = {}  #Should be a set, but it's not available on all python versions.
+            while curr_frame is not None:
+                if basename(curr_frame.f_code.co_filename).startswith('pydev'):
+                    ignore_frames[curr_frame] = 1
+                curr_frame = curr_frame.f_back
+
+
+            ret = ['<xml>\n']
+
+            ret.append('<for>\n')
+            if DEBUG:
+                sys.stderr.write('Searching Referrers of obj with id="%s"\n' % (obj_id,))
+
+            ret.append(pydevd_vars.varToXML(
+                searched_obj,
+                'Referrers of obj with id="%s"' % (obj_id,)))
+            ret.append('</for>\n')
+
+            all_objects = None
+
+            for r in referrers:
+                try:
+                    if DictContains(ignore_frames, r):
+                        continue  #Skip the references we may add ourselves
+                except:
+                    pass  #Ok: unhashable type checked...
+
+                if r is referrers:
+                    continue
+
+                r_type = type(r)
+                r_id = str(id(r))
+
+                representation = str(r_type)
+
+                found_as = ''
+                if r_type == frame_type:
+                    if DEBUG:
+                        sys.stderr.write('Found frame referrer: %r\n' % (r,))
+                    for key, val in r.f_locals.items():
+                        if val is searched_obj:
+                            found_as = key
+                            break
+
+                elif r_type == dict:
+                    if DEBUG:
+                        sys.stderr.write('Found dict referrer: %r\n' % (r,))
+
+                    # Try to check if it's a value in the dict (and under which key it was found)
+                    for key, val in r.items():
+                        if val is searched_obj:
+                            found_as = key
+                            if DEBUG:
+                                sys.stderr.write('    Found as %r in dict\n' % (found_as,))
+                            break
+
+                    #Ok, there's one annoying thing: many times we find it in a dict from an instance,
+                    #but with this we don't directly have the class, only the dict, so, to work around that
+                    #we iterate over all reachable objects and check if one of those has the given dict.
+                    if all_objects is None:
+                        all_objects = gc.get_objects()
+
+                    for x in all_objects:
+                        try:
+                            if getattr(x, '__dict__', None) is r:
+                                r = x
+                                r_type = type(x)
+                                r_id = str(id(r))
+                                representation = str(r_type)
+                                break
+                        except:
+                            pass  #Just ignore any error here (i.e.: ReferenceError, etc.)
+
+                elif r_type in (tuple, list):
+                    if DEBUG:
+                        sys.stderr.write('Found tuple referrer: %r\n' % (r,))
+
+                    #Don't use enumerate() because not all Python versions have it.
+                    i = 0
+                    for x in r:
+                        if x is searched_obj:
+                            found_as = '%s[%s]' % (r_type.__name__, i)
+                            if DEBUG:
+                                sys.stderr.write('    Found as %s in tuple: \n' % (found_as,))
+                            break
+                        i += 1
+
+                if found_as:
+                    found_as = ' found_as="%s"' % (pydevd_vars.makeValidXmlValue(found_as),)
+
+                ret.append(pydevd_vars.varToXML(
+                    r,
+                    representation,
+                    additionalInXml=' id="%s"%s' % (r_id, found_as)))
+        finally:
+            if DEBUG:
+                sys.stderr.write('Done searching for references.\n')
+
+            #If we have any exceptions, don't keep dangling references from this frame to any of our objects.
+            all_objects = None
+            referrers = None
+            searched_obj = None
+            r = None
+            x = None
+            key = None
+            val = None
+            curr_frame = None
+            ignore_frames = None
+    except:
+        traceback.print_exc()
+        ret = ['<xml>\n']
+
+        ret.append('<for>\n')
+        ret.append(pydevd_vars.varToXML(
+            searched_obj,
+            'Error getting referrers for:',
+            additionalInXml=' id="%s"' % (id(searched_obj),)))
+        ret.append('</for>\n')
+        ret.append('</xml>')
+        ret = ''.join(ret)
+        return ret
+
+    ret.append('</xml>')
+    ret = ''.join(ret)
+    return ret
+
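
The new pydevd_referrers module is built around gc.get_referrers(): it asks the garbage collector who is holding the searched object and then tries to describe how each referrer holds it (dict key, frame local, list index). A stripped-down sketch of the same idea (helper name made up for illustration):

    import gc
    import sys

    def describe_referrers(obj):
        # Ask the garbage collector who holds a reference to obj and, for
        # dict referrers, report under which key the object was found.
        this_frame = sys._getframe()
        for r in gc.get_referrers(obj):
            if r is this_frame:
                continue  # skip the reference created by this helper itself
            found_as = ''
            if isinstance(r, dict):
                for key, val in r.items():
                    if val is obj:
                        found_as = ' (found as %r)' % (key,)
                        break
            print('referrer: %s%s' % (type(r).__name__, found_as))

    target = object()
    holder = {'kept_here': target}
    describe_referrers(target)  # typically reports the module globals dict and holder
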
diff --git a/python/helpers/pydev/pydevd_resolver.py b/python/helpers/pydev/pydevd_resolver.py
index 614549f..3fe895c 100644
--- a/python/helpers/pydev/pydevd_resolver.py
+++ b/python/helpers/pydev/pydevd_resolver.py
@@ -13,6 +13,7 @@
     setattr(__builtin__, 'False', 0)
 
 import pydevd_constants
+from pydevd_constants import DictIterItems, xrange, izip
 
 
 MAX_ITEMS_TO_HANDLE = 500
@@ -58,7 +59,7 @@
 class AbstractResolver:
     '''
         This class exists only for documentation purposes to explain how to create a resolver.
-        
+
         Some examples on how to resolve things:
         - list: getDictionary could return a dict with index->item and use the index to resolve it later
         - set: getDictionary could return a dict with id(object)->object and reiterate in that array to resolve it later
@@ -69,7 +70,7 @@
         '''
             In this method, we'll resolve some child item given the string representation of the item in the key
             representing the previously asked dictionary.
-            
+
             @param var: this is the actual variable to be resolved.
             @param attribute: this is the string representation of a key previously returned in getDictionary.
         '''
@@ -78,7 +79,7 @@
     def getDictionary(self, var):
         '''
             @param var: this is the variable that should have its children gotten.
-            
+
             @return: a dictionary where each pair key, value should be shown to the user as children items
             in the variables view for the given var.
         '''
@@ -128,12 +129,12 @@
 
                 declaredMethods = obj.getDeclaredMethods()
                 declaredFields = obj.getDeclaredFields()
-                for i in range(len(declaredMethods)):
+                for i in xrange(len(declaredMethods)):
                     name = declaredMethods[i].getName()
                     ret[name] = declaredMethods[i].toString()
                     found.put(name, 1)
 
-                for i in range(len(declaredFields)):
+                for i in xrange(len(declaredFields)):
                     name = declaredFields[i].getName()
                     found.put(name, 1)
                     #if declaredFields[i].isAccessible():
@@ -145,7 +146,7 @@
                         ret[name] = declaredFields[i].toString()
 
         #this simple dir does not always get all the info, that's why we have the part before
-        #(e.g.: if we do a dir on String, some methods that are from other interfaces such as 
+        #(e.g.: if we do a dir on String, some methods that are from other interfaces such as
         #charAt don't appear)
         try:
             d = dir(original)
@@ -169,8 +170,8 @@
             names = var.__members__
         d = {}
 
-        #Be aware that the order in which the filters are applied attempts to 
-        #optimize the operation by removing as many items as possible in the 
+        #Be aware that the order in which the filters are applied attempts to
+        #optimize the operation by removing as many items as possible in the
         #first filters, leaving fewer items for later filters
 
         if filterBuiltIn or filterFunction:
@@ -212,18 +213,18 @@
 class DictResolver:
 
     def resolve(self, dict, key):
-        if key == '__len__':
+        if key in ('__len__', TOO_LARGE_ATTR):
             return None
 
         if '(' not in key:
             #we have to treat that because the dict resolver is also used to directly resolve the global and local
-            #scopes (which already have the items directly) 
+            #scopes (which already have the items directly)
             return dict[key]
 
         #ok, we have to iterate over the items to find the one that matches the id, because that's the only way
         #to actually find the reference from the string we have before.
         expected_id = int(key.split('(')[-1][:-1])
-        for key, val in dict.items():
+        for key, val in DictIterItems(dict):
             if id(key) == expected_id:
                 return val
 
@@ -241,10 +242,15 @@
     def getDictionary(self, dict):
         ret = {}
 
-        for key, val in dict.items():
+        i = 0
+        for key, val in DictIterItems(dict):
+            i += 1
             #we need to add the id because otherwise we cannot find the real object to get its contents later on.
             key = '%s (%s)' % (self.keyStr(key), id(key))
             ret[key] = val
+            if i > MAX_ITEMS_TO_HANDLE:
+                ret[TOO_LARGE_ATTR] = TOO_LARGE_MSG
+                break
 
         ret['__len__'] = len(dict)
         return ret
@@ -261,7 +267,7 @@
             @param var: that's the original attribute
             @param attribute: that's the key passed in the dict (as a string)
         '''
-        if attribute == '__len__' or attribute == TOO_LARGE_ATTR:
+        if attribute in ('__len__', TOO_LARGE_ATTR):
             return None
         return var[int(attribute)]
 
@@ -270,12 +276,12 @@
         # modified 'cause jython does not have enumerate support
         l = len(var)
         d = {}
-        
+
         if l < MAX_ITEMS_TO_HANDLE:
             format = '%0' + str(int(len(str(l)))) + 'd'
-            
-            
-            for i, item in zip(range(l), var):
+
+
+            for i, item in izip(xrange(l), var):
                 d[ format % i ] = item
         else:
             d[TOO_LARGE_ATTR] = TOO_LARGE_MSG
@@ -293,7 +299,7 @@
     '''
 
     def resolve(self, var, attribute):
-        if attribute == '__len__':
+        if attribute in ('__len__', TOO_LARGE_ATTR):
             return None
 
         attribute = int(attribute)
@@ -305,8 +311,16 @@
 
     def getDictionary(self, var):
         d = {}
+        i = 0
         for item in var:
-            d[ id(item) ] = item
+            i += 1
+            d[id(item)] = item
+
+            if i > MAX_ITEMS_TO_HANDLE:
+                d[TOO_LARGE_ATTR] = TOO_LARGE_MSG
+                break
+
+
         d['__len__'] = len(var)
         return d
 
@@ -325,7 +339,7 @@
         ret = {}
 
         declaredFields = obj.__class__.getDeclaredFields()
-        for i in range(len(declaredFields)):
+        for i in xrange(len(declaredFields)):
             name = declaredFields[i].getName()
             try:
                 declaredFields[i].setAccessible(True)
@@ -352,7 +366,7 @@
     def getDictionary(self, obj):
         ret = {}
 
-        for i in range(len(obj)):
+        for i in xrange(len(obj)):
             ret[ i ] = obj[i]
 
         ret['__len__'] = len(obj)
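
The resolver changes above cap how many children of a large dict/set are serialized for the variables view, adding a marker entry once MAX_ITEMS_TO_HANDLE is exceeded. A compact sketch of that pattern with illustrative constants and class name (not the real pydevd_resolver definitions):

    MAX_ITEMS = 500
    TOO_LARGE_ATTR = '*** Max items exceeded ***'
    TOO_LARGE_MSG = 'Too many items: only the first %s are shown.' % (MAX_ITEMS,)

    class SetLikeResolver(object):
        def get_dictionary(self, var):
            # Children shown in the variables view: a capped slice plus a marker.
            d = {}
            i = 0
            for item in var:
                i += 1
                d[id(item)] = item
                if i > MAX_ITEMS:
                    d[TOO_LARGE_ATTR] = TOO_LARGE_MSG
                    break
            d['__len__'] = len(var)
            return d

        def resolve(self, var, attribute):
            # The marker and the synthetic __len__ entry are not resolvable.
            if attribute in ('__len__', TOO_LARGE_ATTR):
                return None
            attribute = int(attribute)
            for item in var:
                if id(item) == attribute:
                    return item

    children = SetLikeResolver().get_dictionary(set(range(1000)))
    print(len(children))  # first items plus the marker and __len__ entries
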
diff --git a/python/helpers/pydev/pydevd_save_locals.py b/python/helpers/pydev/pydevd_save_locals.py
index 2808081..15a7382 100644
--- a/python/helpers/pydev/pydevd_save_locals.py
+++ b/python/helpers/pydev/pydevd_save_locals.py
@@ -2,6 +2,7 @@
 Utility for saving locals.
 """
 import sys
+import pydevd_vars
 
 def is_save_locals_available():
     try:
@@ -12,7 +13,7 @@
     except:
         pass
 
-    
+
     try:
         import ctypes
     except:
@@ -22,7 +23,7 @@
         func = ctypes.pythonapi.PyFrame_LocalsToFast
     except:
         return False
-    
+
     return True
 
 def save_locals(frame):
@@ -32,6 +33,10 @@
     Note: the 'save_locals' branch had a different approach wrapping the frame (much more code, but it gives ideas
     on how to save things partially, not the 'whole' locals).
     """
+    if not isinstance(frame, pydevd_vars.frame_type):
+        # Fix exception when changing Django variable (receiving DjangoTemplateFrame)
+        return
+
     try:
         if '__pypy__' in sys.builtin_module_names:
             import __pypy__
@@ -40,7 +45,7 @@
             return
     except:
         pass
-    
+
 
     try:
         import ctypes
diff --git a/python/helpers/pydev/pydevd_signature.py b/python/helpers/pydev/pydevd_signature.py
index e11bb5d..03dc0eb 100644
--- a/python/helpers/pydev/pydevd_signature.py
+++ b/python/helpers/pydev/pydevd_signature.py
@@ -6,6 +6,7 @@
 import gc
 from pydevd_comm import CMD_SIGNATURE_CALL_TRACE, NetCommand
 import pydevd_vars
+from pydevd_constants import xrange
 
 class Signature(object):
     def __init__(self, file, name):
@@ -43,7 +44,7 @@
             locals = frame.f_locals
             filename, modulename, funcname = self.file_module_function_of(frame)
             res = Signature(filename, funcname)
-            for i in range(0, code.co_argcount):
+            for i in xrange(0, code.co_argcount):
                 name = code.co_varnames[i]
                 tp = type(locals[name])
                 class_name = tp.__name__
@@ -123,9 +124,8 @@
     return NetCommand(CMD_SIGNATURE_CALL_TRACE, 0, cmdText)
 
 def sendSignatureCallTrace(dbg, frame, filename):
-    if dbg.signature_factory:
-        if dbg.signature_factory.is_in_scope(filename):
-            dbg.writer.addCommand(create_signature_message(dbg.signature_factory.create_signature(frame)))
+    if dbg.signature_factory.is_in_scope(filename):
+        dbg.writer.addCommand(create_signature_message(dbg.signature_factory.create_signature(frame)))
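
The signature factory reads argument names and runtime types straight from the frame object (co_argcount/co_varnames plus f_locals). A minimal standalone sketch of that extraction (function names here are illustrative):

    import sys

    def signature_of(frame):
        # Collect (argument name, type name) pairs for the call running in
        # `frame`, the kind of data the signature trace sends to the IDE.
        code = frame.f_code
        local_values = frame.f_locals
        args = []
        for i in range(code.co_argcount):
            name = code.co_varnames[i]
            if name in local_values:
                args.append((name, type(local_values[name]).__name__))
        return '%s(%s)' % (code.co_name, ', '.join('%s: %s' % a for a in args))

    def greet(name, times=2):
        return signature_of(sys._getframe())

    print(greet('pydev'))  # greet(name: str, times: int)
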
 
 
 
diff --git a/python/helpers/pydev/pydevd_stackless.py b/python/helpers/pydev/pydevd_stackless.py
index bd3b306..c2fd508 100644
--- a/python/helpers/pydev/pydevd_stackless.py
+++ b/python/helpers/pydev/pydevd_stackless.py
@@ -7,6 +7,7 @@
 import weakref
 from pydevd_file_utils import GetFilenameAndBase
 from pydevd import DONT_TRACE
+from pydevd_constants import DictItems
 
 
 # Used so that we don't lose the id (because we'll remove it when it's not alive and would generate a new id for the
@@ -195,7 +196,7 @@
             register_tasklet_info(prev)
 
         try:
-            for tasklet_ref, tasklet_info in list(_weak_tasklet_registered_to_info.items()):  # Make sure it's a copy!
+            for tasklet_ref, tasklet_info in DictItems(_weak_tasklet_registered_to_info):  # Make sure it's a copy!
                 tasklet = tasklet_ref()
                 if tasklet is None or not tasklet.alive:
                     # Garbage-collected already!
@@ -269,7 +270,7 @@
                 register_tasklet_info(prev)
 
             try:
-                for tasklet_ref, tasklet_info in list(_weak_tasklet_registered_to_info.items()):  # Make sure it's a copy!
+                for tasklet_ref, tasklet_info in DictItems(_weak_tasklet_registered_to_info):  # Make sure it's a copy!
                     tasklet = tasklet_ref()
                     if tasklet is None or not tasklet.alive:
                         # Garbage-collected already!
@@ -388,7 +389,7 @@
         _application_set_schedule_callback = callable
         return old
 
-    def get_schedule_callback(callable):
+    def get_schedule_callback():
         global _application_set_schedule_callback
         return _application_set_schedule_callback
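
pydevd_stackless keeps per-tasklet info keyed by weak references so entries can be dropped once a tasklet dies without the registry keeping it alive; the DictItems change above only makes the iteration copy portable across Python versions. A tiny sketch of that weak-registry pattern with made-up names:

    import weakref

    _registry = {}
    _next_id = [0]

    def register(obj):
        # Map a weak reference to bookkeeping info without keeping obj alive.
        ref = weakref.ref(obj)
        info = _registry.get(ref)
        if info is None:
            _next_id[0] += 1
            info = _registry[ref] = {'id': _next_id[0]}
        return info

    def prune_dead():
        for ref, info in list(_registry.items()):  # copy: we mutate while iterating
            if ref() is None:
                del _registry[ref]

    class Task(object):
        pass

    t = Task()
    print(register(t)['id'])  # stable id while t is alive
    del t
    prune_dead()
    print(len(_registry))     # 0 once the object is gone
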
 
diff --git a/python/helpers/pydev/pydevd_traceproperty.py b/python/helpers/pydev/pydevd_traceproperty.py
new file mode 100644
index 0000000..d8e7e5f
--- /dev/null
+++ b/python/helpers/pydev/pydevd_traceproperty.py
@@ -0,0 +1,108 @@
+'''For debug purposes we replace the actual builtin property with this debug property
+'''
+from pydevd_comm import GetGlobalDebugger
+from pydevd_constants import * #@UnusedWildImport
+import pydevd_tracing
+
+#=======================================================================================================================
+# replace_builtin_property
+#=======================================================================================================================
+def replace_builtin_property(new_property=None):
+    if new_property is None:
+        new_property = DebugProperty
+    original = property
+    if not IS_PY3K:
+        try:
+            import __builtin__
+            __builtin__.__dict__['property'] = new_property
+        except:
+            if DebugInfoHolder.DEBUG_TRACE_LEVEL:
+                import traceback;traceback.print_exc() #@Reimport
+    else:
+        try:
+            import builtins #Python 3.0 does not have the __builtin__ module @UnresolvedImport
+            builtins.__dict__['property'] = new_property
+        except:
+            if DebugInfoHolder.DEBUG_TRACE_LEVEL:
+                import traceback;traceback.print_exc() #@Reimport
+    return original
+
+
+#=======================================================================================================================
+# DebugProperty
+#=======================================================================================================================
+class DebugProperty(object):
+    """A custom property which allows python property to get
+    controlled by the debugger and selectively disable/re-enable
+    the tracing.
+    """
+
+
+    def __init__(self, fget=None, fset=None, fdel=None, doc=None):
+        self.fget = fget
+        self.fset = fset
+        self.fdel = fdel
+        self.__doc__ = doc
+
+
+    def __get__(self, obj, objtype=None):
+        if obj is None:
+            return self
+        global_debugger = GetGlobalDebugger()
+        try:
+            if global_debugger is not None and global_debugger.disable_property_getter_trace:
+                pydevd_tracing.SetTrace(None)
+            if self.fget is None:
+                raise AttributeError("unreadable attribute")
+            return self.fget(obj)
+        finally:
+            if global_debugger is not None:
+                pydevd_tracing.SetTrace(global_debugger.trace_dispatch)
+
+
+    def __set__(self, obj, value):
+        global_debugger = GetGlobalDebugger()
+        try:
+            if global_debugger is not None and global_debugger.disable_property_setter_trace:
+                pydevd_tracing.SetTrace(None)
+            if self.fset is None:
+                raise AttributeError("can't set attribute")
+            self.fset(obj, value)
+        finally:
+            if global_debugger is not None:
+                pydevd_tracing.SetTrace(global_debugger.trace_dispatch)
+
+
+    def __delete__(self, obj):
+        global_debugger = GetGlobalDebugger()
+        try:
+            if global_debugger is not None and global_debugger.disable_property_deleter_trace:
+                pydevd_tracing.SetTrace(None)
+            if self.fdel is None:
+                raise AttributeError("can't delete attribute")
+            self.fdel(obj)
+        finally:
+            if global_debugger is not None:
+                pydevd_tracing.SetTrace(global_debugger.trace_dispatch)
+
+
+    def getter(self, fget):
+        """Overriding getter decorator for the property
+        """
+        self.fget = fget
+        return self
+
+
+    def setter(self, fset):
+        """Overriding setter decorator for the property
+        """
+        self.fset = fset
+        return self
+
+
+    def deleter(self, fdel):
+        """Overriding deleter decorator for the property
+        """
+        self.fdel = fdel
+        return self
+
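
A standalone sketch of the DebugProperty idea above: a property-like descriptor that can switch tracing off around the user's getter so the debugger does not step into trivial property code. Here tracing_enabled stands in for the real pydevd_tracing.SetTrace(None)/SetTrace(trace_dispatch) calls, and the class name is made up:

    tracing_enabled = [True]

    class TracelessProperty(object):
        def __init__(self, fget=None, fset=None):
            self.fget = fget
            self.fset = fset

        def __get__(self, obj, objtype=None):
            if obj is None:
                return self
            tracing_enabled[0] = False     # would be SetTrace(None)
            try:
                if self.fget is None:
                    raise AttributeError("unreadable attribute")
                return self.fget(obj)
            finally:
                tracing_enabled[0] = True  # re-install the trace function

        def __set__(self, obj, value):
            if self.fset is None:
                raise AttributeError("can't set attribute")
            self.fset(obj, value)

        def setter(self, fset):
            self.fset = fset
            return self

    class Point(object):
        def __init__(self):
            self._x = 0

        @TracelessProperty
        def x(self):
            return self._x

        @x.setter
        def x(self, value):
            self._x = value

    p = Point()
    p.x = 3
    print(p.x)  # 3, with tracing toggled off around the getter
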
diff --git a/python/helpers/pydev/pydevd_tracing.py b/python/helpers/pydev/pydevd_tracing.py
index 1a5a833..7bc1ba5 100644
--- a/python/helpers/pydev/pydevd_tracing.py
+++ b/python/helpers/pydev/pydevd_tracing.py
@@ -65,7 +65,7 @@
                     sys.stderr.flush()
 
     if TracingFunctionHolder._original_tracing:
-            TracingFunctionHolder._original_tracing(tracing_func)
+        TracingFunctionHolder._original_tracing(tracing_func)
 
 def SetTrace(tracing_func):
     if TracingFunctionHolder._original_tracing is None:
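
For context on SetTrace: pydevd installs its dispatcher through sys.settrace, and the trace_dispatch logic earlier in this diff works by returning either a local trace function (keep tracing this frame) or None (skip it). A minimal sketch of that mechanism with illustrative function names:

    import sys

    def global_trace(frame, event, arg):
        # Called on 'call' for every new frame; returning a local trace
        # function keeps 'line'/'return'/'exception' events for that frame,
        # returning None skips the frame entirely.
        if frame.f_code.co_name != 'traced':
            return None
        print('call %s' % (frame.f_code.co_name,))
        return local_trace

    def local_trace(frame, event, arg):
        print('%s at line %s' % (event, frame.f_lineno))
        return local_trace

    def traced():
        a = 1
        return a + 1

    sys.settrace(global_trace)
    traced()
    sys.settrace(None)
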
diff --git a/python/helpers/pydev/pydevd_vars.py b/python/helpers/pydev/pydevd_vars.py
index de8c241..0cc45f7 100644
--- a/python/helpers/pydev/pydevd_vars.py
+++ b/python/helpers/pydev/pydevd_vars.py
@@ -3,7 +3,6 @@
 """
 import pickle
 from django_frame import DjangoTemplateFrame
-from pydevd_constants import * #@UnusedWildImport
 from types import * #@UnusedWildImport
 
 from pydevd_custom_frames import getCustomFrame
@@ -19,16 +18,15 @@
     import _pydev_threading as threading
 else:
     import threading
-import pydevd_resolver
 import traceback
 import pydevd_save_locals
-from pydev_imports import Exec, quote, execfile
+from pydev_imports import Exec, execfile
 
 try:
     import types
     frame_type = types.FrameType
 except:
-    frame_type = None
+    frame_type = type(sys._getframe())
 
 
 #-------------------------------------------------------------------------- defining true and false for earlier versions
@@ -37,25 +35,14 @@
     __setFalse = False
 except:
     import __builtin__
-
     setattr(__builtin__, 'True', 1)
     setattr(__builtin__, 'False', 0)
 
 #------------------------------------------------------------------------------------------------------ class for errors
 
-class VariableError(RuntimeError): pass
+class VariableError(RuntimeError):pass
 
-class FrameNotFoundError(RuntimeError): pass
-
-
-if USE_PSYCO_OPTIMIZATION:
-    try:
-        import psyco
-
-        varToXML = psyco.proxy(varToXML)
-    except ImportError:
-        if hasattr(sys, 'exc_clear'): #jython does not have it
-            sys.exc_clear() #don't keep the traceback -- clients don't want to see it
+class FrameNotFoundError(RuntimeError):pass
 
 def iterFrames(initialFrame):
     '''NO-YIELD VERSION: Iterates through all the frames starting at the specified frame (which will be the first returned item)'''
@@ -166,50 +153,127 @@
         traceback.print_exc()
         return None
 
-def resolveCompoundVariable(thread_id, frame_id, scope, attrs):
-    """ returns the value of the compound variable as a dictionary"""
+def getVariable(thread_id, frame_id, scope, attrs):
+    """
+    returns the value of a variable
+
+    :scope: can be BY_ID, EXPRESSION, GLOBAL, LOCAL, FRAME
+
+    BY_ID means we'll traverse the list of all objects alive to get the object.
+
+    :attrs: after reaching the proper scope, we have to get the attributes until we find
+            the proper location (i.e.: obj\tattr1\tattr2)
+
+    :note: when BY_ID is used, the frame_id is considered the id of the object to find and
+           not the frame (as we don't care about the frame in this case).
+    """
+    if scope == 'BY_ID':
+        if thread_id != GetThreadId(threading.currentThread()):
+            raise VariableError("getVariable: must execute on same thread")
+
+        try:
+            import gc
+            objects = gc.get_objects()
+        except:
+            pass  #Not all python variants have it.
+        else:
+            frame_id = int(frame_id)
+            for var in objects:
+                if id(var) == frame_id:
+                    if attrs is not None:
+                        attrList = attrs.split('\t')
+                        for k in attrList:
+                            _type, _typeName, resolver = getType(var)
+                            var = resolver.resolve(var, k)
+
+                    return var
+
+        #If it didn't return previously, we couldn't find it by id (i.e.: already garbage collected).
+        sys.stderr.write('Unable to find object with id: %s\n' % (frame_id,))
+        return None
+
     frame = findFrame(thread_id, frame_id)
     if frame is None:
         return {}
 
-    attrList = attrs.split('\t')
-    
-    if scope == "GLOBAL":
-        var = frame.f_globals
-        del attrList[0] # globals are special, and they get a single dummy unused attribute
+    if attrs is not None:
+        attrList = attrs.split('\t')
     else:
-        var = frame.f_locals
-        type, _typeName, resolver = getType(var)
-        try:
-            resolver.resolve(var, attrList[0])
-        except:
-            var = frame.f_globals
+        attrList = []
 
-    for k in attrList:
-        type, _typeName, resolver = getType(var)
-        var = resolver.resolve(var, k)
+    if scope == 'EXPRESSION':
+        for count in xrange(len(attrList)):
+            if count == 0:
+                # An Expression can be in any scope (globals/locals), therefore it needs to be evaluated as an expression
+                var = evaluateExpression(thread_id, frame_id, attrList[count], False)
+            else:
+                _type, _typeName, resolver = getType(var)
+                var = resolver.resolve(var, attrList[count])
+    else:
+        if scope == "GLOBAL":
+            var = frame.f_globals
+            del attrList[0]  # globals are special, and they get a single dummy unused attribute
+        else:
+            var = frame.f_locals
+
+        for k in attrList:
+            _type, _typeName, resolver = getType(var)
+            var = resolver.resolve(var, k)
+
+    return var
+
+
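+# A hedged, standalone sketch (not used by the debugger) of the BY_ID technique in
+# getVariable above: walk gc.get_objects() to recover a live object from its id().
+# The helper name is illustrative only.
+def _example_find_object_by_id(obj_id):
+    import gc
+    for candidate in gc.get_objects():
+        if id(candidate) == obj_id:
+            return candidate
+    return None
+
+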
+def resolveCompoundVariable(thread_id, frame_id, scope, attrs):
+    """ returns the value of the compound variable as a dictionary"""
+
+    var = getVariable(thread_id, frame_id, scope, attrs)
 
     try:
-        type, _typeName, resolver = getType(var)
+        _type, _typeName, resolver = getType(var)
         return resolver.getDictionary(var)
     except:
+        sys.stderr.write('Error evaluating: thread_id: %s\nframe_id: %s\nscope: %s\nattrs: %s\n' % (
+            thread_id, frame_id, scope, attrs,))
         traceback.print_exc()
-        
-        
+
+
 def resolveVar(var, attrs):
     attrList = attrs.split('\t')
-    
+
     for k in attrList:
         type, _typeName, resolver = getType(var)
-        
+
         var = resolver.resolve(var, k)
-    
+
     try:
         type, _typeName, resolver = getType(var)
         return resolver.getDictionary(var)
     except:
         traceback.print_exc()
-    
+
+
+def customOperation(thread_id, frame_id, scope, attrs, style, code_or_file, operation_fn_name):
+    """
+    We'll execute the code_or_file, then look up operation_fn_name in the resulting namespace and call it with the given variable.
+
+    code_or_file: either some code (e.g.: from pprint import pprint) or a file to be executed.
+    operation_fn_name: the name of the operation to execute after the exec (e.g.: pprint)
+    """
+    expressionValue = getVariable(thread_id, frame_id, scope, attrs)
+
+    try:
+        namespace = {'__name__': '<customOperation>'}
+        if style == "EXECFILE":
+            namespace['__file__'] = code_or_file
+            execfile(code_or_file, namespace, namespace)
+        else:  # style == EXEC
+            namespace['__file__'] = '<customOperationCode>'
+            Exec(code_or_file, namespace, namespace)
+
+        return str(namespace[operation_fn_name](expressionValue))
+    except:
+        traceback.print_exc()
+
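+# A hedged, standalone sketch of the exec-then-lookup technique used by customOperation
+# above: run some code in a scratch namespace and fetch a callable from it by name.
+# The helper name is illustrative and not part of the debugger protocol.
+def _example_custom_operation(value):
+    namespace = {'__name__': '<customOperation>'}
+    exec('from pprint import pformat', namespace, namespace)
+    return namespace['pformat'](value)
+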
 
 def evaluateExpression(thread_id, frame_id, expression, doExec):
     '''returns the result of the evaluated expression
@@ -230,6 +294,7 @@
     updated_globals.update(frame.f_locals)  #locals later because it has precedence over the actual globals
 
     try:
+
         if doExec:
             try:
                 #try to make it an eval (if it is an eval we can print it, otherwise we'll exec it and
@@ -240,7 +305,7 @@
                 pydevd_save_locals.save_locals(frame)
             else:
                 result = eval(compiled, updated_globals, frame.f_locals)
-                if result is not None: #Only print if it's not None (as python does)
+                if result is not None:  #Only print if it's not None (as python does)
                     sys.stdout.write('%s\n' % (result,))
             return
 
@@ -251,7 +316,6 @@
             except Exception:
                 s = StringIO()
                 traceback.print_exc(file=s)
-
                 result = s.getvalue()
 
                 try:
@@ -265,6 +329,22 @@
 
                 result = ExceptionOnEvaluate(result)
 
+                # Ok, we have the initial error message, but let's see if we're dealing with a name mangling error...
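+                # (Illustrative example, names hypothetical: evaluating 'self.__cache' on an
+                # instance of Foo fails via getattr because Python stores that attribute as
+                # '_Foo__cache'; the code below retries the mangled name.)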
+                try:
+                    if '__' in expression:
+                        # Try to handle '__' name mangling...
+                        split = expression.split('.')
+                        curr = frame.f_locals.get(split[0])
+                        for entry in split[1:]:
+                            if entry.startswith('__') and not hasattr(curr, entry):
+                                entry = '_%s%s' % (curr.__class__.__name__, entry)
+                            curr = getattr(curr, entry)
+
+                        result = curr
+                except:
+                    pass
+
+
             return result
     finally:
         #Should not be kept alive if an exception happens and this frame is kept in the stack.
@@ -273,22 +353,18 @@
 
 def changeAttrExpression(thread_id, frame_id, attr, expression):
     '''Changes some attribute in a given frame.
-    @note: it will not (currently) work if we're not in the topmost frame (that's a python
-    deficiency -- and it appears that there is no way of making it currently work --
-    will probably need some change to the python internals)
     '''
     frame = findFrame(thread_id, frame_id)
     if frame is None:
         return
 
-    if isinstance(frame, DjangoTemplateFrame):
-        result = eval(expression, frame.f_globals, frame.f_locals)
-        frame.changeVariable(attr, result)
-
     try:
         expression = expression.replace('@LINE@', '\n')
 
-
+        if isinstance(frame, DjangoTemplateFrame):
+            result = eval(expression, frame.f_globals, frame.f_locals)
+            frame.changeVariable(attr, result)
+            return
 
         if attr[:7] == "Globals":
             attr = attr[8:]
diff --git a/python/helpers/pydev/pydevd_xml.py b/python/helpers/pydev/pydevd_xml.py
index ac3f71c..52bb186 100644
--- a/python/helpers/pydev/pydevd_xml.py
+++ b/python/helpers/pydev/pydevd_xml.py
@@ -1,6 +1,7 @@
 import pydev_log
 import traceback
 import pydevd_resolver
+import sys
 from pydevd_constants import * #@UnusedWildImport
 
 from pydev_imports import quote
@@ -146,9 +147,9 @@
             pydev_log.error("Unexpected error, recovered safely.\n")
 
     return xml
-    
-    
-def varToXML(val, name, doTrim=True):
+
+
+def varToXML(val, name, doTrim=True, additionalInXml=''):
     """ single variable or dictionary to xml representation """
 
     is_exception_on_eval = isinstance(val, ExceptionOnEvaluate)
@@ -162,19 +163,22 @@
 
     try:
         if hasattr(v, '__class__'):
-            try:
-                cName = str(v.__class__)
-                if cName.find('.') != -1:
-                    cName = cName.split('.')[-1]
+            if v.__class__ == frame_type:
+                value = pydevd_resolver.frameResolver.getFrameName(v)
+            else:
+                try:
+                    cName = str(v.__class__)
+                    if cName.find('.') != -1:
+                        cName = cName.split('.')[-1]
 
-                elif cName.find("'") != -1: #does not have '.' (could be something like <type 'int'>)
-                    cName = cName[cName.index("'") + 1:]
+                    elif cName.find("'") != -1: #does not have '.' (could be something like <type 'int'>)
+                        cName = cName[cName.index("'") + 1:]
 
-                if cName.endswith("'>"):
-                    cName = cName[:-2]
-            except:
-                cName = str(v.__class__)
-            value = '%s: %s' % (cName, v)
+                    if cName.endswith("'>"):
+                        cName = cName[:-2]
+                except:
+                    cName = str(v.__class__)
+                value = '%s: %s' % (cName, v)
         else:
             value = str(v)
     except:
@@ -218,4 +222,13 @@
         else:
             xmlCont = ''
 
-    return ''.join((xml, xmlValue, xmlCont, ' />\n'))
+    return ''.join((xml, xmlValue, xmlCont, additionalInXml, ' />\n'))
+
+if USE_PSYCO_OPTIMIZATION:
+    try:
+        import psyco
+
+        varToXML = psyco.proxy(varToXML)
+    except ImportError:
+        if hasattr(sys, 'exc_clear'): #jython does not have it
+            sys.exc_clear() #don't keep the traceback -- clients don't want to see it
diff --git a/python/helpers/pydev/runfiles.py b/python/helpers/pydev/runfiles.py
index 4a25469..67c88be 100644
--- a/python/helpers/pydev/runfiles.py
+++ b/python/helpers/pydev/runfiles.py
@@ -1,530 +1,249 @@
-import fnmatch
-import os.path
-import re
-import sys
-import unittest
+import os
+
+def main():
+    import sys
+
+    #Separate the pydev params from the test-framework (nose/py.test) params.
+    pydev_params = []
+    other_test_framework_params = []
+    found_other_test_framework_param = None
+
+    NOSE_PARAMS = '--nose-params'
+    PY_TEST_PARAMS = '--py-test-params'
+
+    for arg in sys.argv[1:]:
+        if not found_other_test_framework_param and arg != NOSE_PARAMS and arg != PY_TEST_PARAMS:
+            pydev_params.append(arg)
+
+        else:
+            if not found_other_test_framework_param:
+                found_other_test_framework_param = arg
+            else:
+                other_test_framework_params.append(arg)
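+    # For example (hypothetical command line):
+    #   runfiles.py --verbosity 2 /my/tests --nose-params -v --with-coverage
+    # would give pydev_params == ['--verbosity', '2', '/my/tests'],
+    # found_other_test_framework_param == '--nose-params' and
+    # other_test_framework_params == ['-v', '--with-coverage'].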
 
 
+    #Here we'll run with nose, py.test or the default pydev_runfiles.
+    import pydev_runfiles
+    import pydev_runfiles_xml_rpc
+    import pydevd_constants
+    from pydevd_file_utils import _NormFile
 
+    DEBUG = 0
+    if DEBUG:
+        sys.stdout.write('Received parameters: %s\n' % (sys.argv,))
+        sys.stdout.write('Params for pydev: %s\n' % (pydev_params,))
+        if found_other_test_framework_param:
+            sys.stdout.write('Params for test framework: %s, %s\n' % (found_other_test_framework_param, other_test_framework_params))
 
-try:
-    __setFalse = False
-except:
-    import __builtin__
-    setattr(__builtin__, 'True', 1)
-    setattr(__builtin__, 'False', 0)
-
-
-
-
-#=======================================================================================================================
-# Jython?
-#=======================================================================================================================
-try:
-    import org.python.core.PyDictionary #@UnresolvedImport @UnusedImport -- just to check if it could be valid
-    def DictContains(d, key):
-        return d.has_key(key)
-except:
     try:
-        #Py3k does not have has_key anymore, and older versions don't have __contains__
-        DictContains = dict.__contains__
+        configuration = pydev_runfiles.parse_cmdline([sys.argv[0]] + pydev_params)
     except:
-        DictContains = dict.has_key
+        sys.stderr.write('Command line received: %s\n' % (sys.argv,))
+        raise
+    pydev_runfiles_xml_rpc.InitializeServer(configuration.port) #Note that if the port is None, a Null server will be initialized.
 
-try:
-    xrange
-except:
-    #Python 3k does not have it
-    xrange = range
-
-try:
-    enumerate
-except:
-    def enumerate(lst):
-        ret = []
-        i=0
-        for element in lst:
-            ret.append((i, element))
-            i+=1
-        return ret
-    
-
-
-#=======================================================================================================================
-# getopt code copied since gnu_getopt is not available on jython 2.1
-#=======================================================================================================================
-class GetoptError(Exception):
-    opt = ''
-    msg = ''
-    def __init__(self, msg, opt=''):
-        self.msg = msg
-        self.opt = opt
-        Exception.__init__(self, msg, opt)
-
-    def __str__(self):
-        return self.msg
-
-
-def gnu_getopt(args, shortopts, longopts=[]):
-    """getopt(args, options[, long_options]) -> opts, args
-
-    This function works like getopt(), except that GNU style scanning
-    mode is used by default. This means that option and non-option
-    arguments may be intermixed. The getopt() function stops
-    processing options as soon as a non-option argument is
-    encountered.
-
-    If the first character of the option string is `+', or if the
-    environment variable POSIXLY_CORRECT is set, then option
-    processing stops as soon as a non-option argument is encountered.
-    """
-
-    opts = []
-    prog_args = []
-    if isinstance(longopts, ''.__class__):
-        longopts = [longopts]
-    else:
-        longopts = list(longopts)
-
-    # Allow options after non-option arguments?
-    if shortopts.startswith('+'):
-        shortopts = shortopts[1:]
-        all_options_first = True
-    elif os.environ.get("POSIXLY_CORRECT"):
-        all_options_first = True
-    else:
-        all_options_first = False
-
-    while args:
-        if args[0] == '--':
-            prog_args += args[1:]
-            break
-
-        if args[0][:2] == '--':
-            opts, args = do_longs(opts, args[0][2:], longopts, args[1:])
-        elif args[0][:1] == '-':
-            opts, args = do_shorts(opts, args[0][1:], shortopts, args[1:])
-        else:
-            if all_options_first:
-                prog_args += args
-                break
-            else:
-                prog_args.append(args[0])
-                args = args[1:]
-
-    return opts, prog_args
-
-def do_longs(opts, opt, longopts, args):
+    NOSE_FRAMEWORK = 1
+    PY_TEST_FRAMEWORK = 2
     try:
-        i = opt.index('=')
-    except ValueError:
-        optarg = None
+        if found_other_test_framework_param:
+            test_framework = 0 #Default (pydev)
+            if found_other_test_framework_param == NOSE_PARAMS:
+                import nose
+                test_framework = NOSE_FRAMEWORK
+
+            elif found_other_test_framework_param == PY_TEST_PARAMS:
+                import pytest
+                test_framework = PY_TEST_FRAMEWORK
+
+            else:
+                raise ImportError()
+
+        else:
+            raise ImportError()
+
+    except ImportError:
+        if found_other_test_framework_param:
+            sys.stderr.write('Warning: Could not import the test runner: %s. Running with the default pydev unittest runner instead.\n' % (
+                found_other_test_framework_param,))
+
+        test_framework = 0
+
+    #Clear any exception that may be there so that clients don't see it.
+    #See: https://sourceforge.net/tracker/?func=detail&aid=3408057&group_id=85796&atid=577329
+    if hasattr(sys, 'exc_clear'):
+        sys.exc_clear()
+
+    if test_framework == 0:
+
+        pydev_runfiles.main(configuration)
+
     else:
-        opt, optarg = opt[:i], opt[i + 1:]
+        #We'll convert the parameters to what nose or py.test expects.
+        #The supported parameters are:
+        #runfiles.py  --config-file|-t|--tests <Test.test1,Test2>  dirs|files --nose-params xxx yyy zzz
+        #(all after --nose-params should be passed directly to nose)
 
-    has_arg, opt = long_has_args(opt, longopts)
-    if has_arg:
-        if optarg is None:
-            if not args:
-                raise GetoptError('option --%s requires argument' % opt, opt)
-            optarg, args = args[0], args[1:]
-    elif optarg:
-        raise GetoptError('option --%s must not have an argument' % opt, opt)
-    opts.append(('--' + opt, optarg or ''))
-    return opts, args
-
-# Return:
-#   has_arg?
-#   full option name
-def long_has_args(opt, longopts):
-    possibilities = [o for o in longopts if o.startswith(opt)]
-    if not possibilities:
-        raise GetoptError('option --%s not recognized' % opt, opt)
-    # Is there an exact match?
-    if opt in possibilities:
-        return False, opt
-    elif opt + '=' in possibilities:
-        return True, opt
-    # No exact match, so better be unique.
-    if len(possibilities) > 1:
-        # XXX since possibilities contains all valid continuations, might be
-        # nice to work them into the error msg
-        raise GetoptError('option --%s not a unique prefix' % opt, opt)
-    assert len(possibilities) == 1
-    unique_match = possibilities[0]
-    has_arg = unique_match.endswith('=')
-    if has_arg:
-        unique_match = unique_match[:-1]
-    return has_arg, unique_match
-
-def do_shorts(opts, optstring, shortopts, args):
-    while optstring != '':
-        opt, optstring = optstring[0], optstring[1:]
-        if short_has_arg(opt, shortopts):
-            if optstring == '':
-                if not args:
-                    raise GetoptError('option -%s requires argument' % opt,
-                                      opt)
-                optstring, args = args[0], args[1:]
-            optarg, optstring = optstring, ''
-        else:
-            optarg = ''
-        opts.append(('-' + opt, optarg))
-    return opts, args
-
-def short_has_arg(opt, shortopts):
-    for i in range(len(shortopts)):
-        if opt == shortopts[i] != ':':
-            return shortopts.startswith(':', i + 1)
-    raise GetoptError('option -%s not recognized' % opt, opt)
+        #In java:
+        #--tests = Constants.ATTR_UNITTEST_TESTS
+        #--config-file = Constants.ATTR_UNITTEST_CONFIGURATION_FILE
 
 
-#=======================================================================================================================
-# End getopt code
-#=======================================================================================================================
+        #The only thing actually handled here is the set of tests we want to run, which we
+        #translate into whatever the chosen test framework expects.
 
+        py_test_accept_filter = {}
+        files_to_tests = configuration.files_to_tests
 
+        if files_to_tests:
+            #Handling through the file contents (file where each line is a test)
+            files_or_dirs = []
+            for file, tests in files_to_tests.items():
+                if test_framework == NOSE_FRAMEWORK:
+                    for test in tests:
+                        files_or_dirs.append(file + ':' + test)
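+                        # e.g. (illustrative): '/proj/test_module.py:MyTest.test_one',
+                        # which is the file:TestClass.test_method addressing nose expects.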
 
+                elif test_framework == PY_TEST_FRAMEWORK:
+                    file = _NormFile(file)
+                    py_test_accept_filter[file] = tests
+                    files_or_dirs.append(file)
 
-
-
-
-
-
-
-#=======================================================================================================================
-# parse_cmdline
-#=======================================================================================================================
-def parse_cmdline():
-    """ parses command line and returns test directories, verbosity, test filter and test suites
-        usage: 
-            runfiles.py  -v|--verbosity <level>  -f|--filter <regex>  -t|--tests <Test.test1,Test2>  dirs|files
-    """
-    verbosity = 2
-    test_filter = None
-    tests = None
-
-    optlist, dirs = gnu_getopt(sys.argv[1:], "v:f:t:", ["verbosity=", "filter=", "tests="])
-    for opt, value in optlist:
-        if opt in ("-v", "--verbosity"):
-            verbosity = value
-
-        elif opt in ("-f", "--filter"):
-            test_filter = value.split(',')
-
-        elif opt in ("-t", "--tests"):
-            tests = value.split(',')
-
-    if type([]) != type(dirs):
-        dirs = [dirs]
-
-    ret_dirs = []
-    for d in dirs:
-        if '|' in d:
-            #paths may come from the ide separated by |
-            ret_dirs.extend(d.split('|'))
-        else:
-            ret_dirs.append(d)
-
-    return ret_dirs, int(verbosity), test_filter, tests
-
-
-#=======================================================================================================================
-# PydevTestRunner
-#=======================================================================================================================
-class PydevTestRunner:
-    """ finds and runs a file or directory of files as a unit test """
-
-    __py_extensions = ["*.py", "*.pyw"]
-    __exclude_files = ["__init__.*"]
-
-    def __init__(self, test_dir, test_filter=None, verbosity=2, tests=None):
-        self.test_dir = test_dir
-        self.__adjust_path()
-        self.test_filter = self.__setup_test_filter(test_filter)
-        self.verbosity = verbosity
-        self.tests = tests
-
-
-    def __adjust_path(self):
-        """ add the current file or directory to the python path """
-        path_to_append = None
-        for n in xrange(len(self.test_dir)):
-            dir_name = self.__unixify(self.test_dir[n])
-            if os.path.isdir(dir_name):
-                if not dir_name.endswith("/"):
-                    self.test_dir[n] = dir_name + "/"
-                path_to_append = os.path.normpath(dir_name)
-            elif os.path.isfile(dir_name):
-                path_to_append = os.path.dirname(dir_name)
-            else:
-                msg = ("unknown type. \n%s\nshould be file or a directory.\n" % (dir_name))
-                raise RuntimeError(msg)
-        if path_to_append is not None:
-            #Add it as the last one (so, first things are resolved against the default dirs and 
-            #if none resolves, then we try a relative import).
-            sys.path.append(path_to_append)
-        return
-
-    def __setup_test_filter(self, test_filter):
-        """ turn a filter string into a list of filter regexes """
-        if test_filter is None or len(test_filter) == 0:
-            return None
-        return [re.compile("test%s" % f) for f in test_filter]
-
-    def __is_valid_py_file(self, fname):
-        """ tests that a particular file contains the proper file extension 
-            and is not in the list of files to exclude """
-        is_valid_fname = 0
-        for invalid_fname in self.__class__.__exclude_files:
-            is_valid_fname += int(not fnmatch.fnmatch(fname, invalid_fname))
-        if_valid_ext = 0
-        for ext in self.__class__.__py_extensions:
-            if_valid_ext += int(fnmatch.fnmatch(fname, ext))
-        return is_valid_fname > 0 and if_valid_ext > 0
-
-    def __unixify(self, s):
-        """ stupid windows. converts the backslash to forwardslash for consistency """
-        return os.path.normpath(s).replace(os.sep, "/")
-
-    def __importify(self, s, dir=False):
-        """ turns directory separators into dots and removes the ".py*" extension 
-            so the string can be used as import statement """
-        if not dir:
-            dirname, fname = os.path.split(s)
-
-            if fname.count('.') > 1:
-                #if there's a file named xxx.xx.py, it is not a valid module, so, let's not load it...
-                return
-
-            imp_stmt_pieces = [dirname.replace("\\", "/").replace("/", "."), os.path.splitext(fname)[0]]
-
-            if len(imp_stmt_pieces[0]) == 0:
-                imp_stmt_pieces = imp_stmt_pieces[1:]
-
-            return ".".join(imp_stmt_pieces)
-
-        else: #handle dir
-            return s.replace("\\", "/").replace("/", ".")
-
-    def __add_files(self, pyfiles, root, files):
-        """ if files match, appends them to pyfiles. used by os.path.walk fcn """
-        for fname in files:
-            if self.__is_valid_py_file(fname):
-                name_without_base_dir = self.__unixify(os.path.join(root, fname))
-                pyfiles.append(name_without_base_dir)
-        return
-
-
-    def find_import_files(self):
-        """ return a list of files to import """
-        pyfiles = []
-
-        for base_dir in self.test_dir:
-            if os.path.isdir(base_dir):
-                if hasattr(os, 'walk'):
-                    for root, dirs, files in os.walk(base_dir):
-                        self.__add_files(pyfiles, root, files)
                 else:
-                    # jython2.1 is too old for os.walk!
-                    os.path.walk(base_dir, self.__add_files, pyfiles)
+                    raise AssertionError('Cannot handle test framework: %s at this point.' % (test_framework,))
 
-            elif os.path.isfile(base_dir):
-                pyfiles.append(base_dir)
+        else:
+            if configuration.tests:
+                #Tests passed (works together with the files_or_dirs)
+                files_or_dirs = []
+                for file in configuration.files_or_dirs:
+                    if test_framework == NOSE_FRAMEWORK:
+                        for t in configuration.tests:
+                            files_or_dirs.append(file + ':' + t)
 
-        return pyfiles
-
-    def __get_module_from_str(self, modname, print_exception):
-        """ Import the module in the given import path.
-            * Returns the "final" module, so importing "coilib40.subject.visu" 
-            returns the "visu" module, not the "coilib40" as returned by __import__ """
-        try:
-            mod = __import__(modname)
-            for part in modname.split('.')[1:]:
-                mod = getattr(mod, part)
-            return mod
-        except:
-            if print_exception:
-                import traceback;traceback.print_exc()
-                sys.stderr.write('ERROR: Module: %s could not be imported.\n' % (modname,))
-            return None
-
-    def find_modules_from_files(self, pyfiles):
-        """ returns a lisst of modules given a list of files """
-        #let's make sure that the paths we want are in the pythonpath...
-        imports = [self.__importify(s) for s in pyfiles]
-
-        system_paths = []
-        for s in sys.path:
-            system_paths.append(self.__importify(s, True))
-
-
-        ret = []
-        for imp in imports:
-            if imp is None:
-                continue #can happen if a file is not a valid module
-            choices = []
-            for s in system_paths:
-                if imp.startswith(s):
-                    add = imp[len(s) + 1:]
-                    if add:
-                        choices.append(add)
-                    #sys.stdout.write(' ' + add + ' ')
-
-            if not choices:
-                sys.stdout.write('PYTHONPATH not found for file: %s\n' % imp)
-            else:
-                for i, import_str in enumerate(choices):
-                    mod = self.__get_module_from_str(import_str, print_exception=i == len(choices) - 1)
-                    if mod is not None:
-                        ret.append(mod)
-                        break
-
-
-        return ret
-
-    def find_tests_from_modules(self, modules):
-        """ returns the unittests given a list of modules """
-        loader = unittest.TestLoader()
-
-        ret = []
-        if self.tests:
-            accepted_classes = {}
-            accepted_methods = {}
-
-            for t in self.tests:
-                splitted = t.split('.')
-                if len(splitted) == 1:
-                    accepted_classes[t] = t
-
-                elif len(splitted) == 2:
-                    accepted_methods[t] = t
-
-            #===========================================================================================================
-            # GetTestCaseNames
-            #===========================================================================================================
-            class GetTestCaseNames:
-                """Yes, we need a class for that (cannot use outer context on jython 2.1)"""
-
-                def __init__(self, accepted_classes, accepted_methods):
-                    self.accepted_classes = accepted_classes
-                    self.accepted_methods = accepted_methods
-
-                def __call__(self, testCaseClass):
-                    """Return a sorted sequence of method names found within testCaseClass"""
-                    testFnNames = []
-                    className = testCaseClass.__name__
-
-                    if DictContains(self.accepted_classes, className):
-                        for attrname in dir(testCaseClass):
-                            #If a class is chosen, we select all the 'test' methods'
-                            if attrname.startswith('test') and hasattr(getattr(testCaseClass, attrname), '__call__'):
-                                testFnNames.append(attrname)
+                    elif test_framework == PY_TEST_FRAMEWORK:
+                        file = _NormFile(file)
+                        py_test_accept_filter[file] = configuration.tests
+                        files_or_dirs.append(file)
 
                     else:
-                        for attrname in dir(testCaseClass):
-                            #If we have the class+method name, we must do a full check and have an exact match.
-                            if DictContains(self.accepted_methods, className + '.' + attrname):
-                                if hasattr(getattr(testCaseClass, attrname), '__call__'):
-                                    testFnNames.append(attrname)
+                        raise AssertionError('Cannot handle test framework: %s at this point.' % (test_framework,))
+            else:
+                #Only files or dirs passed (let it do the test-loading based on those paths)
+                files_or_dirs = configuration.files_or_dirs
 
-                    #sorted() is not available in jython 2.1
-                    testFnNames.sort()
-                    return testFnNames
+        argv = other_test_framework_params + files_or_dirs
 
 
-            loader.getTestCaseNames = GetTestCaseNames(accepted_classes, accepted_methods)
+        if test_framework == NOSE_FRAMEWORK:
+            #Nose usage: http://somethingaboutorange.com/mrl/projects/nose/0.11.2/usage.html
+            #show_stdout_option = ['-s']
+            #processes_option = ['--processes=2']
+            argv.insert(0, sys.argv[0])
+            if DEBUG:
+                sys.stdout.write('Final test framework args: %s\n' % (argv[1:],))
+
+            import pydev_runfiles_nose
+            PYDEV_NOSE_PLUGIN_SINGLETON = pydev_runfiles_nose.StartPydevNosePluginSingleton(configuration)
+            argv.append('--with-pydevplugin')
+            nose.run(argv=argv, addplugins=[PYDEV_NOSE_PLUGIN_SINGLETON])
+
+        elif test_framework == PY_TEST_FRAMEWORK:
+            if DEBUG:
+                sys.stdout.write('Final test framework args: %s\n' % (argv,))
+                sys.stdout.write('py_test_accept_filter: %s\n' % (py_test_accept_filter,))
+
+            import os
+
+            try:
+                xrange
+            except:
+                xrange = range
+
+            for i in xrange(len(argv)):
+                arg = argv[i]
+                #Workaround bug in py.test: if we pass the full path it ends up importing conftest
+                #more than once (so, always work with relative paths).
+                if os.path.isfile(arg) or os.path.isdir(arg):
+                    from pydev_imports import relpath
+                    arg = relpath(arg)
+                    argv[i] = arg
+
+            d = os.path.dirname(__file__)
+            if d not in sys.path:
+                sys.path.insert(0, d)
+
+            import pickle, zlib, base64
+
+            # Update environment PYTHONPATH so that it finds our plugin if using xdist.
+            os.environ['PYTHONPATH'] = os.pathsep.join(sys.path)
+
+            # Set what should be skipped in the plugin through an environment variable
+            s = base64.b64encode(zlib.compress(pickle.dumps(py_test_accept_filter)))
+            if pydevd_constants.IS_PY3K:
+                s = s.decode('ascii') # Must be str in py3.
+            os.environ['PYDEV_PYTEST_SKIP'] = s
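+            # (The plugin on the other side can recover the filter with the inverse chain,
+            # e.g.: pickle.loads(zlib.decompress(base64.b64decode(os.environ['PYDEV_PYTEST_SKIP']))).)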
+
+            # Identifies the main pid (i.e.: if it's not the main pid it has to connect back to the
+            # main pid to give xml-rpc notifications).
+            os.environ['PYDEV_MAIN_PID'] = str(os.getpid())
+            os.environ['PYDEV_PYTEST_SERVER'] = str(configuration.port)
+
+            argv.append('-p')
+            argv.append('pydev_runfiles_pytest2')
+            pytest.main(argv)
+
+        else:
+            raise AssertionError('Cannot handle test framework: %s at this point.' % (test_framework,))
 
 
-        ret.extend([loader.loadTestsFromModule(m) for m in modules])
-
-        return ret
-
-
-    def filter_tests(self, test_objs):
-        """ based on a filter name, only return those tests that have
-            the test case names that match """
-        test_suite = []
-        for test_obj in test_objs:
-
-            if isinstance(test_obj, unittest.TestSuite):
-                if test_obj._tests:
-                    test_obj._tests = self.filter_tests(test_obj._tests)
-                    if test_obj._tests:
-                        test_suite.append(test_obj)
-
-            elif isinstance(test_obj, unittest.TestCase):
-                test_cases = []
-                for tc in test_objs:
-                    try:
-                        testMethodName = tc._TestCase__testMethodName
-                    except AttributeError:
-                        #changed in python 2.5
-                        testMethodName = tc._testMethodName
-
-                    if self.__match(self.test_filter, testMethodName) and self.__match_tests(self.tests, tc, testMethodName):
-                        test_cases.append(tc)
-                return test_cases
-        return test_suite
-
-
-    def __match_tests(self, tests, test_case, test_method_name):
-        if not tests:
-            return 1
-
-        for t in tests:
-            class_and_method = t.split('.')
-            if len(class_and_method) == 1:
-                #only class name
-                if class_and_method[0] == test_case.__class__.__name__:
-                    return 1
-
-            elif len(class_and_method) == 2:
-                if class_and_method[0] == test_case.__class__.__name__ and class_and_method[1] == test_method_name:
-                    return 1
-
-        return 0
-
-
-
-
-    def __match(self, filter_list, name):
-        """ returns whether a test name matches the test filter """
-        if filter_list is None:
-            return 1
-        for f in filter_list:
-            if re.match(f, name):
-                return 1
-        return 0
-
-
-    def run_tests(self):
-        """ runs all tests """
-        sys.stdout.write("Finding files...\n")
-        files = self.find_import_files()
-        sys.stdout.write('%s %s\n' % (self.test_dir, '... done'))
-        sys.stdout.write("Importing test modules ... ")
-        modules = self.find_modules_from_files(files)
-        sys.stdout.write("done.\n")
-        all_tests = self.find_tests_from_modules(modules)
-        if self.test_filter or self.tests:
-
-            if self.test_filter:
-                sys.stdout.write('Test Filter: %s' % ([p.pattern for p in self.test_filter],))
-
-            if self.tests:
-                sys.stdout.write('Tests to run: %s' % (self.tests,))
-
-            all_tests = self.filter_tests(all_tests)
-
-        sys.stdout.write('\n')
-        runner = unittest.TextTestRunner(stream=sys.stdout, descriptions=1, verbosity=verbosity)
-        runner.run(unittest.TestSuite(all_tests))
-        return
-
-#=======================================================================================================================
-# main        
-#=======================================================================================================================
 if __name__ == '__main__':
-    dirs, verbosity, test_filter, tests = parse_cmdline()
-    PydevTestRunner(dirs, test_filter, verbosity, tests).run_tests()
+    try:
+        main()
+    finally:
+        try:
+            #The server is not a daemon thread, so, we have to ask for it to be killed!
+            import pydev_runfiles_xml_rpc
+            pydev_runfiles_xml_rpc.forceServerKill()
+        except:
+            pass #Ignore any errors here
+
+    import sys
+    import threading
+    if hasattr(sys, '_current_frames') and hasattr(threading, 'enumerate'):
+        import time
+        import traceback
+
+        class DumpThreads(threading.Thread):
+            def run(self):
+                time.sleep(10)
+
+                thread_id_to_name = {}
+                try:
+                    for t in threading.enumerate():
+                        thread_id_to_name[t.ident] = '%s  (daemon: %s)' % (t.name, t.daemon)
+                except:
+                    pass
+
+                stack_trace = [
+                    '===============================================================================',
+                    'pydev pyunit runner: Threads still found running after tests finished',
+                    '================================= Thread Dump =================================']
+
+                for thread_id, stack in sys._current_frames().items():
+                    stack_trace.append('\n-------------------------------------------------------------------------------')
+                    stack_trace.append(" Thread %s" % thread_id_to_name.get(thread_id, thread_id))
+                    stack_trace.append('')
+
+                    if 'self' in stack.f_locals:
+                        sys.stderr.write(str(stack.f_locals['self'])+'\n')
+
+                    for filename, lineno, name, line in traceback.extract_stack(stack):
+                        stack_trace.append(' File "%s", line %d, in %s' % (filename, lineno, name))
+                        if line:
+                            stack_trace.append("   %s" % (line.strip()))
+                stack_trace.append('\n=============================== END Thread Dump ===============================')
+                sys.stderr.write('\n'.join(stack_trace))
+
+
+        dump_current_frames_thread = DumpThreads()
+        dump_current_frames_thread.setDaemon(True) # Daemon so that this thread doesn't keep the process alive after the tests finish!
+        dump_current_frames_thread.start()
diff --git a/python/helpers/pydev/stubs/_django_manager_body.py b/python/helpers/pydev/stubs/_django_manager_body.py
new file mode 100644
index 0000000..2bf4706
--- /dev/null
+++ b/python/helpers/pydev/stubs/_django_manager_body.py
@@ -0,0 +1,414 @@
+# This is a dummy for code-completion purposes.
+
+def __unicode__(self):
+    """
+    Return "app_label.model_label.manager_name". 
+    """
+
+def _copy_to_model(self, model):
+    """
+    Makes a copy of the manager and assigns it to 'model', which should be
+    a child of the existing model (used when inheriting a manager from an
+    abstract base class).
+    """
+
+
+def _db(self):
+    """
+
+    """
+
+
+def _get_queryset_methods(cls, queryset_class):
+    """
+
+    """
+
+
+def _hints(self):
+    """
+    dict() -> new empty dictionary
+    dict(mapping) -> new dictionary initialized from a mapping object's
+        (key, value) pairs
+    dict(iterable) -> new dictionary initialized as if via:
+        d = {}
+        for k, v in iterable:
+            d[k] = v
+    dict(**kwargs) -> new dictionary initialized with the name=value pairs
+        in the keyword argument list.  For example:  dict(one=1, two=2)
+    """
+
+
+def _inherited(self):
+    """
+
+    """
+
+
+def _insert(self, *args, **kwargs):
+    """
+    Inserts a new record for the given model. This provides an interface to
+    the InsertQuery class and is how Model.save() is implemented.
+    """
+
+
+def _queryset_class(self):
+    """
+    Represents a lazy database lookup for a set of objects.
+    """
+
+
+def _set_creation_counter(self):
+    """
+    Sets the creation counter value for this instance and increments the
+    class-level copy.
+    """
+
+
+def _update(self, *args, **kwargs):
+    """
+    A version of update that accepts field objects instead of field names.
+    Used primarily for model saving and not intended for use by general
+    code (it requires too much poking around at model internals to be
+    useful at that level).
+    """
+
+
+def aggregate(self, *args, **kwargs):
+    """
+    Returns a dictionary containing the calculations (aggregation)
+    over the current queryset
+    
+    If args is present the expression is passed as a kwarg using
+    the Aggregate object's default alias.
+    """
+
+
+def all(self):
+    """
+    @rtype: django.db.models.query.QuerySet
+    """
+
+
+def annotate(self, *args, **kwargs):
+    """
+    Return a query set in which the returned objects have been annotated
+    with data aggregated from related fields.
+    """
+
+
+def bulk_create(self, *args, **kwargs):
+    """
+    Inserts each of the instances into the database. This does *not* call
+    save() on each of the instances, does not send any pre/post save
+    signals, and does not set the primary key attribute if it is an
+    autoincrement field.
+    """
+
+
+def check(self, **kwargs):
+    """
+
+    """
+
+
+def complex_filter(self, *args, **kwargs):
+    """
+    Returns a new QuerySet instance with filter_obj added to the filters.
+    
+    filter_obj can be a Q object (or anything with an add_to_query()
+    method) or a dictionary of keyword lookup arguments.
+    
+    This exists to support framework features such as 'limit_choices_to',
+    and usually it will be more natural to use other methods.
+    
+    @rtype: django.db.models.query.QuerySet
+    """
+
+
+def contribute_to_class(self, model, name):
+    """
+
+    """
+
+
+def count(self, *args, **kwargs):
+    """
+    Performs a SELECT COUNT() and returns the number of records as an
+    integer.
+    
+    If the QuerySet is already fully cached this simply returns the length
+    of the cached results set to avoid multiple SELECT COUNT(*) calls.
+    """
+
+
+def create(self, *args, **kwargs):
+    """
+    Creates a new object with the given kwargs, saving it to the database
+    and returning the created object.
+    """
+
+
+def creation_counter(self):
+    """
+
+    """
+
+
+def dates(self, *args, **kwargs):
+    """
+    Returns a list of date objects representing all available dates for
+    the given field_name, scoped to 'kind'.
+    """
+
+
+def datetimes(self, *args, **kwargs):
+    """
+    Returns a list of datetime objects representing all available
+    datetimes for the given field_name, scoped to 'kind'.
+    """
+
+
+def db(self):
+    """
+
+    """
+
+
+def db_manager(self, using=None, hints=None):
+    """
+
+    """
+
+
+def defer(self, *args, **kwargs):
+    """
+    Defers the loading of data for certain fields until they are accessed.
+    The set of fields to defer is added to any existing set of deferred
+    fields. The only exception to this is if None is passed in as the only
+    parameter, in which case all deferrals are removed (None acts as a
+    reset option).
+    """
+
+
+def distinct(self, *args, **kwargs):
+    """
+    Returns a new QuerySet instance that will select only distinct results.
+    
+    @rtype: django.db.models.query.QuerySet
+    """
+
+
+def earliest(self, *args, **kwargs):
+    """
+
+    """
+
+
+def exclude(self, *args, **kwargs):
+    """
+    Returns a new QuerySet instance with NOT (args) ANDed to the existing
+    set.
+    
+    @rtype: django.db.models.query.QuerySet
+    """
+
+
+def exists(self, *args, **kwargs):
+    """
+
+    """
+
+
+def extra(self, *args, **kwargs):
+    """
+    Adds extra SQL fragments to the query.
+    """
+
+
+def filter(self, *args, **kwargs):
+    """
+    Returns a new QuerySet instance with the args ANDed to the existing
+    set.
+    
+    @rtype: django.db.models.query.QuerySet
+    """
+
+
+def first(self, *args, **kwargs):
+    """
+    Returns the first object of a query, returns None if no match is found.
+    """
+
+
+def from_queryset(cls, queryset_class, class_name=None):
+    """
+
+    """
+
+
+def get(self, *args, **kwargs):
+    """
+    Performs the query and returns a single object matching the given
+    keyword arguments.
+    """
+
+
+def get_or_create(self, *args, **kwargs):
+    """
+    Looks up an object with the given kwargs, creating one if necessary.
+    Returns a tuple of (object, created), where created is a boolean
+    specifying whether an object was created.
+    """
+
+
+def get_queryset(self):
+    """
+    Returns a new QuerySet object.  Subclasses can override this method to
+    easily customize the behavior of the Manager.
+    
+    @rtype: django.db.models.query.QuerySet
+    """
+
+
+def in_bulk(self, *args, **kwargs):
+    """
+    Returns a dictionary mapping each of the given IDs to the object with
+    that ID.
+    """
+
+
+def iterator(self, *args, **kwargs):
+    """
+    An iterator over the results from applying this QuerySet to the
+    database.
+    """
+
+
+def last(self, *args, **kwargs):
+    """
+    Returns the last object of a query, returns None if no match is found.
+    """
+
+
+def latest(self, *args, **kwargs):
+    """
+
+    """
+
+
+def model(self):
+    """
+    MyModel(id)
+    """
+
+
+def none(self, *args, **kwargs):
+    """
+    Returns an empty QuerySet.
+    
+    @rtype: django.db.models.query.QuerySet
+    """
+
+
+def only(self, *args, **kwargs):
+    """
+    Essentially, the opposite of defer. Only the fields passed into this
+    method and that are not already specified as deferred are loaded
+    immediately when the queryset is evaluated.
+    """
+
+
+def order_by(self, *args, **kwargs):
+    """
+    Returns a new QuerySet instance with the ordering changed.
+    
+    @rtype: django.db.models.query.QuerySet
+    """
+
+
+def prefetch_related(self, *args, **kwargs):
+    """
+    Returns a new QuerySet instance that will prefetch the specified
+    Many-To-One and Many-To-Many related objects when the QuerySet is
+    evaluated.
+    
+    When prefetch_related() is called more than once, the list of lookups to
+    prefetch is appended to. If prefetch_related(None) is called, the list
+    is cleared.
+    
+    @rtype: django.db.models.query.QuerySet
+    """
+
+
+def raw(self, *args, **kwargs):
+    """
+
+    """
+
+
+def reverse(self, *args, **kwargs):
+    """
+    Reverses the ordering of the QuerySet.
+    
+    @rtype: django.db.models.query.QuerySet
+    """
+
+
+def select_for_update(self, *args, **kwargs):
+    """
+    Returns a new QuerySet instance that will select objects with a
+    FOR UPDATE lock.
+    
+    @rtype: django.db.models.query.QuerySet
+    """
+
+
+def select_related(self, *args, **kwargs):
+    """
+    Returns a new QuerySet instance that will select related objects.
+    
+    If fields are specified, they must be ForeignKey fields and only those
+    related objects are included in the selection.
+    
+    If select_related(None) is called, the list is cleared.
+    
+    @rtype: django.db.models.query.QuerySet
+    """
+
+
+def update(self, *args, **kwargs):
+    """
+    Updates all elements in the current QuerySet, setting all the given
+    fields to the appropriate values.
+    """
+
+
+def update_or_create(self, *args, **kwargs):
+    """
+    Looks up an object with the given kwargs, updating one with defaults
+    if it exists, otherwise creates a new one.
+    Returns a tuple (object, created), where created is a boolean
+    specifying whether an object was created.
+    """
+
+
+def using(self, *args, **kwargs):
+    """
+    Selects which database this QuerySet should execute its query against.
+    
+    @rtype: django.db.models.query.QuerySet
+    """
+
+
+def values(self, *args, **kwargs):
+    """
+
+    """
+
+
+def values_list(self, *args, **kwargs):
+    """
+
+    """
+
diff --git a/python/helpers/pydev/stubs/_get_tips.py b/python/helpers/pydev/stubs/_get_tips.py
new file mode 100644
index 0000000..b98e1c5
--- /dev/null
+++ b/python/helpers/pydev/stubs/_get_tips.py
@@ -0,0 +1,280 @@
+import os.path
+import inspect
+import sys
+
+# completion types.
+TYPE_IMPORT = '0'
+TYPE_CLASS = '1'
+TYPE_FUNCTION = '2'
+TYPE_ATTR = '3'
+TYPE_BUILTIN = '4'
+TYPE_PARAM = '5'
+
+def _imp(name, log=None):
+    try:
+        return __import__(name)
+    except:
+        if '.' in name:
+            sub = name[0:name.rfind('.')]
+
+            if log is not None:
+                log.AddContent('Unable to import', name, 'trying with', sub)
+                # log.AddContent('PYTHONPATH:')
+                # log.AddContent('\n'.join(sorted(sys.path)))
+                log.AddException()
+
+            return _imp(sub, log)
+        else:
+            s = 'Unable to import module: %s - sys.path: %s' % (str(name), sys.path)
+            if log is not None:
+                log.AddContent(s)
+                log.AddException()
+
+            raise ImportError(s)
+
+
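+# Note: __import__('a.b.c') returns the top-level package 'a', which is why Find() below
+# walks the remaining dotted components with getattr to reach the actual object.
+
+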
+IS_IPY = False
+if sys.platform == 'cli':
+    IS_IPY = True
+    _old_imp = _imp
+    def _imp(name, log=None):
+        # We must add a reference in clr for .Net
+        import clr  # @UnresolvedImport
+        initial_name = name
+        while '.' in name:
+            try:
+                clr.AddReference(name)
+                break  # If it worked, that's OK.
+            except:
+                name = name[0:name.rfind('.')]
+        else:
+            try:
+                clr.AddReference(name)
+            except:
+                pass  # That's OK (not a .NET module).
+
+        return _old_imp(initial_name, log)
+
+
+
+def GetFile(mod):
+    f = None
+    try:
+        f = inspect.getsourcefile(mod) or inspect.getfile(mod)
+    except:
+        if hasattr(mod, '__file__'):
+            f = mod.__file__
+            if f[-4:].lower() in ['.pyc', '.pyo']:
+                filename = f[:-4] + '.py'
+                if os.path.exists(filename):
+                    f = filename
+
+    return f
+
+def Find(name, log=None):
+    f = None
+
+    mod = _imp(name, log)
+    parent = mod
+    foundAs = ''
+
+    if inspect.ismodule(mod):
+        f = GetFile(mod)
+
+    components = name.split('.')
+
+    old_comp = None
+    for comp in components[1:]:
+        try:
+            # this happens in the following case:
+            # we have mx.DateTime.mxDateTime.mxDateTime.pyd
+            # but after importing it, mx.DateTime.mxDateTime shadows access to mxDateTime.pyd
+            mod = getattr(mod, comp)
+        except AttributeError:
+            if old_comp != comp:
+                raise
+
+        if inspect.ismodule(mod):
+            f = GetFile(mod)
+        else:
+            if len(foundAs) > 0:
+                foundAs = foundAs + '.'
+            foundAs = foundAs + comp
+
+        old_comp = comp
+
+    return f, mod, parent, foundAs
+
+
+def GenerateTip(data, log=None):
+    data = data.replace('\n', '')
+    if data.endswith('.'):
+        data = data.rstrip('.')
+
+    f, mod, parent, foundAs = Find(data, log)
+    # print_ >> open('temp.txt', 'w'), f
+    tips = GenerateImportsTipForModule(mod)
+    return f, tips
+
+
+def CheckChar(c):
+    if c == '-' or c == '.':
+        return '_'
+    return c
+
+def GenerateImportsTipForModule(obj_to_complete, dirComps=None, getattr=getattr, filter=lambda name:True):
+    '''
+        @param obj_to_complete: the object from where we should get the completions
+        @param dirComps: if passed, we should not 'dir' the object and should just iterate those passed as a parameter
+        @param getattr: the way to get a given object from the obj_to_complete (used for the completer)
+        @param filter: a callable that receives the name and decides if it should be appended or not to the results
+        @return: list of tuples, so that each tuple represents a completion with:
+            name, doc, args, type (from the TYPE_* constants)
+    '''
+    ret = []
+
+    if dirComps is None:
+        dirComps = dir(obj_to_complete)
+        if hasattr(obj_to_complete, '__dict__'):
+            dirComps.append('__dict__')
+        if hasattr(obj_to_complete, '__class__'):
+            dirComps.append('__class__')
+
+    getCompleteInfo = True
+
+    if len(dirComps) > 1000:
+        # ok, we don't want to let our users wait forever...
+        # no complete info for you...
+
+        getCompleteInfo = False
+
+    dontGetDocsOn = (float, int, str, tuple, list)
+    for d in dirComps:
+
+        if d is None:
+            continue
+
+        if not filter(d):
+            continue
+
+        args = ''
+
+        try:
+            obj = getattr(obj_to_complete, d)
+        except:  # just ignore and get it without additional info
+            ret.append((d, '', args, TYPE_BUILTIN))
+        else:
+
+            if getCompleteInfo:
+                retType = TYPE_BUILTIN
+
+                # check if we have to get docs
+                getDoc = True
+                for class_ in dontGetDocsOn:
+
+                    if isinstance(obj, class_):
+                        getDoc = False
+                        break
+
+                doc = ''
+                if getDoc:
+                    # (docs are skipped for the basic types filtered above: there are too many such
+                    # constants and passing all that through sockets takes quite some time)
+                    try:
+                        doc = inspect.getdoc(obj)
+                        if doc is None:
+                            doc = ''
+                    except:  # may happen on jython when checking java classes (so, just ignore it)
+                        doc = ''
+
+
+                if inspect.ismethod(obj) or inspect.isbuiltin(obj) or inspect.isfunction(obj) or inspect.isroutine(obj):
+                    try:
+                        args, vargs, kwargs, defaults = inspect.getargspec(obj)
+                    except:
+                        args, vargs, kwargs, defaults = (('self',), None, None, None)
+                    if defaults is not None:
+                        start_defaults_at = len(args) - len(defaults)
+
+
+                    r = ''
+                    for i, a in enumerate(args):
+
+                        if len(r) > 0:
+                            r = r + ', '
+
+                        r = r + str(a)
+
+                        if defaults is not None and i >= start_defaults_at:
+                            default = defaults[i - start_defaults_at]
+                            r += '=' + str(default)
+
+
+                    others = ''
+                    if vargs:
+                        others += '*' + vargs
+
+                    if kwargs:
+                        if others:
+                            others += ', '
+                        others += '**' + kwargs
+
+                    if others:
+                        r += ', '
+
+
+                    args = '(%s%s)' % (r, others)
+                    retType = TYPE_FUNCTION
+
+                elif inspect.isclass(obj):
+                    retType = TYPE_CLASS
+
+                elif inspect.ismodule(obj):
+                    retType = TYPE_IMPORT
+
+                else:
+                    retType = TYPE_ATTR
+
+
+                # add token and doc to return - assure only strings.
+                ret.append((d, doc, args, retType))
+
+
+            else:  # getCompleteInfo == False
+                if inspect.ismethod(obj) or inspect.isbuiltin(obj) or inspect.isfunction(obj) or inspect.isroutine(obj):
+                    retType = TYPE_FUNCTION
+
+                elif inspect.isclass(obj):
+                    retType = TYPE_CLASS
+
+                elif inspect.ismodule(obj):
+                    retType = TYPE_IMPORT
+
+                else:
+                    retType = TYPE_ATTR
+                # ok, no complete info, let's try to do this as fast and clean as possible
+                # so, no docs for this kind of information, only the signatures
+                ret.append((d, '', str(args), retType))
+
+    return ret
+
+
+
+
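+# A hedged usage sketch (not used elsewhere in pydev): list the completion names reported
+# for an object or module. Each entry returned by GenerateImportsTipForModule is a
+# (name, doc, args, type) tuple, where type is one of the TYPE_* constants above.
+def _demo_completion_names(obj):
+    return [name for name, doc, args, comp_type in GenerateImportsTipForModule(obj)]
+
+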
+if __name__ == '__main__':
+    # To use when we have some object: i.e.: obj_to_complete=MyModel.objects
+    temp = '''
+def %(method_name)s%(args)s:
+    """
+%(doc)s
+    """
+'''
+
+    for entry in GenerateImportsTipForModule(obj_to_complete):
+        import textwrap
+        doc = textwrap.dedent(entry[1])
+        lines = []
+        for line in doc.splitlines():
+            lines.append('    ' + line)
+        doc = '\n'.join(lines)
+        print temp % dict(method_name=entry[0], args=entry[2] or '(self)', doc=doc)
diff --git a/python/helpers/pydev/test_debug.py b/python/helpers/pydev/test_debug.py
index bc55de1..2196ca6 100644
--- a/python/helpers/pydev/test_debug.py
+++ b/python/helpers/pydev/test_debug.py
@@ -3,7 +3,7 @@
 import unittest
 import os
 
-test_data_path = os.path.abspath(os.path.join(os.path.dirname(os.path.realpath(__file__)), '..', '..', '..', '..', 'python', 'testData', 'debug'))
+test_data_path = os.path.abspath(os.path.join(os.path.dirname(os.path.realpath(__file__)), '..', '..', 'testData', 'debug'))
 
 class PyDevTestCase(unittest.TestCase):
     def testZipFileExits(self):
diff --git a/python/helpers/pydev/test_pydevd_reload/test_pydevd_reload.py b/python/helpers/pydev/test_pydevd_reload/test_pydevd_reload.py
new file mode 100644
index 0000000..062ead2
--- /dev/null
+++ b/python/helpers/pydev/test_pydevd_reload/test_pydevd_reload.py
@@ -0,0 +1,516 @@
+import os  # @NoMove
+import sys  # @NoMove
+sys.path.insert(0, os.path.realpath(os.path.abspath('..')))
+
+import pydevd_reload
+import tempfile
+import unittest
+
+
+SAMPLE_CODE = """
+class C:
+    def foo(self):
+        return 0
+
+    @classmethod
+    def bar(cls):
+        return (0, 0)
+
+    @staticmethod
+    def stomp():
+        return (0, 0, 0)
+
+    def unchanged(self):
+        return 'unchanged'
+"""
+
+
+
+class Test(unittest.TestCase):
+
+
+    def setUp(self):
+        unittest.TestCase.setUp(self)
+        self.tempdir = None
+        self.save_path = None
+        self.tempdir = tempfile.mkdtemp()
+        self.save_path = list(sys.path)
+        sys.path.append(self.tempdir)
+        try:
+            del sys.modules['x']
+        except:
+            pass
+
+
+    def tearDown(self):
+        unittest.TestCase.tearDown(self)
+        sys.path = self.save_path
+        if self.tempdir is not None:
+            shutil.rmtree(self.tempdir, ignore_errors=True)  #don't leak the temporary module dir
+        try:
+            del sys.modules['x']
+        except:
+            pass
+
+    def make_mod(self, name="x", repl=None, subst=None, sample=SAMPLE_CODE):
+        fn = os.path.join(self.tempdir, name + ".py")
+        f = open(fn, "w")
+        if repl is not None and subst is not None:
+            sample = sample.replace(repl, subst)
+        try:
+            f.write(sample)
+        finally:
+            f.close()
+
+
+    def test_pydevd_reload(self):
+
+        self.make_mod()
+        import x
+
+        C = x.C
+        COut = C
+        Cfoo = C.foo
+        Cbar = C.bar
+        Cstomp = C.stomp
+
+        def check2(expected):
+            C = x.C
+            Cfoo = C.foo
+            Cbar = C.bar
+            Cstomp = C.stomp
+            b = C()
+            bfoo = b.foo
+            self.assertEqual(expected, b.foo())
+            self.assertEqual(expected, bfoo())
+            self.assertEqual(expected, Cfoo(b))
+
+        def check(expected):
+            b = COut()
+            bfoo = b.foo
+            self.assertEqual(expected, b.foo())
+            self.assertEqual(expected, bfoo())
+            self.assertEqual(expected, Cfoo(b))
+            self.assertEqual((expected, expected), Cbar())
+            self.assertEqual((expected, expected, expected), Cstomp())
+            check2(expected)
+
+        check(0)
+
+        # modify mod and reload
+        count = 0
+        while count < 1:
+            count += 1
+            self.make_mod(repl="0", subst=str(count))
+            pydevd_reload.xreload(x)
+            check(count)
+
+
+    def test_pydevd_reload2(self):
+
+        self.make_mod()
+        import x
+
+        c = x.C()
+        cfoo = c.foo
+        self.assertEqual(0, c.foo())
+        self.assertEqual(0, cfoo())
+
+        self.make_mod(repl="0", subst='1')
+        pydevd_reload.xreload(x)
+        self.assertEqual(1, c.foo())
+        self.assertEqual(1, cfoo())
+
+    def test_pydevd_reload3(self):
+        class F:
+            def m1(self):
+                return 1
+        class G:
+            def m1(self):
+                return 2
+
+        self.assertEqual(F().m1(), 1)
+        pydevd_reload.Reload(None)._update(None, None, F, G)
+        self.assertEqual(F().m1(), 2)
+
+
+    def test_pydevd_reload4(self):
+        class F:
+            pass
+        F.m1 = lambda a:None
+        class G:
+            pass
+        G.m1 = lambda a:10
+
+        self.assertEqual(F().m1(), None)
+        pydevd_reload.Reload(None)._update(None, None, F, G)
+        self.assertEqual(F().m1(), 10)
+
+
+
+    def test_if_code_obj_equals(self):
+        class F:
+            def m1(self):
+                return 1
+        class G:
+            def m1(self):
+                return 1
+        class H:
+            def m1(self):
+                return 2
+
+        if hasattr(F.m1, 'func_code'):
+            self.assertTrue(pydevd_reload.code_objects_equal(F.m1.func_code, G.m1.func_code))
+            self.assertFalse(pydevd_reload.code_objects_equal(F.m1.func_code, H.m1.func_code))
+        else:
+            self.assertTrue(pydevd_reload.code_objects_equal(F.m1.__code__, G.m1.__code__))
+            self.assertFalse(pydevd_reload.code_objects_equal(F.m1.__code__, H.m1.__code__))
+
+
+
+    def test_metaclass(self):
+
+        class Meta(type):
+            def __init__(cls, name, bases, attrs):
+                super(Meta, cls).__init__(name, bases, attrs)
+
+        class F:
+            __metaclass__ = Meta
+
+            def m1(self):
+                return 1
+
+
+        class G:
+            __metaclass__ = Meta
+
+            def m1(self):
+                return 2
+
+        self.assertEqual(F().m1(), 1)
+        pydevd_reload.Reload(None)._update(None, None, F, G)
+        self.assertEqual(F().m1(), 2)
+
+
+
+    def test_change_hierarchy(self):
+
+        class F(object):
+
+            def m1(self):
+                return 1
+
+
+        class B(object):
+            def super_call(self):
+                return 2
+
+        class G(B):
+
+            def m1(self):
+                return self.super_call()
+
+        self.assertEqual(F().m1(), 1)
+        old = pydevd_reload.notify_error
+        self._called = False
+        def on_error(*args):
+            self._called = True
+        try:
+            pydevd_reload.notify_error = on_error
+            pydevd_reload.Reload(None)._update(None, None, F, G)
+            self.assertTrue(self._called)
+        finally:
+            pydevd_reload.notify_error = old
+
+
+    def test_change_hierarchy_old_style(self):
+
+        class F:
+
+            def m1(self):
+                return 1
+
+
+        class B:
+            def super_call(self):
+                return 2
+
+        class G(B):
+
+            def m1(self):
+                return self.super_call()
+
+
+        self.assertEqual(F().m1(), 1)
+        old = pydevd_reload.notify_error
+        self._called = False
+        def on_error(*args):
+            self._called = True
+        try:
+            pydevd_reload.notify_error = on_error
+            pydevd_reload.Reload(None)._update(None, None, F, G)
+            self.assertTrue(self._called)
+        finally:
+            pydevd_reload.notify_error = old
+
+
+    def test_create_class(self):
+        SAMPLE_CODE1 = """
+class C:
+    def foo(self):
+        return 0
+"""
+        # Creating a new class and using it from old class
+        SAMPLE_CODE2 = """
+class B:
+    pass
+
+class C:
+    def foo(self):
+        return B
+"""
+
+        self.make_mod(sample=SAMPLE_CODE1)
+        import x
+        foo = x.C().foo
+        self.assertEqual(foo(), 0)
+        self.make_mod(sample=SAMPLE_CODE2)
+        pydevd_reload.xreload(x)
+        self.assertEqual(foo().__name__, 'B')
+
+    def test_create_class2(self):
+        SAMPLE_CODE1 = """
+class C(object):
+    def foo(self):
+        return 0
+"""
+        # Creating a new class and using it from old class
+        SAMPLE_CODE2 = """
+class B(object):
+    pass
+
+class C(object):
+    def foo(self):
+        return B
+"""
+
+        self.make_mod(sample=SAMPLE_CODE1)
+        import x
+        foo = x.C().foo
+        self.assertEqual(foo(), 0)
+        self.make_mod(sample=SAMPLE_CODE2)
+        pydevd_reload.xreload(x)
+        self.assertEqual(foo().__name__, 'B')
+
+    def test_parent_function(self):
+        SAMPLE_CODE1 = """
+class B(object):
+    def foo(self):
+        return 0
+
+class C(B):
+    def call(self):
+        return self.foo()
+"""
+        # Creating a new class and using it from old class
+        SAMPLE_CODE2 = """
+class B(object):
+    def foo(self):
+        return 0
+    def bar(self):
+        return 'bar'
+
+class C(B):
+    def call(self):
+        return self.bar()
+"""
+
+        self.make_mod(sample=SAMPLE_CODE1)
+        import x
+        call = x.C().call
+        self.assertEqual(call(), 0)
+        self.make_mod(sample=SAMPLE_CODE2)
+        pydevd_reload.xreload(x)
+        self.assertEqual(call(), 'bar')
+
+
+    def test_update_constant(self):
+        SAMPLE_CODE1 = """
+CONSTANT = 1
+
+class B(object):
+    def foo(self):
+        return CONSTANT
+"""
+        SAMPLE_CODE2 = """
+CONSTANT = 2
+
+class B(object):
+    def foo(self):
+        return CONSTANT
+"""
+
+        self.make_mod(sample=SAMPLE_CODE1)
+        import x
+        foo = x.B().foo
+        self.assertEqual(foo(), 1)
+        self.make_mod(sample=SAMPLE_CODE2)
+        pydevd_reload.xreload(x)
+        self.assertEqual(foo(), 1) #Just making it explicit we don't reload constants.
+
+
+    def test_update_constant_with_custom_code(self):
+        SAMPLE_CODE1 = """
+CONSTANT = 1
+
+class B(object):
+    def foo(self):
+        return CONSTANT
+"""
+        SAMPLE_CODE2 = """
+CONSTANT = 2
+
+def __xreload_old_new__(namespace, name, old, new):
+    if name == 'CONSTANT':
+        namespace[name] = new
+
+class B(object):
+    def foo(self):
+        return CONSTANT
+"""
+
+        self.make_mod(sample=SAMPLE_CODE1)
+        import x
+        foo = x.B().foo
+        self.assertEqual(foo(), 1)
+        self.make_mod(sample=SAMPLE_CODE2)
+        pydevd_reload.xreload(x)
+        self.assertEqual(foo(), 2) #Actually updated it now!
+
+
+    def test_reload_custom_code_after_changes(self):
+        SAMPLE_CODE1 = """
+CONSTANT = 1
+
+class B(object):
+    def foo(self):
+        return CONSTANT
+"""
+        SAMPLE_CODE2 = """
+CONSTANT = 1
+
+def __xreload_after_reload_update__(namespace):
+    namespace['CONSTANT'] = 2
+
+class B(object):
+    def foo(self):
+        return CONSTANT
+"""
+
+        self.make_mod(sample=SAMPLE_CODE1)
+        import x
+        foo = x.B().foo
+        self.assertEqual(foo(), 1)
+        self.make_mod(sample=SAMPLE_CODE2)
+        pydevd_reload.xreload(x)
+        self.assertEqual(foo(), 2) #Actually updated it now!
+
+
+    def test_reload_custom_code_after_changes_in_class(self):
+        SAMPLE_CODE1 = """
+
+class B(object):
+    CONSTANT = 1
+
+    def foo(self):
+        return self.CONSTANT
+"""
+        SAMPLE_CODE2 = """
+
+
+class B(object):
+    CONSTANT = 1
+
+    @classmethod
+    def __xreload_after_reload_update__(cls):
+        cls.CONSTANT = 2
+
+    def foo(self):
+        return self.CONSTANT
+"""
+
+        self.make_mod(sample=SAMPLE_CODE1)
+        import x
+        foo = x.B().foo
+        self.assertEqual(foo(), 1)
+        self.make_mod(sample=SAMPLE_CODE2)
+        pydevd_reload.xreload(x)
+        self.assertEqual(foo(), 2) #Actually updated it now!
+
+
+    def test_update_constant_with_custom_code(self):
+        SAMPLE_CODE1 = """
+
+class B(object):
+    CONSTANT = 1
+
+    def foo(self):
+        return self.CONSTANT
+"""
+        SAMPLE_CODE2 = """
+
+
+class B(object):
+
+    CONSTANT = 2
+
+    def __xreload_old_new__(cls, name, old, new):
+        if name == 'CONSTANT':
+            cls.CONSTANT = new
+    __xreload_old_new__ = classmethod(__xreload_old_new__)
+
+    def foo(self):
+        return self.CONSTANT
+"""
+
+        self.make_mod(sample=SAMPLE_CODE1)
+        import x
+        foo = x.B().foo
+        self.assertEqual(foo(), 1)
+        self.make_mod(sample=SAMPLE_CODE2)
+        pydevd_reload.xreload(x)
+        self.assertEqual(foo(), 2) #Actually updated it now!
+
+
+    def test_update_with_slots(self):
+        SAMPLE_CODE1 = """
+class B(object):
+
+    __slots__ = ['bar']
+
+"""
+        SAMPLE_CODE2 = """
+class B(object):
+
+    __slots__ = ['bar', 'foo']
+
+    def m1(self):
+        self.bar = 10
+        return 1
+
+"""
+
+        self.make_mod(sample=SAMPLE_CODE1)
+        import x
+        B = x.B
+        self.make_mod(sample=SAMPLE_CODE2)
+        pydevd_reload.xreload(x)
+        b = B()
+        self.assertEqual(1, b.m1())
+        self.assertEqual(10, b.bar)
+        self.assertRaises(Exception, setattr, b, 'foo', 20) #__slots__ can't be updated
+
+
+
+
+if __name__ == "__main__":
+#     import sys;sys.argv = ['', 'Test.test_reload_custom_code_after_changes_in_class']
+    unittest.main()
diff --git a/python/helpers/pydev/tests/__not_in_default_pythonpath.txt b/python/helpers/pydev/tests/__not_in_default_pythonpath.txt
new file mode 100644
index 0000000..29cdc5b
--- /dev/null
+++ b/python/helpers/pydev/tests/__not_in_default_pythonpath.txt
@@ -0,0 +1 @@
+(no __init__.py file)
\ No newline at end of file
diff --git a/python/helpers/pydev/tests/check_pydevconsole.py b/python/helpers/pydev/tests/check_pydevconsole.py
new file mode 100644
index 0000000..7d1b7ee
--- /dev/null
+++ b/python/helpers/pydev/tests/check_pydevconsole.py
@@ -0,0 +1,105 @@
+import sys
+import os
+
+#Put pydevconsole in the path.
+sys.argv[0] = os.path.dirname(sys.argv[0]) 
+sys.path.insert(1, os.path.join(os.path.dirname(sys.argv[0])))
+
+print('Running tests with:', sys.executable)
+print('PYTHONPATH:')
+print('\n'.join(sorted(sys.path)))
+
+import threading
+import unittest
+
+import pydevconsole
+from pydev_imports import xmlrpclib, SimpleXMLRPCServer
+
+try:
+    raw_input
+    raw_input_name = 'raw_input'
+except NameError:
+    raw_input_name = 'input'
+
+#=======================================================================================================================
+# Test
+#=======================================================================================================================
+class Test(unittest.TestCase):
+
+    
+    def startClientThread(self, client_port):
+        class ClientThread(threading.Thread):
+            def __init__(self, client_port):
+                threading.Thread.__init__(self)
+                self.client_port = client_port
+                
+            def run(self):
+                class HandleRequestInput:
+                    def RequestInput(self):
+                        return 'RequestInput: OK'
+                
+                handle_request_input = HandleRequestInput()
+                
+                import pydev_localhost
+                print('Starting client with:', pydev_localhost.get_localhost(), self.client_port)
+                client_server = SimpleXMLRPCServer((pydev_localhost.get_localhost(), self.client_port), logRequests=False)
+                client_server.register_function(handle_request_input.RequestInput)
+                client_server.serve_forever()
+                
+        client_thread = ClientThread(client_port)
+        client_thread.setDaemon(True)
+        client_thread.start()
+        return client_thread
+
+        
+    def getFreeAddresses(self):
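+        #Bind to port 0 so the OS hands out free ephemeral ports for the client/server pair.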
+        import socket
+        s = socket.socket()
+        s.bind(('', 0))
+        port0 = s.getsockname()[1]
+        
+        s1 = socket.socket()
+        s1.bind(('', 0))
+        port1 = s1.getsockname()[1]
+        s.close()
+        s1.close()
+        return port0, port1
+        
+        
+    def testServer(self):
+        client_port, server_port = self.getFreeAddresses()
+        class ServerThread(threading.Thread):
+            def __init__(self, client_port, server_port):
+                threading.Thread.__init__(self)
+                self.client_port = client_port
+                self.server_port = server_port
+                
+            def run(self):
+                import pydev_localhost
+                print('Starting server with:', pydev_localhost.get_localhost(), self.server_port, self.client_port)
+                pydevconsole.StartServer(pydev_localhost.get_localhost(), self.server_port, self.client_port)
+        server_thread = ServerThread(client_port, server_port)
+        server_thread.setDaemon(True)
+        server_thread.start()
+
+        client_thread = self.startClientThread(client_port) #@UnusedVariable
+        
+        import time
+        time.sleep(.3) #let's give it some time to start the threads
+        
+        import pydev_localhost
+        server = xmlrpclib.Server('http://%s:%s' % (pydev_localhost.get_localhost(), server_port))
+        server.addExec("import sys; print('Running with: %s %s' % (sys.executable or sys.platform, sys.version))")
+        server.addExec('class Foo:')
+        server.addExec('    pass')
+        server.addExec('')
+        server.addExec('foo = Foo()')
+        server.addExec('a = %s()' % raw_input_name)
+        server.addExec('print (a)')
+        
+#=======================================================================================================================
+# main        
+#=======================================================================================================================
+if __name__ == '__main__':
+    unittest.main()
+
diff --git a/python/helpers/pydev/tests/test_get_referrers.py b/python/helpers/pydev/tests/test_get_referrers.py
new file mode 100644
index 0000000..7fc8514
--- /dev/null
+++ b/python/helpers/pydev/tests/test_get_referrers.py
@@ -0,0 +1,139 @@
+import os.path
+import sys
+import threading
+import time
+
+IS_JYTHON = sys.platform.find('java') != -1
+
+try:
+    this_file_name = __file__
+except NameError:
+    # Stupid Jython: plain old __file__ isn't working for some reason, so use a
+    # sibling test module in the same directory to find out where we are.
+    import test_runfiles  #@UnresolvedImport
+    this_file_name = test_runfiles.__file__
+
+
+desired_runfiles_path = os.path.normpath(os.path.dirname(this_file_name) + "/..")
+sys.path.insert(0, desired_runfiles_path)
+
+import unittest
+import pydevd_referrers
+from pydev_imports import StringIO
+
+#=======================================================================================================================
+# Test
+#=======================================================================================================================
+class Test(unittest.TestCase):
+
+
+    def testGetReferrers1(self):
+
+        container = []
+        contained = [1, 2]
+        container.append(0)
+        container.append(contained)
+
+        # Ok, 'contained' is referenced from this frame and from inside 'container' (which in turn is in this frame too).
+        # Temporary references created inside get_referrer_info itself should be skipped.
+        result = pydevd_referrers.get_referrer_info(contained)
+        assert 'list[1]' in result
+        pydevd_referrers.print_referrers(contained, stream=StringIO())
+
+    def testGetReferrers2(self):
+
+        class MyClass(object):
+            def __init__(self):
+                pass
+
+        contained = [1, 2]
+        obj = MyClass()
+        obj.contained = contained
+        del contained
+
+        # After the del, the list is only reachable through obj.contained.
+        # Temporary references created inside get_referrer_info itself should be skipped.
+        result = pydevd_referrers.get_referrer_info(obj.contained)
+        assert 'found_as="contained"' in result
+        assert 'MyClass' in result
+
+
+    def testGetReferrers3(self):
+
+        class MyClass(object):
+            def __init__(self):
+                pass
+
+        contained = [1, 2]
+        obj = MyClass()
+        obj.contained = contained
+        del contained
+
+        # After the del, the list is only reachable through obj.contained.
+        # Temporary references created inside get_referrer_info itself should be skipped.
+        result = pydevd_referrers.get_referrer_info(obj.contained)
+        assert 'found_as="contained"' in result
+        assert 'MyClass' in result
+
+
+    def testGetReferrers4(self):
+
+        class MyClass(object):
+            def __init__(self):
+                pass
+
+        obj = MyClass()
+        obj.me = obj
+
+        # Let's see if we detect the cycle...
+        result = pydevd_referrers.get_referrer_info(obj)
+        assert 'found_as="me"' in result  #Cyclic ref
+
+
+    def testGetReferrers5(self):
+        container = dict(a=[1])
+
+        # The list should be found through the dict (key 'a'), not through this test method's frame.
+        result = pydevd_referrers.get_referrer_info(container['a'])
+        assert 'testGetReferrers5' not in result  #I.e.: NOT in the current method
+        assert 'found_as="a"' in result
+        assert 'dict' in result
+        assert str(id(container)) in result
+
+
+    def testGetReferrers6(self):
+        container = dict(a=[1])
+
+        def should_appear(obj):
+            # The frame of this function should appear as a referrer in the result.
+            return pydevd_referrers.get_referrer_info(obj)
+
+        result = should_appear(container['a'])
+        assert 'should_appear' in result
+
+
+    def testGetReferrers7(self):
+
+        class MyThread(threading.Thread):
+            def run(self):
+                #Store this thread's frame so the test can ask for its referrers.
+                self.frame = sys._getframe()
+
+        t = MyThread()
+        t.start()
+        while not hasattr(t, 'frame'):
+            time.sleep(0.01)
+
+        result = pydevd_referrers.get_referrer_info(t.frame)
+        assert 'MyThread' in result
+
+
+if __name__ == "__main__":
+    #this is so that we can run it from the jython tests -- because we don't actually have an __main__ module
+    #(so, it won't try importing the __main__ module)
+    try:
+        import gc
+        gc.get_referrers(unittest)
+    except:
+        pass
+    else:
+        unittest.TextTestRunner().run(unittest.makeSuite(Test))
diff --git a/python/helpers/pydev/tests/test_jyserver.py b/python/helpers/pydev/tests/test_jyserver.py
new file mode 100644
index 0000000..8765400
--- /dev/null
+++ b/python/helpers/pydev/tests/test_jyserver.py
@@ -0,0 +1,165 @@
+'''
+@author Fabio Zadrozny 
+'''
+import sys
+import unittest
+import socket
+import urllib
+
+
+IS_JYTHON = sys.platform.find('java') != -1
+
+if IS_JYTHON:
+    import os
+    
+    #make it as if we were executing from the directory above this one (so that we can use jycompletionserver
+    #without the need for it being in the pythonpath)
+    sys.argv[0] = os.path.dirname(sys.argv[0]) 
+    #twice the dirname to get the previous level from this file.
+    sys.path.insert(1, os.path.join(os.path.dirname(sys.argv[0])))
+    
+    import pycompletionserver as jycompletionserver
+    
+    
+    DEBUG = 0
+
+def dbg(s):
+    if DEBUG:
+        sys.stdout.write('TEST %s\n' % s)
+
+class Test(unittest.TestCase):
+
+    def setUp(self):
+        unittest.TestCase.setUp(self)
+
+    def tearDown(self):
+        unittest.TestCase.tearDown(self)
+    
+    def testIt(self):
+        dbg('ok')
+        
+    def testMessage(self):
+        t = jycompletionserver.T(0)
+        
+        l = []
+        l.append(('Def', 'description'  , 'args'))
+        l.append(('Def1', 'description1', 'args1'))
+        l.append(('Def2', 'description2', 'args2'))
+        
+        msg = t.processor.formatCompletionMessage('test_jyserver.py', l)
+        
+        self.assertEquals('@@COMPLETIONS(test_jyserver.py,(Def,description,args),(Def1,description1,args1),(Def2,description2,args2))END@@', msg)
+        
+        l = []
+        l.append(('Def', 'desc,,r,,i()ption', ''))
+        l.append(('Def(1', 'descriptio(n1', ''))
+        l.append(('De,f)2', 'de,s,c,ription2', ''))
+        msg = t.processor.formatCompletionMessage(None, l)
+        expected = '@@COMPLETIONS(None,(Def,desc%2C%2Cr%2C%2Ci%28%29ption, ),(Def%281,descriptio%28n1, ),(De%2Cf%292,de%2Cs%2Cc%2Cription2, ))END@@'
+        
+        self.assertEquals(expected, msg)
+
+
+
+
+
+
+    def testCompletionSocketsAndMessages(self):
+        dbg('testCompletionSocketsAndMessages')
+        t, socket = self.createConnections()
+        self.socket = socket
+        dbg('connections created')
+        
+        try:
+            #now that we have the connections all set up, check the code completion messages.
+            msg = urllib.quote_plus('math')
+
+            toWrite = '@@IMPORTS:%sEND@@' % msg
+            dbg('writing' + str(toWrite))
+            socket.send(toWrite)  #math completions
+            completions = self.readMsg()
+            dbg(urllib.unquote_plus(completions))
+            
+            start = '@@COMPLETIONS('
+            self.assert_(completions.startswith(start), '%s DOESNT START WITH %s' % (completions, start))
+            self.assert_(completions.find('@@COMPLETIONS') != -1)
+            self.assert_(completions.find('END@@') != -1)
+
+
+            msg = urllib.quote_plus('__builtin__.str')
+            toWrite = '@@IMPORTS:%sEND@@' % msg
+            dbg('writing' + str(toWrite))
+            socket.send(toWrite)  #__builtin__.str completions
+            completions = self.readMsg()
+            dbg(urllib.unquote_plus(completions))
+            
+            start = '@@COMPLETIONS('
+            self.assert_(completions.startswith(start), '%s DOESNT START WITH %s' % (completions, start))
+            self.assert_(completions.find('@@COMPLETIONS') != -1)
+            self.assert_(completions.find('END@@') != -1)
+
+
+        
+        finally:
+            try:
+                self.sendKillMsg(socket)
+
+                while not t.ended:
+                    pass  #wait until it receives the message and quits.
+
+                socket.close()
+            except:
+                pass
+
+
+
+
+    def createConnections(self, p1=50001):
+        '''
+        Creates the connections needed for testing.
+        '''
+        t = jycompletionserver.T(p1)
+        
+        t.start()
+
+        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+        server.bind((jycompletionserver.HOST, p1))
+        server.listen(1)
+
+        sock, _addr = server.accept()
+
+        return t, sock
+        
+
+    def readMsg(self):
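+        #Completion replies end with 'END@@'; any '@@PROCESSING' status messages that
+        #arrive first are just logged and skipped.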
+        msg = '@@PROCESSING_END@@'
+        while msg.startswith('@@PROCESSING'):
+            msg = self.socket.recv(1024)
+            if msg.startswith('@@PROCESSING:'):
+                dbg('Status msg:' + str(msg))
+        
+        while msg.find('END@@') == -1:
+            msg += self.socket.recv(1024)
+        
+        return msg
+        
+    def sendKillMsg(self, socket):
+        socket.send(jycompletionserver.MSG_KILL_SERVER)
+        
+    
+
+
+#"C:\Program Files\Java\jdk1.5.0_04\bin\java.exe" -Dpython.path="C:\bin\jython21\Lib";"C:\bin\jython21";"C:\Program Files\Java\jdk1.5.0_04\jre\lib\rt.jar" -classpath C:/bin/jython21/jython.jar org.python.util.jython D:\eclipse_workspace\org.python.pydev\pysrc\pycompletionserver.py 53795 58659
+#
+#"C:\Program Files\Java\jdk1.5.0_04\bin\java.exe" -Dpython.path="C:\bin\jython21\Lib";"C:\bin\jython21";"C:\Program Files\Java\jdk1.5.0_04\jre\lib\rt.jar" -classpath C:/bin/jython21/jython.jar org.python.util.jython D:\eclipse_workspace\org.python.pydev\pysrc\tests\test_jyserver.py
+#
+#"C:\Program Files\Java\jdk1.5.0_04\bin\java.exe" -Dpython.path="C:\bin\jython21\Lib";"C:\bin\jython21";"C:\Program Files\Java\jdk1.5.0_04\jre\lib\rt.jar" -classpath C:/bin/jython21/jython.jar org.python.util.jython d:\runtime-workbench-workspace\jython_test\src\test.py        
+if __name__ == '__main__':
+    if IS_JYTHON:
+        suite = unittest.makeSuite(Test)
+        unittest.TextTestRunner(verbosity=1).run(suite)
+    else:
+        sys.stdout.write('Not running jython tests for non-java platform: %s' % sys.platform)
+
diff --git a/python/helpers/pydev/tests/test_jysimpleTipper.py b/python/helpers/pydev/tests/test_jysimpleTipper.py
new file mode 100644
index 0000000..4a75563
--- /dev/null
+++ b/python/helpers/pydev/tests/test_jysimpleTipper.py
@@ -0,0 +1,255 @@
+#line to run:
+#java -classpath D:\bin\jython-2.1\jython.jar;D:\bin\eclipse331_1\plugins\org.junit_3.8.2.v200706111738\junit.jar;D:\bin\eclipse331_1\plugins\org.apache.ant_1.7.0.v200706080842\lib\ant.jar org.python.util.jython w:\org.python.pydev\pysrc\tests\test_jysimpleTipper.py
+
+import unittest
+import os
+import sys
+#make it as if we were executing from the directory above this one (so that we can use pycompletionserver
+#without the need for it being in the pythonpath)
+sys.argv[0] = os.path.dirname(sys.argv[0]) 
+#twice the dirname to get the previous level from this file.
+sys.path.insert(1, os.path.join(os.path.dirname(sys.argv[0])))
+
+#this does not work (they must be in the system pythonpath)
+#sys.path.insert(1, r"D:\bin\eclipse321\plugins\org.junit_3.8.1\junit.jar" ) #some late loading jar tests
+#sys.path.insert(1, r"D:\bin\eclipse331_1\plugins\org.apache.ant_1.7.0.v200706080842\lib\ant.jar" ) #some late loading jar tests
+
+if sys.platform.find('java') != -1:
+    from _pydev_jy_imports_tipper import ismethod
+    from _pydev_jy_imports_tipper import isclass
+    from _pydev_jy_imports_tipper import dirObj
+    import _pydev_jy_imports_tipper
+    from java.lang.reflect import Method #@UnresolvedImport
+    from java.lang import System #@UnresolvedImport
+    from java.lang import String #@UnresolvedImport
+    from java.lang.System import arraycopy #@UnresolvedImport
+    from java.lang.System import out #@UnresolvedImport
+    import java.lang.String #@UnresolvedImport
+
+__DBG = 0
+def dbg(s):
+    if __DBG:
+        sys.stdout.write('%s\n' % (s,))
+        
+
+
+class TestMod(unittest.TestCase):
+    
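+    #Each completion entry produced by the tipper is a 4-tuple: (token, documentation, args, type).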
+    def assertArgs(self, tok, args, tips):
+        for a in tips:
+            if tok == a[0]:
+                self.assertEquals(args, a[2])
+                return
+        raise AssertionError('%s not in %s' % (tok, tips))
+
+    def assertIn(self, tok, tips):
+        self.assertEquals(4, len(tips[0]))
+        for a in tips:
+            if tok == a[0]:
+                return a
+        s = ''
+        for a in tips:
+            s += str(a)
+            s += '\n'
+        raise AssertionError('%s not in %s' % (tok, s))
+
+    def testImports1a(self):
+        f, tip = _pydev_jy_imports_tipper.GenerateTip('java.util.HashMap')
+        assert f.endswith('rt.jar')
+
+    def testImports1c(self):
+        f, tip = _pydev_jy_imports_tipper.GenerateTip('java.lang.Class')
+        assert f.endswith('rt.jar')
+        
+    def testImports1b(self):
+        try:
+            f, tip = _pydev_jy_imports_tipper.GenerateTip('__builtin__.m')
+            self.fail('err')
+        except:
+            pass
+
+    def testImports1(self):
+        f, tip = _pydev_jy_imports_tipper.GenerateTip('junit.framework.TestCase')
+        assert f.endswith('junit.jar')
+        ret = self.assertIn('assertEquals', tip)
+#        self.assertEquals('', ret[2])
+        
+    def testImports2(self):
+        f, tip = _pydev_jy_imports_tipper.GenerateTip('junit.framework')
+        assert f.endswith('junit.jar')
+        ret = self.assertIn('TestCase', tip)
+        self.assertEquals('', ret[2])
+        
+    def testImports2a(self):
+        f, tip = _pydev_jy_imports_tipper.GenerateTip('org.apache.tools.ant')
+        assert f.endswith('ant.jar')
+        ret = self.assertIn('Task', tip)
+        self.assertEquals('', ret[2])
+        
+    def testImports3(self):
+        f, tip = _pydev_jy_imports_tipper.GenerateTip('os')
+        assert f.endswith('os.py')
+        ret = self.assertIn('path', tip)
+        self.assertEquals('', ret[2])
+        
+    def testTipOnString(self):
+        f, tip = _pydev_jy_imports_tipper.GenerateTip('string')
+        self.assertIn('join', tip)
+        self.assertIn('uppercase', tip)
+        
+    def testImports(self):
+        tip = _pydev_jy_imports_tipper.GenerateTip('__builtin__')[1]
+        self.assertIn('tuple'          , tip)
+        self.assertIn('RuntimeError'   , tip)
+        self.assertIn('RuntimeWarning' , tip)
+
+    def testImports5(self):
+        f, tip = _pydev_jy_imports_tipper.GenerateTip('java.lang')
+        assert f.endswith('rt.jar')
+        tup = self.assertIn('String' , tip)
+        self.assertEquals(str(_pydev_jy_imports_tipper.TYPE_CLASS), tup[3])
+        
+        tip = _pydev_jy_imports_tipper.GenerateTip('java')[1]
+        tup = self.assertIn('lang' , tip)
+        self.assertEquals(str(_pydev_jy_imports_tipper.TYPE_IMPORT), tup[3])
+        
+        tip = _pydev_jy_imports_tipper.GenerateTip('java.lang.String')[1]
+        tup = self.assertIn('indexOf'          , tip)
+        self.assertEquals(str(_pydev_jy_imports_tipper.TYPE_FUNCTION), tup[3])
+
+        tip = _pydev_jy_imports_tipper.GenerateTip('java.lang.String')[1]
+        tup = self.assertIn('charAt'          , tip)
+        self.assertEquals(str(_pydev_jy_imports_tipper.TYPE_FUNCTION), tup[3])
+        self.assertEquals('(int)', tup[2])
+
+        tup = self.assertIn('format'          , tip)
+        self.assertEquals(str(_pydev_jy_imports_tipper.TYPE_FUNCTION), tup[3])
+        self.assertEquals('(string, objectArray)', tup[2])
+        self.assert_(tup[1].find('[Ljava.lang.Object;') == -1)
+
+        tup = self.assertIn('getBytes'          , tip)
+        self.assertEquals(str(_pydev_jy_imports_tipper.TYPE_FUNCTION), tup[3])
+        self.assert_(tup[1].find('[B') == -1)
+        self.assert_(tup[1].find('byte[]') != -1)
+
+        f, tip = _pydev_jy_imports_tipper.GenerateTip('__builtin__.str')
+        assert f.endswith('jython.jar')
+        self.assertIn('find'          , tip)
+
+        f, tip = _pydev_jy_imports_tipper.GenerateTip('__builtin__.dict')
+        assert f.endswith('jython.jar')
+        self.assertIn('get'          , tip)
+
+
+class TestSearch(unittest.TestCase):
+
+    def testSearchOnJython(self):
+        self.assertEqual('javaos.py', _pydev_jy_imports_tipper.Search('os')[0][0].split(os.sep)[-1])
+        self.assertEqual(0, _pydev_jy_imports_tipper.Search('os')[0][1])
+        
+        self.assertEqual('javaos.py', _pydev_jy_imports_tipper.Search('os.makedirs')[0][0].split(os.sep)[-1])
+        self.assertNotEqual(0, _pydev_jy_imports_tipper.Search('os.makedirs')[0][1])
+        
+        #print _pydev_jy_imports_tipper.Search('os.makedirs')
+
+class TestCompl(unittest.TestCase):
+
+    def setUp(self):
+        unittest.TestCase.setUp(self)
+
+    def tearDown(self):
+        unittest.TestCase.tearDown(self)
+
+    def testGettingInfoOnJython(self):
+        
+        dbg('\n\n--------------------------- java')
+        assert not ismethod(java)[0]
+        assert not isclass(java)
+        assert _pydev_jy_imports_tipper.ismodule(java)
+            
+        dbg('\n\n--------------------------- java.lang')
+        assert not ismethod(java.lang)[0]
+        assert not isclass(java.lang)
+        assert _pydev_jy_imports_tipper.ismodule(java.lang)
+            
+        dbg('\n\n--------------------------- Method')
+        assert not ismethod(Method)[0]
+        assert isclass(Method)
+            
+        dbg('\n\n--------------------------- System')
+        assert not ismethod(System)[0]
+        assert isclass(System)
+            
+        dbg('\n\n--------------------------- String')
+        assert not ismethod(String)[0]
+        assert isclass(String)
+        assert len(dirObj(String)) > 10
+            
+        dbg('\n\n--------------------------- arraycopy')
+        isMet = ismethod(arraycopy)
+        assert isMet[0]
+        assert isMet[1][0].basicAsStr() == "function:arraycopy args=['java.lang.Object', 'int', 'java.lang.Object', 'int', 'int'], varargs=None, kwargs=None, docs:None"
+        assert not isclass(arraycopy)
+            
+        dbg('\n\n--------------------------- out')
+        isMet = ismethod(out)
+        assert not isMet[0]
+        assert not isclass(out)
+            
+        dbg('\n\n--------------------------- out.println')
+        isMet = ismethod(out.println) #@UndefinedVariable
+        assert isMet[0]
+        assert len(isMet[1]) == 10
+        self.assertEquals(isMet[1][0].basicAsStr(), "function:println args=[], varargs=None, kwargs=None, docs:None")
+        assert isMet[1][1].basicAsStr() == "function:println args=['long'], varargs=None, kwargs=None, docs:None"
+        assert not isclass(out.println) #@UndefinedVariable
+        
+        dbg('\n\n--------------------------- str')
+        isMet = ismethod(str)
+        #the code below should work, but is failing on jython 22a1
+        #assert isMet[0]
+        #assert isMet[1][0].basicAsStr() == "function:str args=['org.python.core.PyObject'], varargs=None, kwargs=None, docs:None"
+        assert not isclass(str)
+        
+        
+        def met1():
+            a = 3
+            return a
+        
+        dbg('\n\n--------------------------- met1')
+        isMet = ismethod(met1)
+        assert isMet[0]
+        assert isMet[1][0].basicAsStr() == "function:met1 args=[], varargs=None, kwargs=None, docs:None"
+        assert not isclass(met1)
+        
+        def met2(arg1, arg2, *vararg, **kwarg):
+            '''docmet2'''
+            
+            a = 1
+            return a
+        
+        dbg('\n\n--------------------------- met2')
+        isMet = ismethod(met2)
+        assert isMet[0]
+        assert isMet[1][0].basicAsStr() == "function:met2 args=['arg1', 'arg2'], varargs=vararg, kwargs=kwarg, docs:docmet2"
+        assert not isclass(met2)
+        
+
+
+if __name__ == '__main__':
+    if sys.platform.find('java') != -1:
+        #Only run if jython
+        suite = unittest.makeSuite(TestCompl)
+        suite2 = unittest.makeSuite(TestMod)
+        suite3 = unittest.makeSuite(TestSearch)
+        
+        unittest.TextTestRunner(verbosity=1).run(suite)
+        unittest.TextTestRunner(verbosity=1).run(suite2)
+        unittest.TextTestRunner(verbosity=1).run(suite3)
+        
+#        suite.addTest(Test('testCase12'))
+#        suite = unittest.TestSuite()
+#        unittest.TextTestRunner(verbosity=1).run(suite)
+
+    else:
+        sys.stdout.write('Not running jython tests for non-java platform: %s' % sys.platform)
diff --git a/python/helpers/pydev/tests/test_pydev_ipython_010.py b/python/helpers/pydev/tests/test_pydev_ipython_010.py
new file mode 100644
index 0000000..5ce1dc3
--- /dev/null
+++ b/python/helpers/pydev/tests/test_pydev_ipython_010.py
@@ -0,0 +1,80 @@
+#TODO: This test no longer works (check if it should be fixed or removed altogether).
+
+#import unittest
+#import sys
+#import os
+##make it as if we were executing from the directory above this one
+#sys.argv[0] = os.path.dirname(sys.argv[0])
+##twice the dirname to get the previous level from this file.
+#sys.path.insert(1, os.path.join(os.path.dirname(sys.argv[0])))
+#
+#from pydev_localhost import get_localhost
+#
+#
+#IS_JYTHON = sys.platform.find('java') != -1
+#
+##=======================================================================================================================
+## TestCase
+##=======================================================================================================================
+#class TestCase(unittest.TestCase):
+#
+#    def setUp(self):
+#        unittest.TestCase.setUp(self)
+#
+#    def tearDown(self):
+#        unittest.TestCase.tearDown(self)
+#
+#    def testIPython(self):
+#        try:
+#            from pydev_ipython_console import PyDevFrontEnd
+#        except:
+#            if IS_JYTHON:
+#                return
+#        front_end = PyDevFrontEnd(get_localhost(), 0)
+#
+#        front_end.input_buffer = 'if True:'
+#        self.assert_(not front_end._on_enter())
+#
+#        front_end.input_buffer = 'if True:\n' + \
+#            front_end.continuation_prompt() + '    a = 10\n'
+#        self.assert_(not front_end._on_enter())
+#
+#
+#        front_end.input_buffer = 'if True:\n' + \
+#            front_end.continuation_prompt() + '    a = 10\n\n'
+#        self.assert_(front_end._on_enter())
+#
+#
+##        front_end.input_buffer = '  print a'
+##        self.assert_(not front_end._on_enter())
+##        front_end.input_buffer = ''
+##        self.assert_(front_end._on_enter())
+#
+#
+##        front_end.input_buffer = 'a.'
+##        front_end.complete_current_input()
+##        front_end.input_buffer = 'if True:'
+##        front_end._on_enter()
+#        front_end.input_buffer = 'a = 30'
+#        front_end._on_enter()
+#        front_end.input_buffer = 'print a'
+#        front_end._on_enter()
+#        front_end.input_buffer = 'a?'
+#        front_end._on_enter()
+#        print front_end.complete('%')
+#        print front_end.complete('%e')
+#        print front_end.complete('cd c:/t')
+#        print front_end.complete('cd c:/temp/')
+##        front_end.input_buffer = 'print raw_input("press enter\\n")'
+##        front_end._on_enter()
+##
+#
+##=======================================================================================================================
+## main
+##=======================================================================================================================
+#if __name__ == '__main__':
+#    if sys.platform.find('java') == -1:
+#        #IPython not available for Jython
+#        unittest.main()
+#    else:
+#        print('not supported on Jython')
diff --git a/python/helpers/pydev/tests/test_pydev_ipython_011.py b/python/helpers/pydev/tests/test_pydev_ipython_011.py
new file mode 100644
index 0000000..3cfa70f
--- /dev/null
+++ b/python/helpers/pydev/tests/test_pydev_ipython_011.py
@@ -0,0 +1,193 @@
+import sys
+import unittest
+import threading
+import os
+import socket
+
+# make it as if we were executing from the directory above this one
+# (so that the pydev modules can be imported without needing to be on the pythonpath)
+sys.argv[0] = os.path.dirname(sys.argv[0])
+# twice the dirname to get the previous level from this file.
+sys.path.insert(1, os.path.join(os.path.dirname(sys.argv[0])))
+
+from nose.tools import eq_
+from pydev_imports import StringIO, SimpleXMLRPCServer
+from pydev_localhost import get_localhost
+from pydev_console_utils import StdIn
+
+# PyDevFrontEnd depends on a singleton inside IPython, so we can't create
+# more than one instance; the same front_end is reused for all the tests.
+
+orig_stdout = sys.stdout
+orig_stderr = sys.stderr
+
+stdout = sys.stdout = StringIO()
+stderr = sys.stderr = StringIO()
+
+from pydev_ipython_console_011 import PyDevFrontEnd
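+# bind to port 0 so the OS picks a free port for the fake IDE-side XML-RPC server
+# that the console calls back into (started later, in testEdit)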
+s = socket.socket()
+s.bind(('', 0))
+client_port = s.getsockname()[1]
+s.close()
+front_end = PyDevFrontEnd(get_localhost(), client_port)
+
+
+def addExec(code, expected_more=False):
+    more = front_end.addExec(code)
+    eq_(expected_more, more)
+
+class TestBase(unittest.TestCase):
+    def setUp(self):
+        front_end.input_splitter.reset()
+        stdout.truncate(0)
+        stdout.seek(0)
+        stderr.truncate(0)
+        stderr.seek(0)
+    def tearDown(self):
+        pass
+
+
+class TestPyDevFrontEnd(TestBase):
+    def testAddExec_1(self):
+        addExec('if True:', True)
+    def testAddExec_2(self):
+        addExec('if True:\n    testAddExec_a = 10\n', True)
+    def testAddExec_3(self):
+        assert 'testAddExec_a' not in front_end.getNamespace()
+        addExec('if True:\n    testAddExec_a = 10\n\n')
+        assert 'testAddExec_a' in front_end.getNamespace()
+        eq_(front_end.getNamespace()['testAddExec_a'], 10)
+
+    def testGetNamespace(self):
+        assert 'testGetNamespace_a' not in front_end.getNamespace()
+        addExec('testGetNamespace_a = 10')
+        assert 'testGetNamespace_a' in front_end.getNamespace()
+        eq_(front_end.getNamespace()['testGetNamespace_a'], 10)
+
+    def testComplete(self):
+        unused_text, matches = front_end.complete('%')
+        assert len(matches) > 1, 'more than one magic should appear in completions'
+
+    def testCompleteDoesNotDoPythonMatches(self):
+        # Test that IPython's completions do not do the things that
+        # PyDev's completions will handle
+        addExec('testComplete_a = 5')
+        addExec('testComplete_b = 10')
+        addExec('testComplete_c = 15')
+        unused_text, matches = front_end.complete('testComplete_')
+        assert len(matches) == 0
+
+    def testGetCompletions_1(self):
+        # Test the merged completions include the standard completions
+        addExec('testComplete_a = 5')
+        addExec('testComplete_b = 10')
+        addExec('testComplete_c = 15')
+        res = front_end.getCompletions('testComplete_', 'testComplete_')
+        matches = [f[0] for f in res]
+        assert len(matches) == 3
+        eq_(set(['testComplete_a', 'testComplete_b', 'testComplete_c']), set(matches))
+
+    def testGetCompletions_2(self):
+        # Test that we get IPython completions in results
+        # we do this by checking kw completion which PyDev does
+        # not do by default
+        addExec('def ccc(ABC=123): pass')
+        res = front_end.getCompletions('ccc(', '')
+        matches = [f[0] for f in res]
+        assert 'ABC=' in matches
+
+    def testGetCompletions_3(self):
+        # Test that magics return IPYTHON magic as type
+        res = front_end.getCompletions('%cd', '%cd')
+        assert len(res) == 1
+        eq_(res[0][3], '12')  # '12' == IToken.TYPE_IPYTHON_MAGIC
+        assert len(res[0][1]) > 100, 'docstring for %cd should be a reasonably long string'
+
+class TestRunningCode(TestBase):
+    def testPrint(self):
+        addExec('print("output")')
+        eq_(stdout.getvalue(), 'output\n')
+
+    def testQuestionMark_1(self):
+        addExec('?')
+        assert len(stdout.getvalue()) > 1000, 'IPython help should be pretty big'
+
+    def testQuestionMark_2(self):
+        addExec('int?')
+        assert stdout.getvalue().find('Convert') != -1
+
+
+    def testGui(self):
+        from pydev_ipython.inputhook import get_inputhook, set_stdin_file
+        set_stdin_file(sys.stdin)
+        assert get_inputhook() is None
+        addExec('%gui tk')
+        # we can't test that the GUI actually works here because we aren't connected
+        # over XML-RPC, so there is nowhere for the hook to run
+        assert get_inputhook() is not None
+        addExec('%gui none')
+        assert get_inputhook() is None
+
+    def testHistory(self):
+        ''' Make sure commands are added to IPython's history '''
+        addExec('a=1')
+        addExec('b=2')
+        _ih = front_end.getNamespace()['_ih']
+        eq_(_ih[-1], 'b=2')
+        eq_(_ih[-2], 'a=1')
+
+        addExec('history')
+        hist = stdout.getvalue().split('\n')
+        eq_(hist[-1], '')
+        eq_(hist[-2], 'history')
+        eq_(hist[-3], 'b=2')
+        eq_(hist[-4], 'a=1')
+
+    def testEdit(self):
+        ''' Make sure we can issue an edit command '''
+        called_RequestInput = [False]
+        called_IPythonEditor = [False]
+        def startClientThread(client_port):
+            class ClientThread(threading.Thread):
+                def __init__(self, client_port):
+                    threading.Thread.__init__(self)
+                    self.client_port = client_port
+                def run(self):
+                    class HandleRequestInput:
+                        def RequestInput(self):
+                            called_RequestInput[0] = True
+                            return '\n'
+                        def IPythonEditor(self, name, line):
+                            called_IPythonEditor[0] = (name, line)
+                            return True
+
+                    handle_request_input = HandleRequestInput()
+
+                    import pydev_localhost
+                    client_server = SimpleXMLRPCServer((pydev_localhost.get_localhost(), self.client_port), logRequests=False)
+                    client_server.register_function(handle_request_input.RequestInput)
+                    client_server.register_function(handle_request_input.IPythonEditor)
+                    client_server.serve_forever()
+
+            client_thread = ClientThread(client_port)
+            client_thread.setDaemon(True)
+            client_thread.start()
+            return client_thread
+
+        startClientThread(client_port)
+        orig_stdin = sys.stdin
+        sys.stdin = StdIn(self, get_localhost(), client_port)
+        try:
+            filename = 'made_up_file.py'
+            addExec('%edit ' + filename)
+            eq_(called_IPythonEditor[0], (os.path.abspath(filename), 0))
+            assert called_RequestInput[0], "Make sure the 'wait' parameter has been respected"
+        finally:
+            sys.stdin = orig_stdin
+
+if __name__ == '__main__':
+
+    #Just doing unittest.main() was not working when this file is run directly (not sure why),
+    #and the test that imports from pydev_ipython.inputhook (get_inputhook, set_stdin_file)
+    #fails in that case too (running it from PyDev with Ctrl+F9 works properly, so this is still puzzling).
+    unittest.TextTestRunner(verbosity=1).run(unittest.makeSuite(TestRunningCode))
+    unittest.TextTestRunner(verbosity=1).run(unittest.makeSuite(TestPyDevFrontEnd))
diff --git a/python/helpers/pydev/tests/test_pydevconsole.py b/python/helpers/pydev/tests/test_pydevconsole.py
new file mode 100644
index 0000000..9a9e3ed
--- /dev/null
+++ b/python/helpers/pydev/tests/test_pydevconsole.py
@@ -0,0 +1,231 @@
+import threading
+import unittest
+import sys
+import os
+
+sys.argv[0] = os.path.dirname(sys.argv[0])
+sys.path.insert(1, os.path.join(os.path.dirname(sys.argv[0])))
+import pydevconsole
+from pydev_imports import xmlrpclib, SimpleXMLRPCServer, StringIO
+
+try:
+    raw_input
+    raw_input_name = 'raw_input'
+except NameError:
+    raw_input_name = 'input'
+
+#=======================================================================================================================
+# Test
+#=======================================================================================================================
+class Test(unittest.TestCase):
+
+    def setUp(self):
+        self.original_stdout = sys.stdout
+        sys.stdout = StringIO()
+
+
+    def tearDown(self):
+        ret = sys.stdout  #@UnusedVariable
+        sys.stdout = self.original_stdout
+        #print_ ret.getvalue() -- use to see test output
+
+    def testConsoleHello(self):
+        client_port, _server_port = self.getFreeAddresses()
+        client_thread = self.startClientThread(client_port)  #@UnusedVariable
+        import time
+        time.sleep(.3)  #let's give it some time to start the threads
+
+        import pydev_localhost
+        interpreter = pydevconsole.InterpreterInterface(pydev_localhost.get_localhost(), client_port, server=None)
+
+        (result,) = interpreter.hello("Hello pydevconsole")
+        self.assertEqual(result, "Hello eclipse")
+
+
+    def testConsoleRequests(self):
+        client_port, _server_port = self.getFreeAddresses()
+        client_thread = self.startClientThread(client_port)  #@UnusedVariable
+        import time
+        time.sleep(.3)  #let's give it some time to start the threads
+
+        import pydev_localhost
+        interpreter = pydevconsole.InterpreterInterface(pydev_localhost.get_localhost(), client_port, server=None)
+        interpreter.addExec('class Foo:')
+        interpreter.addExec('   CONSTANT=1')
+        interpreter.addExec('')
+        interpreter.addExec('foo=Foo()')
+        interpreter.addExec('foo.__doc__=None')
+        interpreter.addExec('val = %s()' % (raw_input_name,))
+        interpreter.addExec('50')
+        interpreter.addExec('print (val)')
+        found = sys.stdout.getvalue().split()
+        try:
+            self.assertEqual(['50', 'input_request'], found)
+        except:
+            self.assertEqual(['input_request'], found)  #IPython
+
+        comps = interpreter.getCompletions('foo.', 'foo.')
+        self.assert_(
+            ('CONSTANT', '', '', '3') in comps or ('CONSTANT', '', '', '4') in comps, \
+            'Found: %s' % comps
+        )
+
+        comps = interpreter.getCompletions('"".', '"".')
+        self.assert_(
+            ('__add__', 'x.__add__(y) <==> x+y', '', '3') in comps or
+            ('__add__', '', '', '4') in comps or
+            ('__add__', 'x.__add__(y) <==> x+y\r\nx.__add__(y) <==> x+y', '()', '2') in comps or
+            ('__add__', 'x.\n__add__(y) <==> x+yx.\n__add__(y) <==> x+y', '()', '2') in comps,
+            'Did not find __add__ in : %s' % (comps,)
+        )
+
+
+        completions = interpreter.getCompletions('', '')
+        for c in completions:
+            if c[0] == 'AssertionError':
+                break
+        else:
+            self.fail('Could not find AssertionError')
+
+        completions = interpreter.getCompletions('Assert', 'Assert')
+        for c in completions:
+            if c[0] == 'RuntimeError':
+                self.fail('Did not expect to find RuntimeError there')
+
+        self.assert_(('__doc__', None, '', '3') not in interpreter.getCompletions('foo.CO', 'foo.'))
+
+        comps = interpreter.getCompletions('va', 'va')
+        self.assert_(('val', '', '', '3') in comps or ('val', '', '', '4') in comps)
+
+        interpreter.addExec('s = "mystring"')
+
+        desc = interpreter.getDescription('val')
+        self.assert_(desc.find('str(object) -> string') >= 0 or
+                     desc == "'input_request'" or
+                     desc.find('str(string[, encoding[, errors]]) -> str') >= 0 or
+                     desc.find('str(Char* value)') >= 0 or
+                     desc.find('str(value: Char*)') >= 0,
+                     'Could not find what was needed in %s' % desc)
+
+        desc = interpreter.getDescription('val.join')
+        self.assert_(desc.find('S.join(sequence) -> string') >= 0 or
+                     desc.find('S.join(sequence) -> str') >= 0 or
+                     desc.find('S.join(iterable) -> string') >= 0 or
+                     desc == "<builtin method 'join'>"  or
+                     desc == "<built-in method join of str object>" or
+                     desc.find('str join(str self, list sequence)') >= 0 or
+                     desc.find('S.join(iterable) -> str') >= 0 or
+                     desc.find('join(self: str, sequence: list) -> str') >= 0,
+                     "Could not recognize: %s" % (desc,))
+
+
+    def startClientThread(self, client_port):
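+        #Simulates the IDE side: an XML-RPC server the console calls back into when it needs input.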
+        class ClientThread(threading.Thread):
+            def __init__(self, client_port):
+                threading.Thread.__init__(self)
+                self.client_port = client_port
+            def run(self):
+                class HandleRequestInput:
+                    def RequestInput(self):
+                        return 'input_request'
+
+                handle_request_input = HandleRequestInput()
+
+                import pydev_localhost
+                client_server = SimpleXMLRPCServer((pydev_localhost.get_localhost(), self.client_port), logRequests=False)
+                client_server.register_function(handle_request_input.RequestInput)
+                client_server.serve_forever()
+
+        client_thread = ClientThread(client_port)
+        client_thread.setDaemon(True)
+        client_thread.start()
+        return client_thread
+
+
+    def startDebuggerServerThread(self, debugger_port, socket_code):
+        class DebuggerServerThread(threading.Thread):
+            def __init__(self, debugger_port, socket_code):
+                threading.Thread.__init__(self)
+                self.debugger_port = debugger_port
+                self.socket_code = socket_code
+            def run(self):
+                import socket
+                s = socket.socket()
+                s.bind(('', self.debugger_port))
+                s.listen(1)
+                conn, unused_addr = s.accept()  #don't shadow the socket module
+                self.socket_code(conn)
+
+        debugger_thread = DebuggerServerThread(debugger_port, socket_code)
+        debugger_thread.setDaemon(True)
+        debugger_thread.start()
+        return debugger_thread
+
+
+    def getFreeAddresses(self):
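+        #Bind to port 0 so the OS hands out ephemeral ports; the sockets are closed right
+        #away, so the ports are only very likely (not guaranteed) to still be free.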
+        import socket
+        s = socket.socket()
+        s.bind(('', 0))
+        port0 = s.getsockname()[1]
+
+        s1 = socket.socket()
+        s1.bind(('', 0))
+        port1 = s1.getsockname()[1]
+        s.close()
+        s1.close()
+
+        if port0 <= 0 or port1 <= 0:
+            #This happens in Jython...
+            from java.net import ServerSocket
+            s0 = ServerSocket(0)
+            port0 = s0.getLocalPort()
+
+            s1 = ServerSocket(0)
+            port1 = s1.getLocalPort()
+
+            s0.close()
+            s1.close()
+
+        assert port0 != port1
+        assert port0 > 0
+        assert port1 > 0
+
+        return port0, port1
+
+
+    def testServer(self):
+        client_port, server_port = self.getFreeAddresses()
+        class ServerThread(threading.Thread):
+            def __init__(self, client_port, server_port):
+                threading.Thread.__init__(self)
+                self.client_port = client_port
+                self.server_port = server_port
+
+            def run(self):
+                import pydev_localhost
+                pydevconsole.StartServer(pydev_localhost.get_localhost(), self.server_port, self.client_port)
+        server_thread = ServerThread(client_port, server_port)
+        server_thread.setDaemon(True)
+        server_thread.start()
+
+        client_thread = self.startClientThread(client_port)  #@UnusedVariable
+
+        import time
+        time.sleep(.3)  #let's give it some time to start the threads
+
+        import pydev_localhost
+        server = xmlrpclib.Server('http://%s:%s' % (pydev_localhost.get_localhost(), server_port))
+        server.addExec('class Foo:')
+        server.addExec('    pass')
+        server.addExec('')
+        server.addExec('foo = Foo()')
+        server.addExec('a = %s()' % (raw_input_name,))
+        server.addExec('print (a)')
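+        # Why 'input_request' is expected below (inferred from the fake client registered in
+        # startClientThread, not from pydevconsole documentation): executing raw_input()/input()
+        # makes the console server call RequestInput() back on the client port, and our stub
+        # returns 'input_request', which is then bound to 'a' and printed.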
+        self.assertEqual(['input_request'], sys.stdout.getvalue().split())
+
+#=======================================================================================================================
+# main
+#=======================================================================================================================
+if __name__ == '__main__':
+    unittest.main()
+
diff --git a/python/helpers/pydev/tests/test_pyserver.py b/python/helpers/pydev/tests/test_pyserver.py
new file mode 100644
index 0000000..a74876b
--- /dev/null
+++ b/python/helpers/pydev/tests/test_pyserver.py
@@ -0,0 +1,173 @@
+'''
+@author Fabio Zadrozny 
+'''
+import sys
+import os
+
+#make it as if we were executing from the directory above this one (so that we can use pycompletionserver
+#without the need for it being in the pythonpath)
+sys.argv[0] = os.path.dirname(sys.argv[0]) 
+#twice the dirname to get the previous level from this file.
+sys.path.insert(1, os.path.dirname(sys.argv[0]))
+
+IS_PYTHON_3K = 0
+if sys.platform.find('java') == -1:
+    
+    
+    try:
+        import inspect
+        import pycompletionserver
+        import socket
+        try:
+            from urllib import quote_plus, unquote_plus
+            def send(s, msg):
+                s.send(msg)
+        except ImportError:
+            IS_PYTHON_3K = 1
+            from urllib.parse import quote_plus, unquote_plus  #Python 3.0
+            def send(s, msg):
+                s.send(bytearray(msg, 'utf-8'))
+    except ImportError:
+        pass  #Not available in jython
+    
+    import unittest
+    
+    class Test(unittest.TestCase):
+    
+        def setUp(self):
+            unittest.TestCase.setUp(self)
+    
+        def tearDown(self):
+            unittest.TestCase.tearDown(self)
+        
+        def testMessage(self):
+            t = pycompletionserver.T(0)
+            
+            l = []
+            l.append(('Def', 'description'  , 'args'))
+            l.append(('Def1', 'description1', 'args1'))
+            l.append(('Def2', 'description2', 'args2'))
+            
+            msg = t.processor.formatCompletionMessage(None, l)
+            self.assertEquals('@@COMPLETIONS(None,(Def,description,args),(Def1,description1,args1),(Def2,description2,args2))END@@', msg)
+            
+            l = []
+            l.append(('Def', 'desc,,r,,i()ption', ''))
+            l.append(('Def(1', 'descriptio(n1', ''))
+            l.append(('De,f)2', 'de,s,c,ription2', ''))
+            msg = t.processor.formatCompletionMessage(None, l)
+            self.assertEquals('@@COMPLETIONS(None,(Def,desc%2C%2Cr%2C%2Ci%28%29ption, ),(Def%281,descriptio%28n1, ),(De%2Cf%292,de%2Cs%2Cc%2Cription2, ))END@@', msg)
+    
+        def createConnections(self, p1=50002):
+            '''
+            Creates the connections needed for testing.
+            '''
+            t = pycompletionserver.T(p1)
+            
+            t.start()
+    
+            server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+            server.bind((pycompletionserver.HOST, p1))
+            server.listen(1)  #socket to receive messages.
+    
+            s, addr = server.accept()
+    
+            return t, s
+            
+    
+        def readMsg(self):
+            finish = False
+            msg = ''
+            while finish == False:
+                m = self.socket.recv(1024 * 4)
+                if IS_PYTHON_3K:
+                    m = m.decode('utf-8')
+                if m.startswith('@@PROCESSING'):
+                    sys.stdout.write('Status msg: %s\n' % (m,))
+                else:
+                    msg += m
+    
+                if msg.find('END@@') != -1:
+                    finish = True
+    
+            return msg
+    
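+        # Wire-format sketch, inferred from the assertions in this file rather than from a formal
+        # spec: requests and replies are plain strings over the socket, arguments are escaped with
+        # quote_plus, and each message is terminated by END@@, e.g.
+        #
+        #   request:  '@@IMPORTS:mathEND@@'
+        #   reply:    '@@COMPLETIONS(None,(name,description,args),...)END@@'
+        #
+        # '@@PROCESSING' chunks may arrive before the final reply; they are only status updates.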
+        def testCompletionSocketsAndMessages(self):
+            t, socket = self.createConnections()
+            self.socket = socket
+            
+            try:
+                #now that we have the connections all set up, check the code completion messages.
+                msg = quote_plus('math')
+                send(socket, '@@IMPORTS:%sEND@@' % msg)  #math completions
+                completions = self.readMsg()
+                #print_ unquote_plus(completions)
+                
+                #math is a builtin and because of that, it starts with None as a file
+                start = '@@COMPLETIONS(None,(__doc__,'
+                start_2 = '@@COMPLETIONS(None,(__name__,'
+                self.assert_(completions.startswith(start) or completions.startswith(start_2), '%s does not start with %s' % (completions, (start, start_2)))
+        
+                self.assert_('@@COMPLETIONS' in completions)
+                self.assert_('END@@' in completions)
+    
+    
+                #now, test completions for __builtin__.list
+                msg = quote_plus('__builtin__.list')
+                send(socket, "@@IMPORTS:%s\nEND@@" % msg)
+                found = self.readMsg()
+                self.assert_('sort' in found, 'Could not find sort in: %s' % (found,))
+    
+                #now, test search
+                msg = quote_plus('inspect.ismodule')
+                send(socket, '@@SEARCH%sEND@@' % msg)  #search for inspect.ismodule
+                found = self.readMsg()
+                self.assert_('inspect.py' in found)
+                self.assert_('33' in found or '34' in found or '51' in found or '50' in found, 'Could not find 33, 34, 50 or 51 in %s' % found)
+    
+                #now, test search
+                msg = quote_plus('inspect.CO_NEWLOCALS')
+                send(socket, '@@SEARCH%sEND@@' % msg)  #search for inspect.CO_NEWLOCALS
+                found = self.readMsg()
+                self.assert_('inspect.py' in found)
+                self.assert_('CO_NEWLOCALS' in found)
+                
+                #now, test search
+                msg = quote_plus('inspect.BlockFinder.tokeneater')
+                send(socket, '@@SEARCH%sEND@@' % msg) 
+                found = self.readMsg()
+                self.assert_('inspect.py' in found)
+    #            self.assert_('CO_NEWLOCALS' in found)
+    
+            #reload modules test
+    #        send(socket, '@@RELOAD_MODULES_END@@')
+    #        ok = self.readMsg()
+    #        self.assertEquals('@@MSG_OK_END@@' , ok)
+    #        this test is not executed because it breaks our current environment.
+            
+            
+            finally:
+                try:
+                    sys.stdout.write('succeeded...sending kill msg\n')
+                    self.sendKillMsg(socket)
+                    
+            
+    #                while not hasattr(t, 'ended'):
+    #                    pass #wait until it receives the message and quits.
+            
+                        
+                    socket.close()
+                    self.socket.close()
+                except:
+                    pass
+            
+        def sendKillMsg(self, socket):
+            socket.send(pycompletionserver.MSG_KILL_SERVER)
+
+        
+if __name__ == '__main__':
+    if sys.platform.find('java') == -1:
+        unittest.main()
+    else:
+        sys.stdout.write('Not running python tests in platform: %s\n' % (sys.platform,))
+
diff --git a/python/helpers/pydev/tests/test_simpleTipper.py b/python/helpers/pydev/tests/test_simpleTipper.py
new file mode 100644
index 0000000..f759ad6
--- /dev/null
+++ b/python/helpers/pydev/tests/test_simpleTipper.py
@@ -0,0 +1,209 @@
+'''
+@author Fabio Zadrozny 
+'''
+import os
+import sys
+#make it as if we were executing from the directory above this one (so that we can use pycompletionserver
+#without the need for it being in the pythonpath)
+#twice the dirname to get the previous level from this file.
+sys.path.insert(1, os.path.split(os.path.split(__file__)[0])[0])
+
+try:
+    import __builtin__ #@UnusedImport
+    BUILTIN_MOD = '__builtin__'
+except ImportError:
+    BUILTIN_MOD = 'builtins'
+
+
+if sys.platform.find('java') == -1:
+    
+    HAS_WX = False
+    
+    import unittest
+    import _pydev_imports_tipper
+    import inspect
+    
+    class Test(unittest.TestCase):
+    
+        def p(self, t):
+            for a in t:
+                sys.stdout.write('%s\n' % (a,))
+     
+        def testImports3(self):
+            tip = _pydev_imports_tipper.GenerateTip('os')
+            ret = self.assertIn('path', tip)
+            self.assertEquals('', ret[2])
+    
+        def testImports2(self):
+            try:
+                tip = _pydev_imports_tipper.GenerateTip('OpenGL.GLUT')
+                self.assertIn('glutDisplayFunc', tip)
+                self.assertIn('glutInitDisplayMode', tip)
+            except ImportError:
+                pass
+    
+        def testImports4(self):
+            try:
+                tip = _pydev_imports_tipper.GenerateTip('mx.DateTime.mxDateTime.mxDateTime')
+                self.assertIn('now', tip)
+            except ImportError:
+                pass
+    
+        def testImports5(self):
+            tip = _pydev_imports_tipper.GenerateTip('__builtin__.list')
+            s = self.assertIn('sort', tip)
+            self.CheckArgs(
+                s, 
+                '(cmp=None, key=None, reverse=False)', 
+                '(self, object cmp, object key, bool reverse)',
+                '(self, cmp: object, key: object, reverse: bool)'
+            )
+            
+        def testImports2a(self):
+            tips = _pydev_imports_tipper.GenerateTip('%s.RuntimeError' % BUILTIN_MOD)
+            self.assertIn('__doc__', tips)
+            
+        def testImports2b(self):
+            tips = _pydev_imports_tipper.GenerateTip('%s' % BUILTIN_MOD)
+            t = self.assertIn('file' , tips)
+            self.assert_('->' in t[1].strip() or 'file' in t[1])
+            
+        def testImports2c(self):
+            tips = _pydev_imports_tipper.GenerateTip('%s.file' % BUILTIN_MOD)
+            t = self.assertIn('readlines' , tips)
+            self.assert_('->' in t[1] or 'sizehint' in t[1])
+            
+        def testImports(self):
+            '''
+            You can print_ the results to check...
+            '''
+            if HAS_WX:
+                tip = _pydev_imports_tipper.GenerateTip('wxPython.wx')
+                self.assertIn('wxApp'        , tip)
+                
+                tip = _pydev_imports_tipper.GenerateTip('wxPython.wx.wxApp')
+                
+                try:
+                    tip = _pydev_imports_tipper.GenerateTip('qt')
+                    self.assertIn('QWidget'        , tip)
+                    self.assertIn('QDialog'        , tip)
+                    
+                    tip = _pydev_imports_tipper.GenerateTip('qt.QWidget')
+                    self.assertIn('rect'           , tip)
+                    self.assertIn('rect'           , tip)
+                    self.assertIn('AltButton'      , tip)
+            
+                    tip = _pydev_imports_tipper.GenerateTip('qt.QWidget.AltButton')
+                    self.assertIn('__xor__'      , tip)
+            
+                    tip = _pydev_imports_tipper.GenerateTip('qt.QWidget.AltButton.__xor__')
+                    self.assertIn('__class__'      , tip)
+                except ImportError:
+                    pass
+                
+            tip = _pydev_imports_tipper.GenerateTip(BUILTIN_MOD)
+    #        for t in tip[1]:
+    #            print_ t
+            self.assertIn('object'         , tip)
+            self.assertIn('tuple'          , tip)
+            self.assertIn('list'          , tip)
+            self.assertIn('RuntimeError'   , tip)
+            self.assertIn('RuntimeWarning' , tip)
+            
+            t = self.assertIn('cmp' , tip)
+            
+            self.CheckArgs(t, '(x, y)', '(object x, object y)', '(x: object, y: object)') #args
+            
+            t = self.assertIn('isinstance' , tip)
+            self.CheckArgs(t, '(object, class_or_type_or_tuple)', '(object o, type typeinfo)', '(o: object, typeinfo: type)') #args
+            
+            t = self.assertIn('compile' , tip)
+            self.CheckArgs(t, '(source, filename, mode)', '()', '(o: object, name: str, val: object)') #args
+            
+            t = self.assertIn('setattr' , tip)
+            self.CheckArgs(t, '(object, name, value)', '(object o, str name, object val)', '(o: object, name: str, val: object)') #args
+            
+            try:
+                import compiler
+                compiler_module = 'compiler'
+            except ImportError:
+                try:
+                    import ast
+                    compiler_module = 'ast'
+                except ImportError:
+                    compiler_module = None
+                
+            if compiler_module is not None: #Not available in iron python
+                tip = _pydev_imports_tipper.GenerateTip(compiler_module) 
+                if compiler_module == 'compiler':
+                    self.assertArgs('parse', '(buf, mode)', tip)
+                    self.assertArgs('walk', '(tree, visitor, walker, verbose)', tip)
+                    self.assertIn('parseFile'      , tip)
+                else:
+                    self.assertArgs('parse', '(source, filename, mode)', tip)
+                    self.assertArgs('walk', '(node)', tip)
+                self.assertIn('parse'          , tip)
+            
+            
+        def CheckArgs(self, t, *expected):
+            for x in expected:
+                if x == t[2]:
+                    return
+            self.fail('Found: %s. Expected: %s' % (t[2], expected))
+            
+            
+        def assertArgs(self, tok, args, tips):
+            for a in tips[1]:
+                if tok == a[0]:
+                    self.assertEquals(args, a[2])
+                    return
+            raise AssertionError('%s not in %s' % (tok, tips))
+    
+        def assertIn(self, tok, tips):
+            for a in tips[1]:
+                if tok == a[0]:
+                    return a
+            raise AssertionError('%s not in %s' % (tok, tips))
+        
+        
+        def testSearch(self):
+            s = _pydev_imports_tipper.Search('inspect.ismodule')
+            (f, line, col), foundAs = s
+            self.assert_(line > 0)
+            
+            
+        def testDotNetLibraries(self):
+            if sys.platform == 'cli':
+                tip = _pydev_imports_tipper.GenerateTip('System.Drawing')
+                self.assertIn('Brushes' , tip)
+                
+                tip = _pydev_imports_tipper.GenerateTip('System.Drawing.Brushes')
+                self.assertIn('Aqua' , tip)
+            
+    
+        def testInspect(self):
+            
+            class C(object):
+                def metA(self, a, b):
+                    pass
+            
+            obj = C.metA
+            if inspect.ismethod (obj):
+                pass
+    #            print_ obj.im_func
+    #            print_ inspect.getargspec(obj.im_func)
+                
+            
+    def suite():
+        s = unittest.TestSuite()
+        s.addTest(Test("testImports5"))
+        unittest.TextTestRunner(verbosity=2).run(s)
+
+
+if __name__ == '__main__':
+    if sys.platform.find('java') == -1:
+#        suite()
+        unittest.main()
+    else:
+        sys.stdout.write('Not running python tests in platform: %s\n' % (sys.platform,))
+    
diff --git a/python/helpers/pydev/tests_mainloop/README b/python/helpers/pydev/tests_mainloop/README
new file mode 100644
index 0000000..65e699b
--- /dev/null
+++ b/python/helpers/pydev/tests_mainloop/README
@@ -0,0 +1,4 @@
+# Parts of IPython, files from: https://github.com/ipython/ipython/tree/rel-1.0.0/examples/lib
+# The files in this folder are manual tests for main loop integration
+
+# These tests have been modified to work in the PyDev Console context
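+
+# Typical manual run (an assumption based on the docstrings in the gui-*.py files, not a documented
+# procedure): enable the matching event loop integration in the PyDev console, then e.g.:
+#   execfile('gui-tk.py')   # a window should appear while the console stays interactive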
diff --git a/python/helpers/pydev/tests_mainloop/__not_in_default_pythonpath.txt b/python/helpers/pydev/tests_mainloop/__not_in_default_pythonpath.txt
new file mode 100644
index 0000000..29cdc5b
--- /dev/null
+++ b/python/helpers/pydev/tests_mainloop/__not_in_default_pythonpath.txt
@@ -0,0 +1 @@
+(no __init__.py file)
\ No newline at end of file
diff --git a/python/helpers/pydev/tests_mainloop/gui-glut.py b/python/helpers/pydev/tests_mainloop/gui-glut.py
new file mode 100644
index 0000000..f05a4bc
--- /dev/null
+++ b/python/helpers/pydev/tests_mainloop/gui-glut.py
@@ -0,0 +1,50 @@
+#!/usr/bin/env python
+"""Simple GLUT example to manually test event loop integration.
+
+To run this:
+1) Enable the PyDev GUI event loop integration for glut
+2) do an execfile on this script
+3) ensure you have a working GUI simultaneously with an
+   interactive console
+4) run: gl.glClearColor(1,1,1,1)
+"""
+
+import sys
+import OpenGL.GL as gl
+import OpenGL.GLUT as glut
+
+def close():
+    glut.glutDestroyWindow(glut.glutGetWindow())
+
+def display():
+    gl.glClear (gl.GL_COLOR_BUFFER_BIT | gl.GL_DEPTH_BUFFER_BIT)
+    glut.glutSwapBuffers()
+
+def resize(width,height):
+    gl.glViewport(0, 0, width, height+4)
+    gl.glMatrixMode(gl.GL_PROJECTION)
+    gl.glLoadIdentity()
+    gl.glOrtho(0, width, 0, height+4, -1, 1)
+    gl.glMatrixMode(gl.GL_MODELVIEW)
+
+if glut.glutGetWindow() > 0:
+    interactive = True
+    glut.glutInit(sys.argv)
+    glut.glutInitDisplayMode(glut.GLUT_DOUBLE |
+                             glut.GLUT_RGBA   |
+                             glut.GLUT_DEPTH)
+else:
+    interactive = False
+
+glut.glutCreateWindow('gui-glut')
+glut.glutDisplayFunc(display)
+glut.glutReshapeFunc(resize)
+# This is necessary on osx to be able to close the window
+#  (else the close button is disabled)
+if sys.platform == 'darwin' and not bool(glut.HAVE_FREEGLUT):
+    glut.glutWMCloseFunc(close)
+gl.glClearColor(0,0,0,1)
+
+if not interactive:
+    glut.glutMainLoop()
diff --git a/python/helpers/pydev/tests_mainloop/gui-gtk.py b/python/helpers/pydev/tests_mainloop/gui-gtk.py
new file mode 100644
index 0000000..978f8f9
--- /dev/null
+++ b/python/helpers/pydev/tests_mainloop/gui-gtk.py
@@ -0,0 +1,34 @@
+#!/usr/bin/env python
+"""Simple GTK example to manually test event loop integration.
+
+To run this:
+1) Enable the PyDev GUI event loop integration for gtk
+2) do an execfile on this script
+3) ensure you have a working GUI simultaneously with an
+   interactive console
+"""
+
+import pygtk
+pygtk.require('2.0')
+import gtk
+
+
+def hello_world(widget, data=None):
+    print("Hello World")
+
+def delete_event(widget, event, data=None):
+    return False
+
+def destroy(widget, data=None):
+    gtk.main_quit()
+
+window = gtk.Window(gtk.WINDOW_TOPLEVEL)
+window.connect("delete_event", delete_event)
+window.connect("destroy", destroy)
+button = gtk.Button("Hello World")
+button.connect("clicked", hello_world, None)
+
+window.add(button)
+button.show()
+window.show()
+
diff --git a/python/helpers/pydev/tests_mainloop/gui-gtk3.py b/python/helpers/pydev/tests_mainloop/gui-gtk3.py
new file mode 100644
index 0000000..a787f7e
--- /dev/null
+++ b/python/helpers/pydev/tests_mainloop/gui-gtk3.py
@@ -0,0 +1,32 @@
+#!/usr/bin/env python
+"""Simple Gtk example to manually test event loop integration.
+
+To run this:
+1) Enable the PyDev GUI event loop integration for gtk3
+2) do an execfile on this script
+3) ensure you have a working GUI simultaneously with an
+   interactive console
+"""
+
+from gi.repository import Gtk
+
+
+def hello_world(widget, data=None):
+    print("Hello World")
+
+def delete_event(widget, event, data=None):
+    return False
+
+def destroy(widget, data=None):
+    Gtk.main_quit()
+
+window = Gtk.Window(Gtk.WindowType.TOPLEVEL)
+window.connect("delete_event", delete_event)
+window.connect("destroy", destroy)
+button = Gtk.Button("Hello World")
+button.connect("clicked", hello_world, None)
+
+window.add(button)
+button.show()
+window.show()
+
diff --git a/python/helpers/pydev/tests_mainloop/gui-pyglet.py b/python/helpers/pydev/tests_mainloop/gui-pyglet.py
new file mode 100644
index 0000000..b646093
--- /dev/null
+++ b/python/helpers/pydev/tests_mainloop/gui-pyglet.py
@@ -0,0 +1,27 @@
+#!/usr/bin/env python
+"""Simple pyglet example to manually test event loop integration.
+
+To run this:
+1) Enable the PyDev GUI event loop integration for pyglet
+2) do an execfile on this script
+3) ensure you have a working GUI simultaneously with an
+   interactive console
+"""
+
+import pyglet
+
+
+window = pyglet.window.Window()
+label = pyglet.text.Label('Hello, world',
+                          font_name='Times New Roman',
+                          font_size=36,
+                          x=window.width//2, y=window.height//2,
+                          anchor_x='center', anchor_y='center')
+@window.event
+def on_close():
+    window.close()
+
+@window.event
+def on_draw():
+    window.clear()
+    label.draw()
diff --git a/python/helpers/pydev/tests_mainloop/gui-qt.py b/python/helpers/pydev/tests_mainloop/gui-qt.py
new file mode 100644
index 0000000..c27cbd6
--- /dev/null
+++ b/python/helpers/pydev/tests_mainloop/gui-qt.py
@@ -0,0 +1,35 @@
+#!/usr/bin/env python
+"""Simple Qt4 example to manually test event loop integration.
+
+To run this:
+1) Enable the PyDev GUI event loop integration for qt
+2) do an execfile on this script
+3) ensure you have a working GUI simultaneously with an
+   interactive console
+
+Ref: Modified from http://zetcode.com/tutorials/pyqt4/firstprograms/
+"""
+
+import sys
+from PyQt4 import QtGui, QtCore
+
+class SimpleWindow(QtGui.QWidget):
+    def __init__(self, parent=None):
+        QtGui.QWidget.__init__(self, parent)
+
+        self.setGeometry(300, 300, 200, 80)
+        self.setWindowTitle('Hello World')
+
+        quit = QtGui.QPushButton('Close', self)
+        quit.setGeometry(10, 10, 60, 35)
+
+        self.connect(quit, QtCore.SIGNAL('clicked()'),
+                     self, QtCore.SLOT('close()'))
+
+if __name__ == '__main__':
+    app = QtCore.QCoreApplication.instance()
+    if app is None:
+        app = QtGui.QApplication([])
+
+    sw = SimpleWindow()
+    sw.show()
diff --git a/python/helpers/pydev/tests_mainloop/gui-tk.py b/python/helpers/pydev/tests_mainloop/gui-tk.py
new file mode 100644
index 0000000..69ceb0b
--- /dev/null
+++ b/python/helpers/pydev/tests_mainloop/gui-tk.py
@@ -0,0 +1,31 @@
+#!/usr/bin/env python
+"""Simple Tk example to manually test event loop integration.
+
+To run this:
+1) Enable the PyDev GUI event loop integration for tk
+2) do an execfile on this script
+3) ensure you have a working GUI simultaneously with an
+   interactive console
+"""
+
+try:
+    from Tkinter import *
+except:
+    # Python 3
+    from tkinter import *
+
+class MyApp:
+
+    def __init__(self, root):
+        frame = Frame(root)
+        frame.pack()
+
+        self.button = Button(frame, text="Hello", command=self.hello_world)
+        self.button.pack(side=LEFT)
+
+    def hello_world(self):
+        print("Hello World!")
+
+root = Tk()
+
+app = MyApp(root)
diff --git a/python/helpers/pydev/tests_mainloop/gui-wx.py b/python/helpers/pydev/tests_mainloop/gui-wx.py
new file mode 100644
index 0000000..2101e7f
--- /dev/null
+++ b/python/helpers/pydev/tests_mainloop/gui-wx.py
@@ -0,0 +1,101 @@
+#!/usr/bin/env python
+"""
+A Simple wx example to test PyDev's event loop integration.
+
+To run this:
+1) Enable the PyDev GUI event loop integration for wx
+2) do an execfile on this script
+3) ensure you have a working GUI simultaneously with an
+   interactive console
+
+Ref: Modified from wxPython source code wxPython/samples/simple/simple.py
+"""
+
+import wx
+
+
+class MyFrame(wx.Frame):
+    """
+    This is MyFrame.  It just shows a few controls on a wxPanel,
+    and has a simple menu.
+    """
+    def __init__(self, parent, title):
+        wx.Frame.__init__(self, parent, -1, title,
+                          pos=(150, 150), size=(350, 200))
+
+        # Create the menubar
+        menuBar = wx.MenuBar()
+
+        # and a menu
+        menu = wx.Menu()
+
+        # Add an item to the menu. Using \tKeyName automatically
+        # creates an accelerator; the third param is help text
+        # that will show up in the statusbar.
+        menu.Append(wx.ID_EXIT, "E&xit\tAlt-X", "Exit this simple sample")
+
+        # bind the menu event to an event handler
+        self.Bind(wx.EVT_MENU, self.OnTimeToClose, id=wx.ID_EXIT)
+
+        # and put the menu on the menubar
+        menuBar.Append(menu, "&File")
+        self.SetMenuBar(menuBar)
+
+        self.CreateStatusBar()
+
+        # Now create the Panel to put the other controls on.
+        panel = wx.Panel(self)
+
+        # and a few controls
+        text = wx.StaticText(panel, -1, "Hello World!")
+        text.SetFont(wx.Font(14, wx.SWISS, wx.NORMAL, wx.BOLD))
+        text.SetSize(text.GetBestSize())
+        btn = wx.Button(panel, -1, "Close")
+        funbtn = wx.Button(panel, -1, "Just for fun...")
+
+        # bind the button events to handlers
+        self.Bind(wx.EVT_BUTTON, self.OnTimeToClose, btn)
+        self.Bind(wx.EVT_BUTTON, self.OnFunButton, funbtn)
+
+        # Use a sizer to lay out the controls, stacked vertically and with
+        # a 10-pixel border around each
+        sizer = wx.BoxSizer(wx.VERTICAL)
+        sizer.Add(text, 0, wx.ALL, 10)
+        sizer.Add(btn, 0, wx.ALL, 10)
+        sizer.Add(funbtn, 0, wx.ALL, 10)
+        panel.SetSizer(sizer)
+        panel.Layout()
+
+
+    def OnTimeToClose(self, evt):
+        """Event handler for the button click."""
+        print("See ya later!")
+        self.Close()
+
+    def OnFunButton(self, evt):
+        """Event handler for the button click."""
+        print("Having fun yet?")
+
+
+class MyApp(wx.App):
+    def OnInit(self):
+        frame = MyFrame(None, "Simple wxPython App")
+        self.SetTopWindow(frame)
+
+        print("Print statements go to this stdout window by default.")
+
+        frame.Show(True)
+        return True
+
+
+if __name__ == '__main__':
+
+    app = wx.GetApp()
+    if app is None:
+        app = MyApp(redirect=False, clearSigInt=False)
+    else:
+        frame = MyFrame(None, "Simple wxPython App")
+        app.SetTopWindow(frame)
+        print("Print statements go to this stdout window by default.")
+        frame.Show(True)
+
diff --git a/python/helpers/pydev/tests_python/__not_in_default_pythonpath.txt b/python/helpers/pydev/tests_python/__not_in_default_pythonpath.txt
new file mode 100644
index 0000000..29cdc5b
--- /dev/null
+++ b/python/helpers/pydev/tests_python/__not_in_default_pythonpath.txt
@@ -0,0 +1 @@
+(no __init__.py file)
\ No newline at end of file
diff --git a/python/helpers/pydev/tests_python/_debugger_case1.py b/python/helpers/pydev/tests_python/_debugger_case1.py
new file mode 100644
index 0000000..964d951
--- /dev/null
+++ b/python/helpers/pydev/tests_python/_debugger_case1.py
@@ -0,0 +1,61 @@
+import sys
+import weakref
+
+def SetUp():
+    observable = Observable()
+    observer = Observer()
+    observable.AddObserver(observer)
+    return observable
+
+
+class Observable(object):
+    def __init__(self):
+        self.observers = []
+        
+    def AddObserver(self, observer):
+        sys.stdout.write( 'observer %s\n' % (observer,))
+        ref = weakref.ref(observer)
+        self.observers.append(ref)
+        sys.stdout.write('weakref: %s\n' % (ref(),))
+        
+    def Notify(self):
+        for o in self.observers:
+            o = o()
+            
+            
+            try:
+                import gc
+            except ImportError:
+                o = None #some Jython versions don't have gc, so there's no sense testing this there
+            else:
+                try:
+                    gc.get_referrers(o)
+                except:
+                    o = None #jython and ironpython do not have get_referrers
+            
+            if o is not None:
+                sys.stdout.write('still observing %s\n' % (o,))
+                sys.stdout.write('number of referrers: %s\n' % len(gc.get_referrers(o)))
+                frame = gc.get_referrers(o)[0]
+                frame_referrers = gc.get_referrers(frame)
+                sys.stdout.write('frame referrer %s\n' % (frame_referrers,))
+                referrers1 = gc.get_referrers(frame_referrers[1])
+                sys.stdout.write('%s\n' % (referrers1,))
+                sys.stderr.write('TEST FAILED: The observer should have died, even when running in debug\n')
+            else:
+                sys.stdout.write('TEST SUCEEDED: observer died\n')
+                
+            sys.stdout.flush()
+            sys.stderr.flush()
+                
+class Observer(object):
+    pass
+
+    
+def main():
+    observable = SetUp()
+    observable.Notify()
+    
+    
+if __name__ == '__main__':
+    main()
diff --git a/python/helpers/pydev/tests_python/_debugger_case10.py b/python/helpers/pydev/tests_python/_debugger_case10.py
new file mode 100644
index 0000000..323deda
--- /dev/null
+++ b/python/helpers/pydev/tests_python/_debugger_case10.py
@@ -0,0 +1,18 @@
+def Method1():
+    print('m1')
+    print('m1')
+    
+def Method1a():
+    print('m1a')
+    print('m1a')
+
+def Method2():
+    print('m2 before')
+    Method1()
+    Method1a()
+    print('m2 after')
+
+   
+if __name__ == '__main__': 
+    Method2()
+    print('TEST SUCEEDED!')
diff --git a/python/helpers/pydev/tests_python/_debugger_case13.py b/python/helpers/pydev/tests_python/_debugger_case13.py
new file mode 100644
index 0000000..dbdbbd4
--- /dev/null
+++ b/python/helpers/pydev/tests_python/_debugger_case13.py
@@ -0,0 +1,43 @@
+
+class TestProperty(object):
+    def __init__(self, name = "Default"):
+        self._x = None
+        self.name = name
+
+    def get_name(self):
+        return self.__name
+
+
+    def set_name(self, value):
+        self.__name = value
+
+
+    def del_name(self):
+        del self.__name
+    name = property(get_name, set_name, del_name, "name's docstring")
+
+    @property
+    def x(self):
+        return self._x
+
+    @x.setter
+    def x(self, value):
+        self._x = value
+
+    @x.deleter
+    def x(self):
+        del self._x
+
+def main():
+    """
+    """
+    testObj = TestProperty()
+    testObj.x = 10
+    val = testObj.x
+    
+    testObj.name = "Pydev"
+    debugType = testObj.name
+    print('TEST SUCEEDED!')
+    
+if __name__ == '__main__':
+    main()
\ No newline at end of file
diff --git a/python/helpers/pydev/tests_python/_debugger_case14.py b/python/helpers/pydev/tests_python/_debugger_case14.py
new file mode 100644
index 0000000..2a5e181
--- /dev/null
+++ b/python/helpers/pydev/tests_python/_debugger_case14.py
@@ -0,0 +1,29 @@
+
+class Car(object):
+    """A car class"""
+    def __init__(self, model, make, color):
+        self.model = model
+        self.make = make
+        self.color = color
+        self.price = None
+
+    def get_price(self):
+        return self.price
+
+    def set_price(self, value):
+        self.price = value
+
+availableCars = []
+def main():
+    global availableCars
+
+    #Create a new car obj
+    carObj = Car("Maruti SX4", "2011", "Black")
+    carObj.set_price(950000)  # Set price
+    # Add this to available cars
+    availableCars.append(carObj)
+
+    print('TEST SUCEEDED')
+
+if __name__ == '__main__':
+    main()
diff --git a/python/helpers/pydev/tests_python/_debugger_case15.py b/python/helpers/pydev/tests_python/_debugger_case15.py
new file mode 100644
index 0000000..2a5e181
--- /dev/null
+++ b/python/helpers/pydev/tests_python/_debugger_case15.py
@@ -0,0 +1,29 @@
+
+class Car(object):
+    """A car class"""
+    def __init__(self, model, make, color):
+        self.model = model
+        self.make = make
+        self.color = color
+        self.price = None
+
+    def get_price(self):
+        return self.price
+
+    def set_price(self, value):
+        self.price = value
+
+availableCars = []
+def main():
+    global availableCars
+
+    #Create a new car obj
+    carObj = Car("Maruti SX4", "2011", "Black")
+    carObj.set_price(950000)  # Set price
+    # Add this to available cars
+    availableCars.append(carObj)
+
+    print('TEST SUCEEDED')
+
+if __name__ == '__main__':
+    main()
diff --git a/python/helpers/pydev/tests_python/_debugger_case15_execfile.py b/python/helpers/pydev/tests_python/_debugger_case15_execfile.py
new file mode 100644
index 0000000..7123209
--- /dev/null
+++ b/python/helpers/pydev/tests_python/_debugger_case15_execfile.py
@@ -0,0 +1 @@
+f=lambda x: 'val=%s' % x
diff --git a/python/helpers/pydev/tests_python/_debugger_case16.py b/python/helpers/pydev/tests_python/_debugger_case16.py
new file mode 100644
index 0000000..5622813
--- /dev/null
+++ b/python/helpers/pydev/tests_python/_debugger_case16.py
@@ -0,0 +1,12 @@
+# this test requires numpy to be installed
+import numpy
+
+def main():
+    smallarray = numpy.arange(100) * 1 + 1j
+    bigarray = numpy.arange(100000).reshape((10,10000)) # 100 thousand
+    hugearray = numpy.arange(10000000)  # 10 million
+
+    pass  # location of breakpoint after all arrays defined
+
+main()
+print('TEST SUCEEDED')
diff --git a/python/helpers/pydev/tests_python/_debugger_case17.py b/python/helpers/pydev/tests_python/_debugger_case17.py
new file mode 100644
index 0000000..0177683
--- /dev/null
+++ b/python/helpers/pydev/tests_python/_debugger_case17.py
@@ -0,0 +1,38 @@
+def get_here():
+    a = 10
+
+def foo(func): 
+    return func
+
+def m1(): # @DontTrace
+    get_here()
+
+# @DontTrace
+def m2():
+    get_here()
+
+# @DontTrace
+@foo
+def m3():
+    get_here()
+
+@foo
+@foo
+def m4(): # @DontTrace
+    get_here()
+
+
+def main():
+
+    m1()
+    
+    m2()
+    
+    m3()
+    
+    m4()
+
+if __name__ == '__main__':
+    main()
+    
+    print('TEST SUCEEDED')
diff --git a/python/helpers/pydev/tests_python/_debugger_case18.py b/python/helpers/pydev/tests_python/_debugger_case18.py
new file mode 100644
index 0000000..69717b2
--- /dev/null
+++ b/python/helpers/pydev/tests_python/_debugger_case18.py
@@ -0,0 +1,23 @@
+
+
+def m2(a):
+    a = 10
+    b = 20 #Break here and set a = 40
+    c = 30
+    
+    def function2():
+        print a
+
+    return a
+
+
+def m1(a):
+    return m2(a)
+
+
+if __name__ == '__main__':
+    found = m1(10)
+    if found == 40:
+        print('TEST SUCEEDED')
+    else:
+        raise AssertionError('Expected variable to be changed to 40. Found: %s' % (found,))
diff --git a/python/helpers/pydev/tests_python/_debugger_case19.py b/python/helpers/pydev/tests_python/_debugger_case19.py
new file mode 100644
index 0000000..aaf380c
--- /dev/null
+++ b/python/helpers/pydev/tests_python/_debugger_case19.py
@@ -0,0 +1,10 @@
+class A:
+    
+    def __init__(self):
+        self.__var = 10
+
+if __name__ == '__main__':
+    a = A()
+    print a._A__var
+    # Evaluating 'a.__var' should give a._A__var
+    print('TEST SUCEEDED')
diff --git a/python/helpers/pydev/tests_python/_debugger_case2.py b/python/helpers/pydev/tests_python/_debugger_case2.py
new file mode 100644
index 0000000..e47a5e21
--- /dev/null
+++ b/python/helpers/pydev/tests_python/_debugger_case2.py
@@ -0,0 +1,24 @@
+
+def Call4():
+    print('Start Call4')
+    print('End Call4')
+
+def Call3():
+    print('Start Call3')
+    Call4()
+    print('End Call3')
+
+def Call2():
+    print('Start Call2')
+    Call3()
+    print('End Call2 - a')
+    print('End Call2 - b')
+
+def Call1():
+    print('Start Call1')
+    Call2()
+    print('End Call1')
+
+if __name__ == '__main__':
+    Call1()
+    print('TEST SUCEEDED!')
diff --git a/python/helpers/pydev/tests_python/_debugger_case3.py b/python/helpers/pydev/tests_python/_debugger_case3.py
new file mode 100644
index 0000000..aa05032
--- /dev/null
+++ b/python/helpers/pydev/tests_python/_debugger_case3.py
@@ -0,0 +1,8 @@
+import time
+if __name__ == '__main__':
+    for i in range(15):
+        print('here')
+        time.sleep(.2)
+    
+    print('TEST SUCEEDED')
+        
diff --git a/python/helpers/pydev/tests_python/_debugger_case4.py b/python/helpers/pydev/tests_python/_debugger_case4.py
new file mode 100644
index 0000000..009da4a
--- /dev/null
+++ b/python/helpers/pydev/tests_python/_debugger_case4.py
@@ -0,0 +1,8 @@
+import time
+if __name__ == '__main__':
+    for i in range(10):
+        print('here %s' % i)
+        time.sleep(1)
+    
+    print('TEST SUCEEDED')
+        
diff --git a/python/helpers/pydev/tests_python/_debugger_case56.py b/python/helpers/pydev/tests_python/_debugger_case56.py
new file mode 100644
index 0000000..e5de28d
--- /dev/null
+++ b/python/helpers/pydev/tests_python/_debugger_case56.py
@@ -0,0 +1,9 @@
+def Call2():
+    print('Call2')
+
+def Call1(a):
+    print('Call1')
+    
+if __name__ == '__main__':
+    Call1(Call2())
+    print('TEST SUCEEDED!')
diff --git a/python/helpers/pydev/tests_python/_debugger_case7.py b/python/helpers/pydev/tests_python/_debugger_case7.py
new file mode 100644
index 0000000..263110b
--- /dev/null
+++ b/python/helpers/pydev/tests_python/_debugger_case7.py
@@ -0,0 +1,8 @@
+def Call():
+    variable_for_test_1 = 10
+    variable_for_test_2 = 20
+    variable_for_test_3 = 30
+    
+if __name__ == '__main__':
+    Call()
+    print 'TEST SUCEEDED!'
diff --git a/python/helpers/pydev/tests_python/_debugger_case89.py b/python/helpers/pydev/tests_python/_debugger_case89.py
new file mode 100644
index 0000000..e6f32dd
--- /dev/null
+++ b/python/helpers/pydev/tests_python/_debugger_case89.py
@@ -0,0 +1,16 @@
+def Method1():
+    print 'm1'
+
+def Method2():
+    print 'm2 before'
+    Method1()
+    print 'm2 after'
+
+def Method3():
+    print 'm3 before'
+    Method2()
+    print 'm3 after'
+   
+if __name__ == '__main__': 
+    Method3()
+    print 'TEST SUCEEDED!'
diff --git a/python/helpers/pydev/tests_python/test_additional_thread_info.py b/python/helpers/pydev/tests_python/test_additional_thread_info.py
new file mode 100644
index 0000000..6ae260d
--- /dev/null
+++ b/python/helpers/pydev/tests_python/test_additional_thread_info.py
@@ -0,0 +1,111 @@
+import sys
+import os
+sys.path.insert(0, os.path.split(os.path.split(__file__)[0])[0])
+
+from pydevd_constants import Null
+import unittest
+
+#=======================================================================================================================
+# TestCase
+#=======================================================================================================================
+class TestCase(unittest.TestCase):
+    '''
+        Used for profiling the PyDBAdditionalThreadInfoWithoutCurrentFramesSupport version
+    '''
+    
+    def testMetNoFramesSupport(self):
+        from pydevd_additional_thread_info import PyDBAdditionalThreadInfoWithoutCurrentFramesSupport
+        info = PyDBAdditionalThreadInfoWithoutCurrentFramesSupport()
+        
+        mainDebugger = Null()
+        filename = ''
+        base = ''
+        additionalInfo = Null()
+        t = Null()
+        frame = Null()
+        
+        times = 10
+        for i in range(times):
+            info.CreateDbFrame((mainDebugger, filename, additionalInfo, t, frame))
+            
+        #we haven't kept any reference, so, they must have been garbage-collected already!
+        self.assertEqual(0, len(info.IterFrames()))
+        
+        kept_frames = []
+        for i in range(times):
+            kept_frames.append(info.CreateDbFrame((mainDebugger, filename, additionalInfo, t, frame)))
+        
+        for i in range(times):
+            self.assertEqual(times, len(info.IterFrames()))
+            
+            
+    def testStartNewThread(self):
+        import pydevd
+        import thread
+        original = thread.start_new_thread
+        thread.start_new_thread = pydevd.pydev_start_new_thread
+        try:
+            found = {}
+            def function(a, b, *args, **kwargs):
+                found['a'] = a
+                found['b'] = b
+                found['args'] = args
+                found['kwargs'] = kwargs
+            thread.start_new_thread(function, (1,2,3,4), {'d':1, 'e':2})
+            import time
+            for _i in xrange(20):
+                if len(found) == 4:
+                    break
+                time.sleep(.1)
+            else:
+                raise AssertionError('Could not get to condition before 2 seconds')
+            
+            self.assertEqual({'a': 1, 'b': 2, 'args': (3, 4), 'kwargs': {'e': 2, 'd': 1}}, found)
+        finally:
+            thread.start_new_thread = original
+            
+            
+    def testStartNewThread2(self):
+        import pydevd
+        import thread
+        
+        original = thread.start_new_thread
+        thread.start_new_thread = pydevd.pydev_start_new_thread
+        try:
+            found = {}
+            
+            class F(object):
+                start_new_thread = thread.start_new_thread
+                
+                def start_it(self):
+                    try:
+                        self.start_new_thread(self.function, (1,2,3,4), {'d':1, 'e':2})
+                    except:
+                        import traceback;traceback.print_exc()
+
+                def function(self, a, b, *args, **kwargs):
+                    found['a'] = a
+                    found['b'] = b
+                    found['args'] = args
+                    found['kwargs'] = kwargs
+            
+            f = F()
+            f.start_it()
+            import time
+            for _i in xrange(20):
+                if len(found) == 4:
+                    break
+                time.sleep(.1)
+            else:
+                raise AssertionError('Could not get to condition before 2 seconds')
+            
+            self.assertEqual({'a': 1, 'b': 2, 'args': (3, 4), 'kwargs': {'e': 2, 'd': 1}}, found)
+        finally:
+            thread.start_new_thread = original
+        
+
+#=======================================================================================================================
+# main        
+#=======================================================================================================================
+if __name__ == '__main__':
+    unittest.main()
diff --git a/python/helpers/pydev/tests_python/test_debugger.py b/python/helpers/pydev/tests_python/test_debugger.py
new file mode 100644
index 0000000..3a216cb
--- /dev/null
+++ b/python/helpers/pydev/tests_python/test_debugger.py
@@ -0,0 +1,1205 @@
+'''
+    The idea is that we record the commands sent to the debugger and reproduce them from this script
+    (so, this script acts as the client: it spawns the debugger as a separate process and communicates
+    with it as if it were being driven from the outside).
+
+    Note that although this is a Python script, it spawns processes to run the tests under Jython,
+    IronPython and CPython.
+'''
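+
+# Wire-format sketch (inferred from the Write* helpers in AbstractWriterThread below, not from a
+# separate spec): each command is one line of tab-separated fields -- command id, sequence number,
+# then command-specific arguments -- e.g.:
+#
+#   501\t1\t1.0\tWINDOWS\tID                                           (WriteVersion)
+#   111\t3\t<bp_id>\tpython-line\t<file>\t<line>\t<func>\tNone\tNone   (WriteAddBreakpoint)
+#   101\t5\t                                                           (WriteMakeInitialRun)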
+CMD_SET_PROPERTY_TRACE, CMD_EVALUATE_CONSOLE_EXPRESSION, CMD_RUN_CUSTOM_OPERATION, CMD_ENABLE_DONT_TRACE = 133, 134, 135, 141
+PYTHON_EXE = None
+IRONPYTHON_EXE = None
+JYTHON_JAR_LOCATION = None
+JAVA_LOCATION = None
+
+
+import unittest
+import pydev_localhost
+
+port = None
+
+def UpdatePort():
+    global port
+    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+    s.bind((pydev_localhost.get_localhost(), 0))
+    _, port = s.getsockname()
+    s.close()
+
+import os
+def NormFile(filename):
+    try:
+        rPath = os.path.realpath  # @UndefinedVariable
+    except:
+        # jython does not support os.path.realpath
+        # realpath is a no-op on systems without islink support
+        rPath = os.path.abspath
+    return os.path.normcase(rPath(filename))
+
+PYDEVD_FILE = NormFile('../pydevd.py')
+import sys
+sys.path.append(os.path.dirname(PYDEVD_FILE))
+
+SHOW_WRITES_AND_READS = False
+SHOW_RESULT_STR = False
+SHOW_OTHER_DEBUG_INFO = False
+
+
+import subprocess
+import socket
+import threading
+import time
+from urllib import quote_plus, quote, unquote_plus
+
+
+#=======================================================================================================================
+# ReaderThread
+#=======================================================================================================================
+class ReaderThread(threading.Thread):
+
+    def __init__(self, sock):
+        threading.Thread.__init__(self)
+        self.setDaemon(True)
+        self.sock = sock
+        self.lastReceived = None
+
+    def run(self):
+        try:
+            buf = ''
+            while True:
+                l = self.sock.recv(1024)
+                buf += l
+
+                if '\n' in buf:
+                    self.lastReceived = buf
+                    buf = ''
+
+                if SHOW_WRITES_AND_READS:
+                    print 'Test Reader Thread Received %s' % self.lastReceived.strip()
+        except:
+            pass  # ok, finished it
+
+    def DoKill(self):
+        self.sock.close()
+
+#=======================================================================================================================
+# AbstractWriterThread
+#=======================================================================================================================
+class AbstractWriterThread(threading.Thread):
+
+    def __init__(self):
+        threading.Thread.__init__(self)
+        self.setDaemon(True)
+        self.finishedOk = False
+        self._next_breakpoint_id = 0
+
+    def DoKill(self):
+        if hasattr(self, 'readerThread'):
+            # if it's not created, it's not there...
+            self.readerThread.DoKill()
+        self.sock.close()
+
+    def Write(self, s):
+        last = self.readerThread.lastReceived
+        if SHOW_WRITES_AND_READS:
+            print 'Test Writer Thread Written %s' % (s,)
+        self.sock.send(s + '\n')
+        time.sleep(0.2)
+
+        i = 0
+        while last == self.readerThread.lastReceived and i < 10:
+            i += 1
+            time.sleep(0.1)
+
+
+    def StartSocket(self):
+        if SHOW_WRITES_AND_READS:
+            print 'StartSocket'
+
+        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+        s.bind(('', port))
+        s.listen(1)
+        if SHOW_WRITES_AND_READS:
+            print 'Waiting in socket.accept()'
+        newSock, addr = s.accept()
+        if SHOW_WRITES_AND_READS:
+            print 'Test Writer Thread Socket:', newSock, addr
+
+        readerThread = self.readerThread = ReaderThread(newSock)
+        readerThread.start()
+        self.sock = newSock
+
+        self._sequence = -1
+        # initial command is always the version
+        self.WriteVersion()
+
+    def NextBreakpointId(self):
+        self._next_breakpoint_id += 1
+        return self._next_breakpoint_id
+
+    def NextSeq(self):
+        self._sequence += 2
+        return self._sequence
+
+
+    def WaitForNewThread(self):
+        i = 0
+        # wait for hit breakpoint
+        while ('<xml><thread name="' not in self.readerThread.lastReceived) or ('<xml><thread name="pydevd.' in self.readerThread.lastReceived):
+            i += 1
+            time.sleep(1)
+            if i >= 15:
+                raise AssertionError('After %s seconds, a thread was not created.' % i)
+
+        # we have something like <xml><thread name="MainThread" id="12103472" /></xml>
+        splitted = self.readerThread.lastReceived.split('"')
+        threadId = splitted[3]
+        return threadId
+
+    def WaitForBreakpointHit(self, reason='111', get_line=False):
+        '''
+            108 is over
+            109 is return
+            111 is breakpoint
+        '''
+        i = 0
+        # wait for hit breakpoint
+        while not ('stop_reason="%s"' % reason) in self.readerThread.lastReceived:
+            i += 1
+            time.sleep(1)
+            if i >= 10:
+                raise AssertionError('After %s seconds, a break with reason: %s was not hit. Found: %s' % \
+                    (i, reason, self.readerThread.lastReceived))
+
+        # we have something like <xml><thread id="12152656" stop_reason="111"><frame id="12453120" ...
+        splitted = self.readerThread.lastReceived.split('"')
+        threadId = splitted[1]
+        frameId = splitted[7]
+        if get_line:
+            return threadId, frameId, int(splitted[13])
+
+        return threadId, frameId
+
+    def WaitForCustomOperation(self, expected):
+        i = 0
+        # wait for custom operation response, the response is double encoded
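+        #   e.g. '1+1' -> quote_plus -> '1%2B1' -> quote -> '1%252B1'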
+        expectedEncoded = quote(quote_plus(expected))
+        while not expectedEncoded in self.readerThread.lastReceived:
+            i += 1
+            time.sleep(1)
+            if i >= 10:
+                raise AssertionError('After %s seconds, the custom operation was not received. Last found:\n%s\nExpected (encoded):\n%s' %
+                    (i, self.readerThread.lastReceived, expectedEncoded))
+
+        return True
+
+    def WaitForEvaluation(self, expected):
+        return self._WaitFor(expected, 'the expected evaluation was not found')
+
+
+    def WaitForVars(self, expected):
+        i = 0
+        # wait for hit breakpoint
+        while not expected in self.readerThread.lastReceived:
+            i += 1
+            time.sleep(1)
+            if i >= 10:
+                raise AssertionError('After %s seconds, the vars were not found. Last found:\n%s' % 
+                    (i, self.readerThread.lastReceived))
+
+        return True
+
+    def WaitForVar(self, expected):
+        self._WaitFor(expected, 'the var was not found')
+        
+    def _WaitFor(self, expected, error_msg):
+        '''
+        :param expected:
+            If a list we'll work with any of the choices.
+        '''
+        if not isinstance(expected, (list, tuple)):
+            expected = [expected]
+            
+        i = 0
+        found = False
+        while not found:
+            last = self.readerThread.lastReceived
+            for e in expected:
+                if e in last:
+                    found = True
+                    break
+                
+            last = unquote_plus(last)
+            for e in expected:
+                if e in last:
+                    found = True
+                    break
+
+            if found:
+                break
+                        
+            i += 1
+            time.sleep(1)
+            if i >= 10:
+                raise AssertionError('After %s seconds, %s. Last found:\n%s' % 
+                    (i, error_msg, last))
+
+        return True
+
+    def WaitForMultipleVars(self, expected_vars):
+        i = 0
+        # wait for hit breakpoint
+        while True:
+            for expected in expected_vars:
+                if expected not in self.readerThread.lastReceived:
+                    break  # Break out of loop (and don't get to else)
+            else:
+                return True
+
+            i += 1
+            time.sleep(1)
+            if i >= 10:
+                raise AssertionError('After %s seconds, the vars were not found. Last found:\n%s' % 
+                    (i, self.readerThread.lastReceived))
+
+        return True
+
+    def WriteMakeInitialRun(self):
+        self.Write("101\t%s\t" % self.NextSeq())
+
+    def WriteVersion(self):
+        self.Write("501\t%s\t1.0\tWINDOWS\tID" % self.NextSeq())
+
+    def WriteAddBreakpoint(self, line, func):
+        '''
+            @param line: starts at 1
+        '''
+        breakpoint_id = self.NextBreakpointId()
+        self.Write("111\t%s\t%s\t%s\t%s\t%s\t%s\tNone\tNone" % (self.NextSeq(), breakpoint_id, 'python-line', self.TEST_FILE, line, func))
+        return breakpoint_id
+
+    def WriteRemoveBreakpoint(self, breakpoint_id):
+        self.Write("112\t%s\t%s\t%s\t%s" % (self.NextSeq(), 'python-line', self.TEST_FILE, breakpoint_id))
+
+    def WriteChangeVariable(self, thread_id, frame_id, varname, value):
+        self.Write("117\t%s\t%s\t%s\t%s\t%s\t%s" % (self.NextSeq(), thread_id, frame_id, 'FRAME', varname, value))
+
+    def WriteGetFrame(self, threadId, frameId):
+        self.Write("114\t%s\t%s\t%s\tFRAME" % (self.NextSeq(), threadId, frameId))
+
+    def WriteGetVariable(self, threadId, frameId, var_attrs):
+        self.Write("110\t%s\t%s\t%s\tFRAME\t%s" % (self.NextSeq(), threadId, frameId, var_attrs))
+
+    def WriteStepOver(self, threadId):
+        self.Write("108\t%s\t%s" % (self.NextSeq(), threadId,))
+
+    def WriteStepIn(self, threadId):
+        self.Write("107\t%s\t%s" % (self.NextSeq(), threadId,))
+
+    def WriteStepReturn(self, threadId):
+        self.Write("109\t%s\t%s" % (self.NextSeq(), threadId,))
+
+    def WriteSuspendThread(self, threadId):
+        self.Write("105\t%s\t%s" % (self.NextSeq(), threadId,))
+
+    def WriteRunThread(self, threadId):
+        self.Write("106\t%s\t%s" % (self.NextSeq(), threadId,))
+
+    def WriteKillThread(self, threadId):
+        self.Write("104\t%s\t%s" % (self.NextSeq(), threadId,))
+
+    def WriteDebugConsoleExpression(self, locator):
+        self.Write("%s\t%s\t%s" % (CMD_EVALUATE_CONSOLE_EXPRESSION, self.NextSeq(), locator))
+
+    def WriteCustomOperation(self, locator, style, codeOrFile, operation_fn_name):
+        self.Write("%s\t%s\t%s||%s\t%s\t%s" % (CMD_RUN_CUSTOM_OPERATION, self.NextSeq(), locator, style, codeOrFile, operation_fn_name))
+        
+    def WriteEvaluateExpression(self, locator, expression):
+        self.Write("113\t%s\t%s\t%s\t1" % (self.NextSeq(), locator, expression))
+
+    def WriteEnableDontTrace(self, enable):
+        if enable:
+            enable = 'true'
+        else:
+            enable = 'false'
+        self.Write("%s\t%s\t%s" % (CMD_ENABLE_DONT_TRACE, self.NextSeq(), enable))
+
+
+#=======================================================================================================================
+# WriterThreadCase19 - [Test Case]: Evaluate '__' attributes
+#======================================================================================================================
+class WriterThreadCase19(AbstractWriterThread):
+
+    TEST_FILE = NormFile('_debugger_case19.py')
+
+    def run(self):
+        self.StartSocket()
+        self.WriteAddBreakpoint(8, None)
+        self.WriteMakeInitialRun()
+
+        threadId, frameId, line = self.WaitForBreakpointHit('111', True)
+
+        assert line == 8, 'Expected return to be in line 8, was: %s' % line
+        
+        self.WriteEvaluateExpression('%s\t%s\t%s' % (threadId, frameId, 'LOCAL'), 'a.__var')
+        self.WaitForEvaluation('<var name="a.__var" type="int" value="int')
+        self.WriteRunThread(threadId)
+
+        
+        self.finishedOk = True
+
+
+#=======================================================================================================================
+# WriterThreadCase18 - [Test Case]: change local variable
+#======================================================================================================================
+class WriterThreadCase18(AbstractWriterThread):
+
+    TEST_FILE = NormFile('_debugger_case18.py')
+
+    def run(self):
+        self.StartSocket()
+        self.WriteAddBreakpoint(5, 'm2')
+        self.WriteMakeInitialRun()
+
+        thread_id, frame_id, line = self.WaitForBreakpointHit('111', True)
+        assert line == 5, 'Expected return to be in line 5, was: %s' % line
+
+        self.WriteChangeVariable(thread_id, frame_id, 'a', '40')
+        self.WriteRunThread(thread_id)
+        
+        self.finishedOk = True
+
+#=======================================================================================================================
+# WriterThreadCase17 - [Test Case]: dont trace
+#======================================================================================================================
+class WriterThreadCase17(AbstractWriterThread):
+
+    TEST_FILE = NormFile('_debugger_case17.py')
+
+    def run(self):
+        self.StartSocket()
+        self.WriteEnableDontTrace(True)
+        self.WriteAddBreakpoint(27, 'main')
+        self.WriteAddBreakpoint(29, 'main')
+        self.WriteAddBreakpoint(31, 'main')
+        self.WriteAddBreakpoint(33, 'main')
+        self.WriteMakeInitialRun()
+
+        for i in range(4):
+            threadId, frameId, line = self.WaitForBreakpointHit('111', True)
+    
+            self.WriteStepIn(threadId)
+            threadId, frameId, line = self.WaitForBreakpointHit('107', True)
+            # Should Skip step into properties setter
+            assert line == 2, 'Expected return to be in line 2, was: %s' % line
+            self.WriteRunThread(threadId)
+
+        
+        self.finishedOk = True
+
+#=======================================================================================================================
+# WriterThreadCase16 - [Test Case]: numpy.ndarray resolver
+#======================================================================================================================
+class WriterThreadCase16(AbstractWriterThread):
+
+    TEST_FILE = NormFile('_debugger_case16.py')
+
+    def run(self):
+        self.StartSocket()
+        self.WriteAddBreakpoint(9, 'main')
+        self.WriteMakeInitialRun()
+
+        threadId, frameId, line = self.WaitForBreakpointHit('111', True)
+
+        # In this test we check that the three arrays of different shapes, sizes and types
+        # are all resolved properly as ndarrays.
+
+        # First pass: check that we have all three expected variables defined
+        self.WriteGetFrame(threadId, frameId)
+        self.WaitForVars('<var name="smallarray" type="ndarray" value="ndarray%253A %255B  0.%252B1.j   1.%252B1.j   2.%252B1.j   3.%252B1.j   4.%252B1.j   5.%252B1.j   6.%252B1.j   7.%252B1.j%250A   8.%252B1.j   9.%252B1.j  10.%252B1.j  11.%252B1.j  12.%252B1.j  13.%252B1.j  14.%252B1.j  15.%252B1.j%250A  16.%252B1.j  17.%252B1.j  18.%252B1.j  19.%252B1.j  20.%252B1.j  21.%252B1.j  22.%252B1.j  23.%252B1.j%250A  24.%252B1.j  25.%252B1.j  26.%252B1.j  27.%252B1.j  28.%252B1.j  29.%252B1.j  30.%252B1.j  31.%252B1.j%250A  32.%252B1.j  33.%252B1.j  34.%252B1.j  35.%252B1.j  36.%252B1.j  37.%252B1.j  38.%252B1.j  39.%252B1.j%250A  40.%252B1.j  41.%252B1.j  42.%252B1.j  43.%252B1.j  44.%252B1.j  45.%252B1.j  46.%252B1.j  47.%252B1.j%250A  48.%252B1.j  49.%252B1.j  50.%252B1.j  51.%252B1.j  52.%252B1.j  53.%252B1.j  54.%252B1.j  55.%252B1.j%250A  56.%252B1.j  57.%252B1.j  58.%252B1.j  59.%252B1.j  60.%252B1.j  61.%252B1.j  62.%252B1.j  63.%252B1.j%250A  64.%252B1.j  65.%252B1.j  66.%252B1.j  67.%252B1.j  68.%252B1.j  69.%252B1.j  70.%252B1.j  71.%252B1.j%250A  72.%252B1.j  73.%252B1.j  74.%252B1.j  75.%252B1.j  76.%252B1.j  77.%252B1.j  78.%252B1.j  79.%252B1.j%250A  80.%252B1.j  81.%252B1.j  82.%252B1.j  83.%252B1.j  84.%252B1.j  85.%252B1.j  86.%252B1.j  87.%252B1.j%250A  88.%252B1.j  89.%252B1.j  90.%252B1.j  91.%252B1.j  92.%252B1.j  93.%252B1.j  94.%252B1.j  95.%252B1.j%250A  96.%252B1.j  97.%252B1.j  98.%252B1.j  99.%252B1.j%255D" isContainer="True" />')
+        self.WaitForVars('<var name="bigarray" type="ndarray" value="ndarray%253A %255B%255B    0     1     2 ...%252C  9997  9998  9999%255D%250A %255B10000 10001 10002 ...%252C 19997 19998 19999%255D%250A %255B20000 20001 20002 ...%252C 29997 29998 29999%255D%250A ...%252C %250A %255B70000 70001 70002 ...%252C 79997 79998 79999%255D%250A %255B80000 80001 80002 ...%252C 89997 89998 89999%255D%250A %255B90000 90001 90002 ...%252C 99997 99998 99999%255D%255D" isContainer="True" />')
+        self.WaitForVars('<var name="hugearray" type="ndarray" value="ndarray%253A %255B      0       1       2 ...%252C 9999997 9999998 9999999%255D" isContainer="True" />')
+
+        # For each variable, check each of the resolved (meta data) attributes...
+        self.WriteGetVariable(threadId, frameId, 'smallarray')
+        self.WaitForVar('<var name="min" type="complex128"')
+        self.WaitForVar('<var name="max" type="complex128"')
+        self.WaitForVar('<var name="shape" type="tuple"')
+        self.WaitForVar('<var name="dtype" type="dtype"')
+        self.WaitForVar('<var name="size" type="int"')
+        # ...and check that the internals are resolved properly
+        self.WriteGetVariable(threadId, frameId, 'smallarray\t__internals__')
+        self.WaitForVar('<var name="%27size%27')
+
+        self.WriteGetVariable(threadId, frameId, 'bigarray')
+        self.WaitForVar(['<var name="min" type="int64" value="int64%253A 0" />', '<var name="size" type="int" value="int%3A 100000" />'])  # TODO: When on a 32 bit python we get an int32 (which makes this test fail).
+        self.WaitForVar(['<var name="max" type="int64" value="int64%253A 99999" />', '<var name="max" type="int32" value="int32%253A 99999" />'])
+        self.WaitForVar('<var name="shape" type="tuple"')
+        self.WaitForVar('<var name="dtype" type="dtype"')
+        self.WaitForVar('<var name="size" type="int"')
+        self.WriteGetVariable(threadId, frameId, 'bigarray\t__internals__')
+        self.WaitForVar('<var name="%27size%27')
+
+        # this one is different because it crosses the magic threshold where we don't calculate
+        # the min/max
+        self.WriteGetVariable(threadId, frameId, 'hugearray')
+        self.WaitForVar('<var name="min" type="str" value="str%253A ndarray too big%252C calculating min would slow down debugging" />')
+        self.WaitForVar('<var name="max" type="str" value="str%253A ndarray too big%252C calculating max would slow down debugging" />')
+        self.WaitForVar('<var name="shape" type="tuple"')
+        self.WaitForVar('<var name="dtype" type="dtype"')
+        self.WaitForVar('<var name="size" type="int"')
+        self.WriteGetVariable(threadId, frameId, 'hugearray\t__internals__')
+        self.WaitForVar('<var name="%27size%27')
+
+        self.WriteRunThread(threadId)
+        self.finishedOk = True
+
+
+#=======================================================================================================================
+# WriterThreadCase15 - [Test Case]: Custom Commands
+#======================================================================================================================
+class WriterThreadCase15(AbstractWriterThread):
+
+    TEST_FILE = NormFile('_debugger_case15.py')
+
+    def run(self):
+        self.StartSocket()
+        self.WriteAddBreakpoint(22, 'main')
+        self.WriteMakeInitialRun()
+
+        threadId, frameId, line = self.WaitForBreakpointHit('111', True)
+
+        # Access some variable
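+        # A custom operation sends a variable locator plus a style ("EXEC" for inline code,
+        # "EXECFILE" for a script), the code or file, and the name of the function to apply
+        # to the resolved value -- here f(carObj.color) is expected to produce 'val=Black'.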
+        self.WriteCustomOperation("%s\t%s\tEXPRESSION\tcarObj.color" % (threadId, frameId), "EXEC", "f=lambda x: 'val=%s' % x", "f")
+        self.WaitForCustomOperation('val=Black')
+        assert 7 == self._sequence, 'Expected 7. Had: %s' % self._sequence
+
+        self.WriteCustomOperation("%s\t%s\tEXPRESSION\tcarObj.color" % (threadId, frameId), "EXECFILE", NormFile('_debugger_case15_execfile.py'), "f")
+        self.WaitForCustomOperation('val=Black')
+        assert 9 == self._sequence, 'Expected 9. Had: %s' % self._sequence
+
+        self.WriteRunThread(threadId)
+        self.finishedOk = True
+
+
+
+#=======================================================================================================================
+# WriterThreadCase14 - [Test Case]: Interactive Debug Console
+#======================================================================================================================
+class WriterThreadCase14(AbstractWriterThread):
+
+    TEST_FILE = NormFile('_debugger_case14.py')
+
+    def run(self):
+        self.StartSocket()
+        self.WriteAddBreakpoint(22, 'main')
+        self.WriteMakeInitialRun()
+
+        threadId, frameId, line = self.WaitForBreakpointHit('111', True)
+        assert threadId, '%s not valid.' % threadId
+        assert frameId, '%s not valid.' % frameId
+
+        # Access some variable
+        self.WriteDebugConsoleExpression("%s\t%s\tEVALUATE\tcarObj.color" % (threadId, frameId))
+        self.WaitForMultipleVars(['<more>False</more>', '%27Black%27'])
+        assert 7 == self._sequence, 'Expected 7. Had: %s' % self._sequence
+
+        # Change some variable
+        self.WriteDebugConsoleExpression("%s\t%s\tEVALUATE\tcarObj.color='Red'" % (threadId, frameId))
+        self.WriteDebugConsoleExpression("%s\t%s\tEVALUATE\tcarObj.color" % (threadId, frameId))
+        self.WaitForMultipleVars(['<more>False</more>', '%27Red%27'])
+        assert 11 == self._sequence, 'Expected 11. Had: %s' % self._sequence
+
+        # Iterate some loop
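+        # The console protocol answers with <more>True</more> while the block is incomplete
+        # (it is waiting for further input) and <more>False</more> once the statement has run,
+        # at which point the captured output is reported.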
+        self.WriteDebugConsoleExpression("%s\t%s\tEVALUATE\tfor i in range(3):" % (threadId, frameId))
+        self.WaitForVars('<xml><more>True</more></xml>')
+        self.WriteDebugConsoleExpression("%s\t%s\tEVALUATE\t    print i" % (threadId, frameId))
+        self.WriteDebugConsoleExpression("%s\t%s\tEVALUATE\t" % (threadId, frameId))
+        self.WaitForVars('<xml><more>False</more><output message="0"></output><output message="1"></output><output message="2"></output></xml>')
+        assert 17 == self._sequence, 'Expected 17. Had: %s' % self._sequence
+
+        self.WriteRunThread(threadId)
+        self.finishedOk = True
+
+
+#=======================================================================================================================
+# WriterThreadCase13
+#======================================================================================================================
+class WriterThreadCase13(AbstractWriterThread):
+
+    TEST_FILE = NormFile('_debugger_case13.py')
+
+    def run(self):
+        self.StartSocket()
+        self.WriteAddBreakpoint(35, 'main')
+        self.Write("%s\t%s\t%s" % (CMD_SET_PROPERTY_TRACE, self.NextSeq(), "true;false;false;true"))
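+        # The semicolon-separated flags toggle tracing of property accessors; judging by the
+        # assertions below, "true;false;false;true" lets step-into enter getters/setters while
+        # "true;true;true;true" makes step-into skip over them.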
+        self.WriteMakeInitialRun()
+        threadId, frameId, line = self.WaitForBreakpointHit('111', True)
+
+        self.WriteGetFrame(threadId, frameId)
+
+        self.WriteStepIn(threadId)
+        threadId, frameId, line = self.WaitForBreakpointHit('107', True)
+        # Should go inside setter method
+        assert line == 25, 'Expected return to be in line 25, was: %s' % line
+
+        self.WriteStepIn(threadId)
+        threadId, frameId, line = self.WaitForBreakpointHit('107', True)
+
+        self.WriteStepIn(threadId)
+        threadId, frameId, line = self.WaitForBreakpointHit('107', True)
+        # Should go inside getter method
+        assert line == 21, 'Expected return to be in line 21, was: %s' % line
+
+        self.WriteStepIn(threadId)
+        threadId, frameId, line = self.WaitForBreakpointHit('107', True)
+
+        # Disable property tracing
+        self.Write("%s\t%s\t%s" % (CMD_SET_PROPERTY_TRACE, self.NextSeq(), "true;true;true;true"))
+        self.WriteStepIn(threadId)
+        threadId, frameId, line = self.WaitForBreakpointHit('107', True)
+        # Should skip stepping into the properties setter
+        assert line == 39, 'Expected return to be in line 39, was: %s' % line
+
+        # Enable property tracing
+        self.Write("%s\t%s\t%s" % (CMD_SET_PROPERTY_TRACE, self.NextSeq(), "true;false;false;true"))
+        self.WriteStepIn(threadId)
+        threadId, frameId, line = self.WaitForBreakpointHit('107', True)
+        # Should go inside getter method
+        assert line == 8, 'Expected return to be in line 8, was: %s' % line
+
+        self.WriteRunThread(threadId)
+
+        self.finishedOk = True
+
+#=======================================================================================================================
+# WriterThreadCase12
+#======================================================================================================================
+class WriterThreadCase12(AbstractWriterThread):
+
+    TEST_FILE = NormFile('_debugger_case10.py')
+
+    def run(self):
+        self.StartSocket()
+        self.WriteAddBreakpoint(2, '')  # Should not be hit: an empty function name (as opposed to None) means the breakpoint only matches global scope.
+        self.WriteAddBreakpoint(6, 'Method1a')
+        self.WriteAddBreakpoint(11, 'Method2')
+        self.WriteMakeInitialRun()
+
+        threadId, frameId, line = self.WaitForBreakpointHit('111', True)
+
+        assert line == 11, 'Expected return to be in line 11, was: %s' % line
+
+        self.WriteStepReturn(threadId)
+
+        threadId, frameId, line = self.WaitForBreakpointHit('111', True)  # not a return (it stopped at the other breakpoint)
+
+        assert line == 6, 'Expected return to be in line 6, was: %s' % line
+
+        self.WriteRunThread(threadId)
+
+        assert 13 == self._sequence, 'Expected 13. Had: %s' % self._sequence
+
+        self.finishedOk = True
+
+
+
+#=======================================================================================================================
+# WriterThreadCase11
+#======================================================================================================================
+class WriterThreadCase11(AbstractWriterThread):
+
+    TEST_FILE = NormFile('_debugger_case10.py')
+
+    def run(self):
+        self.StartSocket()
+        self.WriteAddBreakpoint(2, 'Method1')
+        self.WriteMakeInitialRun()
+
+        threadId, frameId = self.WaitForBreakpointHit('111')
+
+        self.WriteStepOver(threadId)
+
+        threadId, frameId, line = self.WaitForBreakpointHit('108', True)
+
+        assert line == 3, 'Expected return to be in line 3, was: %s' % line
+
+        self.WriteStepOver(threadId)
+
+        threadId, frameId, line = self.WaitForBreakpointHit('108', True)
+
+        assert line == 11, 'Expected return to be in line 11, was: %s' % line
+
+        self.WriteStepOver(threadId)
+
+        threadId, frameId, line = self.WaitForBreakpointHit('108', True)
+
+        assert line == 12, 'Expected return to be in line 12, was: %s' % line
+
+        self.WriteRunThread(threadId)
+
+        assert 13 == self._sequence, 'Expected 13. Had: %s' % self._sequence
+
+        self.finishedOk = True
+
+
+
+
+#=======================================================================================================================
+# WriterThreadCase10
+#======================================================================================================================
+class WriterThreadCase10(AbstractWriterThread):
+
+    TEST_FILE = NormFile('_debugger_case10.py')
+
+    def run(self):
+        self.StartSocket()
+        self.WriteAddBreakpoint(2, 'None')  # None or the method name should make it hit.
+        self.WriteMakeInitialRun()
+
+        threadId, frameId = self.WaitForBreakpointHit('111')
+
+        self.WriteStepReturn(threadId)
+
+        threadId, frameId, line = self.WaitForBreakpointHit('109', True)
+
+        assert line == 11, 'Expected return to be in line 11, was: %s' % line
+
+        self.WriteStepOver(threadId)
+
+        threadId, frameId, line = self.WaitForBreakpointHit('108', True)
+
+        assert line == 12, 'Expected return to be in line 12, was: %s' % line
+
+        self.WriteRunThread(threadId)
+
+        assert 11 == self._sequence, 'Expected 11. Had: %s' % self._sequence
+
+        self.finishedOk = True
+
+
+
+#=======================================================================================================================
+# WriterThreadCase9
+#======================================================================================================================
+class WriterThreadCase9(AbstractWriterThread):
+
+    TEST_FILE = NormFile('_debugger_case89.py')
+
+    def run(self):
+        self.StartSocket()
+        self.WriteAddBreakpoint(10, 'Method3')
+        self.WriteMakeInitialRun()
+
+        threadId, frameId = self.WaitForBreakpointHit('111')
+
+        self.WriteStepOver(threadId)
+
+        threadId, frameId, line = self.WaitForBreakpointHit('108', True)
+
+        assert line == 11, 'Expected return to be in line 11, was: %s' % line
+
+        self.WriteStepOver(threadId)
+
+        threadId, frameId, line = self.WaitForBreakpointHit('108', True)
+
+        assert line == 12, 'Expected return to be in line 12, was: %s' % line
+
+        self.WriteRunThread(threadId)
+
+        assert 11 == self._sequence, 'Expected 11. Had: %s' % self._sequence
+
+        self.finishedOk = True
+
+
+#=======================================================================================================================
+# WriterThreadCase8
+#======================================================================================================================
+class WriterThreadCase8(AbstractWriterThread):
+
+    TEST_FILE = NormFile('_debugger_case89.py')
+
+    def run(self):
+        self.StartSocket()
+        self.WriteAddBreakpoint(10, 'Method3')
+        self.WriteMakeInitialRun()
+
+        threadId, frameId = self.WaitForBreakpointHit('111')
+
+        self.WriteStepReturn(threadId)
+
+        threadId, frameId, line = self.WaitForBreakpointHit('109', True)
+
+        assert line == 15, 'Expected return to be in line 15, was: %s' % line
+
+        self.WriteRunThread(threadId)
+
+        assert 9 == self._sequence, 'Expected 9. Had: %s' % self._sequence
+
+        self.finishedOk = True
+
+
+
+
+#=======================================================================================================================
+# WriterThreadCase7
+#======================================================================================================================
+class WriterThreadCase7(AbstractWriterThread):
+
+    TEST_FILE = NormFile('_debugger_case7.py')
+
+    def run(self):
+        self.StartSocket()
+        self.WriteAddBreakpoint(2, 'Call')
+        self.WriteMakeInitialRun()
+
+        threadId, frameId = self.WaitForBreakpointHit('111')
+
+        self.WriteGetFrame(threadId, frameId)
+
+        self.WaitForVars('<xml></xml>')  # no vars at this point
+
+        self.WriteStepOver(threadId)
+
+        self.WriteGetFrame(threadId, frameId)
+
+        self.WaitForVars('<xml><var name="variable_for_test_1" type="int" value="int%253A 10" />%0A</xml>')
+
+        self.WriteStepOver(threadId)
+
+        self.WriteGetFrame(threadId, frameId)
+
+        self.WaitForVars('<xml><var name="variable_for_test_1" type="int" value="int%253A 10" />%0A<var name="variable_for_test_2" type="int" value="int%253A 20" />%0A</xml>')
+
+        self.WriteRunThread(threadId)
+
+        assert 17 == self._sequence, 'Expected 17. Had: %s' % self._sequence
+
+        self.finishedOk = True
+
+
+
+#=======================================================================================================================
+# WriterThreadCase6
+#=======================================================================================================================
+class WriterThreadCase6(AbstractWriterThread):
+
+    TEST_FILE = NormFile('_debugger_case56.py')
+
+    def run(self):
+        self.StartSocket()
+        self.WriteAddBreakpoint(2, 'Call2')
+        self.WriteMakeInitialRun()
+
+        threadId, frameId = self.WaitForBreakpointHit()
+
+        self.WriteGetFrame(threadId, frameId)
+
+        self.WriteStepReturn(threadId)
+
+        threadId, frameId, line = self.WaitForBreakpointHit('109', True)
+
+        assert line == 8, 'Expecting it to go to line 8. Went to: %s' % line
+
+        self.WriteStepIn(threadId)
+
+        threadId, frameId, line = self.WaitForBreakpointHit('107', True)
+
+        # goes to line 4 in jython (function declaration line)
+        assert line in (4, 5), 'Expecting it to go to line 4 or 5. Went to: %s' % line
+
+        self.WriteRunThread(threadId)
+
+        assert 13 == self._sequence, 'Expected 13. Had: %s' % self._sequence
+
+        self.finishedOk = True
+
+#=======================================================================================================================
+# WriterThreadCase5
+#=======================================================================================================================
+class WriterThreadCase5(AbstractWriterThread):
+
+    TEST_FILE = NormFile('_debugger_case56.py')
+
+    def run(self):
+        self.StartSocket()
+        breakpoint_id = self.WriteAddBreakpoint(2, 'Call2')
+        self.WriteMakeInitialRun()
+
+        threadId, frameId = self.WaitForBreakpointHit()
+
+        self.WriteGetFrame(threadId, frameId)
+
+        self.WriteRemoveBreakpoint(breakpoint_id)
+
+        self.WriteStepReturn(threadId)
+
+        threadId, frameId, line = self.WaitForBreakpointHit('109', True)
+
+        assert line == 8, 'Expecting it to go to line 8. Went to: %s' % line
+
+        self.WriteStepIn(threadId)
+
+        threadId, frameId, line = self.WaitForBreakpointHit('107', True)
+
+        # goes to line 4 in jython (function declaration line)
+        assert line in (4, 5), 'Expecting it to go to line 4 or 5. Went to: %s' % line
+
+        self.WriteRunThread(threadId)
+
+        assert 15 == self._sequence, 'Expected 15. Had: %s' % self._sequence
+
+        self.finishedOk = True
+
+
+#=======================================================================================================================
+# WriterThreadCase4
+#=======================================================================================================================
+class WriterThreadCase4(AbstractWriterThread):
+
+    TEST_FILE = NormFile('_debugger_case4.py')
+
+    def run(self):
+        self.StartSocket()
+        self.WriteMakeInitialRun()
+
+        threadId = self.WaitForNewThread()
+
+        self.WriteSuspendThread(threadId)
+
+        time.sleep(4)  # wait long enough for the test to finish if it wasn't suspended
+
+        self.WriteRunThread(threadId)
+
+        self.finishedOk = True
+
+
+#=======================================================================================================================
+# WriterThreadCase3
+#=======================================================================================================================
+class WriterThreadCase3(AbstractWriterThread):
+
+    TEST_FILE = NormFile('_debugger_case3.py')
+
+    def run(self):
+        self.StartSocket()
+        self.WriteMakeInitialRun()
+        time.sleep(1)
+        breakpoint_id = self.WriteAddBreakpoint(4, '')
+        self.WriteAddBreakpoint(5, 'FuncNotAvailable')  # Check that it doesn't get hit in the global scope when a function name is specified
+
+        threadId, frameId = self.WaitForBreakpointHit()
+
+        self.WriteGetFrame(threadId, frameId)
+
+        self.WriteRunThread(threadId)
+
+        threadId, frameId = self.WaitForBreakpointHit()
+
+        self.WriteGetFrame(threadId, frameId)
+
+        self.WriteRemoveBreakpoint(breakpoint_id)
+
+        self.WriteRunThread(threadId)
+
+        assert 17 == self._sequence, 'Expected 17. Had: %s' % self._sequence
+
+        self.finishedOk = True
+
+#=======================================================================================================================
+# WriterThreadCase2
+#=======================================================================================================================
+class WriterThreadCase2(AbstractWriterThread):
+
+    TEST_FILE = NormFile('_debugger_case2.py')
+
+    def run(self):
+        self.StartSocket()
+        self.WriteAddBreakpoint(3, 'Call4')  # seq = 3
+        self.WriteMakeInitialRun()
+
+        threadId, frameId = self.WaitForBreakpointHit()
+
+        self.WriteGetFrame(threadId, frameId)
+
+        self.WriteAddBreakpoint(14, 'Call2')
+
+        self.WriteRunThread(threadId)
+
+        threadId, frameId = self.WaitForBreakpointHit()
+
+        self.WriteGetFrame(threadId, frameId)
+
+        self.WriteRunThread(threadId)
+
+        assert 15 == self._sequence, 'Expected 15. Had: %s' % self._sequence
+
+        self.finishedOk = True
+
+#=======================================================================================================================
+# WriterThreadCase1
+#=======================================================================================================================
+class WriterThreadCase1(AbstractWriterThread):
+
+    TEST_FILE = NormFile('_debugger_case1.py')
+
+    def run(self):
+        self.StartSocket()
+        self.WriteAddBreakpoint(6, 'SetUp')
+        self.WriteMakeInitialRun()
+
+        threadId, frameId = self.WaitForBreakpointHit()
+
+        self.WriteGetFrame(threadId, frameId)
+
+        self.WriteStepOver(threadId)
+
+        self.WriteGetFrame(threadId, frameId)
+
+        self.WriteRunThread(threadId)
+
+        assert 13 == self._sequence, 'Expected 13. Had: %s' % self._sequence
+
+        self.finishedOk = True
+
+#=======================================================================================================================
+# DebuggerBase
+#=======================================================================================================================
+class DebuggerBase(object):
+
+    def getCommandLine(self):
+        raise NotImplementedError
+
+    def CheckCase(self, writerThreadClass):
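+        # Harness: start the writer thread (which plays the IDE side of the debugger protocol
+        # over a socket) and launch pydevd.py with the case's TEST_FILE as a subprocess, then
+        # verify that both sides finished and that the debuggee printed 'TEST SUCEEDED'.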
+        UpdatePort()
+        writerThread = writerThreadClass()
+        writerThread.start()
+
+        localhost = pydev_localhost.get_localhost()
+        args = self.getCommandLine()
+        args += [
+            PYDEVD_FILE,
+            '--DEBUG_RECORD_SOCKET_READS',
+            '--client',
+            localhost,
+            '--port',
+            str(port),
+            '--file',
+            writerThread.TEST_FILE,
+        ]
+
+        if SHOW_OTHER_DEBUG_INFO:
+            print 'executing', ' '.join(args)
+
+#         process = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, cwd=os.path.dirname(PYDEVD_FILE))
+        process = subprocess.Popen(args, stdout=subprocess.PIPE, cwd=os.path.dirname(PYDEVD_FILE))
+        class ProcessReadThread(threading.Thread):
+            def run(self):
+                self.resultStr = None
+                self.resultStr = process.stdout.read()
+                process.stdout.close()
+
+            def DoKill(self):
+                process.stdout.close()
+
+        processReadThread = ProcessReadThread()
+        processReadThread.setDaemon(True)
+        processReadThread.start()
+        if SHOW_OTHER_DEBUG_INFO:
+            print 'Both processes started'
+
+        # poll() can report the process as finished while the writer thread is still running,
+        # so we give it a few more chances to finish successfully.
+        polls_failed = 0
+        while writerThread.isAlive():
+            if process.poll() is not None:
+                polls_failed += 1
+            time.sleep(.2)
+            if polls_failed == 10:
+                break
+
+        if process.poll() is None:
+            for i in range(10):
+                if processReadThread.resultStr is None:
+                    time.sleep(.5)
+                else:
+                    break
+            else:
+                writerThread.DoKill()
+
+        else:
+            if process.poll() < 0:
+                self.fail("The other process exited with error code: " + str(process.poll()) + " result:" + processReadThread.resultStr)
+
+
+        if SHOW_RESULT_STR:
+            print processReadThread.resultStr
+
+        if processReadThread.resultStr is None:
+            self.fail("The other process may still be running -- and didn't give any output")
+
+        if 'TEST SUCEEDED' not in processReadThread.resultStr:
+            self.fail(processReadThread.resultStr)
+
+        if not writerThread.finishedOk:
+            self.fail("The thread that was doing the tests didn't finish successfully. Output: %s" % processReadThread.resultStr)
+
+    def testCase1(self):
+        self.CheckCase(WriterThreadCase1)
+
+    def testCase2(self):
+        self.CheckCase(WriterThreadCase2)
+
+    def testCase3(self):
+        self.CheckCase(WriterThreadCase3)
+
+    def testCase4(self):
+        self.CheckCase(WriterThreadCase4)
+
+    def testCase5(self):
+        self.CheckCase(WriterThreadCase5)
+
+    def testCase6(self):
+        self.CheckCase(WriterThreadCase6)
+
+    def testCase7(self):
+        self.CheckCase(WriterThreadCase7)
+
+    def testCase8(self):
+        self.CheckCase(WriterThreadCase8)
+
+    def testCase9(self):
+        self.CheckCase(WriterThreadCase9)
+
+    def testCase10(self):
+        self.CheckCase(WriterThreadCase10)
+
+    def testCase11(self):
+        self.CheckCase(WriterThreadCase11)
+
+    def testCase12(self):
+        self.CheckCase(WriterThreadCase12)
+
+    def testCase13(self):
+        self.CheckCase(WriterThreadCase13)
+
+    def testCase14(self):
+        self.CheckCase(WriterThreadCase14)
+
+    def testCase15(self):
+        self.CheckCase(WriterThreadCase15)
+
+    def testCase16(self):
+        self.CheckCase(WriterThreadCase16)
+
+    def testCase17(self):
+        self.CheckCase(WriterThreadCase17)
+        
+    def testCase18(self):
+        self.CheckCase(WriterThreadCase18)
+        
+    def testCase19(self):
+        self.CheckCase(WriterThreadCase19)
+
+
+class TestPython(unittest.TestCase, DebuggerBase):
+    def getCommandLine(self):
+        return [PYTHON_EXE]
+
+class TestJython(unittest.TestCase, DebuggerBase):
+    def getCommandLine(self):
+        return [
+                JAVA_LOCATION,
+                '-classpath',
+                JYTHON_JAR_LOCATION,
+                'org.python.util.jython'
+            ]
+
+    # This case requires decorators (not available on Jython 2.1), so it is skipped in the Jython run.
+    def testCase13(self):
+        self.skipTest("Unsupported Decorators")
+
+    def testCase16(self):
+        self.skipTest("Unsupported numpy")
+
+    # This case requires decorators (not available on Jython 2.1), so it is skipped in the Jython run.
+    def testCase17(self):
+        self.skipTest("Unsupported Decorators")
+
+    def testCase18(self):
+        self.skipTest("Unsupported assign to local")
+
+class TestIronPython(unittest.TestCase, DebuggerBase):
+    def getCommandLine(self):
+        return [
+                IRONPYTHON_EXE,
+                '-X:Frames'
+            ]
+
+    def testCase16(self):
+        self.skipTest("Unsupported numpy")
+
+
+def GetLocationFromLine(line):
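+    # Parses the right-hand side of a 'NAME="value";' line from the properties file,
+    # stripping a trailing ';' and the surrounding quotes.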
+    loc = line.split('=')[1].strip()
+    if loc.endswith(';'):
+        loc = loc[:-1]
+    if loc.endswith('"'):
+        loc = loc[:-1]
+    if loc.startswith('"'):
+        loc = loc[1:]
+    return loc
+
+
+def SplitLine(line):
+    if '=' not in line:
+        return None, None
+    var = line.split('=')[0].strip()
+    return var, GetLocationFromLine(line)
+
+
+
+import platform
+sysname = platform.system().lower()
+test_dependent = os.path.join('../../../', 'org.python.pydev.core', 'tests', 'org', 'python', 'pydev', 'core', 'TestDependent.' + sysname + '.properties')
+f = open(test_dependent)
+try:
+    for line in f.readlines():
+        var, loc = SplitLine(line)
+        if 'PYTHON_EXE' == var:
+            PYTHON_EXE = loc
+
+        if 'IRONPYTHON_EXE' == var:
+            IRONPYTHON_EXE = loc
+
+        if 'JYTHON_JAR_LOCATION' == var:
+            JYTHON_JAR_LOCATION = loc
+
+        if 'JAVA_LOCATION' == var:
+            JAVA_LOCATION = loc
+finally:
+    f.close()
+
+assert PYTHON_EXE, 'PYTHON_EXE not found in %s' % (test_dependent,)
+assert IRONPYTHON_EXE, 'IRONPYTHON_EXE not found in %s' % (test_dependent,)
+assert JYTHON_JAR_LOCATION, 'JYTHON_JAR_LOCATION not found in %s' % (test_dependent,)
+assert JAVA_LOCATION, 'JAVA_LOCATION not found in %s' % (test_dependent,)
+assert os.path.exists(PYTHON_EXE), 'The location: %s is not valid' % (PYTHON_EXE,)
+assert os.path.exists(IRONPYTHON_EXE), 'The location: %s is not valid' % (IRONPYTHON_EXE,)
+assert os.path.exists(JYTHON_JAR_LOCATION), 'The location: %s is not valid' % (JYTHON_JAR_LOCATION,)
+assert os.path.exists(JAVA_LOCATION), 'The location: %s is not valid' % (JAVA_LOCATION,)
+
+if False:
+    suite = unittest.TestSuite()
+    #PYTHON_EXE = r'C:\bin\Anaconda\python.exe'
+#     suite.addTest(TestPython('testCase10'))
+#     suite.addTest(TestPython('testCase3'))
+#     suite.addTest(TestPython('testCase16'))
+#     suite.addTest(TestPython('testCase17'))
+#     suite.addTest(TestPython('testCase18'))
+#     suite.addTest(TestPython('testCase19'))
+    suite = unittest.makeSuite(TestPython)
+    unittest.TextTestRunner(verbosity=3).run(suite)
+    
+#    unittest.TextTestRunner(verbosity=3).run(suite)
+#    
+#    suite = unittest.makeSuite(TestJython)
+#    unittest.TextTestRunner(verbosity=3).run(suite)
diff --git a/python/helpers/pydev/tests_python/test_pydev_monkey.py b/python/helpers/pydev/tests_python/test_pydev_monkey.py
new file mode 100644
index 0000000..3eb7930
--- /dev/null
+++ b/python/helpers/pydev/tests_python/test_pydev_monkey.py
@@ -0,0 +1,21 @@
+import unittest
+import pydev_monkey
+import sys
+
+
+
+class TestCase(unittest.TestCase):
+
+    def test_monkey(self):
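+        # patch_arg_str_win is expected to re-quote a Windows "python -c ..." command line
+        # (dropping the interpreter path) so the monkey-patched process creation can hand the
+        # remaining arguments to a debug-enabled interpreter -- inferred from the expected
+        # value below, not from the pydev_monkey sources.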
+        check='''C:\\bin\\python.exe -u -c "
+connect(\\"127.0.0.1\\")
+"'''
+        sys.original_argv = []
+        self.assertEqual('"-u" "-c" "\nconnect(\\"127.0.0.1\\")\n"', pydev_monkey.patch_arg_str_win(check))
+
+    def test_str_to_args_windows(self):
+        
+        self.assertEqual(['a', 'b'], pydev_monkey.str_to_args_windows('a "b"'))
+        
+if __name__ == '__main__':
+    unittest.main()
\ No newline at end of file
diff --git a/python/helpers/pydev/tests_python/test_save_locals.py b/python/helpers/pydev/tests_python/test_save_locals.py
new file mode 100644
index 0000000..fe65d4d
--- /dev/null
+++ b/python/helpers/pydev/tests_python/test_save_locals.py
@@ -0,0 +1,99 @@
+import inspect
+import sys
+import unittest
+
+from pydevd_save_locals import save_locals
+
+
+def use_save_locals(name, value):
+    """
+    Attempt to set the local of the given name to value, using locals_to_fast.
+    """
+    frame = inspect.currentframe().f_back
+    locals_dict = frame.f_locals
+    locals_dict[name] = value
+
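+    # Mutating frame.f_locals alone does not persist for function frames in CPython: the dict
+    # is only a snapshot. save_locals(frame) is expected to push the change back into the
+    # frame's fast locals (the locals_to_fast mechanism mentioned in the docstring) so that
+    # the caller actually sees the new value.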
+    save_locals(frame)
+
+
+def test_method(fn):
+    """
+    A harness for testing methods that attempt to modify the values of locals on the stack.
+    """
+    x = 1
+
+    # The method 'fn' should attempt to set x = 2 in the current frame.
+    fn('x', 2)
+
+    return x
+
+
+
+class TestSetLocals(unittest.TestCase):
+    """
+    Test setting locals in one function from another function using several approaches.
+    """
+
+
+    def test_set_locals_using_save_locals(self):
+        x = test_method(use_save_locals)
+        self.assertEqual(x, 2)  # Expected to succeed
+
+
+    def test_frame_simple_change(self):
+        frame = sys._getframe()
+        a = 20
+        frame.f_locals['a'] = 50
+        save_locals(frame)
+        self.assertEquals(50, a)
+
+
+    def test_frame_co_freevars(self):
+
+        outer_var = 20
+
+        def func():
+            frame = sys._getframe()
+            frame.f_locals['outer_var'] = 50
+            save_locals(frame)
+            self.assertEquals(50, outer_var)
+
+        func()
+
+    def test_frame_co_cellvars(self):
+
+        def check_co_vars(a):
+            frame = sys._getframe()
+            def function2():
+                print a
+
+            assert 'a' in frame.f_code.co_cellvars
+            frame = sys._getframe()
+            frame.f_locals['a'] = 50
+            save_locals(frame)
+            self.assertEquals(50, a)
+
+        check_co_vars(1)
+
+
+    def test_frame_change_in_inner_frame(self):
+        def change(f):
+            self.assert_(f is not sys._getframe())
+            f.f_locals['a']= 50
+            save_locals(f)
+
+
+        frame = sys._getframe()
+        a = 20
+        change(frame)
+        self.assertEquals(50, a)
+
+
+if __name__ == '__main__':
+    suite = unittest.TestSuite()
+#    suite.addTest(TestSetLocals('test_set_locals_using_dict'))
+#    #suite.addTest(Test('testCase10a'))
+#    unittest.TextTestRunner(verbosity=3).run(suite)
+
+    suite = unittest.makeSuite(TestSetLocals)
+    unittest.TextTestRunner(verbosity=3).run(suite)
diff --git a/python/helpers/pydev/tests_runfiles/not_in_default_pythonpath.txt b/python/helpers/pydev/tests_runfiles/not_in_default_pythonpath.txt
new file mode 100644
index 0000000..29cdc5b
--- /dev/null
+++ b/python/helpers/pydev/tests_runfiles/not_in_default_pythonpath.txt
@@ -0,0 +1 @@
+(no __init__.py file)
\ No newline at end of file
diff --git a/python/helpers/pydev/tests_runfiles/samples/.cvsignore b/python/helpers/pydev/tests_runfiles/samples/.cvsignore
new file mode 100644
index 0000000..d1c8995
--- /dev/null
+++ b/python/helpers/pydev/tests_runfiles/samples/.cvsignore
@@ -0,0 +1,2 @@
+*.class
+*.pyc
diff --git a/python/helpers/pydev/tests_runfiles/samples/__init__.py b/python/helpers/pydev/tests_runfiles/samples/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/python/helpers/pydev/tests_runfiles/samples/__init__.py
diff --git a/python/helpers/pydev/tests_runfiles/samples/nested_dir/__init__.py b/python/helpers/pydev/tests_runfiles/samples/nested_dir/__init__.py
new file mode 100644
index 0000000..8b13789
--- /dev/null
+++ b/python/helpers/pydev/tests_runfiles/samples/nested_dir/__init__.py
@@ -0,0 +1 @@
+
diff --git a/python/helpers/pydev/tests_runfiles/samples/nested_dir/nested2/__init__.py b/python/helpers/pydev/tests_runfiles/samples/nested_dir/nested2/__init__.py
new file mode 100644
index 0000000..8b13789
--- /dev/null
+++ b/python/helpers/pydev/tests_runfiles/samples/nested_dir/nested2/__init__.py
@@ -0,0 +1 @@
+
diff --git a/python/helpers/pydev/tests_runfiles/samples/nested_dir/nested2/deep_nest_test.py b/python/helpers/pydev/tests_runfiles/samples/nested_dir/nested2/deep_nest_test.py
new file mode 100644
index 0000000..7b1972b
--- /dev/null
+++ b/python/helpers/pydev/tests_runfiles/samples/nested_dir/nested2/deep_nest_test.py
@@ -0,0 +1,22 @@
+import unittest
+
+class SampleTest(unittest.TestCase):
+    
+    def setUp(self):
+        return
+
+    def tearDown(self):
+        return
+
+    def test_non_unique_name(self):
+        pass
+
+    def test_asdf2(self):
+        pass
+
+    def test_i_am_a_unique_test_name(self):
+        pass
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/python/helpers/pydev/tests_runfiles/samples/nested_dir/nested2/non_test_file.py b/python/helpers/pydev/tests_runfiles/samples/nested_dir/nested2/non_test_file.py
new file mode 100644
index 0000000..470c650
--- /dev/null
+++ b/python/helpers/pydev/tests_runfiles/samples/nested_dir/nested2/non_test_file.py
@@ -0,0 +1,3 @@
+
+""" i am a python file with no tests """
+pass
diff --git a/python/helpers/pydev/tests_runfiles/samples/nested_dir/nested3/__init__.py b/python/helpers/pydev/tests_runfiles/samples/nested_dir/nested3/__init__.py
new file mode 100644
index 0000000..8b13789
--- /dev/null
+++ b/python/helpers/pydev/tests_runfiles/samples/nested_dir/nested3/__init__.py
@@ -0,0 +1 @@
+
diff --git a/python/helpers/pydev/tests_runfiles/samples/nested_dir/nested3/junk.txt b/python/helpers/pydev/tests_runfiles/samples/nested_dir/nested3/junk.txt
new file mode 100644
index 0000000..14dd4dd
--- /dev/null
+++ b/python/helpers/pydev/tests_runfiles/samples/nested_dir/nested3/junk.txt
@@ -0,0 +1 @@
+im a junk file
diff --git a/python/helpers/pydev/tests_runfiles/samples/nested_dir/nested3/non_test_file.py b/python/helpers/pydev/tests_runfiles/samples/nested_dir/nested3/non_test_file.py
new file mode 100644
index 0000000..470c650
--- /dev/null
+++ b/python/helpers/pydev/tests_runfiles/samples/nested_dir/nested3/non_test_file.py
@@ -0,0 +1,3 @@
+
+""" i am a python file with no tests """
+pass
diff --git a/python/helpers/pydev/tests_runfiles/samples/nested_dir/non_test_file.py b/python/helpers/pydev/tests_runfiles/samples/nested_dir/non_test_file.py
new file mode 100644
index 0000000..470c650
--- /dev/null
+++ b/python/helpers/pydev/tests_runfiles/samples/nested_dir/non_test_file.py
@@ -0,0 +1,3 @@
+
+""" i am a python file with no tests """
+pass
diff --git a/python/helpers/pydev/tests_runfiles/samples/nested_dir/simple4_test.py b/python/helpers/pydev/tests_runfiles/samples/nested_dir/simple4_test.py
new file mode 100644
index 0000000..ba5d45f
--- /dev/null
+++ b/python/helpers/pydev/tests_runfiles/samples/nested_dir/simple4_test.py
@@ -0,0 +1,16 @@
+import unittest
+
+class NestedSampleTest(unittest.TestCase):
+    
+    def setUp(self):
+        return
+
+    def tearDown(self):
+        return
+
+    def test_non_unique_name(self):
+        pass
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/python/helpers/pydev/tests_runfiles/samples/non_test_file.py b/python/helpers/pydev/tests_runfiles/samples/non_test_file.py
new file mode 100644
index 0000000..470c650
--- /dev/null
+++ b/python/helpers/pydev/tests_runfiles/samples/non_test_file.py
@@ -0,0 +1,3 @@
+
+""" i am a python file with no tests """
+pass
diff --git a/python/helpers/pydev/tests_runfiles/samples/simple2_test.py b/python/helpers/pydev/tests_runfiles/samples/simple2_test.py
new file mode 100644
index 0000000..d46468e
--- /dev/null
+++ b/python/helpers/pydev/tests_runfiles/samples/simple2_test.py
@@ -0,0 +1,16 @@
+import unittest
+
+class YetAnotherSampleTest(unittest.TestCase):
+    
+    def setUp(self):
+        return
+
+    def tearDown(self):
+        return
+
+    def test_abc(self):
+        pass
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/python/helpers/pydev/tests_runfiles/samples/simple3_test.py b/python/helpers/pydev/tests_runfiles/samples/simple3_test.py
new file mode 100644
index 0000000..da1ccbf
--- /dev/null
+++ b/python/helpers/pydev/tests_runfiles/samples/simple3_test.py
@@ -0,0 +1,16 @@
+import unittest
+
+class StillYetAnotherSampleTest(unittest.TestCase):
+    
+    def setUp(self):
+        return
+
+    def tearDown(self):
+        return
+
+    def test_non_unique_name(self):
+        pass
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/python/helpers/pydev/tests_runfiles/samples/simpleClass_test.py b/python/helpers/pydev/tests_runfiles/samples/simpleClass_test.py
new file mode 100644
index 0000000..3a9c900
--- /dev/null
+++ b/python/helpers/pydev/tests_runfiles/samples/simpleClass_test.py
@@ -0,0 +1,14 @@
+import unittest
+
+class SetUpClassTest(unittest.TestCase):
+
+    @classmethod
+    def setUpClass(cls):
+        raise ValueError("This is an INTENTIONAL value error in setUpClass.")
+
+    def test_blank(self):
+        pass
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/python/helpers/pydev/tests_runfiles/samples/simpleModule_test.py b/python/helpers/pydev/tests_runfiles/samples/simpleModule_test.py
new file mode 100644
index 0000000..fdde67e
--- /dev/null
+++ b/python/helpers/pydev/tests_runfiles/samples/simpleModule_test.py
@@ -0,0 +1,16 @@
+import unittest
+
+def setUpModule():
+    raise ValueError("This is an INTENTIONAL value error in setUpModule.")
+
+class SetUpModuleTest(unittest.TestCase):
+    
+    def setUp(cls):
+        pass
+
+    def test_blank(self):
+        pass
+
+
+if __name__ == '__main__':
+    unittest.main()
diff --git a/python/helpers/pydev/tests_runfiles/samples/simple_test.py b/python/helpers/pydev/tests_runfiles/samples/simple_test.py
new file mode 100644
index 0000000..619df7c
--- /dev/null
+++ b/python/helpers/pydev/tests_runfiles/samples/simple_test.py
@@ -0,0 +1,45 @@
+import unittest
+
+class SampleTest(unittest.TestCase):
+    
+    def setUp(self):
+        pass
+
+    def tearDown(self):
+        pass
+
+    def test_xxxxxx1(self):
+        self.fail('Fail test 2')
+    def test_xxxxxx2(self):
+        pass
+    def test_xxxxxx3(self):
+        pass
+    def test_xxxxxx4(self):
+        pass
+    def test_non_unique_name(self):
+        print('non unique name ran')
+
+
+class AnotherSampleTest(unittest.TestCase):
+    def setUp(self):
+        pass
+
+    def tearDown(self):
+        pass
+
+    def test_1(self):
+        pass
+    def test_2(self):
+        """ im a doc string"""
+        pass
+    def todo_not_tested(self):
+        '''
+        Not there by default!
+        '''
+
+
+if __name__ == '__main__':
+#    suite = unittest.makeSuite(SampleTest, 'test')
+#    runner = unittest.TextTestRunner( verbosity=3 )
+#    runner.run(suite)
+    unittest.main()
diff --git a/python/helpers/pydev/tests_runfiles/test_pydevd_property.py b/python/helpers/pydev/tests_runfiles/test_pydevd_property.py
new file mode 100644
index 0000000..64fa9b6
--- /dev/null
+++ b/python/helpers/pydev/tests_runfiles/test_pydevd_property.py
@@ -0,0 +1,134 @@
+'''
+Created on Aug 22, 2011
+
+@author: hussain.bohra
+@author: fabioz
+'''
+
+import os
+import sys
+import unittest
+
+#=======================================================================================================================
+# Test
+#=======================================================================================================================
+class Test(unittest.TestCase):
+    """Test cases to validate custom property implementation in pydevd 
+    """
+    
+    def setUp(self, unused=None):
+        self.tempdir = os.path.join(os.path.dirname(os.path.dirname(__file__)))
+        sys.path.insert(0, self.tempdir)
+        import pydevd_traceproperty
+        self.old = pydevd_traceproperty.replace_builtin_property()
+    
+    
+    def tearDown(self, unused=None):
+        import pydevd_traceproperty
+        pydevd_traceproperty.replace_builtin_property(self.old)
+        sys.path.remove(self.tempdir)
+
+
+    def testProperty(self):
+        """Test case to validate custom property
+        """
+        
+        import pydevd_traceproperty
+        class TestProperty(object):
+            
+            def __init__(self):
+                self._get = 0
+                self._set = 0
+                self._del = 0
+                
+            def get_name(self):
+                self._get += 1
+                return self.__name
+            
+            def set_name(self, value):
+                self._set += 1
+                self.__name = value
+                
+            def del_name(self):
+                self._del += 1
+                del self.__name
+            name = property(get_name, set_name, del_name, "name's docstring")
+            self.assertEqual(name.__class__, pydevd_traceproperty.DebugProperty)
+            
+        testObj = TestProperty()
+        self._check(testObj)
+        
+        
+    def testProperty2(self):
+        """Test case to validate custom property
+        """
+        
+        class TestProperty(object):
+            
+            def __init__(self):
+                self._get = 0
+                self._set = 0
+                self._del = 0
+            
+            def name(self):
+                self._get += 1
+                return self.__name
+            name = property(name)
+            
+            def set_name(self, value):
+                self._set += 1
+                self.__name = value
+            name.setter(set_name)
+                
+            def del_name(self):
+                self._del += 1
+                del self.__name
+            name.deleter(del_name)
+
+        testObj = TestProperty()
+        self._check(testObj)
+        
+        
+    def testProperty3(self):
+        """Test case to validate custom property
+        """
+        
+        class TestProperty(object):
+            
+            def __init__(self):
+                self._name = 'foo'
+            
+            def name(self):
+                return self._name
+            name = property(name)
+
+        testObj = TestProperty()
+        self.assertRaises(AttributeError, setattr, testObj, 'name', 'bar')
+        self.assertRaises(AttributeError, delattr, testObj, 'name')
+        
+        
+    def _check(self, testObj):
+        testObj.name = "Custom"
+        self.assertEqual(1, testObj._set)
+        
+        self.assertEqual(testObj.name, "Custom")
+        self.assertEqual(1, testObj._get)
+        
+        self.assert_(hasattr(testObj, 'name'))
+        del testObj.name
+        self.assertEqual(1, testObj._del)
+        
+        self.assert_(not hasattr(testObj, 'name'))
+        testObj.name = "Custom2"
+        self.assertEqual(testObj.name, "Custom2")
+
+
+        
+#=======================================================================================================================
+# main
+#=======================================================================================================================
+if __name__ == '__main__':
+    #this is so that we can run it from the jython tests -- because we don't actually have an __main__ module
+    #(so, it won't try importing the __main__ module)
+    unittest.TextTestRunner().run(unittest.makeSuite(Test))
+    
diff --git a/python/helpers/pydev/tests_runfiles/test_pydevdio.py b/python/helpers/pydev/tests_runfiles/test_pydevdio.py
new file mode 100644
index 0000000..7a48a63
--- /dev/null
+++ b/python/helpers/pydev/tests_runfiles/test_pydevdio.py
@@ -0,0 +1,40 @@
+import sys
+import os
+
+
+import unittest
+
+class Test(unittest.TestCase):
+    
+    def testIt(self):
+        # Make it as if we were executing from the directory above this one (so that we can use
+        # pydevd_io without needing it on the pythonpath).
+        # (dirname is applied twice to get the parent directory of this file.)
+        import test_pydevdio #@UnresolvedImport - importing itself
+        ADD_TO_PYTHONPATH = os.path.join(os.path.dirname(os.path.dirname(test_pydevdio.__file__)))
+        sys.path.insert(0, ADD_TO_PYTHONPATH)
+        
+        try:
+            import pydevd_io
+            original = sys.stdout
+            
+            try:
+                sys.stdout = pydevd_io.IOBuf()
+                print('foo')
+                print('bar')
+                
+                self.assertEquals('foo\nbar\n', sys.stdout.getvalue()) #@UndefinedVariable
+                
+                print('ww')
+                print('xx')
+                self.assertEquals('ww\nxx\n', sys.stdout.getvalue()) #@UndefinedVariable
+            finally:
+                sys.stdout = original
+        finally:
+            #remove it to leave it ok for other tests
+            sys.path.remove(ADD_TO_PYTHONPATH)
+        
+if __name__ == '__main__':
+    #this is so that we can run it from the jython tests -- because we don't actually have an __main__ module
+    #(so, it won't try importing the __main__ module)
+    unittest.TextTestRunner().run(unittest.makeSuite(Test))
diff --git a/python/helpers/pydev/tests_runfiles/test_runfiles.py b/python/helpers/pydev/tests_runfiles/test_runfiles.py
new file mode 100644
index 0000000..0c04764
--- /dev/null
+++ b/python/helpers/pydev/tests_runfiles/test_runfiles.py
@@ -0,0 +1,393 @@
+import os.path
+import sys
+
+IS_JYTHON = sys.platform.find('java') != -1
+
+try:
+    this_file_name = __file__
+except NameError:
+    # stupid jython. plain old __file__ isn't working for some reason
+    import test_runfiles  #@UnresolvedImport - importing the module itself
+    this_file_name = test_runfiles.__file__
+
+
+desired_runfiles_path = os.path.normpath(os.path.dirname(this_file_name) + "/..")
+sys.path.insert(0, desired_runfiles_path)
+
+import pydev_runfiles_unittest
+import pydev_runfiles_xml_rpc
+import pydevd_io
+
+#remove existing pydev_runfiles from modules (if any), so that we can be sure we have the correct version
+if 'pydev_runfiles' in sys.modules:
+    del sys.modules['pydev_runfiles']
+
+
+import pydev_runfiles
+import unittest
+import tempfile
+
+try:
+    set
+except:
+    from sets import Set as set
+
+#this is an early test because it requires the sys.path changed
+orig_syspath = sys.path
+a_file = pydev_runfiles.__file__
+pydev_runfiles.PydevTestRunner(pydev_runfiles.Configuration(files_or_dirs=[a_file]))
+file_dir = os.path.dirname(a_file)
+assert file_dir in sys.path
+sys.path = orig_syspath[:]
+
+#remove it so that we leave it ok for other tests
+sys.path.remove(desired_runfiles_path)
+
+class RunfilesTest(unittest.TestCase):
+
+    def _setup_scenario(
+        self,
+        path,
+        include_tests=None,
+        tests=None,
+        files_to_tests=None,
+        exclude_files=None,
+        exclude_tests=None,
+        include_files=None,
+        ):
+        self.MyTestRunner = pydev_runfiles.PydevTestRunner(
+            pydev_runfiles.Configuration(
+                files_or_dirs=path,
+                include_tests=include_tests,
+                verbosity=1,
+                tests=tests,
+                files_to_tests=files_to_tests,
+                exclude_files=exclude_files,
+                exclude_tests=exclude_tests,
+                include_files=include_files,
+            )
+        )
+        self.files = self.MyTestRunner.find_import_files()
+        self.modules = self.MyTestRunner.find_modules_from_files(self.files)
+        self.all_tests = self.MyTestRunner.find_tests_from_modules(self.modules)
+        self.filtered_tests = self.MyTestRunner.filter_tests(self.all_tests)
+
+    def setUp(self):
+        self.file_dir = [os.path.abspath(os.path.join(desired_runfiles_path, 'tests_runfiles/samples'))]
+        self._setup_scenario(self.file_dir, None)
+
+
+    def test_suite_used(self):
+        for suite in self.all_tests + self.filtered_tests:
+            self.assert_(isinstance(suite, pydev_runfiles_unittest.PydevTestSuite))
+
+    def test_parse_cmdline(self):
+        sys.argv = "pydev_runfiles.py ./".split()
+        configuration = pydev_runfiles.parse_cmdline()
+        self.assertEquals([sys.argv[1]], configuration.files_or_dirs)
+        self.assertEquals(2, configuration.verbosity)  # default value
+        self.assertEquals(None, configuration.include_tests)  # default value
+
+        sys.argv = "pydev_runfiles.py ../images c:/temp".split()
+        configuration = pydev_runfiles.parse_cmdline()
+        self.assertEquals(sys.argv[1:3], configuration.files_or_dirs)
+        self.assertEquals(2, configuration.verbosity)
+
+        sys.argv = "pydev_runfiles.py --verbosity 3 ../junk c:/asdf ".split()
+        configuration = pydev_runfiles.parse_cmdline()
+        self.assertEquals(sys.argv[3:], configuration.files_or_dirs)
+        self.assertEquals(int(sys.argv[2]), configuration.verbosity)
+
+        sys.argv = "pydev_runfiles.py --include_tests test_def ./".split()
+        configuration = pydev_runfiles.parse_cmdline()
+        self.assertEquals([sys.argv[-1]], configuration.files_or_dirs)
+        self.assertEquals([sys.argv[2]], configuration.include_tests)
+
+        sys.argv = "pydev_runfiles.py --include_tests Abc.test_def,Mod.test_abc c:/junk/".split()
+        configuration = pydev_runfiles.parse_cmdline()
+        self.assertEquals([sys.argv[-1]], configuration.files_or_dirs)
+        self.assertEquals(sys.argv[2].split(','), configuration.include_tests)
+
+        sys.argv = ('C:\\eclipse-SDK-3.2-win32\\eclipse\\plugins\\org.python.pydev.debug_1.2.2\\pysrc\\pydev_runfiles.py ' + 
+                    '--verbosity 1 ' + 
+                    'C:\\workspace_eclipse\\fronttpa\\tests\\gui_tests\\calendar_popup_control_test.py ').split()
+        configuration = pydev_runfiles.parse_cmdline()
+        self.assertEquals([sys.argv[-1]], configuration.files_or_dirs)
+        self.assertEquals(1, configuration.verbosity)
+
+        sys.argv = "pydev_runfiles.py --verbosity 1 --include_tests Mod.test_abc c:/junk/ ./".split()
+        configuration = pydev_runfiles.parse_cmdline()
+        self.assertEquals(sys.argv[5:], configuration.files_or_dirs)
+        self.assertEquals(int(sys.argv[2]), configuration.verbosity)
+        self.assertEquals([sys.argv[4]], configuration.include_tests)
+
+        sys.argv = "pydev_runfiles.py --exclude_files=*.txt,a*.py".split()
+        configuration = pydev_runfiles.parse_cmdline()
+        self.assertEquals(['*.txt', 'a*.py'], configuration.exclude_files)
+
+        sys.argv = "pydev_runfiles.py --exclude_tests=*__todo,test*bar".split()
+        configuration = pydev_runfiles.parse_cmdline()
+        self.assertEquals(['*__todo', 'test*bar'], configuration.exclude_tests)
+
+
+    def test___adjust_python_path_works_for_directories(self):
+        orig_syspath = sys.path
+        tempdir = tempfile.gettempdir()
+        pydev_runfiles.PydevTestRunner(pydev_runfiles.Configuration(files_or_dirs=[tempdir]))
+        self.assertEquals(1, tempdir in sys.path)
+        sys.path = orig_syspath[:]
+
+
+    def test___is_valid_py_file(self):
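+        # Double-underscore methods are reached from outside the class via Python's name
+        # mangling: __is_valid_py_file on PydevTestRunner becomes _PydevTestRunner__is_valid_py_file.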
+        isvalid = self.MyTestRunner._PydevTestRunner__is_valid_py_file
+        self.assertEquals(1, isvalid("test.py"))
+        self.assertEquals(0, isvalid("asdf.pyc"))
+        self.assertEquals(0, isvalid("__init__.py"))
+        self.assertEquals(0, isvalid("__init__.pyc"))
+        self.assertEquals(1, isvalid("asdf asdf.pyw"))
+
+    def test___unixify(self):
+        unixify = self.MyTestRunner._PydevTestRunner__unixify
+        self.assertEquals("c:/temp/junk/asdf.py", unixify("c:SEPtempSEPjunkSEPasdf.py".replace('SEP', os.sep)))
+
+    def test___importify(self):
+        importify = self.MyTestRunner._PydevTestRunner__importify
+        self.assertEquals("temp.junk.asdf", importify("temp/junk/asdf.py"))
+        self.assertEquals("asdf", importify("asdf.py"))
+        self.assertEquals("abc.def.hgi", importify("abc/def/hgi"))
+
+    def test_finding_a_file_from_file_system(self):
+        test_file = "simple_test.py"
+        self.MyTestRunner.files_or_dirs = [self.file_dir[0] + test_file]
+        files = self.MyTestRunner.find_import_files()
+        self.assertEquals(1, len(files))
+        self.assertEquals(files[0], self.file_dir[0] + test_file)
+
+    def test_finding_files_in_dir_from_file_system(self):
+        self.assertEquals(1, len(self.files) > 0)
+        for import_file in self.files:
+            self.assertEquals(-1, import_file.find(".pyc"))
+            self.assertEquals(-1, import_file.find("__init__.py"))
+            self.assertEquals(-1, import_file.find("\\"))
+            self.assertEquals(-1, import_file.find(".txt"))
+
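+    # __get_module_from_str should resolve a dotted name to the leaf module
+    # (os.path itself), unlike a plain __import__("os.path"), which returns the
+    # top-level os package.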
+    def test___get_module_from_str(self):
+        my_importer = self.MyTestRunner._PydevTestRunner__get_module_from_str
+        my_os_path = my_importer("os.path", True, 'unused')
+        from os import path
+        import os.path as path2
+        self.assertEquals(path, my_os_path)
+        self.assertEquals(path2, my_os_path)
+        self.assertNotEquals(__import__("os.path"), my_os_path)
+        self.assertNotEquals(__import__("os"), my_os_path)
+
+    def test_finding_modules_from_import_strings(self):
+        self.assertEquals(1, len(self.modules) > 0)
+
+    def test_finding_tests_when_no_filter(self):
+        # unittest.py may create a TestCase with 0 tests for a module, since it
+        # simply loads whatever it is given -- so not every found file must contain tests.
+        self.assertEquals(1, len(self.all_tests) > 0)
+        files_with_tests = [1 for t in self.all_tests if len(t._tests) > 0]
+        self.assertNotEquals(len(self.files), len(files_with_tests))
+
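+    # Helper: sum countTestCases() over the collected suites.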
+    def count_tests(self, tests):
+        total = 0
+        for t in tests:
+            total += t.countTestCases()
+        return total
+
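+    # __match checks a name against a list of regular expressions: a None filter
+    # matches any name, otherwise at least one pattern must match.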
+    def test___match(self):
+        matcher = self.MyTestRunner._PydevTestRunner__match
+        self.assertEquals(1, matcher(None, "aname"))
+        self.assertEquals(1, matcher([".*"], "aname"))
+        self.assertEquals(0, matcher(["^x$"], "aname"))
+        self.assertEquals(0, matcher(["abc"], "aname"))
+        self.assertEquals(1, matcher(["abc", "123"], "123"))
+
+    def test_finding_tests_from_modules_with_bad_filter_returns_0_tests(self):
+        self._setup_scenario(self.file_dir, ["NO_TESTS_ARE_SURE_TO_HAVE_THIS_NAME"])
+        self.assertEquals(0, self.count_tests(self.all_tests))
+
+    def test_finding_test_with_unique_name_returns_1_test(self):
+        self._setup_scenario(self.file_dir, include_tests=["test_i_am_a_unique_test_name"])
+        filtered_tests = self.MyTestRunner.filter_tests(self.all_tests)
+        self.assertEquals(1, self.count_tests(filtered_tests))
+
+    def test_finding_test_with_non_unique_name(self):
+        self._setup_scenario(self.file_dir, include_tests=["test_non_unique_name"])
+        filtered_tests = self.MyTestRunner.filter_tests(self.all_tests)
+        self.assertEquals(1, self.count_tests(filtered_tests) > 2)
+
+    def test_finding_tests_with_regex_filters(self):
+        self._setup_scenario(self.file_dir, include_tests=["test_non*"])
+        filtered_tests = self.MyTestRunner.filter_tests(self.all_tests)
+        self.assertEquals(1, self.count_tests(filtered_tests) > 2)
+
+        self._setup_scenario(self.file_dir, ["^$"])
+        filtered_tests = self.MyTestRunner.filter_tests(self.all_tests)
+        self.assertEquals(0, self.count_tests(filtered_tests))
+
+        self._setup_scenario(self.file_dir, None, exclude_tests=["*"])
+        filtered_tests = self.MyTestRunner.filter_tests(self.all_tests)
+        self.assertEquals(0, self.count_tests(filtered_tests))
+
+    def test_matching_tests(self):
+        self._setup_scenario(self.file_dir, None, ['StillYetAnotherSampleTest'])
+        filtered_tests = self.MyTestRunner.filter_tests(self.all_tests)
+        self.assertEqual(1, self.count_tests(filtered_tests))
+
+        self._setup_scenario(self.file_dir, None, ['SampleTest.test_xxxxxx1'])
+        filtered_tests = self.MyTestRunner.filter_tests(self.all_tests)
+        self.assertEqual(1, self.count_tests(filtered_tests))
+
+        self._setup_scenario(self.file_dir, None, ['SampleTest'])
+        filtered_tests = self.MyTestRunner.filter_tests(self.all_tests)
+        self.assertEqual(8, self.count_tests(filtered_tests))
+
+        self._setup_scenario(self.file_dir, None, ['AnotherSampleTest.todo_not_tested'])
+        filtered_tests = self.MyTestRunner.filter_tests(self.all_tests)
+        self.assertEqual(1, self.count_tests(filtered_tests))
+
+        self._setup_scenario(self.file_dir, None, ['StillYetAnotherSampleTest', 'SampleTest.test_xxxxxx1'])
+        filtered_tests = self.MyTestRunner.filter_tests(self.all_tests)
+        self.assertEqual(2, self.count_tests(filtered_tests))
+
+        self._setup_scenario(self.file_dir, None, exclude_tests=['*'])
+        filtered_tests = self.MyTestRunner.filter_tests(self.all_tests)
+        self.assertEqual(self.count_tests(filtered_tests), 0)
+
+
+        self._setup_scenario(self.file_dir, None, exclude_tests=['*a*'])
+        filtered_tests = self.MyTestRunner.filter_tests(self.all_tests)
+        self.assertEqual(self.count_tests(filtered_tests), 6)
+
+        self.assertEqual(
+            set(self.MyTestRunner.list_test_names(filtered_tests)),
+            set(['test_1', 'test_2', 'test_xxxxxx1', 'test_xxxxxx2', 'test_xxxxxx3', 'test_xxxxxx4'])
+        )
+
+        self._setup_scenario(self.file_dir, None, exclude_tests=['*a*', '*x*'])
+        filtered_tests = self.MyTestRunner.filter_tests(self.all_tests)
+        self.assertEqual(self.count_tests(filtered_tests), 2)
+
+        self.assertEqual(
+            set(self.MyTestRunner.list_test_names(filtered_tests)),
+            set(['test_1', 'test_2'])
+        )
+
+        self._setup_scenario(self.file_dir, None, exclude_files=['simple_test.py'])
+        filtered_tests = self.MyTestRunner.filter_tests(self.all_tests)
+        names = self.MyTestRunner.list_test_names(filtered_tests)
+        self.assert_('test_xxxxxx1' not in names, 'Found: %s' % (names,))
+
+        self.assertEqual(
+            set(['test_abc', 'test_non_unique_name', 'test_non_unique_name', 'test_asdf2', 'test_i_am_a_unique_test_name', 'test_non_unique_name', 'test_blank']),
+            set(names)
+        )
+
+        self._setup_scenario(self.file_dir, None, include_files=['simple3_test.py'])
+        filtered_tests = self.MyTestRunner.filter_tests(self.all_tests)
+        names = self.MyTestRunner.list_test_names(filtered_tests)
+        self.assert_('test_xxxxxx1' not in names, 'Found: %s' % (names,))
+
+        self.assertEqual(
+            set(['test_non_unique_name']),
+            set(names)
+        )
+
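+    # Instead of talking to a real xml-rpc server, install a fake Server object
+    # that records every notification issued while the tests run, so the sequence
+    # of calls can be asserted afterwards.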
+    def test_xml_rpc_communication(self):
+        notifications = []
+        class Server:
+
+            def __init__(self, notifications):
+                self.notifications = notifications
+
+            def notifyConnected(self):
+                #This method is only called at the very start (in runfiles.py), so it should not be triggered while running the tests here
+                raise AssertionError('Should not be called from the run tests.')
+
+
+            def notifyTestsCollected(self, number_of_tests):
+                self.notifications.append(('notifyTestsCollected', number_of_tests))
+
+
+            def notifyStartTest(self, file, test):
+                pass
+
+            def notifyTest(self, cond, captured_output, error_contents, file, test, time):
+                try:
+                    #I.e.: unwrap when marked as Binary in xml-rpc
+                    captured_output = captured_output.data
+                except AttributeError:
+                    pass
+                try:
+                    #I.e.: unwrap when marked as Binary in xml-rpc
+                    error_contents = error_contents.data
+                except AttributeError:
+                    pass
+                if error_contents:
+                    error_contents = error_contents.splitlines()[-1].strip()
+                self.notifications.append(('notifyTest', cond, captured_output.strip(), error_contents, file, test))
+
+            def notifyTestRunFinished(self, total_time):
+                self.notifications.append(('notifyTestRunFinished',))
+
+        server = Server(notifications)
+        pydev_runfiles_xml_rpc.SetServer(server)
+        simple_test = os.path.join(self.file_dir[0], 'simple_test.py')
+        simple_test2 = os.path.join(self.file_dir[0], 'simple2_test.py')
+        simpleClass_test = os.path.join(self.file_dir[0], 'simpleClass_test.py')
+        simpleModule_test = os.path.join(self.file_dir[0], 'simpleModule_test.py')
+
+        files_to_tests = {}
+        files_to_tests.setdefault(simple_test, []).append('SampleTest.test_xxxxxx1')
+        files_to_tests.setdefault(simple_test, []).append('SampleTest.test_xxxxxx2')
+        files_to_tests.setdefault(simple_test, []).append('SampleTest.test_non_unique_name')
+        files_to_tests.setdefault(simple_test2, []).append('YetAnotherSampleTest.test_abc')
+        files_to_tests.setdefault(simpleClass_test, []).append('SetUpClassTest.test_blank')
+        files_to_tests.setdefault(simpleModule_test, []).append('SetUpModuleTest.test_blank')
+
+        self._setup_scenario(None, files_to_tests=files_to_tests)
+        self.MyTestRunner.verbosity = 2
+
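+        # Capture stdout so the "Ran N tests in ..." summary can be checked below.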
+        buf = pydevd_io.StartRedirect(keep_original_redirection=False)
+        try:
+            self.MyTestRunner.run_tests()
+            self.assertEqual(8, len(notifications))
+            expected = [
+                    ('notifyTestsCollected', 6),
+                    ('notifyTest', 'ok', 'non unique name ran', '', simple_test, 'SampleTest.test_non_unique_name'),
+                    ('notifyTest', 'fail', '', 'AssertionError: Fail test 2', simple_test, 'SampleTest.test_xxxxxx1'),
+                    ('notifyTest', 'ok', '', '', simple_test, 'SampleTest.test_xxxxxx2'),
+                    ('notifyTest', 'ok', '', '', simple_test2, 'YetAnotherSampleTest.test_abc'),
+                ]
+            if not IS_JYTHON:
+                expected.append(('notifyTest', 'error', '', 'ValueError: This is an INTENTIONAL value error in setUpClass.',
+                        simpleClass_test.replace('/', os.path.sep), 'samples.simpleClass_test.SetUpClassTest <setUpClass>'))
+                expected.append(('notifyTest', 'error', '', 'ValueError: This is an INTENTIONAL value error in setUpModule.',
+                            simpleModule_test.replace('/', os.path.sep), 'samples.simpleModule_test <setUpModule>'))
+            else:
+                expected.append(('notifyTest', 'ok', '', '', simpleClass_test, 'SetUpClassTest.test_blank'))
+                expected.append(('notifyTest', 'ok', '', '', simpleModule_test, 'SetUpModuleTest.test_blank'))
+
+            expected.append(('notifyTestRunFinished',))
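+            # The order in which the tests are reported is not deterministic, so
+            # sort both lists before comparing them.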
+            expected.sort()
+            notifications.sort()
+            self.assertEqual(
+                expected,
+                notifications
+            )
+        finally:
+            pydevd_io.EndRedirect()
+        b = buf.getvalue()
+        if not IS_JYTHON:
+            self.assert_(b.find('Ran 4 tests in ') != -1, 'Found: ' + b)
+        else:
+            self.assert_(b.find('Ran 6 tests in ') != -1, 'Found: ' + b)
+
+
+if __name__ == "__main__":
+    #this is so that we can run it from the jython tests -- because we don't actually have a __main__ module
+    #(so, it won't try importing the __main__ module)
+    unittest.TextTestRunner().run(unittest.makeSuite(RunfilesTest))