git-svn-id: http://webrtc.googlecode.com/svn/trunk@7 4adac7df-926f-26a2-2b94-8c16560cd09d
diff --git a/third_party_mods/ace/LICENSE b/third_party_mods/ace/LICENSE
new file mode 100644
index 0000000..9204394
--- /dev/null
+++ b/third_party_mods/ace/LICENSE
@@ -0,0 +1,66 @@
+Copyright and Licensing Information for ACE(TM), TAO(TM), CIAO(TM), DAnCE(TM),
+and CoSMIC(TM)
+
+ACE(TM), TAO(TM), CIAO(TM), DAnCE(TM), and CoSMIC(TM) (henceforth referred to
+as "DOC software") are copyrighted by Douglas C. Schmidt and his research
+group at Washington University, University of California, Irvine, and
+Vanderbilt University, Copyright (c) 1993-2009, all rights reserved. Since DOC
+software is open-source, freely available software, you are free to use,
+modify, copy, and distribute--perpetually and irrevocably--the DOC software
+source code and object code produced from the source, as well as copy and
+distribute modified versions of this software. You must, however, include this
+copyright statement along with any code built using DOC software that you
+release. No copyright statement needs to be provided if you just ship binary
+executables of your software products.
+You can use DOC software in commercial and/or binary software releases and are
+under no obligation to redistribute any of your source code that is built
+using DOC software. Note, however, that you may not misappropriate the DOC
+software code, such as copyrighting it yourself or claiming authorship of the
+DOC software code, in a way that will prevent DOC software from being
+distributed freely using an open-source development model. You needn't inform
+anyone that you're using DOC software in your software, though we encourage
+you to let us know so we can promote your project in the DOC software success
+stories.
+
+The ACE, TAO, CIAO, DAnCE, and CoSMIC web sites are maintained by the DOC
+Group at the Institute for Software Integrated Systems (ISIS) and the Center
+for Distributed Object Computing of Washington University, St. Louis for the
+development of open-source software as part of the open-source software
+community. Submissions are provided by the submitter ``as is'' with no
+warranties whatsoever, including any warranty of merchantability,
+noninfringement of third party intellectual property, or fitness for any
+particular purpose. In no event shall the submitter be liable for any direct,
+indirect, special, exemplary, punitive, or consequential damages, including
+without limitation, lost profits, even if advised of the possibility of such
+damages. Likewise, DOC software is provided as is with no warranties of any
+kind, including the warranties of design, merchantability, and fitness for a
+particular purpose, noninfringement, or arising from a course of dealing,
+usage or trade practice. Washington University, UC Irvine, Vanderbilt
+University, their employees, and students shall have no liability with respect
+to the infringement of copyrights, trade secrets or any patents by DOC
+software or any part thereof. Moreover, in no event will Washington
+University, UC Irvine, or Vanderbilt University, their employees, or students
+be liable for any lost revenue or profits or other special, indirect and
+consequential damages.
+
+DOC software is provided with no support and without any obligation on the
+part of Washington University, UC Irvine, Vanderbilt University, their
+employees, or students to assist in its use, correction, modification, or
+enhancement. A number of companies around the world provide commercial support
+for DOC software, however. DOC software is Y2K-compliant, as long as the
+underlying OS platform is Y2K-compliant. Likewise, DOC software is compliant
+with the new US daylight savings rule passed by Congress as "The Energy Policy
+Act of 2005," which established new daylight savings times (DST) rules for the
+United States that expand DST as of March 2007. Since DOC software obtains
+time/date and calendaring information from operating systems users will not be
+affected by the new DST rules as long as they upgrade their operating systems
+accordingly.
+
+The names ACE(TM), TAO(TM), CIAO(TM), DAnCE(TM), CoSMIC(TM), Washington
+University, UC Irvine, and Vanderbilt University, may not be used to endorse
+or promote products or services derived from this source without express
+written permission from Washington University, UC Irvine, or Vanderbilt
+University. This license grants no permission to call products or services
+derived from this source ACE(TM), TAO(TM), CIAO(TM), DAnCE(TM), or CoSMIC(TM),
+nor does it grant permission for the name Washington University, UC Irvine, or
+Vanderbilt University to appear in their names.
\ No newline at end of file
diff --git a/third_party_mods/chromium/LICENSE b/third_party_mods/chromium/LICENSE
new file mode 100644
index 0000000..8dc3504
--- /dev/null
+++ b/third_party_mods/chromium/LICENSE
@@ -0,0 +1,27 @@
+// Copyright (c) 2010 The Chromium Authors. All rights reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/third_party_mods/jsoncpp/jsoncpp.gyp b/third_party_mods/jsoncpp/jsoncpp.gyp
new file mode 100644
index 0000000..dce0c3d
--- /dev/null
+++ b/third_party_mods/jsoncpp/jsoncpp.gyp
@@ -0,0 +1,42 @@
+# Copyright (c) 2011 The Chromium Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+{
+ 'targets': [
+ {
+ 'target_name': 'jsoncpp',
+ 'type': '<(library)',
+ 'sources': [
+ 'include/json/autolink.h',
+ 'include/json/config.h',
+ 'include/json/forwards.h',
+ 'include/json/json.h',
+ 'include/json/reader.h',
+ 'include/json/value.h',
+ 'include/json/writer.h',
+ 'src/lib_json/json_batchallocator.h',
+ 'src/lib_json/json_internalarray.inl.h',
+ 'src/lib_json/json_internalmap.inl.h',
+ 'src/lib_json/json_reader.cpp',
+ 'src/lib_json/json_value.cpp',
+ 'src/lib_json/json_valueiterator.inl.h',
+ 'src/lib_json/json_writer.cpp',
+ ],
+ 'include_dirs': [
+ 'include/',
+ ],
+ 'direct_dependent_settings': {
+ 'include_dirs': [
+ 'include/',
+ ],
+ },
+ },
+ ],
+}
+
+# Local Variables:
+# tab-width:2
+# indent-tabs-mode:nil
+# End:
+# vim: set expandtab tabstop=2 shiftwidth=2:
diff --git a/third_party_mods/jsoncpp/src/lib_json/json_reader.cpp b/third_party_mods/jsoncpp/src/lib_json/json_reader.cpp
new file mode 100644
index 0000000..572a9c4
--- /dev/null
+++ b/third_party_mods/jsoncpp/src/lib_json/json_reader.cpp
@@ -0,0 +1,885 @@
+#include <json/reader.h>
+#include <json/value.h>
+#include <utility>
+#include <cstdio>
+#include <cassert>
+#include <cstring>
+#include <iostream>
+#include <stdexcept>
+
+#if _MSC_VER >= 1400 // VC++ 8.0
+#pragma warning( disable : 4996 ) // disable warning about strdup being deprecated.
+#endif
+
+namespace Json {
+
+// Implementation of class Features
+// ////////////////////////////////
+
+Features::Features()
+ : allowComments_( true )
+ , strictRoot_( false )
+{
+}
+
+
+Features
+Features::all()
+{
+ return Features();
+}
+
+
+Features
+Features::strictMode()
+{
+ Features features;
+ features.allowComments_ = false;
+ features.strictRoot_ = true;
+ return features;
+}
+
+// Implementation of class Reader
+// ////////////////////////////////
+
+
+static inline bool
+in( Reader::Char c, Reader::Char c1, Reader::Char c2, Reader::Char c3, Reader::Char c4 )
+{
+ return c == c1 || c == c2 || c == c3 || c == c4;
+}
+
+static inline bool
+in( Reader::Char c, Reader::Char c1, Reader::Char c2, Reader::Char c3, Reader::Char c4, Reader::Char c5 )
+{
+ return c == c1 || c == c2 || c == c3 || c == c4 || c == c5;
+}
+
+
+static bool
+containsNewLine( Reader::Location begin,
+ Reader::Location end )
+{
+ for ( ;begin < end; ++begin )
+ if ( *begin == '\n' || *begin == '\r' )
+ return true;
+ return false;
+}
+
+static std::string codePointToUTF8(unsigned int cp)
+{
+ std::string result;
+
+ // based on description from http://en.wikipedia.org/wiki/UTF-8
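+   // Worked examples (illustrative): U+0024 -> 0x24 (1 byte),
+   // U+00E9 -> 0xC3 0xA9 (2 bytes), U+20AC -> 0xE2 0x82 0xAC (3 bytes),
+   // U+1F600 -> 0xF0 0x9F 0x98 0x80 (4 bytes).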
+
+ if (cp <= 0x7f)
+ {
+ result.resize(1);
+ result[0] = static_cast<char>(cp);
+ }
+ else if (cp <= 0x7FF)
+ {
+ result.resize(2);
+ result[1] = static_cast<char>(0x80 | (0x3f & cp));
+ result[0] = static_cast<char>(0xC0 | (0x1f & (cp >> 6)));
+ }
+ else if (cp <= 0xFFFF)
+ {
+ result.resize(3);
+ result[2] = static_cast<char>(0x80 | (0x3f & cp));
+ result[1] = 0x80 | static_cast<char>((0x3f & (cp >> 6)));
+ result[0] = 0xE0 | static_cast<char>((0xf & (cp >> 12)));
+ }
+ else if (cp <= 0x10FFFF)
+ {
+ result.resize(4);
+ result[3] = static_cast<char>(0x80 | (0x3f & cp));
+ result[2] = static_cast<char>(0x80 | (0x3f & (cp >> 6)));
+ result[1] = static_cast<char>(0x80 | (0x3f & (cp >> 12)));
+ result[0] = static_cast<char>(0xF0 | (0x7 & (cp >> 18)));
+ }
+
+ return result;
+}
+
+
+// Class Reader
+// //////////////////////////////////////////////////////////////////
+
+Reader::Reader()
+ : features_( Features::all() )
+{
+}
+
+
+Reader::Reader( const Features &features )
+ : features_( features )
+{
+}
+
+
+bool
+Reader::parse( const std::string &document,
+ Value &root,
+ bool collectComments )
+{
+ document_ = document;
+ const char *begin = document_.c_str();
+ const char *end = begin + document_.length();
+ return parse( begin, end, root, collectComments );
+}
+
+
+bool
+Reader::parse( std::istream& sin,
+ Value &root,
+ bool collectComments )
+{
+ //std::istream_iterator<char> begin(sin);
+ //std::istream_iterator<char> end;
+ // Those would allow streamed input from a file, if parse() were a
+ // template function.
+
+ // Since std::string is reference-counted, this at least does not
+ // create an extra copy.
+ std::string doc;
+ std::getline(sin, doc, (char)EOF);
+ return parse( doc, root, collectComments );
+}
+
+bool
+Reader::parse( const char *beginDoc, const char *endDoc,
+ Value &root,
+ bool collectComments )
+{
+ if ( !features_.allowComments_ )
+ {
+ collectComments = false;
+ }
+
+ begin_ = beginDoc;
+ end_ = endDoc;
+ collectComments_ = collectComments;
+ current_ = begin_;
+ lastValueEnd_ = 0;
+ lastValue_ = 0;
+ commentsBefore_ = "";
+ errors_.clear();
+ while ( !nodes_.empty() )
+ nodes_.pop();
+ nodes_.push( &root );
+
+ bool successful = readValue();
+ Token token;
+ skipCommentTokens( token );
+ if ( collectComments_ && !commentsBefore_.empty() )
+ root.setComment( commentsBefore_, commentAfter );
+ if ( features_.strictRoot_ )
+ {
+ if ( !root.isArray() && !root.isObject() )
+ {
+ // Set error location to start of doc, ideally should be first token found in doc
+ token.type_ = tokenError;
+ token.start_ = beginDoc;
+ token.end_ = endDoc;
+ addError( "A valid JSON document must be either an array or an object value.",
+ token );
+ return false;
+ }
+ }
+ return successful;
+}
+
+
+bool
+Reader::readValue()
+{
+ Token token;
+ skipCommentTokens( token );
+ bool successful = true;
+
+ if ( collectComments_ && !commentsBefore_.empty() )
+ {
+ currentValue().setComment( commentsBefore_, commentBefore );
+ commentsBefore_ = "";
+ }
+
+
+ switch ( token.type_ )
+ {
+ case tokenObjectBegin:
+ successful = readObject( token );
+ break;
+ case tokenArrayBegin:
+ successful = readArray( token );
+ break;
+ case tokenNumber:
+ successful = decodeNumber( token );
+ break;
+ case tokenString:
+ successful = decodeString( token );
+ break;
+ case tokenTrue:
+ currentValue() = true;
+ break;
+ case tokenFalse:
+ currentValue() = false;
+ break;
+ case tokenNull:
+ currentValue() = Value();
+ break;
+ default:
+ return addError( "Syntax error: value, object or array expected.", token );
+ }
+
+ if ( collectComments_ )
+ {
+ lastValueEnd_ = current_;
+      lastValue_ = &currentValue();
+ }
+
+ return successful;
+}
+
+
+void
+Reader::skipCommentTokens( Token &token )
+{
+ if ( features_.allowComments_ )
+ {
+ do
+ {
+ readToken( token );
+ }
+ while ( token.type_ == tokenComment );
+ }
+ else
+ {
+ readToken( token );
+ }
+}
+
+
+bool
+Reader::expectToken( TokenType type, Token &token, const char *message )
+{
+ readToken( token );
+ if ( token.type_ != type )
+ return addError( message, token );
+ return true;
+}
+
+
+bool
+Reader::readToken( Token &token )
+{
+ skipSpaces();
+ token.start_ = current_;
+ Char c = getNextChar();
+ bool ok = true;
+ switch ( c )
+ {
+ case '{':
+ token.type_ = tokenObjectBegin;
+ break;
+ case '}':
+ token.type_ = tokenObjectEnd;
+ break;
+ case '[':
+ token.type_ = tokenArrayBegin;
+ break;
+ case ']':
+ token.type_ = tokenArrayEnd;
+ break;
+ case '"':
+ token.type_ = tokenString;
+ ok = readString();
+ break;
+ case '/':
+ token.type_ = tokenComment;
+ ok = readComment();
+ break;
+ case '0':
+ case '1':
+ case '2':
+ case '3':
+ case '4':
+ case '5':
+ case '6':
+ case '7':
+ case '8':
+ case '9':
+ case '-':
+ token.type_ = tokenNumber;
+ readNumber();
+ break;
+ case 't':
+ token.type_ = tokenTrue;
+ ok = match( "rue", 3 );
+ break;
+ case 'f':
+ token.type_ = tokenFalse;
+ ok = match( "alse", 4 );
+ break;
+ case 'n':
+ token.type_ = tokenNull;
+ ok = match( "ull", 3 );
+ break;
+ case ',':
+ token.type_ = tokenArraySeparator;
+ break;
+ case ':':
+ token.type_ = tokenMemberSeparator;
+ break;
+ case 0:
+ token.type_ = tokenEndOfStream;
+ break;
+ default:
+ ok = false;
+ break;
+ }
+ if ( !ok )
+ token.type_ = tokenError;
+ token.end_ = current_;
+ return true;
+}
+
+
+void
+Reader::skipSpaces()
+{
+ while ( current_ != end_ )
+ {
+ Char c = *current_;
+ if ( c == ' ' || c == '\t' || c == '\r' || c == '\n' )
+ ++current_;
+ else
+ break;
+ }
+}
+
+
+bool
+Reader::match( Location pattern,
+ int patternLength )
+{
+ if ( end_ - current_ < patternLength )
+ return false;
+ int index = patternLength;
+ while ( index-- )
+ if ( current_[index] != pattern[index] )
+ return false;
+ current_ += patternLength;
+ return true;
+}
+
+
+bool
+Reader::readComment()
+{
+ Location commentBegin = current_ - 1;
+ Char c = getNextChar();
+ bool successful = false;
+ if ( c == '*' )
+ successful = readCStyleComment();
+ else if ( c == '/' )
+ successful = readCppStyleComment();
+ if ( !successful )
+ return false;
+
+ if ( collectComments_ )
+ {
+ CommentPlacement placement = commentBefore;
+ if ( lastValueEnd_ && !containsNewLine( lastValueEnd_, commentBegin ) )
+ {
+ if ( c != '*' || !containsNewLine( commentBegin, current_ ) )
+ placement = commentAfterOnSameLine;
+ }
+
+ addComment( commentBegin, current_, placement );
+ }
+ return true;
+}
+
+
+void
+Reader::addComment( Location begin,
+ Location end,
+ CommentPlacement placement )
+{
+ assert( collectComments_ );
+ if ( placement == commentAfterOnSameLine )
+ {
+ assert( lastValue_ != 0 );
+ lastValue_->setComment( std::string( begin, end ), placement );
+ }
+ else
+ {
+ if ( !commentsBefore_.empty() )
+ commentsBefore_ += "\n";
+ commentsBefore_ += std::string( begin, end );
+ }
+}
+
+
+bool
+Reader::readCStyleComment()
+{
+ while ( current_ != end_ )
+ {
+ Char c = getNextChar();
+ if ( c == '*' && *current_ == '/' )
+ break;
+ }
+ return getNextChar() == '/';
+}
+
+
+bool
+Reader::readCppStyleComment()
+{
+ while ( current_ != end_ )
+ {
+ Char c = getNextChar();
+ if ( c == '\r' || c == '\n' )
+ break;
+ }
+ return true;
+}
+
+
+void
+Reader::readNumber()
+{
+ while ( current_ != end_ )
+ {
+ if ( !(*current_ >= '0' && *current_ <= '9') &&
+ !in( *current_, '.', 'e', 'E', '+', '-' ) )
+ break;
+ ++current_;
+ }
+}
+
+bool
+Reader::readString()
+{
+ Char c = 0;
+ while ( current_ != end_ )
+ {
+ c = getNextChar();
+ if ( c == '\\' )
+ getNextChar();
+ else if ( c == '"' )
+ break;
+ }
+ return c == '"';
+}
+
+
+bool
+Reader::readObject( Token &tokenStart )
+{
+ Token tokenName;
+ std::string name;
+ currentValue() = Value( objectValue );
+ while ( readToken( tokenName ) )
+ {
+ bool initialTokenOk = true;
+ while ( tokenName.type_ == tokenComment && initialTokenOk )
+ initialTokenOk = readToken( tokenName );
+ if ( !initialTokenOk )
+ break;
+ if ( tokenName.type_ == tokenObjectEnd && name.empty() ) // empty object
+ return true;
+ if ( tokenName.type_ != tokenString )
+ break;
+
+ name = "";
+ if ( !decodeString( tokenName, name ) )
+ return recoverFromError( tokenObjectEnd );
+
+ Token colon;
+ if ( !readToken( colon ) || colon.type_ != tokenMemberSeparator )
+ {
+ return addErrorAndRecover( "Missing ':' after object member name",
+ colon,
+ tokenObjectEnd );
+ }
+ Value &value = currentValue()[ name ];
+ nodes_.push( &value );
+ bool ok = readValue();
+ nodes_.pop();
+ if ( !ok ) // error already set
+ return recoverFromError( tokenObjectEnd );
+
+ Token comma;
+ if ( !readToken( comma )
+ || ( comma.type_ != tokenObjectEnd &&
+ comma.type_ != tokenArraySeparator &&
+ comma.type_ != tokenComment ) )
+ {
+ return addErrorAndRecover( "Missing ',' or '}' in object declaration",
+ comma,
+ tokenObjectEnd );
+ }
+ bool finalizeTokenOk = true;
+ while ( comma.type_ == tokenComment &&
+ finalizeTokenOk )
+ finalizeTokenOk = readToken( comma );
+ if ( comma.type_ == tokenObjectEnd )
+ return true;
+ }
+ return addErrorAndRecover( "Missing '}' or object member name",
+ tokenName,
+ tokenObjectEnd );
+}
+
+
+bool
+Reader::readArray( Token &tokenStart )
+{
+ currentValue() = Value( arrayValue );
+ skipSpaces();
+ if ( *current_ == ']' ) // empty array
+ {
+ Token endArray;
+ readToken( endArray );
+ return true;
+ }
+ int index = 0;
+ while ( true )
+ {
+ Value &value = currentValue()[ index++ ];
+ nodes_.push( &value );
+ bool ok = readValue();
+ nodes_.pop();
+ if ( !ok ) // error already set
+ return recoverFromError( tokenArrayEnd );
+
+ Token token;
+ // Accept Comment after last item in the array.
+ ok = readToken( token );
+ while ( token.type_ == tokenComment && ok )
+ {
+ ok = readToken( token );
+ }
+      bool badTokenType = ( token.type_ != tokenArraySeparator &&
+                            token.type_ != tokenArrayEnd );
+ if ( !ok || badTokenType )
+ {
+ return addErrorAndRecover( "Missing ',' or ']' in array declaration",
+ token,
+ tokenArrayEnd );
+ }
+ if ( token.type_ == tokenArrayEnd )
+ break;
+ }
+ return true;
+}
+
+
+bool
+Reader::decodeNumber( Token &token )
+{
+ bool isDouble = false;
+ for ( Location inspect = token.start_; inspect != token.end_; ++inspect )
+ {
+ isDouble = isDouble
+ || in( *inspect, '.', 'e', 'E', '+' )
+ || ( *inspect == '-' && inspect != token.start_ );
+ }
+ if ( isDouble )
+ return decodeDouble( token );
+ Location current = token.start_;
+ bool isNegative = *current == '-';
+ if ( isNegative )
+ ++current;
+ Value::UInt threshold = (isNegative ? Value::UInt(-Value::minInt)
+ : Value::maxUInt) / 10;
+ Value::UInt value = 0;
+ while ( current < token.end_ )
+ {
+ Char c = *current++;
+ if ( c < '0' || c > '9' )
+ return addError( "'" + std::string( token.start_, token.end_ ) + "' is not a number.", token );
+ if ( value >= threshold )
+ return decodeDouble( token );
+ value = value * 10 + Value::UInt(c - '0');
+ }
+ if ( isNegative )
+ currentValue() = -Value::Int( value );
+ else if ( value <= Value::UInt(Value::maxInt) )
+ currentValue() = Value::Int( value );
+ else
+ currentValue() = value;
+ return true;
+}
+
+
+bool
+Reader::decodeDouble( Token &token )
+{
+ double value = 0;
+ const int bufferSize = 32;
+ int count;
+ int length = int(token.end_ - token.start_);
+   if ( length < bufferSize )
+ {
+ Char buffer[bufferSize];
+ memcpy( buffer, token.start_, length );
+ buffer[length] = 0;
+ count = sscanf( buffer, "%lf", &value );
+ }
+ else
+ {
+ std::string buffer( token.start_, token.end_ );
+ count = sscanf( buffer.c_str(), "%lf", &value );
+ }
+
+ if ( count != 1 )
+ return addError( "'" + std::string( token.start_, token.end_ ) + "' is not a number.", token );
+ currentValue() = value;
+ return true;
+}
+
+
+bool
+Reader::decodeString( Token &token )
+{
+ std::string decoded;
+ if ( !decodeString( token, decoded ) )
+ return false;
+ currentValue() = decoded;
+ return true;
+}
+
+
+bool
+Reader::decodeString( Token &token, std::string &decoded )
+{
+ decoded.reserve( token.end_ - token.start_ - 2 );
+ Location current = token.start_ + 1; // skip '"'
+ Location end = token.end_ - 1; // do not include '"'
+ while ( current != end )
+ {
+ Char c = *current++;
+ if ( c == '"' )
+ break;
+ else if ( c == '\\' )
+ {
+ if ( current == end )
+ return addError( "Empty escape sequence in string", token, current );
+ Char escape = *current++;
+ switch ( escape )
+ {
+ case '"': decoded += '"'; break;
+ case '/': decoded += '/'; break;
+ case '\\': decoded += '\\'; break;
+ case 'b': decoded += '\b'; break;
+ case 'f': decoded += '\f'; break;
+ case 'n': decoded += '\n'; break;
+ case 'r': decoded += '\r'; break;
+ case 't': decoded += '\t'; break;
+ case 'u':
+ {
+ unsigned int unicode;
+ if ( !decodeUnicodeCodePoint( token, current, end, unicode ) )
+ return false;
+ decoded += codePointToUTF8(unicode);
+ }
+ break;
+ default:
+ return addError( "Bad escape sequence in string", token, current );
+ }
+ }
+ else
+ {
+ decoded += c;
+ }
+ }
+ return true;
+}
+
+bool
+Reader::decodeUnicodeCodePoint( Token &token,
+                                Location &current,
+ Location end,
+ unsigned int &unicode )
+{
+
+ if ( !decodeUnicodeEscapeSequence( token, current, end, unicode ) )
+ return false;
+ if (unicode >= 0xD800 && unicode <= 0xDBFF)
+ {
+ // surrogate pairs
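+      // Illustrative example: the escaped pair \uD83D\uDE00 combines to
+      // 0x10000 + ((0xD83D & 0x3FF) << 10) + (0xDE00 & 0x3FF) = 0x1F600.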
+ if (end - current < 6)
+ return addError( "additional six characters expected to parse unicode surrogate pair.", token, current );
+ unsigned int surrogatePair;
+ if (*(current++) == '\\' && *(current++)== 'u')
+ {
+ if (decodeUnicodeEscapeSequence( token, current, end, surrogatePair ))
+ {
+ unicode = 0x10000 + ((unicode & 0x3FF) << 10) + (surrogatePair & 0x3FF);
+ }
+ else
+ return false;
+ }
+ else
+ return addError( "expecting another \\u token to begin the second half of a unicode surrogate pair", token, current );
+ }
+ return true;
+}
+
+bool
+Reader::decodeUnicodeEscapeSequence( Token &token,
+                                     Location &current,
+ Location end,
+ unsigned int &unicode )
+{
+ if ( end - current < 4 )
+ return addError( "Bad unicode escape sequence in string: four digits expected.", token, current );
+ unicode = 0;
+ for ( int index =0; index < 4; ++index )
+ {
+ Char c = *current++;
+ unicode *= 16;
+ if ( c >= '0' && c <= '9' )
+ unicode += c - '0';
+ else if ( c >= 'a' && c <= 'f' )
+ unicode += c - 'a' + 10;
+ else if ( c >= 'A' && c <= 'F' )
+ unicode += c - 'A' + 10;
+ else
+ return addError( "Bad unicode escape sequence in string: hexadecimal digit expected.", token, current );
+ }
+ return true;
+}
+
+
+bool
+Reader::addError( const std::string &message,
+ Token &token,
+ Location extra )
+{
+ ErrorInfo info;
+ info.token_ = token;
+ info.message_ = message;
+ info.extra_ = extra;
+ errors_.push_back( info );
+ return false;
+}
+
+
+bool
+Reader::recoverFromError( TokenType skipUntilToken )
+{
+ int errorCount = int(errors_.size());
+ Token skip;
+ while ( true )
+ {
+ if ( !readToken(skip) )
+ errors_.resize( errorCount ); // discard errors caused by recovery
+ if ( skip.type_ == skipUntilToken || skip.type_ == tokenEndOfStream )
+ break;
+ }
+ errors_.resize( errorCount );
+ return false;
+}
+
+
+bool
+Reader::addErrorAndRecover( const std::string &message,
+ Token &token,
+ TokenType skipUntilToken )
+{
+ addError( message, token );
+ return recoverFromError( skipUntilToken );
+}
+
+
+Value &
+Reader::currentValue()
+{
+ return *(nodes_.top());
+}
+
+
+Reader::Char
+Reader::getNextChar()
+{
+ if ( current_ == end_ )
+ return 0;
+ return *current_++;
+}
+
+
+void
+Reader::getLocationLineAndColumn( Location location,
+ int &line,
+ int &column ) const
+{
+ Location current = begin_;
+ Location lastLineStart = current;
+ line = 0;
+ while ( current < location && current != end_ )
+ {
+ Char c = *current++;
+ if ( c == '\r' )
+ {
+ if ( *current == '\n' )
+ ++current;
+ lastLineStart = current;
+ ++line;
+ }
+ else if ( c == '\n' )
+ {
+ lastLineStart = current;
+ ++line;
+ }
+ }
+ // column & line start at 1
+ column = int(location - lastLineStart) + 1;
+ ++line;
+}
+
+
+std::string
+Reader::getLocationLineAndColumn( Location location ) const
+{
+ int line, column;
+ getLocationLineAndColumn( location, line, column );
+ char buffer[18+16+16+1];
+ sprintf( buffer, "Line %d, Column %d", line, column );
+ return buffer;
+}
+
+
+std::string
+Reader::getFormatedErrorMessages() const
+{
+ std::string formattedMessage;
+ for ( Errors::const_iterator itError = errors_.begin();
+ itError != errors_.end();
+ ++itError )
+ {
+ const ErrorInfo &error = *itError;
+ formattedMessage += "* " + getLocationLineAndColumn( error.token_.start_ ) + "\n";
+ formattedMessage += " " + error.message_ + "\n";
+ if ( error.extra_ )
+ formattedMessage += "See " + getLocationLineAndColumn( error.extra_ ) + " for detail.\n";
+ }
+ return formattedMessage;
+}
+
+
+std::istream& operator>>( std::istream &sin, Value &root )
+{
+ Json::Reader reader;
+ bool ok = reader.parse(sin, root, true);
+ //JSON_ASSERT( ok );
+ //if (!ok) throw std::runtime_error(reader.getFormatedErrorMessages());
+ return sin;
+}
+
+
+} // namespace Json
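
For reference, a minimal sketch of driving the reader above (illustrative only; not part of the imported jsoncpp sources, and it assumes the jsoncpp include/ directory from the gyp target is on the include path):

    #include <json/reader.h>
    #include <json/value.h>
    #include <iostream>
    #include <string>

    int main()
    {
       // A default-constructed Reader uses Features::all(): comments are
       // allowed and the root is not required to be an array or object.
       Json::Reader reader;
       Json::Value root;
       const std::string doc = "{ \"name\": \"webrtc\", \"revision\": 7 }";
       if ( !reader.parse( doc, root, /*collectComments=*/ true ) )
       {
          // Errors are reported with "Line N, Column M" locations.
          std::cerr << reader.getFormatedErrorMessages();
          return 1;
       }
       std::cout << root["name"].asString() << " r" << root["revision"].asInt() << std::endl;
       return 0;
    }
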
diff --git a/third_party_mods/jsoncpp/src/lib_json/json_value.cpp b/third_party_mods/jsoncpp/src/lib_json/json_value.cpp
new file mode 100644
index 0000000..7fde79d
--- /dev/null
+++ b/third_party_mods/jsoncpp/src/lib_json/json_value.cpp
@@ -0,0 +1,1718 @@
+#include <iostream>
+#include <json/value.h>
+#include <json/writer.h>
+#include <utility>
+#include <stdexcept>
+#include <cstring>
+#include <cassert>
+#ifdef JSON_USE_CPPTL
+# include <cpptl/conststring.h>
+#endif
+#include <cstddef> // size_t
+#ifndef JSON_USE_SIMPLE_INTERNAL_ALLOCATOR
+# include "json_batchallocator.h"
+#endif // #ifndef JSON_USE_SIMPLE_INTERNAL_ALLOCATOR
+
+#define JSON_ASSERT_UNREACHABLE assert( false )
+#define JSON_ASSERT( condition ) assert( condition ); // @todo <= change this into an exception throw
+#define JSON_ASSERT_MESSAGE( condition, message ) assert( condition && message ); // if (!( condition )) throw std::runtime_error( message );
+
+namespace Json {
+
+const Value Value::null;
+const Int Value::minInt = Int( ~(UInt(-1)/2) );
+const Int Value::maxInt = Int( UInt(-1)/2 );
+const UInt Value::maxUInt = UInt(-1);
+
+// A "safe" implementation of strdup. Allow null pointer to be passed.
+// Also avoid warning on msvc80.
+//
+//inline char *safeStringDup( const char *czstring )
+//{
+// if ( czstring )
+// {
+// const size_t length = (unsigned int)( strlen(czstring) + 1 );
+// char *newString = static_cast<char *>( malloc( length ) );
+// memcpy( newString, czstring, length );
+// return newString;
+// }
+// return 0;
+//}
+//
+//inline char *safeStringDup( const std::string &str )
+//{
+// if ( !str.empty() )
+// {
+// const size_t length = str.length();
+// char *newString = static_cast<char *>( malloc( length + 1 ) );
+// memcpy( newString, str.c_str(), length );
+// newString[length] = 0;
+// return newString;
+// }
+// return 0;
+//}
+
+ValueAllocator::~ValueAllocator()
+{
+}
+
+class DefaultValueAllocator : public ValueAllocator
+{
+public:
+ virtual ~DefaultValueAllocator()
+ {
+ }
+
+ virtual char *makeMemberName( const char *memberName )
+ {
+ return duplicateStringValue( memberName );
+ }
+
+ virtual void releaseMemberName( char *memberName )
+ {
+ releaseStringValue( memberName );
+ }
+
+ virtual char *duplicateStringValue( const char *value,
+ unsigned int length = unknown )
+ {
+      // @todo investigate this old optimization
+ //if ( !value || value[0] == 0 )
+ // return 0;
+
+ if ( length == unknown )
+ length = (unsigned int)strlen(value);
+ char *newString = static_cast<char *>( malloc( length + 1 ) );
+ memcpy( newString, value, length );
+ newString[length] = 0;
+ return newString;
+ }
+
+ virtual void releaseStringValue( char *value )
+ {
+ if ( value )
+ free( value );
+ }
+};
+
+static ValueAllocator *&valueAllocator()
+{
+ static DefaultValueAllocator defaultAllocator;
+ static ValueAllocator *valueAllocator = &defaultAllocator;
+ return valueAllocator;
+}
+
+static struct DummyValueAllocatorInitializer {
+ DummyValueAllocatorInitializer()
+ {
+ valueAllocator(); // ensure valueAllocator() statics are initialized before main().
+ }
+} dummyValueAllocatorInitializer;
+
+
+
+// //////////////////////////////////////////////////////////////////
+// //////////////////////////////////////////////////////////////////
+// //////////////////////////////////////////////////////////////////
+// ValueInternals...
+// //////////////////////////////////////////////////////////////////
+// //////////////////////////////////////////////////////////////////
+// //////////////////////////////////////////////////////////////////
+#ifdef JSON_VALUE_USE_INTERNAL_MAP
+# include "json_internalarray.inl"
+# include "json_internalmap.inl"
+#endif // JSON_VALUE_USE_INTERNAL_MAP
+
+# include "json_valueiterator.inl"
+
+
+// //////////////////////////////////////////////////////////////////
+// //////////////////////////////////////////////////////////////////
+// //////////////////////////////////////////////////////////////////
+// class Value::CommentInfo
+// //////////////////////////////////////////////////////////////////
+// //////////////////////////////////////////////////////////////////
+// //////////////////////////////////////////////////////////////////
+
+
+Value::CommentInfo::CommentInfo()
+ : comment_( 0 )
+{
+}
+
+Value::CommentInfo::~CommentInfo()
+{
+ if ( comment_ )
+ valueAllocator()->releaseStringValue( comment_ );
+}
+
+
+void
+Value::CommentInfo::setComment( const char *text )
+{
+ if ( comment_ )
+ valueAllocator()->releaseStringValue( comment_ );
+ JSON_ASSERT( text );
+ JSON_ASSERT_MESSAGE( text[0]=='\0' || text[0]=='/', "Comments must start with /");
+ // It seems that /**/ style comments are acceptable as well.
+ comment_ = valueAllocator()->duplicateStringValue( text );
+}
+
+
+// //////////////////////////////////////////////////////////////////
+// //////////////////////////////////////////////////////////////////
+// //////////////////////////////////////////////////////////////////
+// class Value::CZString
+// //////////////////////////////////////////////////////////////////
+// //////////////////////////////////////////////////////////////////
+// //////////////////////////////////////////////////////////////////
+# ifndef JSON_VALUE_USE_INTERNAL_MAP
+
+// Notes: index_ indicates if the string was allocated when
+// a string is stored.
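+// (Illustration of how the policies below are used: resolveReference() builds
+// ordinary keys with CZString::duplicateOnCopy, so the member name is copied
+// only when the temporary key is copied into the map, while StaticString keys
+// use CZString::noDuplication and simply alias the caller's storage, which is
+// assumed to outlive the Value.)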
+
+Value::CZString::CZString( int index )
+ : cstr_( 0 )
+ , index_( index )
+{
+}
+
+Value::CZString::CZString( const char *cstr, DuplicationPolicy allocate )
+ : cstr_( allocate == duplicate ? valueAllocator()->makeMemberName(cstr)
+ : cstr )
+ , index_( allocate )
+{
+}
+
+Value::CZString::CZString( const CZString &other )
+: cstr_( other.index_ != noDuplication && other.cstr_ != 0
+ ? valueAllocator()->makeMemberName( other.cstr_ )
+ : other.cstr_ )
+ , index_( other.cstr_ ? (other.index_ == noDuplication ? noDuplication : duplicate)
+ : other.index_ )
+{
+}
+
+Value::CZString::~CZString()
+{
+ if ( cstr_ && index_ == duplicate )
+ valueAllocator()->releaseMemberName( const_cast<char *>( cstr_ ) );
+}
+
+void
+Value::CZString::swap( CZString &other )
+{
+ std::swap( cstr_, other.cstr_ );
+ std::swap( index_, other.index_ );
+}
+
+Value::CZString &
+Value::CZString::operator =( const CZString &other )
+{
+ CZString temp( other );
+ swap( temp );
+ return *this;
+}
+
+bool
+Value::CZString::operator<( const CZString &other ) const
+{
+ if ( cstr_ )
+ return strcmp( cstr_, other.cstr_ ) < 0;
+ return index_ < other.index_;
+}
+
+bool
+Value::CZString::operator==( const CZString &other ) const
+{
+ if ( cstr_ )
+ return strcmp( cstr_, other.cstr_ ) == 0;
+ return index_ == other.index_;
+}
+
+
+int
+Value::CZString::index() const
+{
+ return index_;
+}
+
+
+const char *
+Value::CZString::c_str() const
+{
+ return cstr_;
+}
+
+bool
+Value::CZString::isStaticString() const
+{
+ return index_ == noDuplication;
+}
+
+#endif // ifndef JSON_VALUE_USE_INTERNAL_MAP
+
+
+// //////////////////////////////////////////////////////////////////
+// //////////////////////////////////////////////////////////////////
+// //////////////////////////////////////////////////////////////////
+// class Value::Value
+// //////////////////////////////////////////////////////////////////
+// //////////////////////////////////////////////////////////////////
+// //////////////////////////////////////////////////////////////////
+
+/*! \internal Default constructor initialization must be equivalent to:
+ * memset( this, 0, sizeof(Value) )
+ * This optimization is used in ValueInternalMap fast allocator.
+ */
+Value::Value( ValueType type )
+ : type_( type )
+ , allocated_( 0 )
+ , comments_( 0 )
+# ifdef JSON_VALUE_USE_INTERNAL_MAP
+ , itemIsUsed_( 0 )
+#endif
+{
+ switch ( type )
+ {
+ case nullValue:
+ break;
+ case intValue:
+ case uintValue:
+ value_.int_ = 0;
+ break;
+ case realValue:
+ value_.real_ = 0.0;
+ break;
+ case stringValue:
+ value_.string_ = 0;
+ break;
+#ifndef JSON_VALUE_USE_INTERNAL_MAP
+ case arrayValue:
+ case objectValue:
+ value_.map_ = new ObjectValues();
+ break;
+#else
+ case arrayValue:
+ value_.array_ = arrayAllocator()->newArray();
+ break;
+ case objectValue:
+ value_.map_ = mapAllocator()->newMap();
+ break;
+#endif
+ case booleanValue:
+ value_.bool_ = false;
+ break;
+ default:
+ JSON_ASSERT_UNREACHABLE;
+ }
+}
+
+
+Value::Value( Int value )
+ : type_( intValue )
+ , comments_( 0 )
+# ifdef JSON_VALUE_USE_INTERNAL_MAP
+ , itemIsUsed_( 0 )
+#endif
+{
+ value_.int_ = value;
+}
+
+
+Value::Value( UInt value )
+ : type_( uintValue )
+ , comments_( 0 )
+# ifdef JSON_VALUE_USE_INTERNAL_MAP
+ , itemIsUsed_( 0 )
+#endif
+{
+ value_.uint_ = value;
+}
+
+Value::Value( double value )
+ : type_( realValue )
+ , comments_( 0 )
+# ifdef JSON_VALUE_USE_INTERNAL_MAP
+ , itemIsUsed_( 0 )
+#endif
+{
+ value_.real_ = value;
+}
+
+Value::Value( const char *value )
+ : type_( stringValue )
+ , allocated_( true )
+ , comments_( 0 )
+# ifdef JSON_VALUE_USE_INTERNAL_MAP
+ , itemIsUsed_( 0 )
+#endif
+{
+ value_.string_ = valueAllocator()->duplicateStringValue( value );
+}
+
+
+Value::Value( const char *beginValue,
+ const char *endValue )
+ : type_( stringValue )
+ , allocated_( true )
+ , comments_( 0 )
+# ifdef JSON_VALUE_USE_INTERNAL_MAP
+ , itemIsUsed_( 0 )
+#endif
+{
+ value_.string_ = valueAllocator()->duplicateStringValue( beginValue,
+ UInt(endValue - beginValue) );
+}
+
+
+Value::Value( const std::string &value )
+ : type_( stringValue )
+ , allocated_( true )
+ , comments_( 0 )
+# ifdef JSON_VALUE_USE_INTERNAL_MAP
+ , itemIsUsed_( 0 )
+#endif
+{
+ value_.string_ = valueAllocator()->duplicateStringValue( value.c_str(),
+ (unsigned int)value.length() );
+
+}
+
+Value::Value( const StaticString &value )
+ : type_( stringValue )
+ , allocated_( false )
+ , comments_( 0 )
+# ifdef JSON_VALUE_USE_INTERNAL_MAP
+ , itemIsUsed_( 0 )
+#endif
+{
+ value_.string_ = const_cast<char *>( value.c_str() );
+}
+
+
+# ifdef JSON_USE_CPPTL
+Value::Value( const CppTL::ConstString &value )
+ : type_( stringValue )
+ , allocated_( true )
+ , comments_( 0 )
+# ifdef JSON_VALUE_USE_INTERNAL_MAP
+ , itemIsUsed_( 0 )
+#endif
+{
+ value_.string_ = valueAllocator()->duplicateStringValue( value, value.length() );
+}
+# endif
+
+Value::Value( bool value )
+ : type_( booleanValue )
+ , comments_( 0 )
+# ifdef JSON_VALUE_USE_INTERNAL_MAP
+ , itemIsUsed_( 0 )
+#endif
+{
+ value_.bool_ = value;
+}
+
+
+Value::Value( const Value &other )
+ : type_( other.type_ )
+ , comments_( 0 )
+# ifdef JSON_VALUE_USE_INTERNAL_MAP
+ , itemIsUsed_( 0 )
+#endif
+{
+ switch ( type_ )
+ {
+ case nullValue:
+ case intValue:
+ case uintValue:
+ case realValue:
+ case booleanValue:
+ value_ = other.value_;
+ break;
+ case stringValue:
+ if ( other.value_.string_ )
+ {
+ value_.string_ = valueAllocator()->duplicateStringValue( other.value_.string_ );
+ allocated_ = true;
+ }
+ else
+ value_.string_ = 0;
+ break;
+#ifndef JSON_VALUE_USE_INTERNAL_MAP
+ case arrayValue:
+ case objectValue:
+ value_.map_ = new ObjectValues( *other.value_.map_ );
+ break;
+#else
+ case arrayValue:
+ value_.array_ = arrayAllocator()->newArrayCopy( *other.value_.array_ );
+ break;
+ case objectValue:
+ value_.map_ = mapAllocator()->newMapCopy( *other.value_.map_ );
+ break;
+#endif
+ default:
+ JSON_ASSERT_UNREACHABLE;
+ }
+ if ( other.comments_ )
+ {
+ comments_ = new CommentInfo[numberOfCommentPlacement];
+ for ( int comment =0; comment < numberOfCommentPlacement; ++comment )
+ {
+ const CommentInfo &otherComment = other.comments_[comment];
+ if ( otherComment.comment_ )
+ comments_[comment].setComment( otherComment.comment_ );
+ }
+ }
+}
+
+
+Value::~Value()
+{
+ switch ( type_ )
+ {
+ case nullValue:
+ case intValue:
+ case uintValue:
+ case realValue:
+ case booleanValue:
+ break;
+ case stringValue:
+ if ( allocated_ )
+ valueAllocator()->releaseStringValue( value_.string_ );
+ break;
+#ifndef JSON_VALUE_USE_INTERNAL_MAP
+ case arrayValue:
+ case objectValue:
+ delete value_.map_;
+ break;
+#else
+ case arrayValue:
+ arrayAllocator()->destructArray( value_.array_ );
+ break;
+ case objectValue:
+ mapAllocator()->destructMap( value_.map_ );
+ break;
+#endif
+ default:
+ JSON_ASSERT_UNREACHABLE;
+ }
+
+ if ( comments_ )
+ delete[] comments_;
+}
+
+Value &
+Value::operator=( const Value &other )
+{
+ Value temp( other );
+ swap( temp );
+ return *this;
+}
+
+void
+Value::swap( Value &other )
+{
+ ValueType temp = type_;
+ type_ = other.type_;
+ other.type_ = temp;
+ std::swap( value_, other.value_ );
+ int temp2 = allocated_;
+ allocated_ = other.allocated_;
+ other.allocated_ = temp2;
+}
+
+ValueType
+Value::type() const
+{
+ return type_;
+}
+
+
+int
+Value::compare( const Value &other )
+{
+ /*
+ int typeDelta = other.type_ - type_;
+ switch ( type_ )
+ {
+ case nullValue:
+
+ return other.type_ == type_;
+ case intValue:
+ if ( other.type_.isNumeric()
+ case uintValue:
+ case realValue:
+ case booleanValue:
+ break;
+ case stringValue,
+ break;
+ case arrayValue:
+ delete value_.array_;
+ break;
+ case objectValue:
+ delete value_.map_;
+ default:
+ JSON_ASSERT_UNREACHABLE;
+ }
+ */
+ return 0; // unreachable
+}
+
+bool
+Value::operator <( const Value &other ) const
+{
+ int typeDelta = type_ - other.type_;
+ if ( typeDelta )
+ return typeDelta < 0 ? true : false;
+ switch ( type_ )
+ {
+ case nullValue:
+ return false;
+ case intValue:
+ return value_.int_ < other.value_.int_;
+ case uintValue:
+ return value_.uint_ < other.value_.uint_;
+ case realValue:
+ return value_.real_ < other.value_.real_;
+ case booleanValue:
+ return value_.bool_ < other.value_.bool_;
+ case stringValue:
+ return ( value_.string_ == 0 && other.value_.string_ )
+ || ( other.value_.string_
+ && value_.string_
+ && strcmp( value_.string_, other.value_.string_ ) < 0 );
+#ifndef JSON_VALUE_USE_INTERNAL_MAP
+ case arrayValue:
+ case objectValue:
+ {
+ int delta = int( value_.map_->size() - other.value_.map_->size() );
+ if ( delta )
+ return delta < 0;
+ return (*value_.map_) < (*other.value_.map_);
+ }
+#else
+ case arrayValue:
+ return value_.array_->compare( *(other.value_.array_) ) < 0;
+ case objectValue:
+ return value_.map_->compare( *(other.value_.map_) ) < 0;
+#endif
+ default:
+ JSON_ASSERT_UNREACHABLE;
+ }
+ return 0; // unreachable
+}
+
+bool
+Value::operator <=( const Value &other ) const
+{
+ return !(other > *this);
+}
+
+bool
+Value::operator >=( const Value &other ) const
+{
+ return !(*this < other);
+}
+
+bool
+Value::operator >( const Value &other ) const
+{
+ return other < *this;
+}
+
+bool
+Value::operator ==( const Value &other ) const
+{
+ //if ( type_ != other.type_ )
+ // GCC 2.95.3 says:
+ // attempt to take address of bit-field structure member `Json::Value::type_'
+ // Beats me, but a temp solves the problem.
+ int temp = other.type_;
+ if ( type_ != temp )
+ return false;
+ switch ( type_ )
+ {
+ case nullValue:
+ return true;
+ case intValue:
+ return value_.int_ == other.value_.int_;
+ case uintValue:
+ return value_.uint_ == other.value_.uint_;
+ case realValue:
+ return value_.real_ == other.value_.real_;
+ case booleanValue:
+ return value_.bool_ == other.value_.bool_;
+ case stringValue:
+ return ( value_.string_ == other.value_.string_ )
+ || ( other.value_.string_
+ && value_.string_
+ && strcmp( value_.string_, other.value_.string_ ) == 0 );
+#ifndef JSON_VALUE_USE_INTERNAL_MAP
+ case arrayValue:
+ case objectValue:
+ return value_.map_->size() == other.value_.map_->size()
+ && (*value_.map_) == (*other.value_.map_);
+#else
+ case arrayValue:
+ return value_.array_->compare( *(other.value_.array_) ) == 0;
+ case objectValue:
+ return value_.map_->compare( *(other.value_.map_) ) == 0;
+#endif
+ default:
+ JSON_ASSERT_UNREACHABLE;
+ }
+ return 0; // unreachable
+}
+
+bool
+Value::operator !=( const Value &other ) const
+{
+ return !( *this == other );
+}
+
+const char *
+Value::asCString() const
+{
+ JSON_ASSERT( type_ == stringValue );
+ return value_.string_;
+}
+
+
+std::string
+Value::asString() const
+{
+ switch ( type_ )
+ {
+ case nullValue:
+ return "";
+ case stringValue:
+ return value_.string_ ? value_.string_ : "";
+ case booleanValue:
+ return value_.bool_ ? "true" : "false";
+ case intValue:
+ case uintValue:
+ case realValue:
+ case arrayValue:
+ case objectValue:
+ JSON_ASSERT_MESSAGE( false, "Type is not convertible to string" );
+ default:
+ JSON_ASSERT_UNREACHABLE;
+ }
+ return ""; // unreachable
+}
+
+# ifdef JSON_USE_CPPTL
+CppTL::ConstString
+Value::asConstString() const
+{
+ return CppTL::ConstString( asString().c_str() );
+}
+# endif
+
+Value::Int
+Value::asInt() const
+{
+ switch ( type_ )
+ {
+ case nullValue:
+ return 0;
+ case intValue:
+ return value_.int_;
+ case uintValue:
+ JSON_ASSERT_MESSAGE( value_.uint_ < (unsigned)maxInt, "integer out of signed integer range" );
+ return value_.uint_;
+ case realValue:
+ JSON_ASSERT_MESSAGE( value_.real_ >= minInt && value_.real_ <= maxInt, "Real out of signed integer range" );
+ return Int( value_.real_ );
+ case booleanValue:
+ return value_.bool_ ? 1 : 0;
+ case stringValue:
+ case arrayValue:
+ case objectValue:
+ JSON_ASSERT_MESSAGE( false, "Type is not convertible to int" );
+ default:
+ JSON_ASSERT_UNREACHABLE;
+ }
+ return 0; // unreachable;
+}
+
+Value::UInt
+Value::asUInt() const
+{
+ switch ( type_ )
+ {
+ case nullValue:
+ return 0;
+ case intValue:
+ JSON_ASSERT_MESSAGE( value_.int_ >= 0, "Negative integer can not be converted to unsigned integer" );
+ return value_.int_;
+ case uintValue:
+ return value_.uint_;
+ case realValue:
+ JSON_ASSERT_MESSAGE( value_.real_ >= 0 && value_.real_ <= maxUInt, "Real out of unsigned integer range" );
+ return UInt( value_.real_ );
+ case booleanValue:
+ return value_.bool_ ? 1 : 0;
+ case stringValue:
+ case arrayValue:
+ case objectValue:
+ JSON_ASSERT_MESSAGE( false, "Type is not convertible to uint" );
+ default:
+ JSON_ASSERT_UNREACHABLE;
+ }
+ return 0; // unreachable;
+}
+
+double
+Value::asDouble() const
+{
+ switch ( type_ )
+ {
+ case nullValue:
+ return 0.0;
+ case intValue:
+ return value_.int_;
+ case uintValue:
+ return value_.uint_;
+ case realValue:
+ return value_.real_;
+ case booleanValue:
+ return value_.bool_ ? 1.0 : 0.0;
+ case stringValue:
+ case arrayValue:
+ case objectValue:
+ JSON_ASSERT_MESSAGE( false, "Type is not convertible to double" );
+ default:
+ JSON_ASSERT_UNREACHABLE;
+ }
+ return 0; // unreachable;
+}
+
+bool
+Value::asBool() const
+{
+ switch ( type_ )
+ {
+ case nullValue:
+ return false;
+ case intValue:
+ case uintValue:
+ return value_.int_ != 0;
+ case realValue:
+ return value_.real_ != 0.0;
+ case booleanValue:
+ return value_.bool_;
+ case stringValue:
+ return value_.string_ && value_.string_[0] != 0;
+ case arrayValue:
+ case objectValue:
+ return value_.map_->size() != 0;
+ default:
+ JSON_ASSERT_UNREACHABLE;
+ }
+ return false; // unreachable;
+}
+
+
+bool
+Value::isConvertibleTo( ValueType other ) const
+{
+ switch ( type_ )
+ {
+ case nullValue:
+ return true;
+ case intValue:
+ return ( other == nullValue && value_.int_ == 0 )
+ || other == intValue
+ || ( other == uintValue && value_.int_ >= 0 )
+ || other == realValue
+ || other == stringValue
+ || other == booleanValue;
+ case uintValue:
+ return ( other == nullValue && value_.uint_ == 0 )
+ || ( other == intValue && value_.uint_ <= (unsigned)maxInt )
+ || other == uintValue
+ || other == realValue
+ || other == stringValue
+ || other == booleanValue;
+ case realValue:
+ return ( other == nullValue && value_.real_ == 0.0 )
+ || ( other == intValue && value_.real_ >= minInt && value_.real_ <= maxInt )
+ || ( other == uintValue && value_.real_ >= 0 && value_.real_ <= maxUInt )
+ || other == realValue
+ || other == stringValue
+ || other == booleanValue;
+ case booleanValue:
+ return ( other == nullValue && value_.bool_ == false )
+ || other == intValue
+ || other == uintValue
+ || other == realValue
+ || other == stringValue
+ || other == booleanValue;
+ case stringValue:
+ return other == stringValue
+ || ( other == nullValue && (!value_.string_ || value_.string_[0] == 0) );
+ case arrayValue:
+ return other == arrayValue
+ || ( other == nullValue && value_.map_->size() == 0 );
+ case objectValue:
+ return other == objectValue
+ || ( other == nullValue && value_.map_->size() == 0 );
+ default:
+ JSON_ASSERT_UNREACHABLE;
+ }
+ return false; // unreachable;
+}
+
+
+/// Number of values in array or object
+Value::UInt
+Value::size() const
+{
+ switch ( type_ )
+ {
+ case nullValue:
+ case intValue:
+ case uintValue:
+ case realValue:
+ case booleanValue:
+ case stringValue:
+ return 0;
+#ifndef JSON_VALUE_USE_INTERNAL_MAP
+ case arrayValue: // size of the array is highest index + 1
+ if ( !value_.map_->empty() )
+ {
+ ObjectValues::const_iterator itLast = value_.map_->end();
+ --itLast;
+ return (*itLast).first.index()+1;
+ }
+ return 0;
+ case objectValue:
+ return Int( value_.map_->size() );
+#else
+ case arrayValue:
+ return Int( value_.array_->size() );
+ case objectValue:
+ return Int( value_.map_->size() );
+#endif
+ default:
+ JSON_ASSERT_UNREACHABLE;
+ }
+ return 0; // unreachable;
+}
+
+
+bool
+Value::empty() const
+{
+ if ( isNull() || isArray() || isObject() )
+ return size() == 0u;
+ else
+ return false;
+}
+
+
+bool
+Value::operator!() const
+{
+ return isNull();
+}
+
+
+void
+Value::clear()
+{
+ JSON_ASSERT( type_ == nullValue || type_ == arrayValue || type_ == objectValue );
+
+ switch ( type_ )
+ {
+#ifndef JSON_VALUE_USE_INTERNAL_MAP
+ case arrayValue:
+ case objectValue:
+ value_.map_->clear();
+ break;
+#else
+ case arrayValue:
+ value_.array_->clear();
+ break;
+ case objectValue:
+ value_.map_->clear();
+ break;
+#endif
+ default:
+ break;
+ }
+}
+
+void
+Value::resize( UInt newSize )
+{
+ JSON_ASSERT( type_ == nullValue || type_ == arrayValue );
+ if ( type_ == nullValue )
+ *this = Value( arrayValue );
+#ifndef JSON_VALUE_USE_INTERNAL_MAP
+ UInt oldSize = size();
+ if ( newSize == 0 )
+ clear();
+ else if ( newSize > oldSize )
+ (*this)[ newSize - 1 ];
+ else
+ {
+ for ( UInt index = newSize; index < oldSize; ++index )
+ value_.map_->erase( index );
+ assert( size() == newSize );
+ }
+#else
+ value_.array_->resize( newSize );
+#endif
+}
+
+
+Value &
+Value::operator[]( UInt index )
+{
+ JSON_ASSERT( type_ == nullValue || type_ == arrayValue );
+ if ( type_ == nullValue )
+ *this = Value( arrayValue );
+#ifndef JSON_VALUE_USE_INTERNAL_MAP
+ CZString key( index );
+ ObjectValues::iterator it = value_.map_->lower_bound( key );
+ if ( it != value_.map_->end() && (*it).first == key )
+ return (*it).second;
+
+ ObjectValues::value_type defaultValue( key, null );
+ it = value_.map_->insert( it, defaultValue );
+ return (*it).second;
+#else
+ return value_.array_->resolveReference( index );
+#endif
+}
+
+
+const Value &
+Value::operator[]( UInt index ) const
+{
+ JSON_ASSERT( type_ == nullValue || type_ == arrayValue );
+ if ( type_ == nullValue )
+ return null;
+#ifndef JSON_VALUE_USE_INTERNAL_MAP
+ CZString key( index );
+ ObjectValues::const_iterator it = value_.map_->find( key );
+ if ( it == value_.map_->end() )
+ return null;
+ return (*it).second;
+#else
+ Value *value = value_.array_->find( index );
+ return value ? *value : null;
+#endif
+}
+
+
+Value &
+Value::operator[]( const char *key )
+{
+ return resolveReference( key, false );
+}
+
+
+Value &
+Value::resolveReference( const char *key,
+ bool isStatic )
+{
+ JSON_ASSERT( type_ == nullValue || type_ == objectValue );
+ if ( type_ == nullValue )
+ *this = Value( objectValue );
+#ifndef JSON_VALUE_USE_INTERNAL_MAP
+ CZString actualKey( key, isStatic ? CZString::noDuplication
+ : CZString::duplicateOnCopy );
+ ObjectValues::iterator it = value_.map_->lower_bound( actualKey );
+ if ( it != value_.map_->end() && (*it).first == actualKey )
+ return (*it).second;
+
+ ObjectValues::value_type defaultValue( actualKey, null );
+ it = value_.map_->insert( it, defaultValue );
+ Value &value = (*it).second;
+ return value;
+#else
+ return value_.map_->resolveReference( key, isStatic );
+#endif
+}
+
+
+Value
+Value::get( UInt index,
+ const Value &defaultValue ) const
+{
+ const Value *value = &((*this)[index]);
+ return value == &null ? defaultValue : *value;
+}
+
+
+bool
+Value::isValidIndex( UInt index ) const
+{
+ return index < size();
+}
+
+
+
+const Value &
+Value::operator[]( const char *key ) const
+{
+ JSON_ASSERT( type_ == nullValue || type_ == objectValue );
+ if ( type_ == nullValue )
+ return null;
+#ifndef JSON_VALUE_USE_INTERNAL_MAP
+ CZString actualKey( key, CZString::noDuplication );
+ ObjectValues::const_iterator it = value_.map_->find( actualKey );
+ if ( it == value_.map_->end() )
+ return null;
+ return (*it).second;
+#else
+ const Value *value = value_.map_->find( key );
+ return value ? *value : null;
+#endif
+}
+
+
+Value &
+Value::operator[]( const std::string &key )
+{
+ return (*this)[ key.c_str() ];
+}
+
+
+const Value &
+Value::operator[]( const std::string &key ) const
+{
+ return (*this)[ key.c_str() ];
+}
+
+Value &
+Value::operator[]( const StaticString &key )
+{
+ return resolveReference( key, true );
+}
+
+
+# ifdef JSON_USE_CPPTL
+Value &
+Value::operator[]( const CppTL::ConstString &key )
+{
+ return (*this)[ key.c_str() ];
+}
+
+
+const Value &
+Value::operator[]( const CppTL::ConstString &key ) const
+{
+ return (*this)[ key.c_str() ];
+}
+# endif
+
+
+Value &
+Value::append( const Value &value )
+{
+ return (*this)[size()] = value;
+}
+
+
+Value
+Value::get( const char *key,
+ const Value &defaultValue ) const
+{
+ const Value *value = &((*this)[key]);
+ return value == &null ? defaultValue : *value;
+}
+
+
+Value
+Value::get( const std::string &key,
+ const Value &defaultValue ) const
+{
+ return get( key.c_str(), defaultValue );
+}
+
+Value
+Value::removeMember( const char* key )
+{
+ JSON_ASSERT( type_ == nullValue || type_ == objectValue );
+ if ( type_ == nullValue )
+ return null;
+#ifndef JSON_VALUE_USE_INTERNAL_MAP
+ CZString actualKey( key, CZString::noDuplication );
+ ObjectValues::iterator it = value_.map_->find( actualKey );
+ if ( it == value_.map_->end() )
+ return null;
+ Value old(it->second);
+ value_.map_->erase(it);
+ return old;
+#else
+ Value *value = value_.map_->find( key );
+ if (value){
+ Value old(*value);
+      value_.map_->remove( key );
+ return old;
+ } else {
+ return null;
+ }
+#endif
+}
+
+Value
+Value::removeMember( const std::string &key )
+{
+ return removeMember( key.c_str() );
+}
+
+# ifdef JSON_USE_CPPTL
+Value
+Value::get( const CppTL::ConstString &key,
+ const Value &defaultValue ) const
+{
+ return get( key.c_str(), defaultValue );
+}
+# endif
+
+bool
+Value::isMember( const char *key ) const
+{
+ const Value *value = &((*this)[key]);
+ return value != &null;
+}
+
+
+bool
+Value::isMember( const std::string &key ) const
+{
+ return isMember( key.c_str() );
+}
+
+
+# ifdef JSON_USE_CPPTL
+bool
+Value::isMember( const CppTL::ConstString &key ) const
+{
+ return isMember( key.c_str() );
+}
+#endif
+
+Value::Members
+Value::getMemberNames() const
+{
+ JSON_ASSERT( type_ == nullValue || type_ == objectValue );
+ if ( type_ == nullValue )
+ return Value::Members();
+ Members members;
+ members.reserve( value_.map_->size() );
+#ifndef JSON_VALUE_USE_INTERNAL_MAP
+ ObjectValues::const_iterator it = value_.map_->begin();
+ ObjectValues::const_iterator itEnd = value_.map_->end();
+ for ( ; it != itEnd; ++it )
+ members.push_back( std::string( (*it).first.c_str() ) );
+#else
+ ValueInternalMap::IteratorState it;
+ ValueInternalMap::IteratorState itEnd;
+ value_.map_->makeBeginIterator( it );
+ value_.map_->makeEndIterator( itEnd );
+ for ( ; !ValueInternalMap::equals( it, itEnd ); ValueInternalMap::increment(it) )
+ members.push_back( std::string( ValueInternalMap::key( it ) ) );
+#endif
+ return members;
+}
+//
+//# ifdef JSON_USE_CPPTL
+//EnumMemberNames
+//Value::enumMemberNames() const
+//{
+// if ( type_ == objectValue )
+// {
+// return CppTL::Enum::any( CppTL::Enum::transform(
+// CppTL::Enum::keys( *(value_.map_), CppTL::Type<const CZString &>() ),
+// MemberNamesTransform() ) );
+// }
+// return EnumMemberNames();
+//}
+//
+//
+//EnumValues
+//Value::enumValues() const
+//{
+// if ( type_ == objectValue || type_ == arrayValue )
+// return CppTL::Enum::anyValues( *(value_.map_),
+// CppTL::Type<const Value &>() );
+// return EnumValues();
+//}
+//
+//# endif
+
+
+bool
+Value::isNull() const
+{
+ return type_ == nullValue;
+}
+
+
+bool
+Value::isBool() const
+{
+ return type_ == booleanValue;
+}
+
+
+bool
+Value::isInt() const
+{
+ return type_ == intValue;
+}
+
+
+bool
+Value::isUInt() const
+{
+ return type_ == uintValue;
+}
+
+
+bool
+Value::isIntegral() const
+{
+ return type_ == intValue
+ || type_ == uintValue
+ || type_ == booleanValue;
+}
+
+
+bool
+Value::isDouble() const
+{
+ return type_ == realValue;
+}
+
+
+bool
+Value::isNumeric() const
+{
+ return isIntegral() || isDouble();
+}
+
+
+bool
+Value::isString() const
+{
+ return type_ == stringValue;
+}
+
+
+bool
+Value::isArray() const
+{
+ return type_ == nullValue || type_ == arrayValue;
+}
+
+
+bool
+Value::isObject() const
+{
+ return type_ == nullValue || type_ == objectValue;
+}
+
+
+void
+Value::setComment( const char *comment,
+ CommentPlacement placement )
+{
+ if ( !comments_ )
+ comments_ = new CommentInfo[numberOfCommentPlacement];
+ comments_[placement].setComment( comment );
+}
+
+
+void
+Value::setComment( const std::string &comment,
+ CommentPlacement placement )
+{
+ setComment( comment.c_str(), placement );
+}
+
+
+bool
+Value::hasComment( CommentPlacement placement ) const
+{
+ return comments_ != 0 && comments_[placement].comment_ != 0;
+}
+
+std::string
+Value::getComment( CommentPlacement placement ) const
+{
+ if ( hasComment(placement) )
+ return comments_[placement].comment_;
+ return "";
+}
+
+
+std::string
+Value::toStyledString() const
+{
+ StyledWriter writer;
+ return writer.write( *this );
+}
+
+
+Value::const_iterator
+Value::begin() const
+{
+ switch ( type_ )
+ {
+#ifdef JSON_VALUE_USE_INTERNAL_MAP
+ case arrayValue:
+ if ( value_.array_ )
+ {
+ ValueInternalArray::IteratorState it;
+ value_.array_->makeBeginIterator( it );
+ return const_iterator( it );
+ }
+ break;
+ case objectValue:
+ if ( value_.map_ )
+ {
+ ValueInternalMap::IteratorState it;
+ value_.map_->makeBeginIterator( it );
+ return const_iterator( it );
+ }
+ break;
+#else
+ case arrayValue:
+ case objectValue:
+ if ( value_.map_ )
+ return const_iterator( value_.map_->begin() );
+ break;
+#endif
+ default:
+ break;
+ }
+ return const_iterator();
+}
+
+Value::const_iterator
+Value::end() const
+{
+ switch ( type_ )
+ {
+#ifdef JSON_VALUE_USE_INTERNAL_MAP
+ case arrayValue:
+ if ( value_.array_ )
+ {
+ ValueInternalArray::IteratorState it;
+ value_.array_->makeEndIterator( it );
+ return const_iterator( it );
+ }
+ break;
+ case objectValue:
+ if ( value_.map_ )
+ {
+ ValueInternalMap::IteratorState it;
+ value_.map_->makeEndIterator( it );
+ return const_iterator( it );
+ }
+ break;
+#else
+ case arrayValue:
+ case objectValue:
+ if ( value_.map_ )
+ return const_iterator( value_.map_->end() );
+ break;
+#endif
+ default:
+ break;
+ }
+ return const_iterator();
+}
+
+
+Value::iterator
+Value::begin()
+{
+ switch ( type_ )
+ {
+#ifdef JSON_VALUE_USE_INTERNAL_MAP
+ case arrayValue:
+ if ( value_.array_ )
+ {
+ ValueInternalArray::IteratorState it;
+ value_.array_->makeBeginIterator( it );
+ return iterator( it );
+ }
+ break;
+ case objectValue:
+ if ( value_.map_ )
+ {
+ ValueInternalMap::IteratorState it;
+ value_.map_->makeBeginIterator( it );
+ return iterator( it );
+ }
+ break;
+#else
+ case arrayValue:
+ case objectValue:
+ if ( value_.map_ )
+ return iterator( value_.map_->begin() );
+ break;
+#endif
+ default:
+ break;
+ }
+ return iterator();
+}
+
+Value::iterator
+Value::end()
+{
+ switch ( type_ )
+ {
+#ifdef JSON_VALUE_USE_INTERNAL_MAP
+ case arrayValue:
+ if ( value_.array_ )
+ {
+ ValueInternalArray::IteratorState it;
+ value_.array_->makeEndIterator( it );
+ return iterator( it );
+ }
+ break;
+ case objectValue:
+ if ( value_.map_ )
+ {
+ ValueInternalMap::IteratorState it;
+ value_.map_->makeEndIterator( it );
+ return iterator( it );
+ }
+ break;
+#else
+ case arrayValue:
+ case objectValue:
+ if ( value_.map_ )
+ return iterator( value_.map_->end() );
+ break;
+#endif
+ default:
+ break;
+ }
+ return iterator();
+}
+
+
+// class PathArgument
+// //////////////////////////////////////////////////////////////////
+
+PathArgument::PathArgument()
+ : kind_( kindNone )
+{
+}
+
+
+PathArgument::PathArgument( Value::UInt index )
+ : index_( index )
+ , kind_( kindIndex )
+{
+}
+
+
+PathArgument::PathArgument( const char *key )
+ : key_( key )
+ , kind_( kindKey )
+{
+}
+
+
+PathArgument::PathArgument( const std::string &key )
+ : key_( key.c_str() )
+ , kind_( kindKey )
+{
+}
+
+// class Path
+// //////////////////////////////////////////////////////////////////
+
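+// Path parses expressions such as ".level1.level2[3]" or ".%.size[%]", where
+// each '%' placeholder is filled in from the PathArgument values passed to
+// the constructor. For example (a usage sketch),
+//   Path( ".x.y[0]" ).resolve( root, defaults )
+// walks root["x"]["y"][0] and yields |defaults| if any step is missing.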
+Path::Path( const std::string &path,
+ const PathArgument &a1,
+ const PathArgument &a2,
+ const PathArgument &a3,
+ const PathArgument &a4,
+ const PathArgument &a5 )
+{
+ InArgs in;
+ in.push_back( &a1 );
+ in.push_back( &a2 );
+ in.push_back( &a3 );
+ in.push_back( &a4 );
+ in.push_back( &a5 );
+ makePath( path, in );
+}
+
+
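+// Tokenizes |path| into args_: names separated by '.', numeric indexes inside
+// "[]", "[%]" for an index argument and '%' for a key argument, both taken in
+// order from |in|.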
+void
+Path::makePath( const std::string &path,
+ const InArgs &in )
+{
+ const char *current = path.c_str();
+ const char *end = current + path.length();
+ InArgs::const_iterator itInArg = in.begin();
+ while ( current != end )
+ {
+ if ( *current == '[' )
+ {
+ ++current;
+ if ( *current == '%' )
+ addPathInArg( path, in, itInArg, PathArgument::kindIndex );
+ else
+ {
+ Value::UInt index = 0;
+ for ( ; current != end && *current >= '0' && *current <= '9'; ++current )
+ index = index * 10 + Value::UInt(*current - '0');
+ args_.push_back( index );
+ }
+ if ( current == end || *current++ != ']' )
+ invalidPath( path, int(current - path.c_str()) );
+ }
+ else if ( *current == '%' )
+ {
+ addPathInArg( path, in, itInArg, PathArgument::kindKey );
+ ++current;
+ }
+ else if ( *current == '.' )
+ {
+ ++current;
+ }
+ else
+ {
+ const char *beginName = current;
+ while ( current != end && !strchr( "[.", *current ) )
+ ++current;
+ args_.push_back( std::string( beginName, current ) );
+ }
+ }
+}
+
+
+void
+Path::addPathInArg( const std::string &path,
+ const InArgs &in,
+ InArgs::const_iterator &itInArg,
+ PathArgument::Kind kind )
+{
+ if ( itInArg == in.end() )
+ {
+ // Error: missing argument %d
+ }
+ else if ( (*itInArg)->kind_ != kind )
+ {
+ // Error: bad argument type
+ }
+ else
+ {
+ args_.push_back( **itInArg );
+ }
+}
+
+
+void
+Path::invalidPath( const std::string &path,
+ int location )
+{
+ // Error: invalid path.
+}
+
+
+const Value &
+Path::resolve( const Value &root ) const
+{
+ const Value *node = &root;
+ for ( Args::const_iterator it = args_.begin(); it != args_.end(); ++it )
+ {
+ const PathArgument &arg = *it;
+ if ( arg.kind_ == PathArgument::kindIndex )
+ {
+         if ( !node->isArray() || !node->isValidIndex( arg.index_ ) )
+ {
+ // Error: unable to resolve path (array value expected at position...
+ }
+ node = &((*node)[arg.index_]);
+ }
+ else if ( arg.kind_ == PathArgument::kindKey )
+ {
+ if ( !node->isObject() )
+ {
+ // Error: unable to resolve path (object value expected at position...)
+ }
+ node = &((*node)[arg.key_]);
+ if ( node == &Value::null )
+ {
+ // Error: unable to resolve path (object has no member named '' at position...)
+ }
+ }
+ }
+ return *node;
+}
+
+
+Value
+Path::resolve( const Value &root,
+ const Value &defaultValue ) const
+{
+ const Value *node = &root;
+ for ( Args::const_iterator it = args_.begin(); it != args_.end(); ++it )
+ {
+ const PathArgument &arg = *it;
+ if ( arg.kind_ == PathArgument::kindIndex )
+ {
+         if ( !node->isArray() || !node->isValidIndex( arg.index_ ) )
+ return defaultValue;
+ node = &((*node)[arg.index_]);
+ }
+ else if ( arg.kind_ == PathArgument::kindKey )
+ {
+ if ( !node->isObject() )
+ return defaultValue;
+ node = &((*node)[arg.key_]);
+ if ( node == &Value::null )
+ return defaultValue;
+ }
+ }
+ return *node;
+}
+
+
+Value &
+Path::make( Value &root ) const
+{
+ Value *node = &root;
+ for ( Args::const_iterator it = args_.begin(); it != args_.end(); ++it )
+ {
+ const PathArgument &arg = *it;
+ if ( arg.kind_ == PathArgument::kindIndex )
+ {
+ if ( !node->isArray() )
+ {
+ // Error: node is not an array at position ...
+ }
+ node = &((*node)[arg.index_]);
+ }
+ else if ( arg.kind_ == PathArgument::kindKey )
+ {
+ if ( !node->isObject() )
+ {
+ // Error: node is not an object at position...
+ }
+ node = &((*node)[arg.key_]);
+ }
+ }
+ return *node;
+}
+
+
+} // namespace Json
diff --git a/third_party_mods/libjingle/libjingle.gyp b/third_party_mods/libjingle/libjingle.gyp
new file mode 100644
index 0000000..42f34bb
--- /dev/null
+++ b/third_party_mods/libjingle/libjingle.gyp
@@ -0,0 +1,639 @@
+# Copyright (c) 2011 The Chromium Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+{
+ 'variables': {
+ # Chromium targets will have set inside_chromium_build to 1.
+ # We declare a default value of 0 for standalone builds.
+ 'inside_chromium_build%': 0,
+ 'no_libjingle_logging%': 0,
+ },
+ 'target_defaults': {
+ 'defines': [
+ 'FEATURE_ENABLE_SSL',
+ 'FEATURE_ENABLE_VOICEMAIL', # TODO(ncarter): Do we really need this?
+ '_USE_32BIT_TIME_T',
+ 'SAFE_TO_DEFINE_TALK_BASE_LOGGING_MACROS',
+ 'EXPAT_RELATIVE_PATH',
+ 'HAVE_WEBRTC',
+ ],
+ 'configurations': {
+ 'Debug': {
+ 'defines': [
+ # TODO(sergeyu): Fix libjingle to use NDEBUG instead of
+ # _DEBUG and remove this define. See below as well.
+ '_DEBUG',
+ ],
+ }
+ },
+ 'dependencies': [
+ '../expat/expat.gyp:expat',
+ ],
+ 'direct_dependent_settings': {
+ 'defines': [
+ 'FEATURE_ENABLE_SSL',
+ 'FEATURE_ENABLE_VOICEMAIL',
+ 'EXPAT_RELATIVE_PATH',
+ ],
+ 'conditions': [
+ ['OS=="win"', {
+ 'link_settings': {
+ 'libraries': [
+ '-lsecur32.lib',
+ '-lcrypt32.lib',
+ '-liphlpapi.lib',
+ ],
+ },
+ }],
+ ['OS=="win"', {
+ 'include_dirs': [
+ '../third_party/platformsdk_win7/files/Include',
+ ],
+ 'defines': [
+            '_CRT_SECURE_NO_WARNINGS',  # Suppress warnings about _vsnprintf.
+ ],
+ }],
+ ['OS=="linux"', {
+ 'defines': [
+ 'LINUX',
+ ],
+ }],
+ ['OS=="mac"', {
+ 'defines': [
+ 'OSX',
+ ],
+ }],
+ ['OS=="linux" or OS=="mac" or OS=="freebsd" or OS=="openbsd"', {
+ 'defines': [
+ 'POSIX',
+ ],
+ }],
+ ['OS=="openbsd" or OS=="freebsd"', {
+ 'defines': [
+ 'BSD',
+ ],
+ }],
+ ['no_libjingle_logging==1', {
+ 'defines': [
+ 'NO_LIBJINGLE_LOGGING',
+ ],
+ }],
+ ],
+ },
+ 'all_dependent_settings': {
+ 'configurations': {
+ 'Debug': {
+ 'defines': [
+ # TODO(sergeyu): Fix libjingle to use NDEBUG instead of
+ # _DEBUG and remove this define. See above as well.
+ '_DEBUG',
+ ],
+ }
+ },
+ },
+ 'conditions': [
+ ['inside_chromium_build==1', {
+ 'include_dirs': [
+ './overrides',
+ '../..', # the third_party folder for webrtc includes
+ './source',
+ '../../third_party/expat/files',
+ ],
+ 'direct_dependent_settings': {
+ 'include_dirs': [
+ './overrides',
+ './source',
+ '../../third_party/expat/files'
+ ],
+ },
+ 'dependencies': [
+ '../../base/base.gyp:base',
+ '../../net/net.gyp:net',
+ ],
+ },{
+ 'include_dirs': [
+ # the third_party folder for webrtc/ includes (non-chromium).
+ '../../trunk',
+ './source',
+ '../../third_party/expat/files',
+ ],
+ }],
+ ['OS=="win"', {
+ 'include_dirs': [
+ '../third_party/platformsdk_win7/files/Include',
+ ],
+ }],
+ ['OS=="linux"', {
+ 'defines': [
+ 'LINUX',
+ ],
+ }],
+ ['OS=="mac"', {
+ 'defines': [
+ 'OSX',
+ ],
+ }],
+ ['OS=="linux" or OS=="mac" or OS=="freebsd" or OS=="openbsd"', {
+ 'defines': [
+ 'POSIX',
+ ],
+ }],
+ ['OS=="openbsd" or OS=="freebsd"', {
+ 'defines': [
+ 'BSD',
+ ],
+ }],
+ ],
+ },
+ 'targets': [
+ {
+ 'target_name': 'libjingle',
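+      # The 'overrides' variable below selects where overridden libjingle
+      # headers are taken from: './overrides' when building inside Chromium,
+      # './source' (the unmodified tree) otherwise.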
+ 'variables': {
+ 'conditions': [
+ ['inside_chromium_build==1', {
+ 'overrides': 'overrides',
+ },{
+ 'overrides': 'source',
+ }],
+ ],
+ },
+ 'type': '<(library)',
+ 'sources': [
+ '<(overrides)/talk/base/basictypes.h',
+ '<(overrides)/talk/base/constructormagic.h',
+
+ # Need to override logging.h because we need
+ # SAFE_TO_DEFINE_TALK_BASE_LOGGING_MACROS to work.
+ # TODO(sergeyu): push SAFE_TO_DEFINE_TALK_BASE_LOGGING_MACROS to
+ # libjingle and remove this override.
+ '<(overrides)/talk/base/logging.h',
+
+ '<(overrides)/talk/base/scoped_ptr.h',
+
+ # Libjingle's QName is not threadsafe, so we need to use our own version
+ # here.
+ # TODO(sergeyu): Fix QName in Libjingle.
+ '<(overrides)/talk/xmllite/qname.cc',
+ '<(overrides)/talk/xmllite/qname.h',
+
+ 'source/talk/base/Equifax_Secure_Global_eBusiness_CA-1.h',
+ 'source/talk/base/asyncfile.cc',
+ 'source/talk/base/asyncfile.h',
+ 'source/talk/base/asynchttprequest.cc',
+ 'source/talk/base/asynchttprequest.h',
+ 'source/talk/base/asyncpacketsocket.h',
+ 'source/talk/base/asyncsocket.cc',
+ 'source/talk/base/asyncsocket.h',
+ 'source/talk/base/asynctcpsocket.cc',
+ 'source/talk/base/asynctcpsocket.h',
+ 'source/talk/base/asyncudpsocket.cc',
+ 'source/talk/base/asyncudpsocket.h',
+ 'source/talk/base/autodetectproxy.cc',
+ 'source/talk/base/autodetectproxy.h',
+ 'source/talk/base/base64.cc',
+ 'source/talk/base/base64.h',
+ 'source/talk/base/basicdefs.h',
+ 'source/talk/base/basicpacketsocketfactory.cc',
+ 'source/talk/base/basicpacketsocketfactory.h',
+ 'source/talk/base/bytebuffer.cc',
+ 'source/talk/base/bytebuffer.h',
+ 'source/talk/base/byteorder.h',
+ 'source/talk/base/checks.cc',
+ 'source/talk/base/checks.h',
+ 'source/talk/base/common.cc',
+ 'source/talk/base/common.h',
+ 'source/talk/base/criticalsection.h',
+ 'source/talk/base/cryptstring.h',
+ 'source/talk/base/diskcache.cc',
+ 'source/talk/base/diskcache.h',
+ 'source/talk/base/event.cc',
+ 'source/talk/base/event.h',
+ 'source/talk/base/fileutils.cc',
+ 'source/talk/base/fileutils.h',
+ 'source/talk/base/firewallsocketserver.cc',
+ 'source/talk/base/firewallsocketserver.h',
+ 'source/talk/base/flags.cc',
+ 'source/talk/base/flags.h',
+ 'source/talk/base/helpers.cc',
+ 'source/talk/base/helpers.h',
+ 'source/talk/base/host.cc',
+ 'source/talk/base/host.h',
+ 'source/talk/base/httpbase.cc',
+ 'source/talk/base/httpbase.h',
+ 'source/talk/base/httpclient.h',
+ 'source/talk/base/httpclient.cc',
+ 'source/talk/base/httpcommon-inl.h',
+ 'source/talk/base/httpcommon.cc',
+ 'source/talk/base/httpcommon.h',
+ 'source/talk/base/httprequest.cc',
+ 'source/talk/base/httprequest.h',
+ 'source/talk/base/json.cc',
+ 'source/talk/base/json.h',
+ 'source/talk/base/linked_ptr.h',
+ 'source/talk/base/logging.cc',
+ 'source/talk/base/md5.h',
+ 'source/talk/base/md5c.c',
+ 'source/talk/base/messagehandler.cc',
+ 'source/talk/base/messagehandler.h',
+ 'source/talk/base/messagequeue.cc',
+ 'source/talk/base/messagequeue.h',
+ 'source/talk/base/nethelpers.cc',
+ 'source/talk/base/nethelpers.h',
+ 'source/talk/base/network.cc',
+ 'source/talk/base/network.h',
+ 'source/talk/base/pathutils.cc',
+ 'source/talk/base/pathutils.h',
+ 'source/talk/base/physicalsocketserver.cc',
+ 'source/talk/base/physicalsocketserver.h',
+ 'source/talk/base/proxydetect.cc',
+ 'source/talk/base/proxydetect.h',
+ 'source/talk/base/proxyinfo.cc',
+ 'source/talk/base/proxyinfo.h',
+ 'source/talk/base/ratetracker.cc',
+ 'source/talk/base/ratetracker.h',
+ 'source/talk/base/sec_buffer.h',
+ 'source/talk/base/signalthread.cc',
+ 'source/talk/base/signalthread.h',
+ 'source/talk/base/sigslot.h',
+ 'source/talk/base/sigslotrepeater.h',
+ 'source/talk/base/socket.h',
+ 'source/talk/base/socketadapters.cc',
+ 'source/talk/base/socketadapters.h',
+ 'source/talk/base/socketaddress.cc',
+ 'source/talk/base/socketaddress.h',
+ 'source/talk/base/socketaddresspair.cc',
+ 'source/talk/base/socketaddresspair.h',
+ 'source/talk/base/socketfactory.h',
+ 'source/talk/base/socketpool.cc',
+ 'source/talk/base/socketpool.h',
+ 'source/talk/base/socketserver.h',
+ 'source/talk/base/socketstream.cc',
+ 'source/talk/base/socketstream.h',
+ 'source/talk/base/ssladapter.cc',
+ 'source/talk/base/ssladapter.h',
+ 'source/talk/base/sslsocketfactory.cc',
+ 'source/talk/base/sslsocketfactory.h',
+ 'source/talk/base/stream.cc',
+ 'source/talk/base/stream.h',
+ 'source/talk/base/stringdigest.cc',
+ 'source/talk/base/stringdigest.h',
+ 'source/talk/base/stringencode.cc',
+ 'source/talk/base/stringencode.h',
+ 'source/talk/base/stringutils.cc',
+ 'source/talk/base/stringutils.h',
+ 'source/talk/base/task.cc',
+ 'source/talk/base/task.h',
+ 'source/talk/base/taskparent.cc',
+ 'source/talk/base/taskparent.h',
+ 'source/talk/base/taskrunner.cc',
+ 'source/talk/base/taskrunner.h',
+ 'source/talk/base/thread.cc',
+ 'source/talk/base/thread.h',
+ 'source/talk/base/time.cc',
+ 'source/talk/base/time.h',
+ 'source/talk/base/urlencode.cc',
+ 'source/talk/base/urlencode.h',
+ 'source/talk/xmllite/xmlbuilder.cc',
+ 'source/talk/xmllite/xmlbuilder.h',
+ 'source/talk/xmllite/xmlconstants.cc',
+ 'source/talk/xmllite/xmlconstants.h',
+ 'source/talk/xmllite/xmlelement.cc',
+ 'source/talk/xmllite/xmlelement.h',
+ 'source/talk/xmllite/xmlnsstack.cc',
+ 'source/talk/xmllite/xmlnsstack.h',
+ 'source/talk/xmllite/xmlparser.cc',
+ 'source/talk/xmllite/xmlparser.h',
+ 'source/talk/xmllite/xmlprinter.cc',
+ 'source/talk/xmllite/xmlprinter.h',
+ 'source/talk/xmpp/asyncsocket.h',
+ 'source/talk/xmpp/constants.cc',
+ 'source/talk/xmpp/constants.h',
+ 'source/talk/xmpp/jid.cc',
+ 'source/talk/xmpp/jid.h',
+ 'source/talk/xmpp/plainsaslhandler.h',
+ 'source/talk/xmpp/prexmppauth.h',
+ 'source/talk/xmpp/ratelimitmanager.cc',
+ 'source/talk/xmpp/ratelimitmanager.h',
+ 'source/talk/xmpp/saslcookiemechanism.h',
+ 'source/talk/xmpp/saslhandler.h',
+ 'source/talk/xmpp/saslmechanism.cc',
+ 'source/talk/xmpp/saslmechanism.h',
+ 'source/talk/xmpp/saslplainmechanism.h',
+ 'source/talk/xmpp/xmppclient.cc',
+ 'source/talk/xmpp/xmppclient.h',
+ 'source/talk/xmpp/xmppclientsettings.h',
+ 'source/talk/xmpp/xmppengine.h',
+ 'source/talk/xmpp/xmppengineimpl.cc',
+ 'source/talk/xmpp/xmppengineimpl.h',
+ 'source/talk/xmpp/xmppengineimpl_iq.cc',
+ 'source/talk/xmpp/xmpplogintask.cc',
+ 'source/talk/xmpp/xmpplogintask.h',
+ 'source/talk/xmpp/xmppstanzaparser.cc',
+ 'source/talk/xmpp/xmppstanzaparser.h',
+ 'source/talk/xmpp/xmpptask.cc',
+ 'source/talk/xmpp/xmpptask.h',
+ ],
+ 'conditions': [
+ ['OS=="win"', {
+ 'sources': [
+ '<(overrides)/talk/base/win32socketinit.cc',
+ 'source/talk/base/schanneladapter.cc',
+ 'source/talk/base/schanneladapter.h',
+ 'source/talk/base/win32.h',
+ 'source/talk/base/win32.cc',
+ 'source/talk/base/win32filesystem.cc',
+ 'source/talk/base/win32filesystem.h',
+ 'source/talk/base/win32window.h',
+ 'source/talk/base/win32window.cc',
+ 'source/talk/base/win32securityerrors.cc',
+ 'source/talk/base/winfirewall.cc',
+ 'source/talk/base/winfirewall.h',
+ 'source/talk/base/winping.cc',
+ 'source/talk/base/winping.h',
+ ],
+ }],
+ ['OS=="linux" or OS=="mac" or OS=="freebsd" or OS=="openbsd"', {
+ 'sources': [
+ 'source/talk/base/latebindingsymboltable.cc',
+ 'source/talk/base/latebindingsymboltable.h',
+ 'source/talk/base/sslstreamadapter.cc',
+ 'source/talk/base/sslstreamadapter.h',
+ 'source/talk/base/unixfilesystem.cc',
+ 'source/talk/base/unixfilesystem.h',
+ ],
+ }],
+ ['OS=="linux"', {
+ 'sources': [
+ 'source/talk/base/linux.cc',
+ 'source/talk/base/linux.h',
+ ],
+ }],
+ ['OS=="mac"', {
+ 'sources': [
+ 'source/talk/base/macconversion.cc',
+ 'source/talk/base/macconversion.h',
+ 'source/talk/base/macutils.cc',
+ 'source/talk/base/macutils.h',
+ ],
+ }],
+ ['inside_chromium_build==1', {
+ 'dependencies': [
+ 'source/talk/third_party/jsoncpp/jsoncpp.gyp:jsoncpp',
+ ],
+ }, {
+ 'dependencies': [
+ '../../third_party/jsoncpp/jsoncpp.gyp:jsoncpp',
+ ],
+ } ], # inside_chromium_build
+ ],
+ },
+  # This has to be a separate project due to a bug in MSVS:
+ # https://connect.microsoft.com/VisualStudio/feedback/details/368272/duplicate-cpp-filename-in-c-project-visual-studio-2008
+ # We have two files named "constants.cc" and MSVS doesn't handle this
+ # properly.
+ {
+ 'target_name': 'libjingle_p2p',
+ 'type': 'static_library',
+ 'sources': [
+ 'source/talk/p2p/base/candidate.h',
+ 'source/talk/p2p/base/common.h',
+ 'source/talk/p2p/base/constants.cc',
+ 'source/talk/p2p/base/constants.h',
+ 'source/talk/p2p/base/p2ptransport.cc',
+ 'source/talk/p2p/base/p2ptransport.h',
+ 'source/talk/p2p/base/p2ptransportchannel.cc',
+ 'source/talk/p2p/base/p2ptransportchannel.h',
+ 'source/talk/p2p/base/port.cc',
+ 'source/talk/p2p/base/port.h',
+ 'source/talk/p2p/base/portallocator.h',
+ 'source/talk/p2p/base/pseudotcp.cc',
+ 'source/talk/p2p/base/pseudotcp.h',
+ 'source/talk/p2p/base/rawtransport.cc',
+ 'source/talk/p2p/base/rawtransport.h',
+ 'source/talk/p2p/base/rawtransportchannel.cc',
+ 'source/talk/p2p/base/rawtransportchannel.h',
+ 'source/talk/p2p/base/relayport.cc',
+ 'source/talk/p2p/base/relayport.h',
+ 'source/talk/p2p/base/session.cc',
+ 'source/talk/p2p/base/session.h',
+ 'source/talk/p2p/base/sessionclient.h',
+ 'source/talk/p2p/base/sessiondescription.cc',
+ 'source/talk/p2p/base/sessiondescription.h',
+ 'source/talk/p2p/base/sessionid.h',
+ 'source/talk/p2p/base/sessionmanager.cc',
+ 'source/talk/p2p/base/sessionmanager.h',
+ 'source/talk/p2p/base/sessionmessages.cc',
+ 'source/talk/p2p/base/sessionmessages.h',
+ 'source/talk/p2p/base/parsing.cc',
+ 'source/talk/p2p/base/parsing.h',
+ 'source/talk/p2p/base/stun.cc',
+ 'source/talk/p2p/base/stun.h',
+ 'source/talk/p2p/base/stunport.cc',
+ 'source/talk/p2p/base/stunport.h',
+ 'source/talk/p2p/base/stunrequest.cc',
+ 'source/talk/p2p/base/stunrequest.h',
+ 'source/talk/p2p/base/tcpport.cc',
+ 'source/talk/p2p/base/tcpport.h',
+ 'source/talk/p2p/base/transport.cc',
+ 'source/talk/p2p/base/transport.h',
+ 'source/talk/p2p/base/transportchannel.cc',
+ 'source/talk/p2p/base/transportchannel.h',
+ 'source/talk/p2p/base/transportchannelimpl.h',
+ 'source/talk/p2p/base/transportchannelproxy.cc',
+ 'source/talk/p2p/base/transportchannelproxy.h',
+ 'source/talk/p2p/base/udpport.cc',
+ 'source/talk/p2p/base/udpport.h',
+ 'source/talk/p2p/client/basicportallocator.cc',
+ 'source/talk/p2p/client/basicportallocator.h',
+ 'source/talk/p2p/client/httpportallocator.cc',
+ 'source/talk/p2p/client/httpportallocator.h',
+ 'source/talk/p2p/client/sessionmanagertask.h',
+ 'source/talk/p2p/client/sessionsendtask.h',
+ 'source/talk/p2p/client/socketmonitor.cc',
+ 'source/talk/p2p/client/socketmonitor.h',
+ 'source/talk/session/phone/audiomonitor.cc',
+ 'source/talk/session/phone/audiomonitor.h',
+ 'source/talk/session/phone/call.cc',
+ 'source/talk/session/phone/call.h',
+ 'source/talk/session/phone/channel.cc',
+ 'source/talk/session/phone/channel.h',
+ 'source/talk/session/phone/channelmanager.cc',
+ 'source/talk/session/phone/channelmanager.h',
+ 'source/talk/session/phone/codec.cc',
+ 'source/talk/session/phone/codec.h',
+ 'source/talk/session/phone/cryptoparams.h',
+ 'source/talk/session/phone/devicemanager.cc',
+ 'source/talk/session/phone/devicemanager.h',
+ 'source/talk/session/phone/filemediaengine.cc',
+ 'source/talk/session/phone/filemediaengine.h',
+ 'source/talk/session/phone/mediachannel.h',
+ 'source/talk/session/phone/mediaengine.cc',
+ 'source/talk/session/phone/mediaengine.h',
+ 'source/talk/session/phone/mediamessages.cc',
+ 'source/talk/session/phone/mediamessages.h',
+ 'source/talk/session/phone/mediamonitor.cc',
+ 'source/talk/session/phone/mediamonitor.h',
+ 'source/talk/session/phone/mediasessionclient.cc',
+ 'source/talk/session/phone/mediasessionclient.h',
+ 'source/talk/session/phone/mediasink.h',
+ 'source/talk/session/phone/rtcpmuxfilter.cc',
+ 'source/talk/session/phone/rtcpmuxfilter.h',
+ 'source/talk/session/phone/rtpdump.cc',
+ 'source/talk/session/phone/rtpdump.h',
+ 'source/talk/session/phone/rtputils.cc',
+ 'source/talk/session/phone/rtputils.h',
+ 'source/talk/session/phone/soundclip.cc',
+ 'source/talk/session/phone/soundclip.h',
+ 'source/talk/session/phone/srtpfilter.cc',
+ 'source/talk/session/phone/srtpfilter.h',
+ 'source/talk/session/phone/videocommon.h',
+ 'source/talk/session/phone/voicechannel.h',
+ 'source/talk/session/tunnel/pseudotcpchannel.cc',
+ 'source/talk/session/tunnel/pseudotcpchannel.h',
+ 'source/talk/session/tunnel/tunnelsessionclient.cc',
+ 'source/talk/session/tunnel/tunnelsessionclient.h',
+ ],
+ 'conditions': [
+ ['OS=="linux"', {
+ 'sources': [
+ 'source/talk/session/phone/libudevsymboltable.cc',
+ 'source/talk/session/phone/libudevsymboltable.h',
+ 'source/talk/session/phone/v4llookup.cc',
+ 'source/talk/session/phone/v4llookup.h',
+ ],
+ 'include_dirs': [
+ 'source/talk/third_party/libudev',
+ ],
+ }],
+ ['inside_chromium_build==1', {
+ 'dependencies': [
+ 'libjingle',
+ '../webrtc/video_engine/main/source/video_engine_core.gyp:video_engine_core',
+ '../webrtc/voice_engine/main/source/voice_engine_core.gyp:voice_engine_core',
+ ],
+ 'defines': [
+ 'PLATFORM_CHROMIUM',
+ ],
+ }, {
+ 'dependencies': [
+ 'libjingle',
+ '../../trunk/video_engine/main/source/video_engine_core.gyp:video_engine_core',
+ '../../trunk/voice_engine/main/source/voice_engine_core.gyp:voice_engine_core',
+ ],
+ } ], # inside_chromium_build
+ ], # conditions
+ },
+    # Separate target for the PeerConnection application layer.
+ {
+ 'target_name': 'libjingle_app',
+ 'type': '<(library)',
+ 'sources': [
+ 'source/talk/app/peerconnection.cc',
+ 'source/talk/app/peerconnection.h',
+ 'source/talk/app/videoengine.h',
+ 'source/talk/app/videomediaengine.cc',
+ 'source/talk/app/videomediaengine.h',
+ 'source/talk/app/voiceengine.h',
+ 'source/talk/app/voicemediaengine.cc',
+ 'source/talk/app/voicemediaengine.h',
+ 'source/talk/app/webrtc_json.cc',
+ 'source/talk/app/webrtc_json.h',
+ 'source/talk/app/webrtcchannelmanager.cc',
+ 'source/talk/app/webrtcchannelmanager.h',
+ 'source/talk/app/webrtcsession.cc',
+ 'source/talk/app/webrtcsession.h',
+ 'source/talk/app/webrtcsessionimpl.cc',
+ 'source/talk/app/webrtcsessionimpl.h',
+ 'source/talk/app/pc_transport_impl.cc',
+ 'source/talk/app/pc_transport_impl.h',
+ ],
+ 'direct_dependent_settings': {
+ 'conditions': [
+ ['inside_chromium_build==1', {
+ 'defines': [
+ 'PLATFORM_CHROMIUM',
+ ],
+ },{
+ 'sources': [
+ 'source/talk/app/p2p_transport_manager.cc',
+ 'source/talk/app/p2p_transport_manager.h',
+ ],
+ }],
+ ],
+ },
+ 'dependencies': [
+ ],
+ 'conditions': [
+ ['inside_chromium_build==1', {
+ 'dependencies': [
+ '../webrtc/modules/video_capture/main/source/video_capture.gyp:video_capture_module',
+ '../webrtc/video_engine/main/source/video_engine_core.gyp:video_engine_core',
+ '../webrtc/voice_engine/main/source/voice_engine_core.gyp:voice_engine_core',
+ '../webrtc/system_wrappers/source/system_wrappers.gyp:system_wrappers',
+ 'libjingle_p2p',
+ 'source/talk/third_party/jsoncpp/jsoncpp.gyp:jsoncpp',
+ ],
+ 'defines': [
+ 'PLATFORM_CHROMIUM',
+ ],
+ }, {
+ 'dependencies': [
+ '../../third_party/jsoncpp/jsoncpp.gyp:jsoncpp',
+ '../../trunk/modules/video_capture/main/source/video_capture.gyp:video_capture_module',
+ '../../trunk/video_engine/main/source/video_engine_core.gyp:video_engine_core',
+ '../../trunk/voice_engine/main/source/voice_engine_core.gyp:voice_engine_core',
+ '../../trunk/system_wrappers/source/system_wrappers.gyp:system_wrappers',
+ 'libjingle_p2p',
+ ],
+ } ], # inside_chromium_build
+ ], # conditions
+ },
+
+ {
+ 'target_name': 'session_test_app',
+ 'conditions': [
+ ['OS=="win"', {
+ 'type': 'executable',
+ 'sources': [
+ 'source/talk/app/session_test/main_wnd.cc',
+ 'source/talk/app/session_test/main_wnd.h',
+ 'source/talk/app/session_test/session_test_main.cc',
+ ],
+ 'msvs_settings': {
+ 'VCLinkerTool': {
+ 'SubSystem': '2', # Windows
+ },
+ },
+ }, {
+ 'type': 'none',
+ }],
+ ['inside_chromium_build==1', {
+ 'dependencies': [
+ '../webrtc/modules/video_capture/main/source/video_capture.gyp:video_capture_module',
+ '../webrtc/video_engine/main/source/video_engine_core.gyp:video_engine_core',
+ '../webrtc/voice_engine/main/source/voice_engine_core.gyp:voice_engine_core',
+ '../webrtc/system_wrappers/source/system_wrappers.gyp:system_wrappers',
+ 'libjingle_app',
+ 'libjingle_p2p',
+ 'source/talk/third_party/jsoncpp/jsoncpp.gyp:jsoncpp',
+ ],
+ }, {
+ 'dependencies': [
+ '../../third_party/jsoncpp/jsoncpp.gyp:jsoncpp',
+ '../../trunk/modules/video_capture/main/source/video_capture.gyp:video_capture_module',
+ '../../trunk/voice_engine/main/source/voice_engine_core.gyp:voice_engine_core',
+ '../../trunk/system_wrappers/source/system_wrappers.gyp:system_wrappers',
+ 'libjingle_app',
+ ],
+ } ], # inside_chromium_build
+ ], # conditions
+ },
+ ],
+}
+
+# Local Variables:
+# tab-width:2
+# indent-tabs-mode:nil
+# End:
+# vim: set expandtab tabstop=2 shiftwidth=2:
diff --git a/third_party_mods/libjingle/source/talk/app/ClassDiagram2.png b/third_party_mods/libjingle/source/talk/app/ClassDiagram2.png
new file mode 100644
index 0000000..a264dd7
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/app/ClassDiagram2.png
Binary files differ
diff --git a/third_party_mods/libjingle/source/talk/app/p2p_transport_manager.cc b/third_party_mods/libjingle/source/talk/app/p2p_transport_manager.cc
new file mode 100644
index 0000000..c43d341
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/app/p2p_transport_manager.cc
@@ -0,0 +1,75 @@
+// Copyright (c) 2011 The Chromium Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+#include "talk/app/p2p_transport_manager.h"
+
+#include "talk/base/socketaddress.h"
+#include "talk/p2p/base/p2ptransportchannel.h"
+#include "talk/p2p/client/httpportallocator.h"
+#include "talk/p2p/client/basicportallocator.h"
+
+namespace webrtc {
+
+P2PTransportManager::P2PTransportManager(cricket::PortAllocator* allocator)
+ : event_handler_(NULL)
+ ,state_(STATE_NONE)
+ ,allocator_(allocator) {
+}
+
+P2PTransportManager::~P2PTransportManager() {
+}
+
+bool P2PTransportManager::Init(const std::string& name,
+ Protocol protocol,
+ const std::string& config,
+ EventHandler* event_handler) {
+ name_ = name;
+ event_handler_ = event_handler;
+
+ channel_.reset(new cricket::P2PTransportChannel(
+ name, "", NULL, allocator_));
+ channel_->SignalRequestSignaling.connect(
+ this, &P2PTransportManager::OnRequestSignaling);
+  channel_->SignalReadableState.connect(
+      this, &P2PTransportManager::OnReadableState);
+ channel_->SignalWritableState.connect(
+ this, &P2PTransportManager::OnWriteableState);
+ channel_->SignalCandidateReady.connect(
+ this, &P2PTransportManager::OnCandidateReady);
+
+ channel_->Connect();
+ return true;
+}
+
+bool P2PTransportManager::AddRemoteCandidate(
+ const cricket::Candidate& candidate) {
+ channel_->OnCandidate(candidate);
+ return true;
+}
+
+cricket::P2PTransportChannel* P2PTransportManager::GetP2PChannel() {
+ return channel_.get();
+}
+
+void P2PTransportManager::OnRequestSignaling() {
+ channel_->OnSignalingReady();
+}
+
+void P2PTransportManager::OnCandidateReady(
+ cricket::TransportChannelImpl* channel,
+ const cricket::Candidate& candidate) {
+ event_handler_->OnCandidateReady(candidate);
+}
+
+void P2PTransportManager::OnReadableState(cricket::TransportChannel* channel) {
+ state_ = static_cast<State>(state_ | STATE_READABLE);
+ event_handler_->OnStateChange(state_);
+}
+
+void P2PTransportManager::OnWriteableState(cricket::TransportChannel* channel) {
+ state_ = static_cast<State>(state_ | STATE_WRITABLE);
+ event_handler_->OnStateChange(state_);
+}
+
+}  // namespace webrtc
diff --git a/third_party_mods/libjingle/source/talk/app/p2p_transport_manager.h b/third_party_mods/libjingle/source/talk/app/p2p_transport_manager.h
new file mode 100644
index 0000000..ea60ad8
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/app/p2p_transport_manager.h
@@ -0,0 +1,87 @@
+// Copyright (c) 2011 The Chromium Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+#ifndef TALK_APP_WEBRTC_P2P_TRANSPORT_MANAGER_H_
+#define TALK_APP_WEBRTC_P2P_TRANSPORT_MANAGER_H_
+
+#include <string>
+
+#include "talk/base/scoped_ptr.h"
+#include "talk/base/sigslot.h"
+
+namespace cricket {
+class Candidate;
+class P2PTransportChannel;
+class PortAllocator;
+class TransportChannel;
+class TransportChannelImpl;
+}
+
+namespace talk_base {
+class NetworkManager;
+class PacketSocketFactory;
+}
+
+namespace webrtc {
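+// Standalone (non-Chromium) wrapper around cricket::P2PTransportChannel that
+// mirrors the webkit_glue::P2PTransport interface used in Chromium builds.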
+class P2PTransportManager : public sigslot::has_slots<> {
+ public:
+ enum State {
+ STATE_NONE = 0,
+ STATE_WRITABLE = 1,
+ STATE_READABLE = 2,
+ };
+
+ enum Protocol {
+ PROTOCOL_UDP = 0,
+ PROTOCOL_TCP = 1,
+ };
+
+ class EventHandler {
+ public:
+ virtual ~EventHandler() {}
+
+ // Called for each local candidate.
+ virtual void OnCandidateReady(const cricket::Candidate& candidate) = 0;
+
+  // Called when the readable or writable state of the stream changes.
+ virtual void OnStateChange(State state) = 0;
+
+  // Called when an error occurs (e.g. the TCP handshake
+  // failed). The P2PTransportManager object is not usable after that and
+  // should be destroyed.
+ virtual void OnError(int error) = 0;
+ };
+
+ public:
+  // Creates a P2PTransportManager that allocates connection ports through
+  // |allocator|. The allocator is not owned and must outlive this object.
+ P2PTransportManager(cricket::PortAllocator* allocator);
+ ~P2PTransportManager();
+
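+  // Creates the underlying P2PTransportChannel and starts connecting.
+  // Candidates and state changes are reported through |event_handler|;
+  // |protocol| and |config| are currently ignored by this implementation.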
+ bool Init(const std::string& name,
+ Protocol protocol,
+ const std::string& config,
+ EventHandler* event_handler);
+  bool AddRemoteCandidate(const cricket::Candidate& candidate);
+ cricket::P2PTransportChannel* GetP2PChannel();
+
+ private:
+
+ void OnRequestSignaling();
+ void OnCandidateReady(cricket::TransportChannelImpl* channel,
+ const cricket::Candidate& candidate);
+ void OnReadableState(cricket::TransportChannel* channel);
+ void OnWriteableState(cricket::TransportChannel* channel);
+
+ std::string name_;
+ EventHandler* event_handler_;
+ State state_;
+
+ cricket::PortAllocator* allocator_;
+ talk_base::scoped_ptr<cricket::P2PTransportChannel> channel_;
+};
+
+}  // namespace webrtc
+#endif // TALK_APP_WEBRTC_P2P_TRANSPORT_MANAGER_H_
diff --git a/third_party_mods/libjingle/source/talk/app/pc_transport_impl.cc b/third_party_mods/libjingle/source/talk/app/pc_transport_impl.cc
new file mode 100644
index 0000000..4029785
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/app/pc_transport_impl.cc
@@ -0,0 +1,359 @@
+/*
+ * pc_transport_impl.cc
+ *
+ * Created on: May 2, 2011
+ * Author: mallinath
+ */
+
+#include "talk/app/pc_transport_impl.h"
+
+#ifdef PLATFORM_CHROMIUM
+#include "base/values.h"
+#include "content/common/json_value_serializer.h"
+#include "content/renderer/p2p/p2p_transport_impl.h"
+#include "jingle/glue/thread_wrapper.h"
+#include "net/base/io_buffer.h"
+#include "net/socket/socket.h"
+#else
+#include "talk/app/p2p_transport_manager.h"
+#endif
+#include "talk/p2p/base/transportchannel.h"
+#include "talk/app/webrtcsessionimpl.h"
+#include "talk/app/peerconnection.h"
+
+namespace webrtc {
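+// Internal message ids used to post work between the thread that owns the
+// transport and the media thread (see the network_thread_* members).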
+enum {
+ MSG_RTC_ONREADPACKET = 1,
+ MSG_RTC_TRANSPORTINIT,
+ MSG_RTC_ADDREMOTECANDIDATE,
+ MSG_RTC_ONCANDIDATEREADY,
+};
+
+struct MediaDataMsgParams : public talk_base::MessageData {
+ MediaDataMsgParams(cricket::TransportChannel* channel,
+ const char* dataPtr,
+ int len)
+ : channel(channel), data(dataPtr), len(len) {}
+
+ cricket::TransportChannel* channel;
+ const char* data;
+ int len;
+};
+
+PC_Transport_Impl::PC_Transport_Impl (WebRTCSessionImpl* session)
+ : session_(session),
+#ifdef PLATFORM_CHROMIUM
+ ALLOW_THIS_IN_INITIALIZER_LIST(
+ channel_read_callback_(this, &PC_Transport_Impl::OnRead)),
+ ALLOW_THIS_IN_INITIALIZER_LIST(
+ channel_write_callback_(this, &PC_Transport_Impl::OnWrite)),
+#endif
+ writable_(false),
+ event_(false, false),
+ network_thread_jingle_(session_->connection()->media_thread())
+{
+#ifdef PLATFORM_CHROMIUM
+ // Before proceeding, ensure we have libjingle thread wrapper for
+ // the current thread.
+ jingle_glue::JingleThreadWrapper::EnsureForCurrentThread();
+ network_thread_chromium_ = talk_base::Thread::Current();
+#endif
+ event_.Set();
+}
+
+PC_Transport_Impl::~PC_Transport_Impl() {
+}
+
+bool PC_Transport_Impl::Init(const std::string& name) {
+#ifdef PLATFORM_CHROMIUM
+ if(network_thread_chromium_ != talk_base::Thread::Current()) {
+ network_thread_chromium_->Post(this, MSG_RTC_TRANSPORTINIT,
+ new talk_base::TypedMessageData<std::string> (name));
+ return true;
+ }
+#else
+ if(network_thread_jingle_ != talk_base::Thread::Current()) {
+ network_thread_jingle_->Send(this, MSG_RTC_TRANSPORTINIT,
+ new talk_base::TypedMessageData<std::string> (name));
+ return true;
+ }
+#endif
+
+ name_ = name;
+ p2p_transport_.reset(CreateP2PTransport());
+
+#ifdef PLATFORM_CHROMIUM
+ webkit_glue::P2PTransport::Protocol protocol =
+ webkit_glue::P2PTransport::PROTOCOL_UDP;
+#else
+ webrtc::P2PTransportManager::Protocol protocol =
+ webrtc::P2PTransportManager::PROTOCOL_UDP;
+#endif
+ p2p_transport_->Init(name_, protocol, "", this);
+
+#ifdef PLATFORM_CHROMIUM
+ StreamRead();
+#endif
+
+ return true;
+}
+
+#ifdef PLATFORM_CHROMIUM
+
+void PC_Transport_Impl::OnCandidateReady(const std::string& address) {
+ if(network_thread_chromium_ != talk_base::Thread::Current()) {
+ network_thread_chromium_->Post(this, MSG_RTC_ONCANDIDATEREADY,
+ new talk_base::TypedMessageData<std::string> (
+ address));
+ return;
+ }
+
+  // Only the first local candidate is used; it arrives serialized as a JSON
+  // string and is converted back with DeserializeCandidate().
+  if (local_candidates_.empty()) {
+    cricket::Candidate candidate;
+    DeserializeCandidate(address, &candidate);
+    local_candidates_.push_back(candidate);
+    session_->OnCandidateReady(candidate);
+  }
+}
+
+bool PC_Transport_Impl::AddRemoteCandidate(
+ const cricket::Candidate& candidate) {
+ if(network_thread_chromium_ != talk_base::Thread::Current()) {
+ network_thread_chromium_->Post(this, MSG_RTC_ADDREMOTECANDIDATE,
+ new talk_base::TypedMessageData<const cricket::Candidate*> (
+ &candidate));
+ // TODO: save the result
+ return true;
+ }
+
+ if (!p2p_transport_.get())
+ return false;
+
+ return p2p_transport_->AddRemoteCandidate(SerializeCandidate(candidate));
+}
+
+#else
+
+void PC_Transport_Impl::OnCandidateReady(const cricket::Candidate& candidate) {
+ if(network_thread_jingle_ != talk_base::Thread::Current()) {
+ network_thread_jingle_->Send(this, MSG_RTC_ONCANDIDATEREADY,
+ new talk_base::TypedMessageData<const cricket::Candidate*> (
+ &candidate));
+ return;
+ }
+
+ if (local_candidates_.empty()) {
+ local_candidates_.push_back(candidate);
+ session_->OnCandidateReady(candidate);
+ }
+}
+
+bool PC_Transport_Impl::AddRemoteCandidate(
+ const cricket::Candidate& candidate) {
+ if(network_thread_jingle_ != talk_base::Thread::Current()) {
+ network_thread_jingle_->Send(this, MSG_RTC_ADDREMOTECANDIDATE,
+ new talk_base::TypedMessageData<const cricket::Candidate*> (
+ &candidate));
+ // TODO: save the result
+ return true;
+ }
+
+ if (!p2p_transport_.get())
+ return false;
+
+ return p2p_transport_->AddRemoteCandidate(candidate);
+}
+
+#endif
+
+#ifdef PLATFORM_CHROMIUM
+
+int32 PC_Transport_Impl::DoRecv() {
+ if (!p2p_transport_.get())
+ return -1;
+
+ net::Socket* channel = p2p_transport_->GetChannel();
+ if (!channel)
+ return -1;
+
+ scoped_refptr<net::IOBuffer> buffer =
+ new net::WrappedIOBuffer(static_cast<const char*>(recv_buffer_));
+ int result = channel->Read(
+ buffer, kMaxRtpRtcpPacketLen, &channel_read_callback_);
+ return result;
+}
+
+void PC_Transport_Impl::OnRead(int result) {
+ network_thread_jingle_->Post(
+ this, MSG_RTC_ONREADPACKET, new MediaDataMsgParams(
+ GetP2PChannel(), recv_buffer_, result));
+ StreamRead();
+}
+
+void PC_Transport_Impl::OnWrite(int result) {
+ return;
+}
+
+net::Socket* PC_Transport_Impl::GetChannel() {
+ if (!p2p_transport_.get())
+ return NULL;
+
+ return p2p_transport_->GetChannel();
+}
+
+void PC_Transport_Impl::StreamRead() {
+ event_.Wait(talk_base::kForever);
+ DoRecv();
+}
+
+void PC_Transport_Impl::OnReadPacket_w(cricket::TransportChannel* channel,
+ const char* data,
+ size_t len) {
+ session()->SignalReadPacket(channel, data, len);
+ event_.Set();
+  return;
+}
+
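+// Encodes a candidate as a JSON object whose keys match the fields read back
+// in DeserializeCandidate(), e.g. (illustrative values only):
+//   {"name": "rtp", "ip": "192.168.0.1", "port": 1234, "type": "local", ...}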
+std::string PC_Transport_Impl::SerializeCandidate(
+ const cricket::Candidate& candidate) {
+ // TODO(sergeyu): Use SDP to format candidates?
+ DictionaryValue value;
+ value.SetString("name", candidate.name());
+ value.SetString("ip", candidate.address().IPAsString());
+ value.SetInteger("port", candidate.address().port());
+ value.SetString("type", candidate.type());
+ value.SetString("protocol", candidate.protocol());
+ value.SetString("username", candidate.username());
+ value.SetString("password", candidate.password());
+ value.SetDouble("preference", candidate.preference());
+ value.SetInteger("generation", candidate.generation());
+
+ std::string result;
+ JSONStringValueSerializer serializer(&result);
+ serializer.Serialize(value);
+ return result;
+}
+
+bool PC_Transport_Impl::DeserializeCandidate(const std::string& address,
+ cricket::Candidate* candidate) {
+ JSONStringValueSerializer deserializer(address);
+ scoped_ptr<Value> value(deserializer.Deserialize(NULL, NULL));
+ if (!value.get() || !value->IsType(Value::TYPE_DICTIONARY)) {
+ return false;
+ }
+
+ DictionaryValue* dic_value = static_cast<DictionaryValue*>(value.get());
+
+ std::string name;
+ std::string ip;
+ int port;
+ std::string type;
+ std::string protocol;
+ std::string username;
+ std::string password;
+ double preference;
+ int generation;
+
+ if (!dic_value->GetString("name", &name) ||
+ !dic_value->GetString("ip", &ip) ||
+ !dic_value->GetInteger("port", &port) ||
+ !dic_value->GetString("type", &type) ||
+ !dic_value->GetString("protocol", &protocol) ||
+ !dic_value->GetString("username", &username) ||
+ !dic_value->GetString("password", &password) ||
+ !dic_value->GetDouble("preference", &preference) ||
+ !dic_value->GetInteger("generation", &generation)) {
+ return false;
+ }
+
+ candidate->set_name(name);
+ candidate->set_address(talk_base::SocketAddress(ip, port));
+ candidate->set_type(type);
+ candidate->set_protocol(protocol);
+ candidate->set_username(username);
+ candidate->set_password(password);
+ candidate->set_preference(static_cast<float>(preference));
+ candidate->set_generation(generation);
+
+ return true;
+}
+#endif
+
+void PC_Transport_Impl::OnStateChange(P2PTransportClass::State state) {
+  // The transport becomes writable once the STATE_WRITABLE bit is set.
+  writable_ = (state & P2PTransportClass::STATE_WRITABLE) != 0;
+ if (writable_) {
+ session_->OnStateChange(state, p2p_transport()->GetP2PChannel());
+ }
+}
+
+void PC_Transport_Impl::OnError(int error) {
+
+}
+
+cricket::TransportChannel* PC_Transport_Impl::GetP2PChannel() {
+ if (!p2p_transport_.get())
+ return NULL;
+
+ return p2p_transport_->GetP2PChannel();
+}
+
+void PC_Transport_Impl::OnMessage(talk_base::Message* message) {
+ talk_base::MessageData* data = message->pdata;
+ switch(message->message_id) {
+ case MSG_RTC_TRANSPORTINIT : {
+ talk_base::TypedMessageData<std::string> *p =
+ static_cast<talk_base::TypedMessageData<std::string>* >(data);
+ Init(p->data());
+ delete p;
+ break;
+ }
+ case MSG_RTC_ADDREMOTECANDIDATE : {
+ talk_base::TypedMessageData<const cricket::Candidate*> *p =
+ static_cast<talk_base::TypedMessageData<const cricket::Candidate*>* >(data);
+ AddRemoteCandidate(*p->data());
+ delete p;
+ break;
+ }
+#ifdef PLATFORM_CHROMIUM
+ case MSG_RTC_ONCANDIDATEREADY : {
+ talk_base::TypedMessageData<std::string> *p =
+ static_cast<talk_base::TypedMessageData<std::string>* >(data);
+ OnCandidateReady(p->data());
+ delete p;
+ break;
+ }
+ case MSG_RTC_ONREADPACKET : {
+ MediaDataMsgParams* p = static_cast<MediaDataMsgParams*> (data);
+ ASSERT (p != NULL);
+ OnReadPacket_w(p->channel, p->data, p->len);
+ delete data;
+ break;
+ }
+#else
+ case MSG_RTC_ONCANDIDATEREADY : {
+ talk_base::TypedMessageData<const cricket::Candidate*> *p =
+ static_cast<talk_base::TypedMessageData<const cricket::Candidate*>* >(data);
+ OnCandidateReady(*p->data());
+ delete p;
+ break;
+ }
+#endif
+ default:
+ ASSERT(false);
+ }
+}
+
+P2PTransportClass* PC_Transport_Impl::CreateP2PTransport() {
+#ifdef PLATFORM_CHROMIUM
+ return new P2PTransportImpl(
+ session()->connection()->p2p_socket_dispatcher());
+#else
+ return new P2PTransportManager(session()->port_allocator());
+#endif
+}
+
+}  // namespace webrtc
+
diff --git a/third_party_mods/libjingle/source/talk/app/pc_transport_impl.h b/third_party_mods/libjingle/source/talk/app/pc_transport_impl.h
new file mode 100644
index 0000000..95933f2
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/app/pc_transport_impl.h
@@ -0,0 +1,109 @@
+/*
+ * peerconnection_transport_impl.h
+ *
+ * Created on: May 2, 2011
+ * Author: mallinath
+ */
+
+#ifndef TALK_APP_PEERCONNECTION_TRANSPORT_IMPL_H_
+#define TALK_APP_PEERCONNECTION_TRANSPORT_IMPL_H_
+
+#include <vector>
+
+#include "talk/base/thread.h"
+#include "talk/base/event.h"
+#include "talk/base/messagehandler.h"
+#include "talk/base/scoped_ptr.h"
+
+#ifdef PLATFORM_CHROMIUM
+#include "net/base/completion_callback.h"
+#include "webkit/glue/p2p_transport.h"
+class P2PTransportImpl;
+#else
+#include "talk/app/p2p_transport_manager.h"
+#endif
+
+#ifdef PLATFORM_CHROMIUM
+typedef P2PTransportImpl TransportImplClass;
+typedef webkit_glue::P2PTransport::EventHandler TransportEventHandler;
+typedef webkit_glue::P2PTransport P2PTransportClass;
+#else
+typedef webrtc::P2PTransportManager TransportImplClass;
+typedef webrtc::P2PTransportManager::EventHandler TransportEventHandler;
+typedef webrtc::P2PTransportManager P2PTransportClass;
+#endif
+
+namespace cricket {
+class TransportChannel;
+class Candidate;
+}
+
+namespace webrtc {
+
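+// Upper bound on the size of a single RTP/RTCP packet read from the
+// transport (roughly one Ethernet MTU).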
+const int kMaxRtpRtcpPacketLen = 1500;
+
+class WebRTCSessionImpl;
+// PC - PeerConnection
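+// PC_Transport_Impl owns one transport channel on behalf of a
+// WebRTCSessionImpl. It wraps either Chromium's P2PTransportImpl or the
+// standalone P2PTransportManager (see the typedefs above) and marshals calls
+// onto the appropriate thread.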
+class PC_Transport_Impl : public talk_base::MessageHandler,
+ public TransportEventHandler {
+ public:
+ PC_Transport_Impl(WebRTCSessionImpl* session);
+ virtual ~PC_Transport_Impl();
+
+ bool Init(const std::string& name);
+#ifdef PLATFORM_CHROMIUM
+ virtual void OnCandidateReady(const std::string& address);
+#else
+ virtual void OnCandidateReady(const cricket::Candidate& candidate);
+#endif
+ virtual void OnStateChange(P2PTransportClass::State state);
+ virtual void OnError(int error);
+
+#ifdef PLATFORM_CHROMIUM
+ void OnRead(int result);
+ void OnWrite(int result);
+ net::Socket* GetChannel();
+#endif
+
+ void OnMessage(talk_base::Message* message);
+ cricket::TransportChannel* GetP2PChannel();
+ bool AddRemoteCandidate(const cricket::Candidate& candidate);
+ WebRTCSessionImpl* session() { return session_; }
+ P2PTransportClass* p2p_transport() { return p2p_transport_.get(); }
+ const std::string& name() { return name_; }
+ std::vector<cricket::Candidate>& local_candidates() {
+ return local_candidates_;
+ }
+
+ private:
+ void MsgSend(uint32 id);
+ P2PTransportClass* CreateP2PTransport();
+#ifdef PLATFORM_CHROMIUM
+ void OnReadPacket_w(
+ cricket::TransportChannel* channel, const char* data, size_t len);
+ int32 DoRecv();
+ void StreamRead();
+ std::string SerializeCandidate(const cricket::Candidate& candidate);
+ bool DeserializeCandidate(const std::string& address,
+ cricket::Candidate* candidate);
+#endif
+
+ std::string name_;
+ WebRTCSessionImpl* session_;
+ talk_base::scoped_ptr<P2PTransportClass> p2p_transport_;
+ std::vector<cricket::Candidate> local_candidates_;
+
+#ifdef PLATFORM_CHROMIUM
+ net::CompletionCallbackImpl<PC_Transport_Impl> channel_read_callback_;
+ net::CompletionCallbackImpl<PC_Transport_Impl> channel_write_callback_;
+ talk_base::Thread* network_thread_chromium_;
+#endif
+ bool writable_;
+ char recv_buffer_[kMaxRtpRtcpPacketLen];
+ talk_base::Event event_;
+ talk_base::Thread* network_thread_jingle_;
+};
+
+} // namespace webrtc
+
+#endif /* TALK_APP_PEERCONNECTION_TRANSPORT_IMPL_H_ */
diff --git a/third_party_mods/libjingle/source/talk/app/peerconnection.cc b/third_party_mods/libjingle/source/talk/app/peerconnection.cc
new file mode 100644
index 0000000..4a1962f
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/app/peerconnection.cc
@@ -0,0 +1,302 @@
+// Copyright 2011 Google Inc. All Rights Reserved.
+// Author: mallinath@google.com (Mallinath Bareddy)
+
+#include <vector>
+
+#include "talk/app/peerconnection.h"
+
+#include "talk/base/basicpacketsocketfactory.h"
+#include "talk/base/helpers.h"
+#include "talk/base/stringencode.h"
+#include "talk/base/logging.h"
+
+#include "talk/p2p/client/basicportallocator.h"
+#include "talk/session/phone/mediasessionclient.h"
+#include "talk/app/webrtcsessionimpl.h"
+#include "talk/app/webrtc_json.h"
+
+namespace webrtc {
+
+static const size_t kConfigTokens = 2;
+static const int kDefaultStunPort = 3478;
+
+#ifdef PLATFORM_CHROMIUM
+PeerConnection::PeerConnection(const std::string& config,
+ P2PSocketDispatcher* p2p_socket_dispatcher)
+#else
+PeerConnection::PeerConnection(const std::string& config)
+#endif // PLATFORM_CHROMIUM
+ : config_(config)
+ ,media_thread_(new talk_base::Thread)
+ ,network_manager_(new talk_base::NetworkManager)
+ ,signaling_thread_(new talk_base::Thread)
+ ,initialized_(false)
+ ,service_type_(SERVICE_COUNT)
+ ,event_callback_(NULL)
+ ,session_(NULL)
+ ,incoming_(false)
+#ifdef PLATFORM_CHROMIUM
+ ,p2p_socket_dispatcher_(p2p_socket_dispatcher)
+#endif // PLATFORM_CHROMIUM
+{
+}
+
+PeerConnection::~PeerConnection() {
+ if (session_ != NULL) {
+ // Before deleting the session, make sure that the signaling thread isn't
+ // running (or wait for it if it is).
+ signaling_thread_.reset();
+
+ ASSERT(!session_->HasAudioStream());
+ ASSERT(!session_->HasVideoStream());
+    // TODO: RemoveAllStreams has to become asynchronous; at that point
+    // "delete session_" should only run after RemoveAllStreams has completed.
+ delete session_;
+ }
+}
+
+bool PeerConnection::Init() {
+ ASSERT(!initialized_);
+
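+  // |config_| is expected to be of the form "<type> <host>[:<port>]",
+  // e.g. "STUN stun.example.org:3478" (the address shown is illustrative).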
+ std::vector<std::string> tokens;
+ talk_base::tokenize(config_, ' ', &tokens);
+
+ if (tokens.size() != kConfigTokens) {
+ LOG(LS_ERROR) << "Invalid config string";
+ return false;
+ }
+
+ service_type_ = SERVICE_COUNT;
+
+ // NOTE: Must be in the same order as the enum.
+ static const char* kValidServiceTypes[SERVICE_COUNT] = {
+      "STUN", "STUNS", "TURN", "TURNS"
+ };
+ const std::string& type = tokens[0];
+ for (size_t i = 0; i < SERVICE_COUNT; ++i) {
+ if (type.compare(kValidServiceTypes[i]) == 0) {
+ service_type_ = static_cast<ServiceType>(i);
+ break;
+ }
+ }
+
+ if (service_type_ == SERVICE_COUNT) {
+ LOG(LS_ERROR) << "Invalid service type: " << type;
+ return false;
+ }
+
+ service_address_ = tokens[1];
+
+ int port;
+ tokens.clear();
+ talk_base::tokenize(service_address_, ':', &tokens);
+ if (tokens.size() != kConfigTokens) {
+ port = kDefaultStunPort;
+ } else {
+ port = atoi(tokens[1].c_str());
+ if (port <= 0 || port > 0xffff) {
+ LOG(LS_ERROR) << "Invalid port: " << tokens[1];
+ return false;
+ }
+ }
+
+ talk_base::SocketAddress stun_addr(tokens[0], port);
+
+ socket_factory_.reset(new talk_base::BasicPacketSocketFactory(
+ media_thread_.get()));
+
+ port_allocator_.reset(new cricket::BasicPortAllocator(network_manager_.get(),
+ stun_addr, talk_base::SocketAddress(), talk_base::SocketAddress(),
+ talk_base::SocketAddress()));
+
+ ASSERT(port_allocator_.get() != NULL);
+ port_allocator_->set_flags(cricket::PORTALLOCATOR_DISABLE_STUN |
+ cricket::PORTALLOCATOR_DISABLE_TCP |
+ cricket::PORTALLOCATOR_DISABLE_RELAY);
+
+  // Create the channel manager.
+  channel_manager_.reset(new WebRtcChannelManager(media_thread_.get()));
+
+  // Start the media thread.
+  media_thread_->SetPriority(talk_base::PRIORITY_HIGH);
+  media_thread_->SetName("PeerConn", this);
+  if (!media_thread_->Start()) {
+    LOG(LS_ERROR) << "Failed to start media thread";
+  } else if (!channel_manager_->Init()) {
+    LOG(LS_ERROR) << "Failed to initialize the channel manager";
+  } else if (!signaling_thread_->SetName("Session Signaling Thread", this) ||
+             !signaling_thread_->Start()) {
+    LOG(LS_ERROR) << "Failed to start session signaling thread";
+  } else {
+    initialized_ = true;
+  }
+
+ return initialized_;
+}
+
+void PeerConnection::RegisterObserver(PeerConnectionObserver* observer) {
+ // This assert is to catch cases where two observer pointers are registered.
+ // We only support one and if another is to be used, the current one must be
+ // cleared first.
+ ASSERT(observer == NULL || event_callback_ == NULL);
+ event_callback_ = observer;
+}
+
+bool PeerConnection::SignalingMessage(const std::string& signaling_message) {
+ // Deserialize signaling message
+ cricket::SessionDescription* incoming_sdp = NULL;
+ std::vector<cricket::Candidate> candidates;
+ if (!ParseJSONSignalingMessage(signaling_message, incoming_sdp, candidates))
+ return false;
+
+ bool ret = false;
+ if (!session_) {
+ // this will be incoming call
+ std::string sid;
+ talk_base::CreateRandomString(8, &sid);
+ std::string direction("r");
+ session_ = CreateMediaSession(sid, direction);
+ ASSERT(session_ != NULL);
+ incoming_ = true;
+ ret = session_->OnInitiateMessage(incoming_sdp, candidates);
+ } else {
+ ret = session_->OnRemoteDescription(incoming_sdp, candidates);
+ }
+ return ret;
+}
+
+WebRTCSessionImpl* PeerConnection::CreateMediaSession(const std::string& id,
+ const std::string& dir) {
+ WebRTCSessionImpl* session = new WebRTCSessionImpl(id, dir,
+ port_allocator_.get(), channel_manager_.get(), this,
+ signaling_thread_.get());
+ if (session) {
+ session->SignalOnRemoveStream.connect(this,
+ &PeerConnection::SendRemoveSignal);
+ }
+ return session;
+}
+
+void PeerConnection::SendRemoveSignal(WebRTCSessionImpl* session) {
+ if (event_callback_) {
+ std::string message;
+ if (GetJSONSignalingMessage(session->remote_description(),
+ session->local_candidates(), &message)) {
+ event_callback_->OnSignalingMessage(message);
+ }
+ }
+}
+
+bool PeerConnection::AddStream(const std::string& stream_id, bool video) {
+ if (!session_) {
+ // if session doesn't exist then this should be an outgoing call
+ std::string sid;
+ if (!talk_base::CreateRandomString(8, &sid) ||
+ (session_ = CreateMediaSession(sid, "s")) == NULL) {
+ ASSERT(false && "failed to initialize a session");
+ return false;
+ }
+ }
+
+ bool ret = false;
+
+ if (session_->HasStream(stream_id)) {
+ ASSERT(false && "A stream with this name already exists");
+ } else {
+ //TODO: we should ensure CreateVoiceChannel/CreateVideoChannel be called
+ // after transportchannel is ready
+ if (!video) {
+ ret = !session_->HasAudioStream() &&
+ session_->CreateP2PTransportChannel(stream_id, video) &&
+ session_->CreateVoiceChannel(stream_id);
+ } else {
+ ret = !session_->HasVideoStream() &&
+ session_->CreateP2PTransportChannel(stream_id, video) &&
+ session_->CreateVideoChannel(stream_id);
+ }
+ }
+ return ret;
+}
+
+bool PeerConnection::RemoveStream(const std::string& stream_id) {
+ ASSERT(session_ != NULL);
+ return session_->RemoveStream(stream_id);
+}
+
+void PeerConnection::OnLocalDescription(
+ cricket::SessionDescription* desc,
+ const std::vector<cricket::Candidate>& candidates) {
+ if (!desc) {
+ LOG(LS_ERROR) << "no local SDP ";
+ return;
+ }
+
+ std::string message;
+ if (GetJSONSignalingMessage(desc, candidates, &message)) {
+ if (event_callback_) {
+ event_callback_->OnSignalingMessage(message);
+ }
+ }
+}
+
+bool PeerConnection::SetAudioDevice(const std::string& wave_in_device,
+ const std::string& wave_out_device, int opts) {
+ return channel_manager_->SetAudioOptions(wave_in_device, wave_out_device, opts);
+}
+
+bool PeerConnection::SetVideoRenderer(const std::string& stream_id,
+ ExternalRenderer* external_renderer) {
+ ASSERT(session_ != NULL);
+ return session_->SetVideoRenderer(stream_id, external_renderer);
+}
+
+bool PeerConnection::SetVideoRenderer(int channel_id,
+ void* window,
+ unsigned int zOrder,
+ float left,
+ float top,
+ float right,
+ float bottom) {
+ ASSERT(session_ != NULL);
+ return session_->SetVideoRenderer(channel_id, window, zOrder, left, top,
+ right, bottom);
+}
+
+bool PeerConnection::SetVideoCapture(const std::string& cam_device) {
+ return channel_manager_->SetVideoOptions(cam_device);
+}
+
+bool PeerConnection::Connect() {
+ return session_->Initiate();
+}
+
+void PeerConnection::OnAddStream(const std::string& stream_id,
+ int channel_id,
+ bool video) {
+ if (event_callback_) {
+ event_callback_->OnAddStream(stream_id, channel_id, video);
+ }
+}
+
+void PeerConnection::OnRemoveStream(const std::string& stream_id,
+ int channel_id,
+ bool video) {
+ if (event_callback_) {
+ event_callback_->OnRemoveStream(stream_id, channel_id, video);
+ }
+}
+
+void PeerConnection::OnRtcMediaChannelCreated(const std::string& stream_id,
+ int channel_id,
+ bool video) {
+ if (event_callback_) {
+ event_callback_->OnAddStream(stream_id, channel_id, video);
+ }
+}
+
+void PeerConnection::Close() {
+ if (session_)
+ session_->RemoveAllStreams();
+}
+
+} // namespace webrtc
diff --git a/third_party_mods/libjingle/source/talk/app/peerconnection.h b/third_party_mods/libjingle/source/talk/app/peerconnection.h
new file mode 100644
index 0000000..9ab8da5
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/app/peerconnection.h
@@ -0,0 +1,153 @@
+// Copyright 2011 Google Inc. All Rights Reserved.
+// Author: mallinath@google.com (Mallinath Bareddy)
+
+
+#ifndef TALK_APP_WEBRTC_PEERCONNECTION_H_
+#define TALK_APP_WEBRTC_PEERCONNECTION_H_
+
+#include <string>
+#include "talk/base/sigslot.h"
+#include "talk/base/thread.h"
+#include "talk/base/scoped_ptr.h"
+#include "talk/base/basicpacketsocketfactory.h"
+#include "talk/app/webrtcchannelmanager.h"
+
+namespace Json {
+class Value;
+}
+
+namespace cricket {
+class BasicPortAllocator;
+}
+
+#ifdef PLATFORM_CHROMIUM
+class P2PSocketDispatcher;
+#endif // PLATFORM_CHROMIUM
+
+namespace webrtc {
+
+class AudioDeviceModule;
+class ExternalRenderer;
+class WebRTCSessionImpl;
+
+class PeerConnectionObserver {
+ public:
+ virtual void OnError() = 0;
+ // serialized signaling message
+ virtual void OnSignalingMessage(const std::string& msg) = 0;
+
+ // Triggered when a remote peer accepts a media connection.
+ virtual void OnAddStream(const std::string& stream_id,
+ int channel_id,
+ bool video) = 0;
+
+ // Triggered when a remote peer closes a media stream.
+ virtual void OnRemoveStream(const std::string& stream_id,
+ int channel_id,
+ bool video) = 0;
+
+ protected:
+ // Dtor protected as objects shouldn't be deleted via this interface.
+ ~PeerConnectionObserver() {}
+};
+
+class PeerConnection : public sigslot::has_slots<> {
+ public:
+
+#ifdef PLATFORM_CHROMIUM
+ PeerConnection(const std::string& config,
+ P2PSocketDispatcher* p2p_socket_dispatcher);
+#else
+ explicit PeerConnection(const std::string& config);
+#endif // PLATFORM_CHROMIUM
+
+ ~PeerConnection();
+
+ bool Init();
+ void RegisterObserver(PeerConnectionObserver* observer);
+ bool SignalingMessage(const std::string& msg);
+ bool AddStream(const std::string& stream_id, bool video);
+ bool RemoveStream(const std::string& stream_id);
+ bool Connect();
+ void Close();
+
+  // TODO(ronghuawu): This section will be modified to reuse the existing
+  // libjingle APIs.
+ // Set Audio device
+ bool SetAudioDevice(const std::string& wave_in_device,
+ const std::string& wave_out_device, int opts);
+ // Set the video renderer
+ bool SetVideoRenderer(const std::string& stream_id,
+ ExternalRenderer* external_renderer);
+ // Set channel_id to -1 for the local preview
+ bool SetVideoRenderer(int channel_id,
+ void* window,
+ unsigned int zOrder,
+ float left,
+ float top,
+ float right,
+ float bottom);
+ // Set video capture device
+ // For Chromium the cam_device should use the capture session id.
+ // For a standalone app, cam_device is the camera name. It will try to
+ // set the default capture device when cam_device is "".
+ bool SetVideoCapture(const std::string& cam_device);
+
+ // Access to the members
+ const std::string& config() const { return config_; }
+ bool incoming() const { return incoming_; }
+ talk_base::Thread* media_thread() {
+ return media_thread_.get();
+ }
+#ifdef PLATFORM_CHROMIUM
+ P2PSocketDispatcher* p2p_socket_dispatcher() {
+ return p2p_socket_dispatcher_;
+ }
+#endif // PLATFORM_CHROMIUM
+
+ // Callbacks
+ void OnAddStream(const std::string& stream_id, int channel_id, bool video);
+ void OnRemoveStream(const std::string& stream_id, int channel_id,
+ bool video);
+ void OnLocalDescription(cricket::SessionDescription* desc,
+ const std::vector<cricket::Candidate>& candidates);
+ void OnRtcMediaChannelCreated(const std::string& stream_id,
+ int channel_id,
+ bool video);
+ private:
+ void SendRemoveSignal(WebRTCSessionImpl* session);
+ WebRTCSessionImpl* CreateMediaSession(const std::string& id,
+ const std::string& dir);
+
+ std::string config_;
+ talk_base::scoped_ptr<talk_base::Thread> media_thread_;
+ talk_base::scoped_ptr<WebRtcChannelManager> channel_manager_;
+ talk_base::scoped_ptr<talk_base::NetworkManager> network_manager_;
+ talk_base::scoped_ptr<cricket::BasicPortAllocator> port_allocator_;
+ talk_base::scoped_ptr<talk_base::BasicPacketSocketFactory> socket_factory_;
+ talk_base::scoped_ptr<talk_base::Thread> signaling_thread_;
+ bool initialized_;
+
+ // NOTE: The order of the enum values must be in sync with the array
+ // in Init().
+ enum ServiceType {
+ STUN,
+ STUNS,
+ TURN,
+ TURNS,
+ SERVICE_COUNT, // Also means 'invalid'.
+ };
+
+ ServiceType service_type_;
+ std::string service_address_;
+ PeerConnectionObserver* event_callback_;
+ WebRTCSessionImpl* session_;
+ bool incoming_;
+
+#ifdef PLATFORM_CHROMIUM
+ P2PSocketDispatcher* p2p_socket_dispatcher_;
+#endif // PLATFORM_CHROMIUM
+};
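+
+// A rough usage sketch (the observer subclass name below is illustrative,
+// not part of this API): construct a PeerConnection, register an observer,
+// initialize it, add the local streams and connect.  The local offer is then
+// delivered through PeerConnectionObserver::OnSignalingMessage.
+//
+//   MyObserver observer;  // implements PeerConnectionObserver
+//   PeerConnection pc("STUN stun.l.google.com:19302");
+//   pc.RegisterObserver(&observer);
+//   if (pc.Init() && pc.AddStream("video_label", true))
+//     pc.Connect();  // the test client waits for OnAddStream before this call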
+
+} // namespace webrtc
+
+#endif /* TALK_APP_WEBRTC_PEERCONNECTION_H_ */
diff --git a/third_party_mods/libjingle/source/talk/app/session_test/main_wnd.cc b/third_party_mods/libjingle/source/talk/app/session_test/main_wnd.cc
new file mode 100644
index 0000000..7cf873e
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/app/session_test/main_wnd.cc
@@ -0,0 +1,389 @@
+// Copyright 2011 Google Inc. All Rights Reserved.
+// Author: tommi@google.com (Tomas Gunnarsson)
+
+#include "talk/app/session_test/main_wnd.h"
+
+#include "talk/base/common.h"
+#include "talk/base/logging.h"
+
+ATOM MainWnd::wnd_class_ = 0;
+const wchar_t MainWnd::kClassName[] = L"WebRTC_MainWnd";
+
+// TODO(tommi): declare in header:
+std::string GetDefaultServerName();
+
+namespace {
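+// Computes the outer window size needed so that |text| fits inside the
+// client area of |wnd|: the non-client frame delta is added to the measured
+// text rectangle.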
+void CalculateWindowSizeForText(HWND wnd, const wchar_t* text,
+ size_t* width, size_t* height) {
+ HDC dc = ::GetDC(wnd);
+ RECT text_rc = {0};
+ ::DrawText(dc, text, -1, &text_rc, DT_CALCRECT | DT_SINGLELINE);
+ ::ReleaseDC(wnd, dc);
+ RECT client, window;
+ ::GetClientRect(wnd, &client);
+ ::GetWindowRect(wnd, &window);
+
+ *width = text_rc.right - text_rc.left;
+ *width += (window.right - window.left) -
+ (client.right - client.left);
+ *height = text_rc.bottom - text_rc.top;
+ *height += (window.bottom - window.top) -
+ (client.bottom - client.top);
+}
+
+HFONT GetDefaultFont() {
+ static HFONT font = reinterpret_cast<HFONT>(GetStockObject(DEFAULT_GUI_FONT));
+ return font;
+}
+
+std::string GetWindowText(HWND wnd) {
+ char text[MAX_PATH] = {0};
+ ::GetWindowTextA(wnd, &text[0], ARRAYSIZE(text));
+ return text;
+}
+
+void AddListBoxItem(HWND listbox, const std::string& str, LPARAM item_data) {
+ LRESULT index = ::SendMessageA(listbox, LB_ADDSTRING, 0,
+ reinterpret_cast<LPARAM>(str.c_str()));
+ ::SendMessageA(listbox, LB_SETITEMDATA, index, item_data);
+}
+
+} // namespace
+
+MainWnd::MainWnd()
+ : ui_(CONNECT_TO_SERVER), wnd_(NULL), edit1_(NULL), edit2_(NULL),
+ label1_(NULL), label2_(NULL), button_(NULL), listbox_(NULL),
+ destroyed_(false), callback_(NULL), nested_msg_(NULL) {
+}
+
+MainWnd::~MainWnd() {
+ ASSERT(!IsWindow());
+}
+
+bool MainWnd::Create() {
+ ASSERT(wnd_ == NULL);
+ if (!RegisterWindowClass())
+ return false;
+
+ wnd_ = ::CreateWindowExW(WS_EX_OVERLAPPEDWINDOW, kClassName, L"WebRTC",
+ WS_OVERLAPPEDWINDOW | WS_VISIBLE | WS_CLIPCHILDREN,
+ CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT,
+ NULL, NULL, GetModuleHandle(NULL), this);
+
+ ::SendMessage(wnd_, WM_SETFONT, reinterpret_cast<WPARAM>(GetDefaultFont()),
+ TRUE);
+
+ CreateChildWindows();
+ SwitchToConnectUI();
+
+ return wnd_ != NULL;
+}
+
+bool MainWnd::Destroy() {
+ BOOL ret = FALSE;
+ if (IsWindow()) {
+ ret = ::DestroyWindow(wnd_);
+ }
+
+ return ret != FALSE;
+}
+
+void MainWnd::RegisterObserver(MainWndCallback* callback) {
+ callback_ = callback;
+}
+
+bool MainWnd::IsWindow() const {
+ return wnd_ && ::IsWindow(wnd_) != FALSE;
+}
+
+bool MainWnd::PreTranslateMessage(MSG* msg) {
+ bool ret = false;
+ if (msg->message == WM_CHAR) {
+ if (msg->wParam == VK_TAB) {
+ HandleTabbing();
+ ret = true;
+ } else if (msg->wParam == VK_RETURN) {
+ OnDefaultAction();
+ ret = true;
+ } else if (msg->wParam == VK_ESCAPE) {
+ if (callback_) {
+ if (ui_ == STREAMING) {
+ callback_->DisconnectFromCurrentPeer();
+ } else {
+ callback_->DisconnectFromServer();
+ }
+ }
+ }
+ }
+ return ret;
+}
+
+void MainWnd::SwitchToConnectUI() {
+ ASSERT(IsWindow());
+ LayoutPeerListUI(false);
+ ui_ = CONNECT_TO_SERVER;
+ LayoutConnectUI(true);
+ ::SetFocus(edit1_);
+}
+
+void MainWnd::SwitchToPeerList(const Peers& peers) {
+ LayoutConnectUI(false);
+
+ ::SendMessage(listbox_, LB_RESETCONTENT, 0, 0);
+
+ AddListBoxItem(listbox_, "List of currently connected peers:", -1);
+ Peers::const_iterator i = peers.begin();
+ for (; i != peers.end(); ++i)
+ AddListBoxItem(listbox_, i->second.c_str(), i->first);
+
+ ui_ = LIST_PEERS;
+ LayoutPeerListUI(true);
+}
+
+void MainWnd::SwitchToStreamingUI() {
+ LayoutConnectUI(false);
+ LayoutPeerListUI(false);
+ ui_ = STREAMING;
+}
+
+void MainWnd::OnPaint() {
+ PAINTSTRUCT ps;
+ ::BeginPaint(handle(), &ps);
+
+ RECT rc;
+ ::GetClientRect(handle(), &rc);
+ HBRUSH brush = ::CreateSolidBrush(::GetSysColor(COLOR_WINDOW));
+ ::FillRect(ps.hdc, &rc, brush);
+ ::DeleteObject(brush);
+
+ ::EndPaint(handle(), &ps);
+}
+
+void MainWnd::OnDestroyed() {
+ PostQuitMessage(0);
+}
+
+void MainWnd::OnDefaultAction() {
+ if (!callback_)
+ return;
+ if (ui_ == CONNECT_TO_SERVER) {
+ std::string server(GetWindowText(edit1_));
+ std::string port_str(GetWindowText(edit2_));
+ int port = port_str.length() ? atoi(port_str.c_str()) : 0;
+ callback_->StartLogin(server, port);
+ } else if (ui_ == LIST_PEERS) {
+ LRESULT sel = ::SendMessage(listbox_, LB_GETCURSEL, 0, 0);
+ if (sel != LB_ERR) {
+ LRESULT peer_id = ::SendMessage(listbox_, LB_GETITEMDATA, sel, 0);
+ if (peer_id != -1 && callback_) {
+ callback_->ConnectToPeer(peer_id);
+ }
+ }
+ } else {
+ MessageBoxA(wnd_, "OK!", "Yeah", MB_OK);
+ }
+}
+
+bool MainWnd::OnMessage(UINT msg, WPARAM wp, LPARAM lp, LRESULT* result) {
+ switch (msg) {
+ case WM_ERASEBKGND:
+ *result = TRUE;
+ return true;
+ case WM_PAINT:
+ OnPaint();
+ return true;
+ case WM_SETFOCUS:
+ if (ui_ == CONNECT_TO_SERVER) {
+ SetFocus(edit1_);
+ }
+ return true;
+ case WM_SIZE:
+ if (ui_ == CONNECT_TO_SERVER) {
+ LayoutConnectUI(true);
+ } else if (ui_ == LIST_PEERS) {
+ LayoutPeerListUI(true);
+ }
+ break;
+ case WM_CTLCOLORSTATIC:
+ *result = reinterpret_cast<LRESULT>(GetSysColorBrush(COLOR_WINDOW));
+ return true;
+ case WM_COMMAND:
+ if (button_ == reinterpret_cast<HWND>(lp)) {
+ if (BN_CLICKED == HIWORD(wp))
+ OnDefaultAction();
+ } else if (listbox_ == reinterpret_cast<HWND>(lp)) {
+ if (LBN_DBLCLK == HIWORD(wp)) {
+ OnDefaultAction();
+ }
+ }
+ return true;
+ }
+ return false;
+}
+
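+// Note: |nested_msg_| tracks re-entrant calls into WndProc so that
+// OnDestroyed() only runs once the outermost message has finished
+// dispatching.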
+// static
+LRESULT CALLBACK MainWnd::WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp) {
+ MainWnd* me = reinterpret_cast<MainWnd*>(
+ ::GetWindowLongPtr(hwnd, GWL_USERDATA));
+ if (!me && WM_CREATE == msg) {
+ CREATESTRUCT* cs = reinterpret_cast<CREATESTRUCT*>(lp);
+ me = reinterpret_cast<MainWnd*>(cs->lpCreateParams);
+ me->wnd_ = hwnd;
+ ::SetWindowLongPtr(hwnd, GWL_USERDATA, reinterpret_cast<LONG_PTR>(me));
+ }
+
+ LRESULT result = 0;
+ if (me) {
+ void* prev_nested_msg = me->nested_msg_;
+ me->nested_msg_ = &msg;
+
+ bool handled = me->OnMessage(msg, wp, lp, &result);
+ if (WM_NCDESTROY == msg) {
+ me->destroyed_ = true;
+ } else if (!handled) {
+ result = ::DefWindowProc(hwnd, msg, wp, lp);
+ }
+
+ if (me->destroyed_ && prev_nested_msg == NULL) {
+ me->OnDestroyed();
+ me->wnd_ = NULL;
+ me->destroyed_ = false;
+ }
+
+ me->nested_msg_ = prev_nested_msg;
+ } else {
+ result = ::DefWindowProc(hwnd, msg, wp, lp);
+ }
+
+ return result;
+}
+
+// static
+bool MainWnd::RegisterWindowClass() {
+ if (wnd_class_)
+ return true;
+
+ WNDCLASSEX wcex = { sizeof(WNDCLASSEX) };
+ wcex.style = CS_DBLCLKS;
+ wcex.hInstance = GetModuleHandle(NULL);
+ wcex.hbrBackground = reinterpret_cast<HBRUSH>(COLOR_WINDOW + 1);
+ wcex.hCursor = ::LoadCursor(NULL, IDC_ARROW);
+ wcex.lpfnWndProc = &WndProc;
+ wcex.lpszClassName = kClassName;
+ wnd_class_ = ::RegisterClassEx(&wcex);
+ ASSERT(wnd_class_);
+ return wnd_class_ != 0;
+}
+
+void MainWnd::CreateChildWindow(HWND* wnd, MainWnd::ChildWindowID id,
+ const wchar_t* class_name, DWORD control_style,
+ DWORD ex_style) {
+ if (::IsWindow(*wnd))
+ return;
+
+ // Child windows are invisible at first, and shown after being resized.
+ DWORD style = WS_CHILD | control_style;
+ *wnd = ::CreateWindowEx(ex_style, class_name, L"", style,
+ 100, 100, 100, 100, wnd_,
+ reinterpret_cast<HMENU>(id),
+ GetModuleHandle(NULL), NULL);
+ ASSERT(::IsWindow(*wnd));
+ ::SendMessage(*wnd, WM_SETFONT, reinterpret_cast<WPARAM>(GetDefaultFont()),
+ TRUE);
+}
+
+void MainWnd::CreateChildWindows() {
+ // Create the child windows in tab order.
+ CreateChildWindow(&label1_, LABEL1_ID, L"Static", ES_CENTER | ES_READONLY, 0);
+ CreateChildWindow(&edit1_, EDIT_ID, L"Edit",
+ ES_LEFT | ES_NOHIDESEL | WS_TABSTOP, WS_EX_CLIENTEDGE);
+ CreateChildWindow(&label2_, LABEL2_ID, L"Static", ES_CENTER | ES_READONLY, 0);
+ CreateChildWindow(&edit2_, EDIT_ID, L"Edit",
+ ES_LEFT | ES_NOHIDESEL | WS_TABSTOP, WS_EX_CLIENTEDGE);
+ CreateChildWindow(&button_, BUTTON_ID, L"Button", BS_CENTER | WS_TABSTOP, 0);
+
+ CreateChildWindow(&listbox_, LISTBOX_ID, L"ListBox",
+ LBS_HASSTRINGS | LBS_NOTIFY, WS_EX_CLIENTEDGE);
+
+ ::SetWindowTextA(edit1_, GetDefaultServerName().c_str());
+ ::SetWindowTextA(edit2_, "8888");
+}
+
+void MainWnd::LayoutConnectUI(bool show) {
+ struct Windows {
+ HWND wnd;
+ const wchar_t* text;
+ size_t width;
+ size_t height;
+ } windows[] = {
+ { label1_, L"Server" },
+ { edit1_, L"XXXyyyYYYgggXXXyyyYYYggg" },
+ { label2_, L":" },
+ { edit2_, L"XyXyX" },
+ { button_, L"Connect" },
+ };
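+
+ // Entries whose text starts with 'X' are only sizing placeholders for the
+ // edit controls: their width is measured below, but the placeholder text
+ // itself is never written to the window.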
+
+ if (show) {
+ const size_t kSeparator = 5;
+ size_t total_width = (ARRAYSIZE(windows) - 1) * kSeparator;
+
+ for (size_t i = 0; i < ARRAYSIZE(windows); ++i) {
+ CalculateWindowSizeForText(windows[i].wnd, windows[i].text,
+ &windows[i].width, &windows[i].height);
+ total_width += windows[i].width;
+ }
+
+ RECT rc;
+ ::GetClientRect(wnd_, &rc);
+ size_t x = (rc.right / 2) - (total_width / 2);
+ size_t y = rc.bottom / 2;
+ for (size_t i = 0; i < ARRAYSIZE(windows); ++i) {
+ size_t top = y - (windows[i].height / 2);
+ ::MoveWindow(windows[i].wnd, x, top, windows[i].width, windows[i].height,
+ TRUE);
+ x += kSeparator + windows[i].width;
+ if (windows[i].text[0] != 'X')
+ ::SetWindowText(windows[i].wnd, windows[i].text);
+ ::ShowWindow(windows[i].wnd, SW_SHOWNA);
+ }
+ } else {
+ for (size_t i = 0; i < ARRAYSIZE(windows); ++i) {
+ ::ShowWindow(windows[i].wnd, SW_HIDE);
+ }
+ }
+}
+
+void MainWnd::LayoutPeerListUI(bool show) {
+ if (show) {
+ RECT rc;
+ ::GetClientRect(wnd_, &rc);
+ ::MoveWindow(listbox_, 0, 0, rc.right, rc.bottom, TRUE);
+ ::ShowWindow(listbox_, SW_SHOWNA);
+ } else {
+ ::ShowWindow(listbox_, SW_HIDE);
+ }
+}
+
+void MainWnd::HandleTabbing() {
+ bool shift = ((::GetAsyncKeyState(VK_SHIFT) & 0x8000) != 0);
+ UINT next_cmd = shift ? GW_HWNDPREV : GW_HWNDNEXT;
+ UINT loop_around_cmd = shift ? GW_HWNDLAST : GW_HWNDFIRST;
+ HWND focus = GetFocus(), next;
+ do {
+ next = ::GetWindow(focus, next_cmd);
+ if (IsWindowVisible(next) &&
+ (GetWindowLong(next, GWL_STYLE) & WS_TABSTOP)) {
+ break;
+ }
+
+ if (!next) {
+ next = ::GetWindow(focus, loop_around_cmd);
+ if (IsWindowVisible(next) &&
+ (GetWindowLong(next, GWL_STYLE) & WS_TABSTOP)) {
+ break;
+ }
+ }
+ focus = next;
+ } while (true);
+ ::SetFocus(next);
+}
diff --git a/third_party_mods/libjingle/source/talk/app/session_test/main_wnd.h b/third_party_mods/libjingle/source/talk/app/session_test/main_wnd.h
new file mode 100644
index 0000000..18879d9
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/app/session_test/main_wnd.h
@@ -0,0 +1,96 @@
+// Copyright 2011 Google Inc. All Rights Reserved.
+// Author: tommi@google.com (Tomas Gunnarsson)
+
+
+#ifndef TALK_APP_SESSION_TEST_MAIN_WND_H_
+#define TALK_APP_SESSION_TEST_MAIN_WND_H_
+#pragma once
+
+#include "talk/base/win32.h"
+
+#include <map>
+
+// TODO(tommi): Move to same header as PeerConnectionClient.
+typedef std::map<int, std::string> Peers;
+
+
+class MainWndCallback {
+ public:
+ virtual void StartLogin(const std::string& server, int port) = 0;
+ virtual void DisconnectFromServer() = 0;
+ virtual void ConnectToPeer(int peer_id) = 0;
+ virtual void DisconnectFromCurrentPeer() = 0;
+};
+
+class MainWnd {
+ public:
+ static const wchar_t kClassName[];
+
+ enum UI {
+ CONNECT_TO_SERVER,
+ LIST_PEERS,
+ STREAMING,
+ };
+
+ MainWnd();
+ ~MainWnd();
+
+ bool Create();
+ bool Destroy();
+ bool IsWindow() const;
+
+ void RegisterObserver(MainWndCallback* callback);
+
+ bool PreTranslateMessage(MSG* msg);
+
+ void SwitchToConnectUI();
+ void SwitchToPeerList(const Peers& peers);
+ void SwitchToStreamingUI();
+
+ HWND handle() const { return wnd_; }
+ UI current_ui() const { return ui_; }
+
+ protected:
+ enum ChildWindowID {
+ EDIT_ID = 1,
+ BUTTON_ID,
+ LABEL1_ID,
+ LABEL2_ID,
+ LISTBOX_ID,
+ };
+
+ void OnPaint();
+ void OnDestroyed();
+
+ void OnDefaultAction();
+
+ bool OnMessage(UINT msg, WPARAM wp, LPARAM lp, LRESULT* result);
+
+ static LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp);
+ static bool RegisterWindowClass();
+
+ void CreateChildWindow(HWND* wnd, ChildWindowID id, const wchar_t* class_name,
+ DWORD control_style, DWORD ex_style);
+ void CreateChildWindows();
+
+ void LayoutConnectUI(bool show);
+ void LayoutPeerListUI(bool show);
+
+ void HandleTabbing();
+
+ private:
+ UI ui_;
+ HWND wnd_;
+ HWND edit1_;
+ HWND edit2_;
+ HWND label1_;
+ HWND label2_;
+ HWND button_;
+ HWND listbox_;
+ bool destroyed_;
+ void* nested_msg_;
+ MainWndCallback* callback_;
+ static ATOM wnd_class_;
+};
+
+#endif // TALK_APP_SESSION_TEST_MAIN_WND_H_
diff --git a/third_party_mods/libjingle/source/talk/app/session_test/session_test_main.cc b/third_party_mods/libjingle/source/talk/app/session_test/session_test_main.cc
new file mode 100644
index 0000000..99af54e
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/app/session_test/session_test_main.cc
@@ -0,0 +1,850 @@
+// Copyright 2011 Google Inc. All Rights Reserved.
+// Author: tommi@google.com (Tomas Gunnarsson)
+
+// This may not look like much but it has already uncovered several issues.
+// In the future this will be a p2p reference app for the webrtc API along
+// with a separate simple server implementation.
+
+#include "talk/base/win32.h" // Must be first
+
+#include <map>
+
+#include "talk/base/scoped_ptr.h"
+#include "talk/base/win32socketinit.cc"
+#include "talk/base/win32socketserver.h" // For Win32Socket
+#include "talk/base/win32socketserver.cc" // For Win32Socket
+
+#include "modules/audio_device/main/interface/audio_device.h"
+#include "modules/video_capture/main/interface/video_capture.h"
+#include "system_wrappers/source/trace_impl.h"
+#include "talk/app/peerconnection.h"
+#include "talk/app/session_test/main_wnd.h"
+#include "talk/base/logging.h"
+
+static const char kAudioLabel[] = "audio_label";
+static const char kVideoLabel[] = "video_label";
+const unsigned short kDefaultServerPort = 8888;
+
+using talk_base::scoped_ptr;
+using webrtc::AudioDeviceModule;
+using webrtc::PeerConnection;
+using webrtc::PeerConnectionObserver;
+
+std::string GetEnvVarOrDefault(const char* env_var_name,
+ const char* default_value) {
+ std::string value;
+ const char* env_var = getenv(env_var_name);
+ if (env_var)
+ value = env_var;
+
+ if (value.empty())
+ value = default_value;
+
+ return value;
+}
+
+std::string GetPeerConnectionString() {
+ return GetEnvVarOrDefault("WEBRTC_CONNECT", "STUN stun.l.google.com:19302");
+}
+
+std::string GetDefaultServerName() {
+ return GetEnvVarOrDefault("WEBRTC_SERVER", "localhost");
+}
+
+std::string GetPeerName() {
+ char computer_name[MAX_PATH] = {0}, user_name[MAX_PATH] = {0};
+ DWORD size = ARRAYSIZE(computer_name);
+ ::GetComputerNameA(computer_name, &size);
+ size = ARRAYSIZE(user_name);
+ ::GetUserNameA(user_name, &size);
+ std::string ret(user_name);
+ ret += '@';
+ ret += computer_name;
+ return ret;
+}
+
+struct PeerConnectionClientObserver {
+ virtual void OnSignedIn() = 0; // Called when we have signed in to the server.
+ virtual void OnDisconnected() = 0;
+ virtual void OnPeerConnected(int id, const std::string& name) = 0;
+ virtual void OnPeerDisconnected(int id, const std::string& name) = 0;
+ virtual void OnMessageFromPeer(int peer_id, const std::string& message) = 0;
+};
+
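+// Minimal HTTP-based signaling client.  |control_socket_| issues one-shot
+// requests (sign_in, message, sign_out) while |hanging_get_| keeps a
+// long-polling GET /wait request open so the server can push peer
+// notifications and messages to us.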
+class PeerConnectionClient : public sigslot::has_slots<> {
+ public:
+ enum State {
+ NOT_CONNECTED,
+ SIGNING_IN,
+ CONNECTED,
+ SIGNING_OUT_WAITING,
+ SIGNING_OUT,
+ };
+
+ PeerConnectionClient() : callback_(NULL), my_id_(-1), state_(NOT_CONNECTED) {
+ control_socket_.SignalCloseEvent.connect(this,
+ &PeerConnectionClient::OnClose);
+ hanging_get_.SignalCloseEvent.connect(this,
+ &PeerConnectionClient::OnClose);
+ control_socket_.SignalConnectEvent.connect(this,
+ &PeerConnectionClient::OnConnect);
+ hanging_get_.SignalConnectEvent.connect(this,
+ &PeerConnectionClient::OnHangingGetConnect);
+ control_socket_.SignalReadEvent.connect(this,
+ &PeerConnectionClient::OnRead);
+ hanging_get_.SignalReadEvent.connect(this,
+ &PeerConnectionClient::OnHangingGetRead);
+ }
+
+ ~PeerConnectionClient() {
+ }
+
+ int id() const {
+ return my_id_;
+ }
+
+ bool is_connected() const {
+ return my_id_ != -1;
+ }
+
+ const Peers& peers() const {
+ return peers_;
+ }
+
+ void RegisterObserver(PeerConnectionClientObserver* callback) {
+ ASSERT(!callback_);
+ callback_ = callback;
+ }
+
+ bool Connect(const std::string& server, int port,
+ const std::string& client_name) {
+ ASSERT(!server.empty());
+ ASSERT(!client_name.empty());
+ ASSERT(state_ == NOT_CONNECTED);
+
+ if (server.empty() || client_name.empty())
+ return false;
+
+ if (port <= 0)
+ port = kDefaultServerPort;
+
+ server_address_.SetIP(server);
+ server_address_.SetPort(port);
+
+ if (server_address_.IsUnresolved()) {
+ hostent* h = gethostbyname(server_address_.IPAsString().c_str());
+ if (!h) {
+ LOG(LS_ERROR) << "Failed to resolve host name: "
+ << server_address_.IPAsString();
+ return false;
+ } else {
+ server_address_.SetResolvedIP(
+ ntohl(*reinterpret_cast<uint32*>(h->h_addr_list[0])));
+ }
+ }
+
+ char buffer[1024];
+ wsprintfA(buffer, "GET /sign_in?%s HTTP/1.0\r\n\r\n", client_name.c_str());
+ onconnect_data_ = buffer;
+
+ bool ret = ConnectControlSocket();
+ if (ret)
+ state_ = SIGNING_IN;
+
+ return ret;
+ }
+
+ bool SendToPeer(int peer_id, const std::string& message) {
+ if (state_ != CONNECTED)
+ return false;
+
+ ASSERT(is_connected());
+ ASSERT(control_socket_.GetState() == talk_base::Socket::CS_CLOSED);
+ if (!is_connected() || peer_id == -1)
+ return false;
+
+ char headers[1024];
+ wsprintfA(headers, "POST /message?peer_id=%i&to=%i HTTP/1.0\r\n"
+ "Content-Length: %i\r\n"
+ "Content-Type: text/plain\r\n"
+ "\r\n",
+ my_id_, peer_id, message.length());
+ onconnect_data_ = headers;
+ onconnect_data_ += message;
+ return ConnectControlSocket();
+ }
+
+ bool SignOut() {
+ if (state_ == NOT_CONNECTED || state_ == SIGNING_OUT)
+ return true;
+
+ if (hanging_get_.GetState() != talk_base::Socket::CS_CLOSED)
+ hanging_get_.Close();
+
+ if (control_socket_.GetState() == talk_base::Socket::CS_CLOSED) {
+ ASSERT(my_id_ != -1);
+ state_ = SIGNING_OUT;
+
+ char buffer[1024];
+ wsprintfA(buffer, "GET /sign_out?peer_id=%i HTTP/1.0\r\n\r\n", my_id_);
+ onconnect_data_ = buffer;
+ return ConnectControlSocket();
+ } else {
+ state_ = SIGNING_OUT_WAITING;
+ }
+
+ return true;
+ }
+
+ protected:
+ void Close() {
+ control_socket_.Close();
+ hanging_get_.Close();
+ onconnect_data_.clear();
+ peers_.clear();
+ my_id_ = -1;
+ state_ = NOT_CONNECTED;
+ }
+
+ bool ConnectControlSocket() {
+ ASSERT(control_socket_.GetState() == talk_base::Socket::CS_CLOSED);
+ int err = control_socket_.Connect(server_address_);
+ if (err == SOCKET_ERROR) {
+ Close();
+ return false;
+ }
+ return true;
+ }
+
+ void OnConnect(talk_base::AsyncSocket* socket) {
+ int sent = socket->Send(onconnect_data_.c_str(), onconnect_data_.length());
+ ASSERT(sent == onconnect_data_.length());
+ onconnect_data_.clear();
+ }
+
+ void OnHangingGetConnect(talk_base::AsyncSocket* socket) {
+ char buffer[1024];
+ wsprintfA(buffer, "GET /wait?peer_id=%i HTTP/1.0\r\n\r\n", my_id_);
+ int len = lstrlenA(buffer);
+ int sent = socket->Send(buffer, len);
+ ASSERT(sent == len);
+ }
+
+ // Quick and dirty support for parsing HTTP header values.
+ bool GetHeaderValue(const std::string& data, size_t eoh,
+ const char* header_pattern, size_t* value) {
+ ASSERT(value);
+ size_t found = data.find(header_pattern);
+ if (found != std::string::npos && found < eoh) {
+ *value = atoi(&data[found + lstrlenA(header_pattern)]);
+ return true;
+ }
+ return false;
+ }
+
+ bool GetHeaderValue(const std::string& data, size_t eoh,
+ const char* header_pattern, std::string* value) {
+ ASSERT(value);
+ size_t found = data.find(header_pattern);
+ if (found != std::string::npos && found < eoh) {
+ size_t begin = found + lstrlenA(header_pattern);
+ size_t end = data.find("\r\n", begin);
+ if (end == std::string::npos)
+ end = eoh;
+ value->assign(data.substr(begin, end - begin));
+ return true;
+ }
+ return false;
+ }
+
+ // Returns true if the whole response has been read.
+ bool ReadIntoBuffer(talk_base::AsyncSocket* socket, std::string* data,
+ size_t* content_length) {
+ LOG(INFO) << __FUNCTION__;
+
+ char buffer[0xffff];
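+ // Drain everything the socket currently has buffered; Recv returns <= 0
+ // once no more data is available on the non-blocking socket.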
+ do {
+ int bytes = socket->Recv(buffer, sizeof(buffer));
+ if (bytes <= 0)
+ break;
+ data->append(buffer, bytes);
+ } while (true);
+
+ bool ret = false;
+ size_t i = data->find("\r\n\r\n");
+ if (i != std::string::npos) {
+ LOG(INFO) << "Headers received";
+ const char kContentLengthHeader[] = "\r\nContent-Length: ";
+ if (GetHeaderValue(*data, i, kContentLengthHeader, content_length)) {
+ LOG(INFO) << "Expecting " << *content_length << " bytes.";
+ size_t total_response_size = (i + 4) + *content_length;
+ if (data->length() >= total_response_size) {
+ ret = true;
+ std::string should_close;
+ const char kConnection[] = "\r\nConnection: ";
+ if (GetHeaderValue(*data, i, kConnection, &should_close) &&
+ should_close.compare("close") == 0) {
+ socket->Close();
+ }
+ } else {
+ // We haven't received everything. Just continue to accept data.
+ }
+ } else {
+ LOG(LS_ERROR) << "No content length field specified by the server.";
+ }
+ }
+ return ret;
+ }
+
+ void OnRead(talk_base::AsyncSocket* socket) {
+ LOG(INFO) << __FUNCTION__;
+ size_t content_length = 0;
+ if (ReadIntoBuffer(socket, &control_data_, &content_length)) {
+ size_t peer_id = 0, eoh = 0;
+ bool ok = ParseServerResponse(control_data_, content_length, &peer_id,
+ &eoh);
+ if (ok) {
+ if (my_id_ == -1) {
+ // First response. Let's store our server assigned ID.
+ ASSERT(state_ == SIGNING_IN);
+ my_id_ = peer_id;
+ ASSERT(my_id_ != -1);
+
+ // The body of the response will be a list of already connected peers.
+ if (content_length) {
+ size_t pos = eoh + 4;
+ while (pos < control_data_.size()) {
+ size_t eol = control_data_.find('\n', pos);
+ if (eol == std::string::npos)
+ break;
+ int id = 0;
+ std::string name;
+ bool connected;
+ if (ParseEntry(control_data_.substr(pos, eol - pos), &name, &id,
+ &connected) && id != my_id_) {
+ peers_[id] = name;
+ callback_->OnPeerConnected(id, name);
+ }
+ pos = eol + 1;
+ }
+ }
+ ASSERT(is_connected());
+ callback_->OnSignedIn();
+ } else if (state_ == SIGNING_OUT) {
+ Close();
+ callback_->OnDisconnected();
+ } else if (state_ == SIGNING_OUT_WAITING) {
+ SignOut();
+ }
+ }
+
+ control_data_.clear();
+
+ if (state_ == SIGNING_IN) {
+ ASSERT(hanging_get_.GetState() == talk_base::Socket::CS_CLOSED);
+ state_ = CONNECTED;
+ hanging_get_.Connect(server_address_);
+ }
+ }
+ }
+
+ void OnHangingGetRead(talk_base::AsyncSocket* socket) {
+ LOG(INFO) << __FUNCTION__;
+ size_t content_length = 0;
+ if (ReadIntoBuffer(socket, &notification_data_, &content_length)) {
+ size_t peer_id = 0, eoh = 0;
+ bool ok = ParseServerResponse(notification_data_, content_length,
+ &peer_id, &eoh);
+
+ if (ok) {
+ // Store the position where the body begins.
+ size_t pos = eoh + 4;
+
+ if (my_id_ == peer_id) {
+ // A notification about a new member or a member that just
+ // disconnected.
+ int id = 0;
+ std::string name;
+ bool connected = false;
+ if (ParseEntry(notification_data_.substr(pos), &name, &id,
+ &connected)) {
+ if (connected) {
+ peers_[id] = name;
+ callback_->OnPeerConnected(id, name);
+ } else {
+ peers_.erase(id);
+ callback_->OnPeerDisconnected(id, name);
+ }
+ }
+ } else {
+ callback_->OnMessageFromPeer(peer_id,
+ notification_data_.substr(pos));
+ }
+ }
+
+ notification_data_.clear();
+ }
+
+ if (hanging_get_.GetState() == talk_base::Socket::CS_CLOSED)
+ hanging_get_.Connect(server_address_);
+ }
+
+ // Parses a single line entry in the form "<name>,<id>,<connected>"
+ bool ParseEntry(const std::string& entry, std::string* name, int* id,
+ bool* connected) {
+ ASSERT(name);
+ ASSERT(id);
+ ASSERT(connected);
+ ASSERT(entry.length());
+
+ *connected = false;
+ size_t separator = entry.find(',');
+ if (separator != std::string::npos) {
+ *id = atoi(&entry[separator + 1]);
+ name->assign(entry.substr(0, separator));
+ separator = entry.find(',', separator + 1);
+ if (separator != std::string::npos) {
+ *connected = atoi(&entry[separator + 1]) ? true : false;
+ }
+ }
+ return !name->empty();
+ }
+
+ int GetResponseStatus(const std::string& response) {
+ int status = -1;
+ size_t pos = response.find(' ');
+ if (pos != std::string::npos)
+ status = atoi(&response[pos + 1]);
+ return status;
+ }
+
+ bool ParseServerResponse(const std::string& response, size_t content_length,
+ size_t* peer_id, size_t* eoh) {
+ LOG(INFO) << response;
+
+ int status = GetResponseStatus(response.c_str());
+ if (status != 200) {
+ LOG(LS_ERROR) << "Received error from server";
+ Close();
+ callback_->OnDisconnected();
+ return false;
+ }
+
+ *eoh = response.find("\r\n\r\n");
+ ASSERT(*eoh != std::string::npos);
+ if (*eoh == std::string::npos)
+ return false;
+
+ *peer_id = -1;
+
+ // See comment in peer_channel.cc for why we use the Pragma header and
+ // not e.g. "X-Peer-Id".
+ GetHeaderValue(response, *eoh, "\r\nPragma: ", peer_id);
+
+ return true;
+ }
+
+ void OnClose(talk_base::AsyncSocket* socket, int err) {
+ LOG(INFO) << __FUNCTION__;
+ socket->Close();
+ if (err != WSAECONNREFUSED) {
+ if (socket == &hanging_get_) {
+ if (state_ == CONNECTED) {
+ LOG(INFO) << "Issuing a new hanging get";
+ hanging_get_.Close();
+ hanging_get_.Connect(server_address_);
+ }
+ }
+ } else {
+ // Failed to connect to the server.
+ Close();
+ callback_->OnDisconnected();
+ }
+ }
+
+ PeerConnectionClientObserver* callback_;
+ talk_base::SocketAddress server_address_;
+ talk_base::Win32Socket control_socket_;
+ talk_base::Win32Socket hanging_get_;
+ std::string onconnect_data_;
+ std::string control_data_;
+ std::string notification_data_;
+ Peers peers_;
+ State state_;
+ int my_id_;
+};
+
+class ConnectionObserver
+ : public PeerConnectionObserver,
+ public PeerConnectionClientObserver,
+ public MainWndCallback,
+ public talk_base::Win32Window {
+ public:
+ enum WindowMessages {
+ MEDIA_CHANNELS_INITIALIZED = WM_APP + 1,
+ PEER_CONNECTION_CLOSED,
+ SEND_MESSAGE_TO_PEER,
+ };
+
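+ // Where we are in the offer/answer exchange: INITIATOR after we add our
+ // local streams as the caller, OFFER_RECEIVED when the first message from
+ // a peer arrives and we are the callee, ANSWER_RECEIVED when the caller
+ // gets the reply, and QUIT_SENT once either side starts shutting down.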
+ enum HandshakeState {
+ NONE,
+ INITIATOR,
+ ANSWER_RECEIVED,
+ OFFER_RECEIVED,
+ QUIT_SENT,
+ };
+
+ ConnectionObserver(PeerConnectionClient* client,
+ MainWnd* main_wnd)
+ : handshake_(NONE),
+ waiting_for_audio_(false),
+ waiting_for_video_(false),
+ peer_id_(-1),
+ video_channel_(-1),
+ audio_channel_(-1),
+ client_(client),
+ main_wnd_(main_wnd) {
+ // Create a window for posting notifications back to from other threads.
+ bool ok = Create(HWND_MESSAGE, L"ConnectionObserver", 0, 0, 0, 0, 0, 0);
+ ASSERT(ok);
+ client_->RegisterObserver(this);
+ main_wnd->RegisterObserver(this);
+ }
+
+ ~ConnectionObserver() {
+ ASSERT(peer_connection_.get() == NULL);
+ Destroy();
+ DeletePeerConnection();
+ }
+
+ bool has_video() const {
+ return video_channel_ != -1;
+ }
+
+ bool has_audio() const {
+ return audio_channel_ != -1;
+ }
+
+ bool connection_active() const {
+ return peer_connection_.get() != NULL;
+ }
+
+ void Close() {
+ if (peer_connection_.get()) {
+ peer_connection_->Close();
+ } else {
+ client_->SignOut();
+ }
+ }
+
+ protected:
+ bool InitializePeerConnection() {
+ ASSERT(peer_connection_.get() == NULL);
+ peer_connection_.reset(new PeerConnection(GetPeerConnectionString()));
+ peer_connection_->RegisterObserver(this);
+ if (!peer_connection_->Init()) {
+ DeletePeerConnection();
+ } else {
+ bool audio = peer_connection_->SetAudioDevice("", "", 0);
+ LOG(INFO) << "SetAudioDevice " << (audio ? "succeeded." : "failed.");
+ }
+ return peer_connection_.get() != NULL;
+ }
+
+ void DeletePeerConnection() {
+ peer_connection_.reset();
+ }
+
+ void StartCaptureDevice() {
+ ASSERT(peer_connection_.get());
+ if (main_wnd_->IsWindow()) {
+ main_wnd_->SwitchToStreamingUI();
+
+ if (peer_connection_->SetVideoCapture("")) {
+ peer_connection_->SetVideoRenderer(-1, main_wnd_->handle(), 0,
+ 0.7f, 0.7f, 0.95f, 0.95f);
+ } else {
+ ASSERT(false);
+ }
+ }
+ }
+
+ //
+ // PeerConnectionObserver implementation.
+ //
+
+ virtual void OnError() {
+ LOG(INFO) << __FUNCTION__;
+ ASSERT(false);
+ }
+
+ virtual void OnSignalingMessage(const std::string& msg) {
+ LOG(INFO) << __FUNCTION__;
+
+ bool shutting_down = (video_channel_ == -1 && audio_channel_ == -1);
+
+ if (handshake_ == OFFER_RECEIVED && !shutting_down)
+ StartCaptureDevice();
+
+ // Send our answer/offer/shutting down message.
+ // If we're the initiator, this will be our offer. If we just received
+ // an offer, this will be an answer. If PeerConnection::Close has been
+ // called, then this is our signal to the other end that we're shutting
+ // down.
+ if (handshake_ != QUIT_SENT) {
+ SendMessage(handle(), SEND_MESSAGE_TO_PEER, 0,
+ reinterpret_cast<LPARAM>(&msg));
+ }
+
+ if (shutting_down) {
+ handshake_ = QUIT_SENT;
+ PostMessage(handle(), PEER_CONNECTION_CLOSED, 0, 0);
+ }
+ }
+
+ // Called when a remote stream is added
+ virtual void OnAddStream(const std::string& stream_id, int channel_id,
+ bool video) {
+ LOG(INFO) << __FUNCTION__ << " " << stream_id;
+ bool send_notification = (waiting_for_video_ || waiting_for_audio_);
+ if (video) {
+ ASSERT(video_channel_ == -1);
+ video_channel_ = channel_id;
+ waiting_for_video_ = false;
+ LOG(INFO) << "Setting video renderer for channel: " << channel_id;
+ bool ok = peer_connection_->SetVideoRenderer(channel_id,
+ main_wnd_->handle(), 1, 0.0f, 0.0f, 1.0f, 1.0f);
+ ASSERT(ok);
+ } else {
+ ASSERT(audio_channel_ == -1);
+ audio_channel_ = channel_id;
+ waiting_for_audio_ = false;
+ }
+
+ if (send_notification && !waiting_for_audio_ && !waiting_for_video_)
+ PostMessage(handle(), MEDIA_CHANNELS_INITIALIZED, 0, 0);
+ }
+
+ virtual void OnRemoveStream(const std::string& stream_id,
+ int channel_id,
+ bool video) {
+ LOG(INFO) << __FUNCTION__;
+ if (video) {
+ ASSERT(channel_id == video_channel_);
+ video_channel_ = -1;
+ } else {
+ ASSERT(channel_id == audio_channel_);
+ audio_channel_ = -1;
+ }
+ }
+
+ //
+ // PeerConnectionClientObserver implementation.
+ //
+
+ virtual void OnSignedIn() {
+ LOG(INFO) << __FUNCTION__;
+ main_wnd_->SwitchToPeerList(client_->peers());
+ }
+
+ virtual void OnDisconnected() {
+ LOG(INFO) << __FUNCTION__;
+ if (peer_connection_.get()) {
+ peer_connection_->Close();
+ } else if (main_wnd_->IsWindow()) {
+ main_wnd_->SwitchToConnectUI();
+ }
+ }
+
+ virtual void OnPeerConnected(int id, const std::string& name) {
+ LOG(INFO) << __FUNCTION__;
+ // Refresh the list if we're showing it.
+ if (main_wnd_->current_ui() == MainWnd::LIST_PEERS)
+ main_wnd_->SwitchToPeerList(client_->peers());
+ }
+
+ virtual void OnPeerDisconnected(int id, const std::string& name) {
+ LOG(INFO) << __FUNCTION__;
+ if (id == peer_id_) {
+ LOG(INFO) << "Our peer disconnected";
+ peer_id_ = -1;
+ // TODO: Somehow make sure that Close has been called?
+ if (peer_connection_.get())
+ peer_connection_->Close();
+ }
+
+ // Refresh the list if we're showing it.
+ if (main_wnd_->current_ui() == MainWnd::LIST_PEERS)
+ main_wnd_->SwitchToPeerList(client_->peers());
+ }
+
+ virtual void OnMessageFromPeer(int peer_id, const std::string& message) {
+ ASSERT(peer_id_ == peer_id || peer_id_ == -1);
+
+ if (handshake_ == NONE) {
+ handshake_ = OFFER_RECEIVED;
+ peer_id_ = peer_id;
+ if (!peer_connection_.get()) {
+ // Got an offer. Give it to the PeerConnection instance.
+ // Once processed, we will get a callback to OnSignalingMessage with
+ // our 'answer' which we'll send to the peer.
+ LOG(INFO) << "Got an offer from our peer: " << peer_id;
+ if (!InitializePeerConnection()) {
+ LOG(LS_ERROR) << "Failed to initialize our PeerConnection instance";
+ client_->SignOut();
+ return;
+ }
+ }
+ } else if (handshake_ == INITIATOR) {
+ LOG(INFO) << "Remote peer sent us an answer";
+ handshake_ = ANSWER_RECEIVED;
+ } else {
+ LOG(INFO) << "Remote peer is disconnecting";
+ handshake_ = QUIT_SENT;
+ }
+
+ peer_connection_->SignalingMessage(message);
+
+ if (handshake_ == QUIT_SENT) {
+ DisconnectFromCurrentPeer();
+ }
+ }
+
+ //
+ // MainWndCallback implementation.
+ //
+ virtual void StartLogin(const std::string& server, int port) {
+ ASSERT(!client_->is_connected());
+ if (!client_->Connect(server, port, GetPeerName())) {
+ MessageBoxA(main_wnd_->handle(),
+ ("Failed to connect to " + server).c_str(),
+ "Error", MB_OK | MB_ICONERROR);
+ }
+ }
+
+ virtual void DisconnectFromServer() {
+ if (!client_->is_connected())
+ return;
+ client_->SignOut();
+ }
+
+ virtual void ConnectToPeer(int peer_id) {
+ ASSERT(peer_id_ == -1);
+ ASSERT(peer_id != -1);
+ ASSERT(handshake_ == NONE);
+
+ if (handshake_ != NONE)
+ return;
+
+ if (InitializePeerConnection()) {
+ peer_id_ = peer_id;
+ waiting_for_video_ = peer_connection_->AddStream(kVideoLabel, true);
+ waiting_for_audio_ = peer_connection_->AddStream(kAudioLabel, false);
+ if (waiting_for_video_ || waiting_for_audio_)
+ handshake_ = INITIATOR;
+ ASSERT(waiting_for_video_ || waiting_for_audio_);
+ }
+
+ if (handshake_ == NONE) {
+ ::MessageBoxA(main_wnd_->handle(), "Failed to initialize PeerConnection",
+ "Error", MB_OK | MB_ICONERROR);
+ }
+ }
+
+ virtual void DisconnectFromCurrentPeer() {
+ if (peer_connection_.get())
+ peer_connection_->Close();
+ }
+
+ //
+ // Win32Window implementation.
+ //
+
+ virtual bool OnMessage(UINT msg, WPARAM wp, LPARAM lp, LRESULT& result) {
+ bool ret = true;
+ if (msg == MEDIA_CHANNELS_INITIALIZED) {
+ ASSERT(handshake_ == INITIATOR);
+ bool ok = peer_connection_->Connect();
+ ASSERT(ok);
+ StartCaptureDevice();
+ // When we get an OnSignalingMessage notification, we'll send our
+ // json encoded signaling message to the peer, which is the first step
+ // of establishing a connection.
+ } else if (msg == PEER_CONNECTION_CLOSED) {
+ LOG(INFO) << "PEER_CONNECTION_CLOSED";
+ DeletePeerConnection();
+ ::InvalidateRect(main_wnd_->handle(), NULL, TRUE);
+ handshake_ = NONE;
+ waiting_for_audio_ = false;
+ waiting_for_video_ = false;
+ peer_id_ = -1;
+ ASSERT(video_channel_ == -1);
+ ASSERT(audio_channel_ == -1);
+ if (main_wnd_->IsWindow()) {
+ if (client_->is_connected()) {
+ main_wnd_->SwitchToPeerList(client_->peers());
+ } else {
+ main_wnd_->SwitchToConnectUI();
+ }
+ } else {
+ DisconnectFromServer();
+ }
+ } else if (msg == SEND_MESSAGE_TO_PEER) {
+ client_->SendToPeer(peer_id_, *reinterpret_cast<std::string*>(lp));
+ } else {
+ ret = false;
+ }
+
+ return ret;
+ }
+
+ protected:
+ HandshakeState handshake_;
+ bool waiting_for_audio_;
+ bool waiting_for_video_;
+ int peer_id_;
+ scoped_ptr<PeerConnection> peer_connection_;
+ PeerConnectionClient* client_;
+ MainWnd* main_wnd_;
+ int video_channel_;
+ int audio_channel_;
+};
+
+int PASCAL wWinMain(HINSTANCE instance, HINSTANCE prev_instance,
+ wchar_t* cmd_line, int cmd_show) {
+ talk_base::EnsureWinsockInit();
+
+ webrtc::Trace::CreateTrace();
+ webrtc::Trace::SetTraceFile("session_test_trace.txt");
+ webrtc::Trace::SetLevelFilter(webrtc::kTraceWarning);
+
+ MainWnd wnd;
+ if (!wnd.Create()) {
+ ASSERT(false);
+ return -1;
+ }
+
+ PeerConnectionClient client;
+ ConnectionObserver observer(&client, &wnd);
+
+ // Main loop.
+ MSG msg;
+ BOOL gm;
+ while ((gm = ::GetMessage(&msg, NULL, 0, 0)) && gm != -1) {
+ if (!wnd.PreTranslateMessage(&msg)) {
+ ::TranslateMessage(&msg);
+ ::DispatchMessage(&msg);
+ }
+ }
+
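+ // The UI loop has exited; keep pumping messages until the PeerConnection
+ // and the signaling client have finished shutting down.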
+ if (observer.connection_active() || client.is_connected()) {
+ observer.Close();
+ while ((observer.connection_active() || client.is_connected()) &&
+ (gm = ::GetMessage(&msg, NULL, 0, 0)) && gm != -1) {
+ ::TranslateMessage(&msg);
+ ::DispatchMessage(&msg);
+ }
+ }
+
+ return 0;
+}
diff --git a/third_party_mods/libjingle/source/talk/app/videoengine.h b/third_party_mods/libjingle/source/talk/app/videoengine.h
new file mode 100644
index 0000000..aa7fb62
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/app/videoengine.h
@@ -0,0 +1,120 @@
+/*
+ * libjingle
+ * Copyright 2004--2011, Google Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ * 3. The name of the author may not be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
+ * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
+ * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
+ * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
+ * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+
+#ifndef TALK_APP_WEBRTC_VIDEOENGINE_H_
+#define TALK_APP_WEBRTC_VIDEOENGINE_H_
+
+#include "talk/base/common.h"
+#include "common_types.h"
+#include "video_engine/main/interface/vie_base.h"
+#include "video_engine/main/interface/vie_capture.h"
+#include "video_engine/main/interface/vie_codec.h"
+#include "video_engine/main/interface/vie_errors.h"
+#include "video_engine/main/interface/vie_image_process.h"
+#include "video_engine/main/interface/vie_network.h"
+#include "video_engine/main/interface/vie_render.h"
+#include "video_engine/main/interface/vie_rtp_rtcp.h"
+
+namespace webrtc {
+
+// All tracing macros should go to a common file.
+
+// automatically handles lifetime of VideoEngine
+class scoped_video_engine {
+ public:
+ explicit scoped_video_engine(VideoEngine* e) : ptr(e) {}
+ // VERIFY, to ensure that there are no leaks at shutdown
+ ~scoped_video_engine() {
+ if (ptr) {
+ VideoEngine::Delete(ptr);
+ }
+ }
+ VideoEngine* get() const { return ptr; }
+ private:
+ VideoEngine* ptr;
+};
+
+// scoped_ptr class to handle obtaining and releasing VideoEngine
+// interface pointers
+template<class T> class scoped_video_ptr {
+ public:
+ explicit scoped_video_ptr(const scoped_video_engine& e)
+ : ptr(T::GetInterface(e.get())) {}
+ explicit scoped_video_ptr(T* p) : ptr(p) {}
+ ~scoped_video_ptr() { if (ptr) ptr->Release(); }
+ T* operator->() const { return ptr; }
+ T* get() const { return ptr; }
+ private:
+ T* ptr;
+};
+
+// Utility class for aggregating the various WebRTC interfaces.
+// Fake implementations can also be injected for testing.
+class VideoEngineWrapper {
+ public:
+ VideoEngineWrapper()
+ : engine_(VideoEngine::Create()),
+ base_(engine_), codec_(engine_), capture_(engine_),
+ network_(engine_), render_(engine_), rtp_(engine_),
+ image_(engine_) {
+ }
+
+ VideoEngineWrapper(ViEBase* base, ViECodec* codec, ViECapture* capture,
+ ViENetwork* network, ViERender* render,
+ ViERTP_RTCP* rtp, ViEImageProcess* image)
+ : engine_(NULL),
+ base_(base), codec_(codec), capture_(capture),
+ network_(network), render_(render), rtp_(rtp),
+ image_(image) {
+ }
+
+ virtual ~VideoEngineWrapper() {}
+ VideoEngine* engine() { return engine_.get(); }
+ ViEBase* base() { return base_.get(); }
+ ViECodec* codec() { return codec_.get(); }
+ ViECapture* capture() { return capture_.get(); }
+ ViENetwork* network() { return network_.get(); }
+ ViERender* render() { return render_.get(); }
+ ViERTP_RTCP* rtp() { return rtp_.get(); }
+ ViEImageProcess* sync() { return image_.get(); }
+ int error() { return base_->LastError(); }
+
+ private:
+ scoped_video_engine engine_;
+ scoped_video_ptr<ViEBase> base_;
+ scoped_video_ptr<ViECodec> codec_;
+ scoped_video_ptr<ViECapture> capture_;
+ scoped_video_ptr<ViENetwork> network_;
+ scoped_video_ptr<ViERender> render_;
+ scoped_video_ptr<ViERTP_RTCP> rtp_;
+ scoped_video_ptr<ViEImageProcess> image_;
+};
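+
+// For example (sketch only), a media channel can hold one wrapper and reach
+// every ViE sub-API through it:
+//   VideoEngineWrapper vie;
+//   int channel = -1;
+//   vie.base()->Init();
+//   vie.base()->CreateChannel(channel);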
+
+} // namespace webrtc
+
+#endif // TALK_APP_WEBRTC_VIDEOENGINE_H_
diff --git a/third_party_mods/libjingle/source/talk/app/videomediaengine.cc b/third_party_mods/libjingle/source/talk/app/videomediaengine.cc
new file mode 100644
index 0000000..dd5bdde
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/app/videomediaengine.cc
@@ -0,0 +1,756 @@
+
+
+#include "talk/app/videomediaengine.h"
+
+#include <iostream>
+
+#ifdef PLATFORM_CHROMIUM
+#include "content/renderer/video_capture_chrome.h"
+#endif
+#include "talk/base/buffer.h"
+#include "talk/base/byteorder.h"
+#include "talk/base/logging.h"
+#include "talk/base/stringutils.h"
+#include "talk/app/voicemediaengine.h"
+
+#include "modules/video_capture/main/interface/video_capture.h"
+
+#ifndef ARRAYSIZE
+#define ARRAYSIZE(a) (sizeof(a) / sizeof((a)[0]))
+#endif
+
+namespace webrtc {
+
+static const int kDefaultLogSeverity = 3;
+static const int kStartVideoBitrate = 300;
+static const int kMaxVideoBitrate = 1000;
+
+const RtcVideoEngine::VideoCodecPref RtcVideoEngine::kVideoCodecPrefs[] = {
+ {"VP8", 104, 0},
+ {"H264", 105, 1}
+};
+
+RtcVideoEngine::RtcVideoEngine()
+ : video_engine_(new VideoEngineWrapper()),
+ capture_(NULL),
+ capture_id_(-1),
+ voice_engine_(NULL),
+ initialized_(false),
+ log_level_(kDefaultLogSeverity),
+ capture_started_(false) {
+}
+
+RtcVideoEngine::RtcVideoEngine(RtcVoiceEngine* voice_engine)
+ : video_engine_(new VideoEngineWrapper()),
+ capture_(NULL),
+ capture_id_(-1),
+ voice_engine_(voice_engine),
+ initialized_(false),
+ log_level_(kDefaultLogSeverity),
+ capture_started_(false) {
+}
+
+RtcVideoEngine::~RtcVideoEngine() {
+ LOG(LS_VERBOSE) << " RtcVideoEngine::~RtcVideoEngine";
+ video_engine_->engine()->SetTraceCallback(NULL);
+ Terminate();
+}
+
+bool RtcVideoEngine::Init() {
+ LOG(LS_VERBOSE) << "RtcVideoEngine::Init";
+ ApplyLogging();
+ if (video_engine_->engine()->SetTraceCallback(this) != 0) {
+ LOG(LS_ERROR) << "SetTraceCallback error";
+ }
+
+ bool result = InitVideoEngine(voice_engine_);
+ if (result) {
+ LOG(LS_INFO) << "VideoEngine Init done";
+ } else {
+ LOG(LS_ERROR) << "VideoEngine Init failed, releasing";
+ Terminate();
+ }
+ return result;
+}
+
+bool RtcVideoEngine::InitVideoEngine(RtcVoiceEngine* voice_engine) {
+ LOG(LS_VERBOSE) << "RtcVideoEngine::InitVideoEngine";
+
+ bool ret = true;
+ if (video_engine_->base()->Init() != 0) {
+ LOG(LS_ERROR) << "VideoEngine Init method failed";
+ ret = false;
+ }
+
+ if (!voice_engine) {
+ LOG(LS_WARNING) << "NULL voice engine";
+ } else if ((video_engine_->base()->SetVoiceEngine(
+ voice_engine->webrtc()->engine())) != 0) {
+ LOG(LS_WARNING) << "Failed to SetVoiceEngine";
+ }
+
+ if ((video_engine_->base()->RegisterObserver(*this)) != 0) {
+ LOG(LS_WARNING) << "Failed to register observer";
+ }
+
+ int ncodecs = video_engine_->codec()->NumberOfCodecs();
+ for (int i = 0; i < ncodecs - 2; ++i) {
+ VideoCodec wcodec;
+ if ((video_engine_->codec()->GetCodec(i, wcodec) == 0) &&
+ (strncmp(wcodec.plName, "I420", 4) != 0)) { // Ignore I420.
+ cricket::VideoCodec codec(wcodec.plType, wcodec.plName, wcodec.width,
+ wcodec.height, wcodec.maxFramerate, i);
+ LOG(LS_INFO) << codec.ToString();
+ video_codecs_.push_back(codec);
+ }
+ }
+
+ std::sort(video_codecs_.begin(), video_codecs_.end(),
+ &cricket::VideoCodec::Preferable);
+ return ret;
+}
+
+void RtcVideoEngine::PerformanceAlarm(const unsigned int cpuLoad) {
+ return;
+}
+
+void RtcVideoEngine::Print(const TraceLevel level, const char *traceString,
+ const int length) {
+ return;
+}
+
+int RtcVideoEngine::GetCodecPreference(const char* name) {
+ for (size_t i = 0; i < ARRAY_SIZE(kVideoCodecPrefs); ++i) {
+ if (strcmp(kVideoCodecPrefs[i].payload_name, name) == 0) {
+ return kVideoCodecPrefs[i].pref;
+ }
+ }
+ return -1;
+}
+
+void RtcVideoEngine::ApplyLogging() {
+ int filter = 0;
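+ // Deliberate fall-through: a more verbose level also enables all of the
+ // trace filters of the less verbose levels below it.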
+ switch (log_level_) {
+ case talk_base::LS_VERBOSE: filter |= kTraceAll;
+ case talk_base::LS_INFO: filter |= kTraceStateInfo;
+ case talk_base::LS_WARNING: filter |= kTraceWarning;
+ case talk_base::LS_ERROR: filter |= kTraceError | kTraceCritical;
+ }
+}
+
+void RtcVideoEngine::Terminate() {
+ LOG(LS_INFO) << "RtcVideoEngine::Terminate";
+ ReleaseCaptureDevice();
+}
+
+int RtcVideoEngine::GetCapabilities() {
+ return cricket::MediaEngine::VIDEO_RECV | cricket::MediaEngine::VIDEO_SEND;
+}
+
+bool RtcVideoEngine::SetOptions(int options) {
+ return true;
+}
+
+bool RtcVideoEngine::ReleaseCaptureDevice() {
+ if (capture_) {
+ // Stop capture
+ SetCapture(false);
+ // DisconnectCaptureDevice
+ RtcVideoMediaChannel* channel;
+ for (VideoChannels::const_iterator it = channels_.begin();
+ it != channels_.end(); ++it) {
+ ASSERT(*it != NULL);
+ channel = *it;
+ video_engine_->capture()->DisconnectCaptureDevice(channel->video_channel());
+ }
+ // ReleaseCaptureDevice
+ video_engine_->capture()->ReleaseCaptureDevice(capture_id_);
+ capture_id_ = -1;
+#ifdef PLATFORM_CHROMIUM
+ VideoCaptureChrome::DestroyVideoCapture(
+ static_cast<VideoCaptureChrome*>(capture_));
+#else
+ webrtc::VideoCaptureModule::Destroy(capture_);
+#endif
+ capture_ = NULL;
+ }
+ return true;
+}
+
+bool RtcVideoEngine::SetCaptureDevice(const cricket::Device* cam) {
+ ASSERT(video_engine_.get());
+ ASSERT(cam != NULL);
+
+ ReleaseCaptureDevice();
+
+#ifdef PLATFORM_CHROMIUM
+ int cam_id = atol(cam->id.c_str());
+ if (cam_id == -1)
+ return false;
+ unsigned char uniqueId[16];
+ capture_ = VideoCaptureChrome::CreateVideoCapture(cam_id, uniqueId);
+#else
+ WebRtc_UWord8 device_name[128];
+ WebRtc_UWord8 device_id[260];
+ VideoCaptureModule::DeviceInfo* device_info =
+ VideoCaptureModule::CreateDeviceInfo(0);
+ for (WebRtc_UWord32 i = 0; i < device_info->NumberOfDevices(); ++i) {
+ if (device_info->GetDeviceName(i, device_name, ARRAYSIZE(device_name),
+ device_id, ARRAYSIZE(device_id)) == 0) {
+ if ((cam->name.compare("") == 0) ||
+ (cam->id.compare((char*) device_id) == 0)) {
+ capture_ = VideoCaptureModule::Create(1234, device_id);
+ if (capture_) {
+ LOG(INFO) << "Found video capture device: " << device_name;
+ break;
+ }
+ }
+ }
+ }
+ VideoCaptureModule::DestroyDeviceInfo(device_info);
+#endif
+
+ if (!capture_)
+ return false;
+
+ ViECapture* vie_capture = video_engine_->capture();
+ if (vie_capture->AllocateCaptureDevice(*capture_, capture_id_) == 0) {
+ // Connect to all the channels
+ RtcVideoMediaChannel* channel;
+ for (VideoChannels::const_iterator it = channels_.begin();
+ it != channels_.end(); ++it) {
+ ASSERT(*it != NULL);
+ channel = *it;
+ vie_capture->ConnectCaptureDevice(capture_id_, channel->video_channel());
+ }
+ SetCapture(true);
+ } else {
+ ASSERT(capture_id_ == -1);
+ }
+
+ return (capture_id_ != -1);
+}
+
+bool RtcVideoEngine::SetVideoRenderer(int channel_id,
+ void* window,
+ unsigned int zOrder,
+ float left,
+ float top,
+ float right,
+ float bottom) {
+ int ret;
+ if (channel_id == -1)
+ channel_id = capture_id_;
+ ret = video_engine_->render()->AddRenderer(
+ channel_id, window, zOrder, left, top, right, bottom);
+ if (ret != 0)
+ return false;
+ ret = video_engine_->render()->StartRender(channel_id);
+ if (ret != 0)
+ return false;
+ return true;
+}
+
+bool RtcVideoEngine::SetLocalRenderer(cricket::VideoRenderer* renderer) {
+ LOG(LS_WARNING) << "Not required call SetLocalRenderer for webrtc";
+ return false;
+}
+
+cricket::CaptureResult RtcVideoEngine::SetCapture(bool capture) {
+ if (capture_started_ == capture)
+ return cricket::CR_SUCCESS;
+
+ if (capture_id_ != -1) {
+ int ret;
+ if (capture)
+ ret = video_engine_->capture()->StartCapture(capture_id_);
+ else
+ ret = video_engine_->capture()->StopCapture(capture_id_);
+ if (ret == 0) {
+ capture_started_ = capture;
+ return cricket::CR_SUCCESS;
+ }
+ }
+
+ return cricket::CR_NO_DEVICE;
+}
+
+const std::vector<cricket::VideoCodec>& RtcVideoEngine::codecs() const {
+ return video_codecs_;
+}
+
+void RtcVideoEngine::SetLogging(int min_sev, const char* filter) {
+ log_level_ = min_sev;
+ ApplyLogging();
+}
+
+bool RtcVideoEngine::SetDefaultEncoderConfig(
+ const cricket::VideoEncoderConfig& config) {
+ bool ret = SetDefaultCodec(config.max_codec);
+ if (ret) {
+ default_encoder_config_ = config;
+ }
+ return ret;
+}
+
+bool RtcVideoEngine::SetDefaultCodec(const cricket::VideoCodec& codec) {
+ default_codec_ = codec;
+ return true;
+}
+
+RtcVideoMediaChannel* RtcVideoEngine::CreateChannel(
+ cricket::VoiceMediaChannel* voice_channel) {
+ RtcVideoMediaChannel* channel =
+ new RtcVideoMediaChannel(this, voice_channel);
+ if (channel) {
+ if (!channel->Init()) {
+ delete channel;
+ channel = NULL;
+ }
+ }
+ return channel;
+}
+
+bool RtcVideoEngine::FindCodec(const cricket::VideoCodec& codec) {
+ for (size_t i = 0; i < video_codecs_.size(); ++i) {
+ if (video_codecs_[i].Matches(codec)) {
+ return true;
+ }
+ }
+ return false;
+}
+
+void RtcVideoEngine::ConvertToCricketVideoCodec(
+ const VideoCodec& in_codec, cricket::VideoCodec& out_codec) {
+ out_codec.id = in_codec.plType;
+ out_codec.name = in_codec.plName;
+ out_codec.width = in_codec.width;
+ out_codec.height = in_codec.height;
+ out_codec.framerate = in_codec.maxFramerate;
+}
+
+void RtcVideoEngine::ConvertFromCricketVideoCodec(
+ const cricket::VideoCodec& in_codec, VideoCodec& out_codec) {
+ out_codec.plType = in_codec.id;
+ strcpy(out_codec.plName, in_codec.name.c_str());
+ out_codec.width = 352; //in_codec.width;
+ out_codec.height = 288; //in_codec.height;
+ out_codec.maxFramerate = 30; //in_codec.framerate;
+
+ if (strncmp(out_codec.plName, "VP8", 3) == 0) {
+ out_codec.codecType = kVideoCodecVP8;
+ } else if (strncmp(out_codec.plName, "H263", 4) == 0) {
+ out_codec.codecType = kVideoCodecH263;
+ } else if (strncmp(out_codec.plName, "H264", 4) == 0) {
+ out_codec.codecType = kVideoCodecH264;
+ } else if (strncmp(out_codec.plName, "I420", 4) == 0) {
+ out_codec.codecType = kVideoCodecI420;
+ } else {
+ LOG(LS_INFO) << "invalid codec type";
+ }
+
+ out_codec.maxBitrate = kMaxVideoBitrate;
+ out_codec.startBitrate = kStartVideoBitrate;
+ out_codec.minBitrate = kStartVideoBitrate;
+}
+
+int RtcVideoEngine::GetLastVideoEngineError() {
+ return video_engine_->base()->LastError();
+}
+
+void RtcVideoEngine::RegisterChannel(RtcVideoMediaChannel *channel) {
+ talk_base::CritScope lock(&channels_cs_);
+ channels_.push_back(channel);
+}
+
+void RtcVideoEngine::UnregisterChannel(RtcVideoMediaChannel *channel) {
+ talk_base::CritScope lock(&channels_cs_);
+ VideoChannels::iterator i = std::find(channels_.begin(),
+ channels_.end(),
+ channel);
+ if (i != channels_.end()) {
+ channels_.erase(i);
+ }
+}
+
+// RtcVideoMediaChannel
+
+RtcVideoMediaChannel::RtcVideoMediaChannel(
+ RtcVideoEngine* engine, cricket::VoiceMediaChannel* channel)
+ : engine_(engine),
+ voice_channel_(channel),
+ video_channel_(-1),
+ sending_(false),
+ render_started_(false) {
+ engine->RegisterChannel(this);
+}
+
+bool RtcVideoMediaChannel::Init() {
+ bool ret = true;
+ if (engine_->video_engine()->base()->CreateChannel(video_channel_) != 0) {
+ LOG(LS_ERROR) << "ViE CreateChannel Failed!!";
+ ret = false;
+ }
+
+ LOG(LS_INFO) << "RtcVideoMediaChannel::Init "
+ << "video_channel " << video_channel_ << " created";
+ // Connect the audio channel for A/V synchronization.
+ if (voice_channel_) {
+ RtcVoiceMediaChannel* channel =
+ static_cast<RtcVoiceMediaChannel*> (voice_channel_);
+ if (engine_->video_engine()->base()->ConnectAudioChannel(
+ video_channel_, channel->audio_channel()) != 0) {
+ LOG(LS_WARNING) << "ViE ConnectAudioChannel failed"
+ << "A/V not synchronized";
+ // Don't set ret to false;
+ }
+ }
+
+ // Register the external transport.
+ if (engine_->video_engine()->network()->RegisterSendTransport(
+ video_channel_, *this) != 0) {
+ ret = false;
+ } else {
+ EnableRtcp();
+ EnablePLI();
+ }
+ return ret;
+}
+
+RtcVideoMediaChannel::~RtcVideoMediaChannel() {
+ // Stop and remove the renderer.
+ SetRender(false);
+ if (engine()->video_engine()->render()->RemoveRenderer(video_channel_) == -1) {
+ LOG(LS_ERROR) << "Video RemoveRenderer failed for channel "
+ << video_channel_;
+ }
+
+ // DeRegister external transport
+ if (engine()->video_engine()->network()->DeregisterSendTransport(
+ video_channel_) == -1) {
+ LOG(LS_ERROR) << "DeRegisterSendTransport failed for channel id "
+ << video_channel_;
+ }
+
+ // Unregister RtcChannel with the engine.
+ engine()->UnregisterChannel(this);
+
+ // Delete VideoChannel
+ if (engine()->video_engine()->base()->DeleteChannel(video_channel_) == -1) {
+ LOG(LS_ERROR) << "Video DeleteChannel failed for channel "
+ << video_channel_;
+ }
+}
+
+bool RtcVideoMediaChannel::SetRecvCodecs(
+ const std::vector<cricket::VideoCodec>& codecs) {
+ bool ret = true;
+ for (std::vector<cricket::VideoCodec>::const_iterator iter = codecs.begin();
+ iter != codecs.end(); ++iter) {
+ if (engine()->FindCodec(*iter)) {
+ VideoCodec wcodec;
+ engine()->ConvertFromCricketVideoCodec(*iter, wcodec);
+ if (engine()->video_engine()->codec()->SetReceiveCodec(
+ video_channel_, wcodec) != 0) {
+ LOG(LS_ERROR) << "ViE SetReceiveCodec failed"
+ << " VideoChannel : " << video_channel_ << " Error: "
+ << engine()->video_engine()->base()->LastError()
+ << "wcodec " << wcodec.plName;
+ ret = false;
+ }
+ } else {
+ LOG(LS_INFO) << "Unknown codec" << iter->name;
+ ret = false;
+ }
+ }
+
+ // make channel ready to receive packets
+ if (ret) {
+ if (engine()->video_engine()->base()->StartReceive(video_channel_) != 0) {
+ LOG(LS_ERROR) << "ViE StartReceive failure";
+ ret = false;
+ }
+ }
+ return ret;
+}
+
+bool RtcVideoMediaChannel::SetSendCodecs(
+ const std::vector<cricket::VideoCodec>& codecs) {
+ if (sending_) {
+ LOG(LS_ERROR) << "channel is alredy sending";
+ return false;
+ }
+
+ // Match against the local video codec list.
+ std::vector<VideoCodec> send_codecs;
+ for (std::vector<cricket::VideoCodec>::const_iterator iter = codecs.begin();
+ iter != codecs.end(); ++iter) {
+ if (engine()->FindCodec(*iter)) {
+ VideoCodec wcodec;
+ engine()->ConvertFromCricketVideoCodec(*iter, wcodec);
+ send_codecs.push_back(wcodec);
+ }
+ }
+
+ // If nothing matched, fail without changing the send codec.
+ if (send_codecs.empty()) {
+ LOG(LS_ERROR) << "No matching codecs available";
+ return false;
+ }
+
+ // Select the first matching codec.
+ const VideoCodec& codec(send_codecs[0]);
+ send_codec_ = codec;
+ if (engine()->video_engine()->codec()->SetSendCodec(
+ video_channel_, codec) != 0) {
+ LOG(LS_ERROR) << "ViE SetSendCodec failed";
+ return false;
+ }
+ return true;
+}
+
+bool RtcVideoMediaChannel::SetRender(bool render) {
+ if (video_channel_ != -1) {
+ int ret = -1;
+ if (render == render_started_)
+ return true;
+
+ if (render) {
+ ret = engine()->video_engine()->render()->StartRender(video_channel_);
+ } else {
+ ret = engine()->video_engine()->render()->StopRender(video_channel_);
+ }
+
+ if (ret == 0) {
+ render_started_ = render;
+ return true;
+ }
+ }
+ return false;
+}
+
+bool RtcVideoMediaChannel::SetSend(bool send) {
+ if (send == sending()) {
+ return true; // no action required
+ }
+
+ bool ret = true;
+  if (send) {  // enable
+ if (engine()->video_engine()->base()->StartSend(video_channel_) != 0) {
+ LOG(LS_ERROR) << "ViE StartSend failed";
+ ret = false;
+ }
+ } else { // disable
+ if (engine()->video_engine()->base()->StopSend(video_channel_) != 0) {
+ LOG(LS_ERROR) << "ViE StopSend failed";
+ ret = false;
+ }
+ }
+ if (ret)
+ sending_ = send;
+
+ return ret;
+}
+
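+// TODO: Per-SSRC stream management is not implemented yet; AddStream,
+// RemoveStream and SetRenderer below currently always fail.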
+bool RtcVideoMediaChannel::AddStream(uint32 ssrc, uint32 voice_ssrc) {
+ return false;
+}
+
+bool RtcVideoMediaChannel::RemoveStream(uint32 ssrc) {
+ return false;
+}
+
+bool RtcVideoMediaChannel::SetRenderer(
+ uint32 ssrc, cricket::VideoRenderer* renderer) {
+ return false;
+}
+
+bool RtcVideoMediaChannel::SetExternalRenderer(uint32 ssrc, void* renderer) {
+ int ret;
+ ret = engine_->video_engine()->render()->AddRenderer(
+ video_channel_,
+ kVideoI420,
+ static_cast<ExternalRenderer*>(renderer));
+  if (ret != 0)
+    return false;
+  ret = engine_->video_engine()->render()->StartRender(video_channel_);
+  if (ret != 0)
+    return false;
+ return true;
+}
+
+bool RtcVideoMediaChannel::GetStats(cricket::VideoMediaInfo* info) {
+ cricket::VideoSenderInfo sinfo;
+ memset(&sinfo, 0, sizeof(sinfo));
+
+ unsigned int ssrc;
+ if (engine_->video_engine()->rtp()->GetLocalSSRC(video_channel_,
+ ssrc) != 0) {
+ LOG(LS_ERROR) << "ViE GetLocalSSRC failed";
+ return false;
+ }
+ sinfo.ssrc = ssrc;
+
+ unsigned int cumulative_lost, extended_max, jitter;
+ int rtt_ms;
+ unsigned short fraction_lost;
+
+ if (engine_->video_engine()->rtp()->GetSentRTCPStatistics(video_channel_,
+ fraction_lost, cumulative_lost, extended_max, jitter, rtt_ms) != 0) {
+ LOG(LS_ERROR) << "ViE GetLocalSSRC failed";
+ return false;
+ }
+
+ sinfo.fraction_lost = fraction_lost;
+ sinfo.rtt_ms = rtt_ms;
+
+ unsigned int bytes_sent, packets_sent, bytes_recv, packets_recv;
+ if (engine_->video_engine()->rtp()->GetRTPStatistics(video_channel_,
+ bytes_sent, packets_sent, bytes_recv, packets_recv) != 0) {
+ LOG(LS_ERROR) << "ViE GetRTPStatistics";
+ return false;
+ }
+ sinfo.packets_sent = packets_sent;
+ sinfo.bytes_sent = bytes_sent;
+ sinfo.packets_lost = -1;
+ sinfo.packets_cached = -1;
+
+ info->senders.push_back(sinfo);
+
+  // Build receiver info, reusing the local variables declared above.
+ cricket::VideoReceiverInfo rinfo;
+ memset(&rinfo, 0, sizeof(rinfo));
+ if (engine_->video_engine()->rtp()->GetReceivedRTCPStatistics(video_channel_,
+ fraction_lost, cumulative_lost, extended_max, jitter, rtt_ms) != 0) {
+ LOG(LS_ERROR) << "ViE GetReceivedRTPStatistics Failed";
+ return false;
+ }
+ rinfo.bytes_rcvd = bytes_recv;
+ rinfo.packets_rcvd = packets_recv;
+ rinfo.fraction_lost = fraction_lost;
+
+ if (engine_->video_engine()->rtp()->GetRemoteSSRC(video_channel_,
+ ssrc) != 0) {
+ return false;
+ }
+ rinfo.ssrc = ssrc;
+
+  // TODO: Query the receive codec to fill in frame width and height.
+ info->receivers.push_back(rinfo);
+ return true;
+}
+
+bool RtcVideoMediaChannel::SendIntraFrame() {
+ bool ret = true;
+ if (engine()->video_engine()->codec()->SendKeyFrame(video_channel_) != 0) {
+ LOG(LS_ERROR) << "ViE SendKeyFrame failed";
+ ret = false;
+ }
+
+ return ret;
+}
+
+bool RtcVideoMediaChannel::RequestIntraFrame() {
+  // There is no API exposed to the application to request a key frame;
+  // ViE requests one internally when the decoder reports errors.
+ return true;
+}
+
+void RtcVideoMediaChannel::OnPacketReceived(talk_base::Buffer* packet) {
+ engine()->video_engine()->network()->ReceivedRTPPacket(video_channel_,
+ packet->data(),
+ packet->length());
+}
+
+void RtcVideoMediaChannel::OnRtcpReceived(talk_base::Buffer* packet) {
+ engine_->video_engine()->network()->ReceivedRTCPPacket(video_channel_,
+ packet->data(),
+ packet->length());
+}
+
+void RtcVideoMediaChannel::SetSendSsrc(uint32 id) {
+  if (!sending_) {
+ if (engine()->video_engine()->rtp()->SetLocalSSRC(video_channel_, id) != 0) {
+ LOG(LS_ERROR) << "ViE SetLocalSSRC failed";
+ }
+ } else {
+ LOG(LS_ERROR) << "Channel already in send state";
+ }
+}
+
+bool RtcVideoMediaChannel::SetRtcpCName(const std::string& cname) {
+ if (engine()->video_engine()->rtp()->SetRTCPCName(video_channel_,
+ cname.c_str()) != 0) {
+ LOG(LS_ERROR) << "ViE SetRTCPCName failed";
+ return false;
+ }
+ return true;
+}
+
+bool RtcVideoMediaChannel::Mute(bool on) {
+  // TODO: Not implemented; possibly should stop sending instead.
+ return false;
+}
+
+bool RtcVideoMediaChannel::SetSendBandwidth(bool autobw, int bps) {
+ LOG(LS_VERBOSE) << "RtcVideoMediaChanne::SetSendBandwidth";
+
+ VideoCodec current = send_codec_;
+ send_codec_.startBitrate = bps;
+
+ if (engine()->video_engine()->codec()->SetSendCodec(video_channel_,
+ send_codec_) != 0) {
+ LOG(LS_ERROR) << "ViE SetSendCodec failed";
+ if (engine()->video_engine()->codec()->SetSendCodec(video_channel_,
+ current) != 0) {
+      // TODO: Should the call be ended if the previous codec cannot be restored?
+ }
+ return false;
+ }
+ return true;
+}
+
+bool RtcVideoMediaChannel::SetOptions(int options) {
+ return true;
+}
+
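+// Enable compound RTCP (RFC 4585 AVPF profile) on the channel so that
+// feedback messages such as PLI can be exchanged.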
+void RtcVideoMediaChannel::EnableRtcp() {
+ engine()->video_engine()->rtp()->SetRTCPStatus(
+ video_channel_, kRtcpCompound_RFC4585);
+}
+
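+// Request key frames from the remote sender via RTCP Picture Loss Indication
+// (PLI, RFC 4585) when picture loss is detected.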
+void RtcVideoMediaChannel::EnablePLI() {
+ engine_->video_engine()->rtp()->SetKeyFrameRequestMethod(
+ video_channel_, kViEKeyFrameRequestPliRtcp);
+}
+
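+// Enable Temporary Maximum Media Stream Bit Rate (TMMBR, RFC 5104) messages
+// for bandwidth signaling on this channel.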
+void RtcVideoMediaChannel::EnableTMMBR() {
+ engine_->video_engine()->rtp()->SetTMMBRStatus(video_channel_, true);
+}
+
+int RtcVideoMediaChannel::SendPacket(int channel, const void* data, int len) {
+ if (!network_interface_) {
+ return -1;
+ }
+ talk_base::Buffer packet(data, len, cricket::kMaxRtpPacketLen);
+ return network_interface_->SendPacket(&packet) ? len : -1;
+}
+
+int RtcVideoMediaChannel::SendRTCPPacket(int channel,
+ const void* data,
+ int len) {
+ if (!network_interface_) {
+ return -1;
+ }
+ talk_base::Buffer packet(data, len, cricket::kMaxRtpPacketLen);
+ return network_interface_->SendRtcp(&packet) ? len : -1;
+}
+
+} // namespace webrtc
diff --git a/third_party_mods/libjingle/source/talk/app/videomediaengine.h b/third_party_mods/libjingle/source/talk/app/videomediaengine.h
new file mode 100644
index 0000000..a5e9fce
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/app/videomediaengine.h
@@ -0,0 +1,195 @@
+/*
+ * libjingle
+ * Copyright 2004--2011, Google Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ * 3. The name of the author may not be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
+ * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
+ * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
+ * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
+ * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef TALK_APP_WEBRTC_VIDEOMEDIAENGINE_H_
+#define TALK_APP_WEBRTC_VIDEOMEDIAENGINE_H_
+
+#include <vector>
+
+#include "talk/base/scoped_ptr.h"
+#include "talk/session/phone/videocommon.h"
+#include "talk/session/phone/codec.h"
+#include "talk/session/phone/channel.h"
+#include "talk/session/phone/mediaengine.h"
+#include "talk/app/videoengine.h"
+
+
+namespace cricket {
+class VoiceMediaChannel;
+class Device;
+class VideoRenderer;
+}
+
+namespace webrtc {
+class RtcVideoMediaChannel;
+class RtcVoiceEngine;
+class ExternalRenderer;
+
+class RtcVideoEngine : public ViEBaseObserver, public TraceCallback {
+ public:
+ RtcVideoEngine();
+ explicit RtcVideoEngine(RtcVoiceEngine* voice_engine);
+ ~RtcVideoEngine();
+
+ bool Init();
+ void Terminate();
+
+ RtcVideoMediaChannel* CreateChannel(
+ cricket::VoiceMediaChannel* voice_channel);
+ bool FindCodec(const cricket::VideoCodec& codec);
+ bool SetDefaultEncoderConfig(const cricket::VideoEncoderConfig& config);
+
+ void RegisterChannel(RtcVideoMediaChannel* channel);
+ void UnregisterChannel(RtcVideoMediaChannel* channel);
+
+ VideoEngineWrapper* video_engine() { return video_engine_.get(); }
+ int GetLastVideoEngineError();
+ int GetCapabilities();
+ bool SetOptions(int options);
+  // TODO: This interface needs to change for WebRTC.
+ bool SetCaptureDevice(const cricket::Device* device);
+ bool SetVideoRenderer(int channel_id,
+ void* window,
+ unsigned int zOrder,
+ float left,
+ float top,
+ float right,
+ float bottom);
+ bool SetLocalRenderer(cricket::VideoRenderer* renderer);
+ cricket::CaptureResult SetCapture(bool capture);
+ const std::vector<cricket::VideoCodec>& codecs() const;
+ void SetLogging(int min_sev, const char* filter);
+
+ cricket::VideoEncoderConfig& default_encoder_config() {
+ return default_encoder_config_;
+ }
+ cricket::VideoCodec& default_codec() {
+ return default_codec_;
+ }
+ bool SetDefaultCodec(const cricket::VideoCodec& codec);
+
+ void ConvertToCricketVideoCodec(const VideoCodec& in_codec,
+ cricket::VideoCodec& out_codec);
+
+ void ConvertFromCricketVideoCodec(const cricket::VideoCodec& in_codec,
+ VideoCodec& out_codec);
+
+ bool SetCaptureDevice(void* external_capture);
+
+ sigslot::signal1<cricket::CaptureResult> SignalCaptureResult;
+ private:
+
+ struct VideoCodecPref {
+ const char* payload_name;
+ int payload_type;
+ int pref;
+ };
+
+ static const VideoCodecPref kVideoCodecPrefs[];
+ int GetCodecPreference(const char* name);
+
+ void ApplyLogging();
+ bool InitVideoEngine(RtcVoiceEngine* voice_engine);
+ void PerformanceAlarm(const unsigned int cpuLoad);
+ bool ReleaseCaptureDevice();
+ virtual void Print(const TraceLevel level, const char *traceString,
+ const int length);
+
+ typedef std::vector<RtcVideoMediaChannel*> VideoChannels;
+
+ talk_base::scoped_ptr<VideoEngineWrapper> video_engine_;
+ VideoCaptureModule* capture_;
+ int capture_id_;
+ RtcVoiceEngine* voice_engine_;
+ std::vector<cricket::VideoCodec> video_codecs_;
+ VideoChannels channels_;
+ talk_base::CriticalSection channels_cs_;
+ bool initialized_;
+ int log_level_;
+ cricket::VideoEncoderConfig default_encoder_config_;
+ cricket::VideoCodec default_codec_;
+ bool capture_started_;
+};
+
+class RtcVideoMediaChannel: public cricket::VideoMediaChannel,
+ public webrtc::Transport {
+ public:
+ RtcVideoMediaChannel(
+ RtcVideoEngine* engine, cricket::VoiceMediaChannel* voice_channel);
+ ~RtcVideoMediaChannel();
+
+ bool Init();
+ virtual bool SetRecvCodecs(const std::vector<cricket::VideoCodec> &codecs);
+ virtual bool SetSendCodecs(const std::vector<cricket::VideoCodec> &codecs);
+ virtual bool SetRender(bool render);
+ virtual bool SetSend(bool send);
+ virtual bool AddStream(uint32 ssrc, uint32 voice_ssrc);
+ virtual bool RemoveStream(uint32 ssrc);
+ virtual bool SetRenderer(uint32 ssrc, cricket::VideoRenderer* renderer);
+ virtual bool SetExternalRenderer(uint32 ssrc, void* renderer);
+ virtual bool GetStats(cricket::VideoMediaInfo* info);
+ virtual bool SendIntraFrame();
+ virtual bool RequestIntraFrame();
+
+ virtual void OnPacketReceived(talk_base::Buffer* packet);
+ virtual void OnRtcpReceived(talk_base::Buffer* packet);
+ virtual void SetSendSsrc(uint32 id);
+ virtual bool SetRtcpCName(const std::string& cname);
+ virtual bool Mute(bool on);
+ virtual bool SetRecvRtpHeaderExtensions(
+ const std::vector<cricket::RtpHeaderExtension>& extensions) { return false; }
+ virtual bool SetSendRtpHeaderExtensions(
+ const std::vector<cricket::RtpHeaderExtension>& extensions) { return false; }
+ virtual bool SetSendBandwidth(bool autobw, int bps);
+ virtual bool SetOptions(int options);
+
+ RtcVideoEngine* engine() { return engine_; }
+ cricket::VoiceMediaChannel* voice_channel() { return voice_channel_; }
+ int video_channel() { return video_channel_; }
+ bool sending() { return sending_; }
+ int GetMediaChannelId() { return video_channel_; }
+
+ protected:
+ virtual int SendPacket(int channel, const void* data, int len);
+ virtual int SendRTCPPacket(int channel, const void* data, int len);
+
+ private:
+ void EnableRtcp();
+ void EnablePLI();
+ void EnableTMMBR();
+
+ RtcVideoEngine* engine_;
+ cricket::VoiceMediaChannel* voice_channel_;
+ int video_channel_;
+ bool sending_;
+ bool render_started_;
+ webrtc::VideoCodec send_codec_;
+};
+
+}  // namespace webrtc
+
+#endif /* TALK_APP_WEBRTC_VIDEOMEDIAENGINE_H_ */
diff --git a/third_party_mods/libjingle/source/talk/app/voiceengine.h b/third_party_mods/libjingle/source/talk/app/voiceengine.h
new file mode 100644
index 0000000..7de9a5a
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/app/voiceengine.h
@@ -0,0 +1,159 @@
+/*
+ * libjingle
+ * Copyright 2004--2011, Google Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ * 3. The name of the author may not be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
+ * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
+ * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
+ * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
+ * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+
+#ifndef TALK_APP_WEBRTC_VOICEENGINE_H_
+#define TALK_APP_WEBRTC_VOICEENGINE_H_
+
+#include "talk/base/common.h"
+#include "common_types.h"
+#include "voice_engine/main/interface/voe_base.h"
+#include "voice_engine/main/interface/voe_codec.h"
+#include "voice_engine/main/interface/voe_errors.h"
+#include "voice_engine/main/interface/voe_file.h"
+#include "voice_engine/main/interface/voe_hardware.h"
+#include "voice_engine/main/interface/voe_network.h"
+#include "voice_engine/main/interface/voe_rtp_rtcp.h"
+#include "voice_engine/main/interface/voe_video_sync.h"
+#include "voice_engine/main/interface/voe_volume_control.h"
+
+namespace webrtc {
+
+// Tracing helpers, for easy logging when WebRTC calls fail.
+// Example: "LOG_RTCERR1(StartSend, channel);" produces the trace
+// "StartSend(1) failed, err=XXXX"
+// The method GetLastRtcError must be defined in the calling scope.
+#define LOG_RTCERR0(func) \
+ LOG_RTCERR0_EX(func, GetLastRtcError())
+#define LOG_RTCERR1(func, a1) \
+ LOG_RTCERR1_EX(func, a1, GetLastRtcError())
+#define LOG_RTCERR2(func, a1, a2) \
+ LOG_RTCERR2_EX(func, a1, a2, GetLastRtcError())
+#define LOG_RTCERR3(func, a1, a2, a3) \
+ LOG_RTCERR3_EX(func, a1, a2, a3, GetLastRtcError())
+#define LOG_RTCERR0_EX(func, err) LOG(WARNING) \
+ << "" << #func << "() failed, err=" << err
+#define LOG_RTCERR1_EX(func, a1, err) LOG(WARNING) \
+ << "" << #func << "(" << a1 << ") failed, err=" << err
+#define LOG_RTCERR2_EX(func, a1, a2, err) LOG(WARNING) \
+ << "" << #func << "(" << a1 << ", " << a2 << ") failed, err=" \
+ << err
+#define LOG_RTCERR3_EX(func, a1, a2, a3, err) LOG(WARNING) \
+ << "" << #func << "(" << a1 << ", " << a2 << ", " << a3 \
+ << ") failed, err=" << err
+
+// Automatically handles the lifetime of the WebRTC VoiceEngine.
+class scoped_webrtc_engine {
+ public:
+ explicit scoped_webrtc_engine(VoiceEngine* e) : ptr(e) {}
+ // VERIFY, to ensure that there are no leaks at shutdown
+ ~scoped_webrtc_engine() { if (ptr) VERIFY(VoiceEngine::Delete(ptr)); }
+ VoiceEngine* get() const { return ptr; }
+ private:
+ VoiceEngine* ptr;
+};
+
+// scoped_ptr class to handle obtaining and releasing WebRTC interface pointers
+template<class T>
+class scoped_rtc_ptr {
+ public:
+ explicit scoped_rtc_ptr(const scoped_webrtc_engine& e)
+ : ptr(T::GetInterface(e.get())) {}
+ template <typename E>
+ explicit scoped_rtc_ptr(E* engine) : ptr(T::GetInterface(engine)) {}
+ explicit scoped_rtc_ptr(T* p) : ptr(p) {}
+ ~scoped_rtc_ptr() { if (ptr) ptr->Release(); }
+ T* operator->() const { return ptr; }
+ T* get() const { return ptr; }
+
+ // Queries the engine for the wrapped type and releases the current pointer.
+ template <typename E>
+ void reset(E* engine) {
+ reset();
+ if (engine)
+ ptr = T::GetInterface(engine);
+ }
+
+ // Releases the current pointer.
+ void reset() {
+ if (ptr) {
+ ptr->Release();
+ ptr = NULL;
+ }
+ }
+
+ private:
+ T* ptr;
+};
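+
+// Illustrative usage (this is what RtcWrapper below does internally):
+//   scoped_webrtc_engine engine(VoiceEngine::Create());
+//   scoped_rtc_ptr<VoEBase> base(engine);
+//   base->Init();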
+
+// Utility class for aggregating the various WebRTC interfaces.
+// Fake implementations can also be injected for testing.
+class RtcWrapper {
+ public:
+ RtcWrapper()
+ : engine_(VoiceEngine::Create()),
+ base_(engine_), codec_(engine_), file_(engine_),
+ hw_(engine_), network_(engine_), rtp_(engine_),
+ sync_(engine_), volume_(engine_) {
+
+ }
+ RtcWrapper(VoEBase* base, VoECodec* codec, VoEFile* file,
+ VoEHardware* hw, VoENetwork* network,
+ VoERTP_RTCP* rtp, VoEVideoSync* sync,
+ VoEVolumeControl* volume)
+ : engine_(NULL),
+ base_(base), codec_(codec), file_(file),
+ hw_(hw), network_(network), rtp_(rtp),
+ sync_(sync), volume_(volume) {
+
+ }
+ virtual ~RtcWrapper() {}
+ VoiceEngine* engine() { return engine_.get(); }
+ VoEBase* base() { return base_.get(); }
+ VoECodec* codec() { return codec_.get(); }
+ VoEFile* file() { return file_.get(); }
+ VoEHardware* hw() { return hw_.get(); }
+ VoENetwork* network() { return network_.get(); }
+ VoERTP_RTCP* rtp() { return rtp_.get(); }
+ VoEVideoSync* sync() { return sync_.get(); }
+ VoEVolumeControl* volume() { return volume_.get(); }
+ int error() { return base_->LastError(); }
+
+ private:
+ scoped_webrtc_engine engine_;
+ scoped_rtc_ptr<VoEBase> base_;
+ scoped_rtc_ptr<VoECodec> codec_;
+ scoped_rtc_ptr<VoEFile> file_;
+ scoped_rtc_ptr<VoEHardware> hw_;
+ scoped_rtc_ptr<VoENetwork> network_;
+ scoped_rtc_ptr<VoERTP_RTCP> rtp_;
+ scoped_rtc_ptr<VoEVideoSync> sync_;
+ scoped_rtc_ptr<VoEVolumeControl> volume_;
+};
+}  // namespace webrtc
+
+#endif // TALK_APP_WEBRTC_VOICEENGINE_H_
diff --git a/third_party_mods/libjingle/source/talk/app/voicemediaengine.cc b/third_party_mods/libjingle/source/talk/app/voicemediaengine.cc
new file mode 100644
index 0000000..326009d
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/app/voicemediaengine.cc
@@ -0,0 +1,966 @@
+/*
+ * libjingle
+ * Copyright 2004--2011, Google Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ * 3. The name of the author may not be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
+ * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
+ * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
+ * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
+ * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "talk/app/voicemediaengine.h"
+
+#include <algorithm>
+#include <cstdio>
+#include <string>
+#include <vector>
+
+#ifdef PLATFORM_CHROMIUM
+#include "content/renderer/renderer_webrtc_audio_device_impl.h"
+#else
+#include "modules/audio_device/main/interface/audio_device.h"
+#endif
+#include "talk/base/base64.h"
+#include "talk/base/byteorder.h"
+#include "talk/base/common.h"
+#include "talk/base/helpers.h"
+#include "talk/base/logging.h"
+#include "talk/base/stringencode.h"
+
+namespace webrtc {
+
+static void LogMultiline(talk_base::LoggingSeverity sev, char* text) {
+ const char* delim = "\r\n";
+ for (char* tok = strtok(text, delim); tok; tok = strtok(NULL, delim)) {
+ LOG_V(sev) << tok;
+ }
+}
+
+// RtcVoiceEngine
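+// Codec preference list, ordered from most to least preferred. See
+// GetCodecPreference(), which assigns earlier entries a higher preference.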
+const RtcVoiceEngine::CodecPref RtcVoiceEngine::kCodecPrefs[] = {
+ { "ISAC", 16000 },
+ { "ISAC", 32000 },
+ { "ISACLC", 16000 },
+ { "speex", 16000 },
+ { "IPCMWB", 16000 },
+ { "G722", 16000 },
+ { "iLBC", 8000 },
+ { "speex", 8000 },
+ { "GSM", 8000 },
+ { "EG711U", 8000 },
+ { "EG711A", 8000 },
+ { "PCMU", 8000 },
+ { "PCMA", 8000 },
+ { "CN", 32000 },
+ { "CN", 16000 },
+ { "CN", 8000 },
+ { "red", 8000 },
+ { "telephone-event", 8000 },
+};
+
+RtcVoiceEngine::RtcVoiceEngine()
+ : rtc_wrapper_(new RtcWrapper()),
+ log_level_(kDefaultLogSeverity),
+ adm_(NULL) {
+ Construct();
+}
+
+RtcVoiceEngine::RtcVoiceEngine(RtcWrapper* rtc_wrapper)
+ : rtc_wrapper_(rtc_wrapper),
+ log_level_(kDefaultLogSeverity),
+ adm_(NULL) {
+ Construct();
+}
+
+void RtcVoiceEngine::Construct() {
+ LOG(INFO) << "RtcVoiceEngine::RtcVoiceEngine";
+ ApplyLogging();
+
+ if (rtc_wrapper_->base()->RegisterVoiceEngineObserver(*this) == -1) {
+ LOG_RTCERR0(RegisterVoiceEngineObserver);
+ }
+
+ // Load our audio codec list
+ LOG(INFO) << "WebRTC VoiceEngine codecs:";
+ int ncodecs = rtc_wrapper_->codec()->NumOfCodecs();
+ for (int i = 0; i < ncodecs; ++i) {
+ CodecInst gcodec;
+ if (rtc_wrapper_->codec()->GetCodec(i, gcodec) >= 0) {
+ int pref = GetCodecPreference(gcodec.plname, gcodec.plfreq);
+ if (pref != -1) {
+ if (gcodec.rate == -1) gcodec.rate = 0;
+ cricket::AudioCodec codec(gcodec.pltype, gcodec.plname, gcodec.plfreq,
+ gcodec.rate, gcodec.channels, pref);
+        LOG(INFO) << gcodec.plname << "/" << gcodec.plfreq << "/"
+                  << gcodec.channels << " " << gcodec.pltype;
+ codecs_.push_back(codec);
+ }
+ }
+ }
+ // Make sure they are in local preference order
+ std::sort(codecs_.begin(), codecs_.end(), &cricket::AudioCodec::Preferable);
+}
+
+RtcVoiceEngine::~RtcVoiceEngine() {
+ LOG(INFO) << "RtcVoiceEngine::~RtcVoiceEngine";
+ if (rtc_wrapper_->base()->DeRegisterVoiceEngineObserver() == -1) {
+ LOG_RTCERR0(DeRegisterVoiceEngineObserver);
+ }
+ rtc_wrapper_.reset();
+ if (adm_) {
+ AudioDeviceModule::Destroy(adm_);
+ adm_ = NULL;
+ }
+}
+
+bool RtcVoiceEngine::Init() {
+ LOG(INFO) << "RtcVoiceEngine::Init";
+ bool res = InitInternal();
+ if (res) {
+ LOG(INFO) << "RtcVoiceEngine::Init Done!";
+ } else {
+ LOG(LERROR) << "RtcVoiceEngine::Init failed";
+ Terminate();
+ }
+ return res;
+}
+
+bool RtcVoiceEngine::InitInternal() {
+ // Temporarily turn logging level up for the Init call
+ int old_level = log_level_;
+ log_level_ = talk_base::_min(log_level_,
+ static_cast<int>(talk_base::INFO));
+ ApplyLogging();
+
+ if (!adm_) {
+#ifdef PLATFORM_CHROMIUM
+ adm_ = new RendererWebRtcAudioDeviceImpl(1440, 1440, 1, 1, 48000, 48000);
+#else
+ adm_ = AudioDeviceModule::Create(0);
+#endif
+
+ if (rtc_wrapper_->base()->RegisterAudioDeviceModule(*adm_) == -1) {
+ LOG_RTCERR0_EX(Init, rtc_wrapper_->error());
+ return false;
+ }
+ }
+
+ // Init WebRTC VoiceEngine, enabling AEC logging if specified in SetLogging.
+ if (rtc_wrapper_->base()->Init() == -1) {
+ LOG_RTCERR0_EX(Init, rtc_wrapper_->error());
+ return false;
+ }
+
+ // Restore the previous log level
+ log_level_ = old_level;
+ ApplyLogging();
+
+ // Log the WebRTC version info
+ char buffer[1024] = "";
+ rtc_wrapper_->base()->GetVersion(buffer);
+ LOG(INFO) << "WebRTC VoiceEngine Version:";
+ LogMultiline(talk_base::INFO, buffer);
+
+ // Turn on AEC and AGC by default.
+  if (!SetOptions(cricket::MediaEngine::ECHO_CANCELLATION |
+                  cricket::MediaEngine::AUTO_GAIN_CONTROL)) {
+ return false;
+ }
+
+ // Print our codec list again for the call diagnostic log
+ LOG(INFO) << "WebRTC VoiceEngine codecs:";
+ for (std::vector<cricket::AudioCodec>::const_iterator it = codecs_.begin();
+ it != codecs_.end(); ++it) {
+ LOG(INFO) << it->name << "/" << it->clockrate << "/"
+ << it->channels << " " << it->id;
+ }
+ return true;
+}
+
+bool RtcVoiceEngine::SetDevices(const cricket::Device* in_device,
+ const cricket::Device* out_device) {
+ LOG(INFO) << "RtcVoiceEngine::SetDevices";
+ // Currently we always use the default device, so do nothing here.
+ return true;
+}
+
+void RtcVoiceEngine::Terminate() {
+ LOG(INFO) << "RtcVoiceEngine::Terminate";
+
+ rtc_wrapper_->base()->Terminate();
+}
+
+int RtcVoiceEngine::GetCapabilities() {
+ return cricket::MediaEngine::AUDIO_SEND | cricket::MediaEngine::AUDIO_RECV;
+}
+
+cricket::VoiceMediaChannel *RtcVoiceEngine::CreateChannel() {
+ RtcVoiceMediaChannel* ch = new RtcVoiceMediaChannel(this);
+ if (!ch->valid()) {
+ delete ch;
+ ch = NULL;
+ }
+ return ch;
+}
+
+bool RtcVoiceEngine::SetOptions(int options) {
+
+ return true;
+}
+
+bool RtcVoiceEngine::FindAudioDeviceId(
+ bool is_input, const std::string& dev_name, int dev_id, int* rtc_id) {
+ return false;
+}
+
+bool RtcVoiceEngine::GetOutputVolume(int* level) {
+ unsigned int ulevel;
+ if (rtc_wrapper_->volume()->GetSpeakerVolume(ulevel) == -1) {
+ LOG_RTCERR1(GetSpeakerVolume, level);
+ return false;
+ }
+ *level = ulevel;
+ return true;
+}
+
+bool RtcVoiceEngine::SetOutputVolume(int level) {
+ ASSERT(level >= 0 && level <= 255);
+ if (rtc_wrapper_->volume()->SetSpeakerVolume(level) == -1) {
+ LOG_RTCERR1(SetSpeakerVolume, level);
+ return false;
+ }
+ return true;
+}
+
+int RtcVoiceEngine::GetInputLevel() {
+ unsigned int ulevel;
+ return (rtc_wrapper_->volume()->GetSpeechInputLevel(ulevel) != -1) ?
+ static_cast<int>(ulevel) : -1;
+}
+
+bool RtcVoiceEngine::SetLocalMonitor(bool enable) {
+ return true;
+}
+
+const std::vector<cricket::AudioCodec>& RtcVoiceEngine::codecs() {
+ return codecs_;
+}
+
+bool RtcVoiceEngine::FindCodec(const cricket::AudioCodec& in) {
+ return FindRtcCodec(in, NULL);
+}
+
+bool RtcVoiceEngine::FindRtcCodec(const cricket::AudioCodec& in, CodecInst* out) {
+ int ncodecs = rtc_wrapper_->codec()->NumOfCodecs();
+ for (int i = 0; i < ncodecs; ++i) {
+ CodecInst gcodec;
+ if (rtc_wrapper_->codec()->GetCodec(i, gcodec) >= 0) {
+ cricket::AudioCodec codec(gcodec.pltype, gcodec.plname,
+ gcodec.plfreq, gcodec.rate, gcodec.channels, 0);
+ if (codec.Matches(in)) {
+ if (out) {
+ // If the codec is VBR and an explicit rate is specified, use it.
+ if (in.bitrate != 0 && gcodec.rate == -1) {
+ gcodec.rate = in.bitrate;
+ }
+ *out = gcodec;
+ }
+ return true;
+ }
+ }
+ }
+ return false;
+}
+
+void RtcVoiceEngine::SetLogging(int min_sev, const char* filter) {
+ log_level_ = min_sev;
+
+ std::vector<std::string> opts;
+ talk_base::tokenize(filter, ' ', &opts);
+
+ // voice log level
+ ApplyLogging();
+}
+
+int RtcVoiceEngine::GetLastRtcError() {
+ return rtc_wrapper_->error();
+}
+
+void RtcVoiceEngine::ApplyLogging() {
+ int filter = 0;
+ switch (log_level_) {
+ case talk_base::INFO: filter |= kTraceAll; // fall through
+ case talk_base::WARNING: filter |= kTraceWarning; // fall through
+ case talk_base::LERROR: filter |= kTraceError | kTraceCritical;
+ }
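+  // TODO: The computed trace filter is not yet applied to WebRTC tracing.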
+}
+
+void RtcVoiceEngine::Print(const TraceLevel level,
+ const char* traceString, const int length) {
+ talk_base::LoggingSeverity sev = talk_base::INFO;
+ if (level == kTraceError || level == kTraceCritical)
+ sev = talk_base::LERROR;
+ else if (level == kTraceWarning)
+ sev = talk_base::WARNING;
+ else if (level == kTraceStateInfo)
+ sev = talk_base::INFO;
+
+ if (sev >= log_level_) {
+ // Skip past webrtc boilerplate prefix text
+ if (length <= 70) {
+ std::string msg(traceString, length);
+ LOG(LERROR) << "Malformed WebRTC log message: ";
+ LOG_V(sev) << msg;
+ } else {
+ std::string msg(traceString + 70, length - 71);
+ LOG_V(sev) << "VoE:" << msg;
+ }
+ }
+}
+
+void RtcVoiceEngine::CallbackOnError(const int err_code,
+ const int channel_num) {
+ talk_base::CritScope lock(&channels_cs_);
+ RtcVoiceMediaChannel* channel = NULL;
+ uint32 ssrc = 0;
+ LOG(WARNING) << "WebRTC error " << err_code << " reported on channel "
+ << channel_num << ".";
+ if (FindChannelAndSsrc(channel_num, &channel, &ssrc)) {
+ ASSERT(channel != NULL);
+ channel->OnError(ssrc, err_code);
+ } else {
+ LOG(LERROR) << "WebRTC channel " << channel_num
+ << " could not be found in the channel list when error reported.";
+ }
+}
+
+int RtcVoiceEngine::GetCodecPreference(const char *name, int clockrate) {
+ for (size_t i = 0; i < ARRAY_SIZE(kCodecPrefs); ++i) {
+ if ((strcmp(kCodecPrefs[i].name, name) == 0) &&
+ (kCodecPrefs[i].clockrate == clockrate))
+ return ARRAY_SIZE(kCodecPrefs) - i;
+ }
+ LOG(WARNING) << "Unexpected codec \"" << name << "/" << clockrate << "\"";
+ return -1;
+}
+
+bool RtcVoiceEngine::FindChannelAndSsrc(
+ int channel_num, RtcVoiceMediaChannel** channel, uint32* ssrc) const {
+ ASSERT(channel != NULL && ssrc != NULL);
+
+ *channel = NULL;
+ *ssrc = 0;
+ // Find corresponding channel and ssrc
+ for (ChannelList::const_iterator it = channels_.begin();
+ it != channels_.end(); ++it) {
+ ASSERT(*it != NULL);
+ if ((*it)->FindSsrc(channel_num, ssrc)) {
+ *channel = *it;
+ return true;
+ }
+ }
+
+ return false;
+}
+
+void RtcVoiceEngine::RegisterChannel(RtcVoiceMediaChannel *channel) {
+ talk_base::CritScope lock(&channels_cs_);
+ channels_.push_back(channel);
+}
+
+void RtcVoiceEngine::UnregisterChannel(RtcVoiceMediaChannel *channel) {
+ talk_base::CritScope lock(&channels_cs_);
+ ChannelList::iterator i = std::find(channels_.begin(),
+ channels_.end(),
+ channel);
+ if (i != channels_.end()) {
+ channels_.erase(i);
+ }
+}
+
+// RtcVoiceMediaChannel
+RtcVoiceMediaChannel::RtcVoiceMediaChannel(RtcVoiceEngine *engine)
+ : RtcMediaChannel<cricket::VoiceMediaChannel, RtcVoiceEngine>(engine,
+ engine->webrtc()->base()->CreateChannel()),
+ channel_options_(0), playout_(false), send_(cricket::SEND_NOTHING) {
+ engine->RegisterChannel(this);
+ LOG(INFO) << "RtcVoiceMediaChannel::RtcVoiceMediaChannel "
+ << audio_channel();
+
+ // Register external transport
+ if (engine->webrtc()->network()->RegisterExternalTransport(
+ audio_channel(), *static_cast<Transport*>(this)) == -1) {
+ LOG_RTCERR2(RegisterExternalTransport, audio_channel(), this);
+ }
+
+ // Enable RTCP (for quality stats and feedback messages)
+ EnableRtcp(audio_channel());
+
+ // Create a random but nonzero send SSRC
+ SetSendSsrc(talk_base::CreateRandomNonZeroId());
+}
+
+RtcVoiceMediaChannel::~RtcVoiceMediaChannel() {
+ LOG(INFO) << "RtcVoiceMediaChannel::~RtcVoiceMediaChannel "
+ << audio_channel();
+
+ // DeRegister external transport
+ if (engine()->webrtc()->network()->DeRegisterExternalTransport(
+ audio_channel()) == -1) {
+ LOG_RTCERR1(DeRegisterExternalTransport, audio_channel());
+ }
+
+ // Unregister ourselves from the engine.
+ engine()->UnregisterChannel(this);
+ // Remove any remaining streams.
+ while (!mux_channels_.empty()) {
+ RemoveStream(mux_channels_.begin()->first);
+ }
+ // Delete the primary channel.
+ if (engine()->webrtc()->base()->DeleteChannel(audio_channel()) == -1) {
+ LOG_RTCERR1(DeleteChannel, audio_channel());
+ }
+}
+
+bool RtcVoiceMediaChannel::SetOptions(int flags) {
+ // Always accept flags that are unchanged.
+ if (channel_options_ == flags) {
+ return true;
+ }
+
+ // Reject new options if we're already sending.
+ if (send_ != cricket::SEND_NOTHING) {
+ return false;
+ }
+ // Save the options, to be interpreted where appropriate.
+ channel_options_ = flags;
+ return true;
+}
+
+bool RtcVoiceMediaChannel::SetRecvCodecs(
+ const std::vector<cricket::AudioCodec>& codecs) {
+ // Update our receive payload types to match what we offered. This only is
+ // an issue when a different entity (i.e. a server) is generating the offer
+ // for us.
+ bool ret = true;
+ for (std::vector<cricket::AudioCodec>::const_iterator i = codecs.begin();
+ i != codecs.end() && ret; ++i) {
+ CodecInst gcodec;
+ if (engine()->FindRtcCodec(*i, &gcodec)) {
+ if (gcodec.pltype != i->id) {
+ LOG(INFO) << "Updating payload type for " << gcodec.plname
+ << " from " << gcodec.pltype << " to " << i->id;
+ gcodec.pltype = i->id;
+ if (engine()->webrtc()->codec()->SetRecPayloadType(
+ audio_channel(), gcodec) == -1) {
+ LOG_RTCERR1(SetRecPayloadType, audio_channel());
+ ret = false;
+ }
+ }
+ } else {
+ LOG(WARNING) << "Unknown codec " << i->name;
+ ret = false;
+ }
+ }
+
+ return ret;
+}
+
+bool RtcVoiceMediaChannel::SetSendCodecs(
+ const std::vector<cricket::AudioCodec>& codecs) {
+ bool first = true;
+ CodecInst send_codec;
+ memset(&send_codec, 0, sizeof(send_codec));
+
+ for (std::vector<cricket::AudioCodec>::const_iterator i = codecs.begin();
+ i != codecs.end(); ++i) {
+ CodecInst gcodec;
+ if (!engine()->FindRtcCodec(*i, &gcodec))
+ continue;
+
+ // We'll use the first codec in the list to actually send audio data.
+ // Be sure to use the payload type requested by the remote side.
+ if (first) {
+ send_codec = gcodec;
+ send_codec.pltype = i->id;
+ first = false;
+ }
+ }
+
+ // If we're being asked to set an empty list of codecs, due to a buggy client,
+ // choose the most common format: PCMU
+ if (first) {
+ LOG(WARNING) << "Received empty list of codecs; using PCMU/8000";
+ cricket::AudioCodec codec(0, "PCMU", 8000, 0, 1, 0);
+ engine()->FindRtcCodec(codec, &send_codec);
+ }
+
+ // Set the codec.
+ LOG(INFO) << "Selected voice codec " << send_codec.plname
+ << "/" << send_codec.plfreq;
+ if (engine()->webrtc()->codec()->SetSendCodec(audio_channel(),
+ send_codec) == -1) {
+ LOG_RTCERR1(SetSendCodec, audio_channel());
+ return false;
+ }
+
+ return true;
+}
+
+bool RtcVoiceMediaChannel::SetPlayout(bool playout) {
+ if (playout_ == playout) {
+ return true;
+ }
+
+ bool result = true;
+ if (mux_channels_.empty()) {
+ // Only toggle the default channel if we don't have any other channels.
+ result = SetPlayout(audio_channel(), playout);
+ }
+ for (ChannelMap::iterator it = mux_channels_.begin();
+ it != mux_channels_.end() && result; ++it) {
+ if (!SetPlayout(it->second, playout)) {
+ LOG(LERROR) << "SetPlayout " << playout << " on channel " << it->second
+ << " failed";
+ result = false;
+ }
+ }
+
+ if (result) {
+ playout_ = playout;
+ }
+ return result;
+}
+
+bool RtcVoiceMediaChannel::GetPlayout() {
+ return playout_;
+}
+
+bool RtcVoiceMediaChannel::SetSend(cricket::SendFlags send) {
+ if (send_ == send) {
+ return true;
+ }
+
+ if (send == cricket::SEND_MICROPHONE) {
+ if (sequence_number() != -1) {
+ if (engine()->webrtc()->sync()->SetInitSequenceNumber(
+ audio_channel(), sequence_number() + 1) == -1) {
+ LOG_RTCERR2(SetInitSequenceNumber, audio_channel(),
+ sequence_number() + 1);
+ }
+ }
+ if (engine()->webrtc()->base()->StartSend(audio_channel()) == -1) {
+ LOG_RTCERR1(StartSend, audio_channel());
+ return false;
+ }
+ if (engine()->webrtc()->file()->StopPlayingFileAsMicrophone(
+ audio_channel()) == -1) {
+ LOG_RTCERR1(StopPlayingFileAsMicrophone, audio_channel());
+ return false;
+ }
+ } else { // SEND_NOTHING
+ if (engine()->webrtc()->base()->StopSend(audio_channel()) == -1) {
+ LOG_RTCERR1(StopSend, audio_channel());
+ }
+ }
+ send_ = send;
+ return true;
+}
+
+cricket::SendFlags RtcVoiceMediaChannel::GetSend() {
+ return send_;
+}
+
+bool RtcVoiceMediaChannel::AddStream(uint32 ssrc) {
+ talk_base::CritScope lock(&mux_channels_cs_);
+
+ if (mux_channels_.find(ssrc) != mux_channels_.end()) {
+ return false;
+ }
+
+ // Create a new channel for receiving audio data.
+ int channel = engine()->webrtc()->base()->CreateChannel();
+ if (channel == -1) {
+ LOG_RTCERR0(CreateChannel);
+ return false;
+ }
+
+ // Configure to use external transport, like our default channel.
+ if (engine()->webrtc()->network()->RegisterExternalTransport(
+ channel, *this) == -1) {
+    LOG_RTCERR2(RegisterExternalTransport, channel, this);
+ return false;
+ }
+
+ // Use the same SSRC as our default channel (so the RTCP reports are correct).
+ unsigned int send_ssrc;
+ VoERTP_RTCP* rtp = engine()->webrtc()->rtp();
+ if (rtp->GetLocalSSRC(audio_channel(), send_ssrc) == -1) {
+ LOG_RTCERR2(GetSendSSRC, channel, send_ssrc);
+ return false;
+ }
+ if (rtp->SetLocalSSRC(channel, send_ssrc) == -1) {
+ LOG_RTCERR2(SetSendSSRC, channel, send_ssrc);
+ return false;
+ }
+
+ if (mux_channels_.empty() && GetPlayout()) {
+ LOG(INFO) << "Disabling playback on the default voice channel";
+ SetPlayout(audio_channel(), false);
+ }
+
+ mux_channels_[ssrc] = channel;
+
+ LOG(INFO) << "New audio stream " << ssrc << " registered to WebRTC channel #"
+ << channel << ".";
+ return SetPlayout(channel, playout_);
+}
+
+bool RtcVoiceMediaChannel::RemoveStream(uint32 ssrc) {
+ talk_base::CritScope lock(&mux_channels_cs_);
+ ChannelMap::iterator it = mux_channels_.find(ssrc);
+
+ if (it != mux_channels_.end()) {
+ if (engine()->webrtc()->network()->DeRegisterExternalTransport(
+ it->second) == -1) {
+ LOG_RTCERR1(DeRegisterExternalTransport, it->second);
+ }
+
+ LOG(INFO) << "Removing audio stream " << ssrc << " with WebRTC channel #"
+ << it->second << ".";
+ if (engine()->webrtc()->base()->DeleteChannel(it->second) == -1) {
+      LOG_RTCERR1(DeleteChannel, it->second);
+ return false;
+ }
+
+ mux_channels_.erase(it);
+ if (mux_channels_.empty() && GetPlayout()) {
+ // The last stream was removed. We can now enable the default
+ // channel for new channels to be played out immediately without
+ // waiting for AddStream messages.
+      // TODO(oja): Does the default channel still have its CN state?
+ LOG(INFO) << "Enabling playback on the default voice channel";
+ SetPlayout(audio_channel(), true);
+ }
+ }
+ return true;
+}
+
+bool RtcVoiceMediaChannel::GetActiveStreams(cricket::AudioInfo::StreamList* actives) {
+ actives->clear();
+ for (ChannelMap::iterator it = mux_channels_.begin();
+ it != mux_channels_.end(); ++it) {
+ int level = GetOutputLevel(it->second);
+ if (level > 0) {
+ actives->push_back(std::make_pair(it->first, level));
+ }
+ }
+ return true;
+}
+
+int RtcVoiceMediaChannel::GetOutputLevel() {
+ // return the highest output level of all streams
+ int highest = GetOutputLevel(audio_channel());
+ for (ChannelMap::iterator it = mux_channels_.begin();
+ it != mux_channels_.end(); ++it) {
+ int level = GetOutputLevel(it->second);
+ highest = talk_base::_max(level, highest);
+ }
+ return highest;
+}
+
+bool RtcVoiceMediaChannel::SetRingbackTone(const char *buf, int len) {
+ return true;
+}
+
+bool RtcVoiceMediaChannel::PlayRingbackTone(uint32 ssrc, bool play, bool loop) {
+ return true;
+}
+
+bool RtcVoiceMediaChannel::PlayRingbackTone(bool play, bool loop) {
+ return true;
+}
+
+bool RtcVoiceMediaChannel::PressDTMF(int event, bool playout) {
+ return true;
+}
+
+void RtcVoiceMediaChannel::OnPacketReceived(talk_base::Buffer* packet) {
+ // Pick which channel to send this packet to. If this packet doesn't match
+ // any multiplexed streams, just send it to the default channel. Otherwise,
+ // send it to the specific decoder instance for that stream.
+ int which_channel = GetChannel(
+ ParseSsrc(packet->data(), packet->length(), false));
+ if (which_channel == -1) {
+ which_channel = audio_channel();
+ }
+
+ engine()->webrtc()->network()->ReceivedRTPPacket(which_channel,
+ packet->data(),
+ packet->length());
+}
+
+void RtcVoiceMediaChannel::OnRtcpReceived(talk_base::Buffer* packet) {
+ // See above.
+ int which_channel = GetChannel(
+ ParseSsrc(packet->data(), packet->length(), true));
+ if (which_channel == -1) {
+ which_channel = audio_channel();
+ }
+
+ engine()->webrtc()->network()->ReceivedRTCPPacket(which_channel,
+ packet->data(),
+ packet->length());
+}
+
+void RtcVoiceMediaChannel::SetSendSsrc(uint32 ssrc) {
+ if (engine()->webrtc()->rtp()->SetLocalSSRC(audio_channel(), ssrc)
+ == -1) {
+ LOG_RTCERR2(SetSendSSRC, audio_channel(), ssrc);
+ }
+}
+
+bool RtcVoiceMediaChannel::SetRtcpCName(const std::string& cname) {
+ if (engine()->webrtc()->rtp()->SetRTCP_CNAME(audio_channel(),
+ cname.c_str()) == -1) {
+ LOG_RTCERR2(SetRTCP_CNAME, audio_channel(), cname);
+ return false;
+ }
+ return true;
+}
+
+bool RtcVoiceMediaChannel::Mute(bool muted) {
+ if (engine()->webrtc()->volume()->SetInputMute(audio_channel(),
+ muted) == -1) {
+ LOG_RTCERR2(SetInputMute, audio_channel(), muted);
+ return false;
+ }
+ return true;
+}
+
+bool RtcVoiceMediaChannel::GetStats(cricket::VoiceMediaInfo* info) {
+ CallStatistics cs;
+ unsigned int ssrc;
+ CodecInst codec;
+ unsigned int level;
+
+ // Fill in the sender info, based on what we know, and what the
+ // remote side told us it got from its RTCP report.
+ cricket::VoiceSenderInfo sinfo;
+ memset(&sinfo, 0, sizeof(sinfo));
+
+ // Data we obtain locally.
+ memset(&cs, 0, sizeof(cs));
+ if (engine()->webrtc()->rtp()->GetRTCPStatistics(
+ audio_channel(), cs) == -1 ||
+      engine()->webrtc()->rtp()->GetLocalSSRC(audio_channel(), ssrc) == -1) {
+ return false;
+ }
+
+ sinfo.ssrc = ssrc;
+ sinfo.bytes_sent = cs.bytesSent;
+ sinfo.packets_sent = cs.packetsSent;
+ // RTT isn't known until a RTCP report is received. Until then, WebRTC
+ // returns 0 to indicate an error value.
+ sinfo.rtt_ms = (cs.rttMs > 0) ? cs.rttMs : -1;
+
+ // Data from the last remote RTCP report.
+ unsigned int ntp_high, ntp_low, timestamp, ptimestamp, jitter;
+ unsigned short loss; // NOLINT
+ if (engine()->webrtc()->rtp()->GetRemoteRTCPData(audio_channel(),
+ ntp_high, ntp_low, timestamp, ptimestamp, &jitter, &loss) != -1 &&
+ engine()->webrtc()->codec()->GetSendCodec(audio_channel(),
+ codec) != -1) {
+ // Convert Q8 to floating point.
+ sinfo.fraction_lost = static_cast<float>(loss) / (1 << 8);
+ // Convert samples to milliseconds.
+ if (codec.plfreq / 1000 > 0) {
+ sinfo.jitter_ms = jitter / (codec.plfreq / 1000);
+ }
+ } else {
+ sinfo.fraction_lost = -1;
+ sinfo.jitter_ms = -1;
+ }
+
+ sinfo.packets_lost = -1;
+ sinfo.ext_seqnum = -1;
+
+ // Local speech level.
+ sinfo.audio_level = (engine()->webrtc()->volume()->
+ GetSpeechInputLevelFullRange(level) != -1) ? level : -1;
+ info->senders.push_back(sinfo);
+
+ // Build the list of receivers, one for each mux channel, or 1 in a 1:1 call.
+ std::vector<int> channels;
+ for (ChannelMap::const_iterator it = mux_channels_.begin();
+ it != mux_channels_.end(); ++it) {
+ channels.push_back(it->second);
+ }
+ if (channels.empty()) {
+ channels.push_back(audio_channel());
+ }
+
+ // Get the SSRC and stats for each receiver, based on our own calculations.
+ for (std::vector<int>::const_iterator it = channels.begin();
+ it != channels.end(); ++it) {
+ memset(&cs, 0, sizeof(cs));
+ if (engine()->webrtc()->rtp()->GetRemoteSSRC(*it, ssrc) != -1 &&
+ engine()->webrtc()->rtp()->GetRTCPStatistics(*it, cs) != -1 &&
+ engine()->webrtc()->codec()->GetRecCodec(*it, codec) != -1) {
+ cricket::VoiceReceiverInfo rinfo;
+ memset(&rinfo, 0, sizeof(rinfo));
+ rinfo.ssrc = ssrc;
+ rinfo.bytes_rcvd = cs.bytesReceived;
+ rinfo.packets_rcvd = cs.packetsReceived;
+ // The next four fields are from the most recently sent RTCP report.
+ // Convert Q8 to floating point.
+ rinfo.fraction_lost = static_cast<float>(cs.fractionLost) / (1 << 8);
+ rinfo.packets_lost = cs.cumulativeLost;
+ rinfo.ext_seqnum = cs.extendedMax;
+ // Convert samples to milliseconds.
+ if (codec.plfreq / 1000 > 0) {
+ rinfo.jitter_ms = cs.jitterSamples / (codec.plfreq / 1000);
+ }
+ // Get speech level.
+ rinfo.audio_level = (engine()->webrtc()->volume()->
+ GetSpeechOutputLevelFullRange(*it, level) != -1) ? level : -1;
+ info->receivers.push_back(rinfo);
+ }
+ }
+
+ return true;
+}
+
+void RtcVoiceMediaChannel::GetLastMediaError(
+ uint32* ssrc, VoiceMediaChannel::Error* error) {
+ ASSERT(ssrc != NULL);
+ ASSERT(error != NULL);
+ FindSsrc(audio_channel(), ssrc);
+ *error = WebRTCErrorToChannelError(GetLastRtcError());
+}
+
+bool RtcVoiceMediaChannel::FindSsrc(int channel_num, uint32* ssrc) {
+ talk_base::CritScope lock(&mux_channels_cs_);
+ ASSERT(ssrc != NULL);
+ if (channel_num == audio_channel()) {
+ unsigned local_ssrc = 0;
+ // This is a sending channel.
+ if (engine()->webrtc()->rtp()->GetLocalSSRC(
+ channel_num, local_ssrc) != -1) {
+ *ssrc = local_ssrc;
+ }
+ return true;
+ } else {
+ // Check whether this is a receiving channel.
+ for (ChannelMap::const_iterator it = mux_channels_.begin();
+ it != mux_channels_.end(); ++it) {
+ if (it->second == channel_num) {
+ *ssrc = it->first;
+ return true;
+ }
+ }
+ }
+ return false;
+}
+
+void RtcVoiceMediaChannel::OnError(uint32 ssrc, int error) {
+ SignalMediaError(ssrc, WebRTCErrorToChannelError(error));
+}
+
+int RtcVoiceMediaChannel::GetChannel(uint32 ssrc) {
+ ChannelMap::iterator it = mux_channels_.find(ssrc);
+ return (it != mux_channels_.end()) ? it->second : -1;
+}
+
+int RtcVoiceMediaChannel::GetOutputLevel(int channel) {
+ unsigned int ulevel;
+ int ret =
+ engine()->webrtc()->volume()->GetSpeechOutputLevel(channel, ulevel);
+ return (ret == 0) ? static_cast<int>(ulevel) : -1;
+}
+
+bool RtcVoiceMediaChannel::EnableRtcp(int channel) {
+ if (engine()->webrtc()->rtp()->SetRTCPStatus(channel, true) == -1) {
+    LOG_RTCERR2(SetRTCPStatus, channel, 1);
+ return false;
+ }
+ return true;
+}
+
+bool RtcVoiceMediaChannel::SetPlayout(int channel, bool playout) {
+ if (playout) {
+ LOG(INFO) << "Starting playout for channel #" << channel;
+ if (engine()->webrtc()->base()->StartPlayout(channel) == -1) {
+ LOG_RTCERR1(StartPlayout, channel);
+ return false;
+ }
+ } else {
+ LOG(INFO) << "Stopping playout for channel #" << channel;
+ engine()->webrtc()->base()->StopPlayout(channel);
+ }
+ return true;
+}
+
+uint32 RtcVoiceMediaChannel::ParseSsrc(const void* data, size_t len,
+ bool rtcp) {
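+  // The SSRC is at byte offset 8 in the fixed RTP header and at offset 4 in an
+  // RTCP sender/receiver report (right after the common RTCP header).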
+ size_t ssrc_pos = (!rtcp) ? 8 : 4;
+ uint32 ssrc = 0;
+ if (len >= (ssrc_pos + sizeof(ssrc))) {
+ ssrc = talk_base::GetBE32(static_cast<const char*>(data) + ssrc_pos);
+ }
+ return ssrc;
+}
+
+// Convert WebRTC error code into VoiceMediaChannel::Error enum.
+cricket::VoiceMediaChannel::Error RtcVoiceMediaChannel::WebRTCErrorToChannelError(
+ int err_code) {
+ switch (err_code) {
+ case 0:
+ return ERROR_NONE;
+ case VE_CANNOT_START_RECORDING:
+ case VE_MIC_VOL_ERROR:
+ case VE_GET_MIC_VOL_ERROR:
+ case VE_CANNOT_ACCESS_MIC_VOL:
+ return ERROR_REC_DEVICE_OPEN_FAILED;
+ case VE_SATURATION_WARNING:
+ return ERROR_REC_DEVICE_SATURATION;
+ case VE_REC_DEVICE_REMOVED:
+ return ERROR_REC_DEVICE_REMOVED;
+ case VE_RUNTIME_REC_WARNING:
+ case VE_RUNTIME_REC_ERROR:
+ return ERROR_REC_RUNTIME_ERROR;
+ case VE_CANNOT_START_PLAYOUT:
+ case VE_SPEAKER_VOL_ERROR:
+ case VE_GET_SPEAKER_VOL_ERROR:
+ case VE_CANNOT_ACCESS_SPEAKER_VOL:
+ return ERROR_PLAY_DEVICE_OPEN_FAILED;
+ case VE_RUNTIME_PLAY_WARNING:
+ case VE_RUNTIME_PLAY_ERROR:
+ return ERROR_PLAY_RUNTIME_ERROR;
+ default:
+ return VoiceMediaChannel::ERROR_OTHER;
+ }
+}
+
+} // namespace webrtc
+
diff --git a/third_party_mods/libjingle/source/talk/app/voicemediaengine.h b/third_party_mods/libjingle/source/talk/app/voicemediaengine.h
new file mode 100644
index 0000000..639d91f
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/app/voicemediaengine.h
@@ -0,0 +1,244 @@
+/*
+ * libjingle
+ * Copyright 2004--2011, Google Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ * 3. The name of the author may not be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
+ * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
+ * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
+ * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
+ * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef TALK_APP_WEBRTC_AUDIOMEDIAENGINE_H_
+#define TALK_APP_WEBRTC_AUDIOMEDIAENGINE_H_
+
+#include <map>
+#include <string>
+#include <vector>
+
+#include "talk/base/buffer.h"
+#include "talk/base/byteorder.h"
+#include "talk/base/logging.h"
+#include "talk/base/scoped_ptr.h"
+#include "talk/base/stream.h"
+#include "talk/session/phone/channel.h"
+#include "talk/session/phone/mediaengine.h"
+#include "talk/session/phone/rtputils.h"
+#include "talk/app/voiceengine.h"
+
+namespace cricket {
+class SoundclipMedia;
+class VoiceMediaChannel;
+}
+namespace webrtc {
+
+// MonitorStream is used to monitor a stream coming from WebRTC.
+// For now we just dump the data.
+class MonitorStream : public OutStream {
+ virtual bool Write(const void *buf, int len) {
+ return true;
+ }
+};
+
+class AudioDeviceModule;
+class RtcVoiceMediaChannel;
+
+// RtcVoiceEngine is a class to be used with CompositeMediaEngine.
+// It uses the WebRTC VoiceEngine library for audio handling.
+class RtcVoiceEngine
+ : public VoiceEngineObserver,
+ public TraceCallback {
+ public:
+ RtcVoiceEngine(); // NOLINT
+ // Dependency injection for testing.
+ explicit RtcVoiceEngine(RtcWrapper* rtc_wrapper);
+ ~RtcVoiceEngine();
+ bool Init();
+ void Terminate();
+
+ int GetCapabilities();
+ cricket::VoiceMediaChannel* CreateChannel();
+ cricket::SoundclipMedia* CreateSoundclip() { return NULL; }
+ bool SetDevices(const cricket::Device* in_device,
+ const cricket::Device* out_device);
+ bool SetOptions(int options);
+ bool GetOutputVolume(int* level);
+ bool SetOutputVolume(int level);
+ int GetInputLevel();
+ bool SetLocalMonitor(bool enable);
+
+ const std::vector<cricket::AudioCodec>& codecs();
+ bool FindCodec(const cricket::AudioCodec& codec);
+ bool FindRtcCodec(const cricket::AudioCodec& codec, CodecInst* gcodec);
+
+ void SetLogging(int min_sev, const char* filter);
+
+ // For tracking WebRTC channels. Needed because we have to pause them
+ // all when switching devices.
+ // May only be called by RtcVoiceMediaChannel.
+ void RegisterChannel(RtcVoiceMediaChannel *channel);
+ void UnregisterChannel(RtcVoiceMediaChannel *channel);
+
+ RtcWrapper* webrtc() { return rtc_wrapper_.get(); }
+ int GetLastRtcError();
+
+ private:
+ typedef std::vector<RtcVoiceMediaChannel *> ChannelList;
+
+ struct CodecPref {
+ const char* name;
+ int clockrate;
+ };
+
+ void Construct();
+ bool InitInternal();
+ void ApplyLogging();
+ virtual void Print(const TraceLevel level,
+ const char* traceString, const int length);
+ virtual void CallbackOnError(const int errCode, const int channel);
+ static int GetCodecPreference(const char *name, int clockrate);
+ // Given the device type, name, and id, find WebRTC's device id. Return true and
+ // set the output parameter rtc_id if successful.
+ bool FindAudioDeviceId(
+ bool is_input, const std::string& dev_name, int dev_id, int* rtc_id);
+ bool FindChannelAndSsrc(int channel_num,
+ RtcVoiceMediaChannel** channel,
+ uint32* ssrc) const;
+
+ static const int kDefaultLogSeverity = talk_base::LS_WARNING;
+ static const CodecPref kCodecPrefs[];
+
+ // The primary instance of WebRTC VoiceEngine.
+ talk_base::scoped_ptr<RtcWrapper> rtc_wrapper_;
+ int log_level_;
+ std::vector<cricket::AudioCodec> codecs_;
+ talk_base::scoped_ptr<MonitorStream> monitor_;
+ // TODO: Can't use scoped_ptr here since ~AudioDeviceModule is protected.
+ AudioDeviceModule* adm_;
+ ChannelList channels_;
+ talk_base::CriticalSection channels_cs_;
+};
+
+// RtcMediaChannel is a class that implements the common WebRTC channel
+// functionality.
+template <class T, class E>
+class RtcMediaChannel : public T, public Transport {
+ public:
+ RtcMediaChannel(E *engine, int channel)
+ : engine_(engine), audio_channel_(channel), sequence_number_(-1) {}
+ E *engine() { return engine_; }
+ int audio_channel() const { return audio_channel_; }
+ bool valid() const { return audio_channel_ != -1; }
+ protected:
+ // implements Transport interface
+ virtual int SendPacket(int channel, const void *data, int len) {
+ if (!T::network_interface_) {
+ return -1;
+ }
+
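+    // Cache the RTP sequence number (bytes 2-3 of the fixed header) so that
+    // SetSend() can seed WebRTC's initial sequence number when sending resumes.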
+ const uint8* header = static_cast<const uint8*>(data);
+ sequence_number_ = talk_base::GetBE16(header + 2);
+
+ talk_base::Buffer packet(data, len, cricket::kMaxRtpPacketLen);
+ return T::network_interface_->SendPacket(&packet) ? len : -1;
+ }
+ virtual int SendRTCPPacket(int channel, const void *data, int len) {
+ if (!T::network_interface_) {
+ return -1;
+ }
+
+ talk_base::Buffer packet(data, len, cricket::kMaxRtpPacketLen);
+ return T::network_interface_->SendRtcp(&packet) ? len : -1;
+ }
+ int sequence_number() {
+ return sequence_number_;
+ }
+ private:
+ E *engine_;
+ int audio_channel_;
+ int sequence_number_;
+};
+
+// RtcVoiceMediaChannel is an implementation of VoiceMediaChannel that uses
+// WebRTC Voice Engine.
+class RtcVoiceMediaChannel
+ : public RtcMediaChannel<cricket::VoiceMediaChannel, RtcVoiceEngine> {
+ public:
+ explicit RtcVoiceMediaChannel(RtcVoiceEngine *engine);
+ virtual ~RtcVoiceMediaChannel();
+ virtual bool SetOptions(int options);
+ virtual bool SetRecvCodecs(const std::vector<cricket::AudioCodec> &codecs);
+ virtual bool SetSendCodecs(const std::vector<cricket::AudioCodec> &codecs);
+ virtual bool SetPlayout(bool playout);
+ bool GetPlayout();
+ virtual bool SetSend(cricket::SendFlags send);
+ cricket::SendFlags GetSend();
+ virtual bool AddStream(uint32 ssrc);
+ virtual bool RemoveStream(uint32 ssrc);
+ virtual bool GetActiveStreams(cricket::AudioInfo::StreamList* actives);
+ virtual int GetOutputLevel();
+
+ virtual bool SetRingbackTone(const char *buf, int len);
+ virtual bool PlayRingbackTone(uint32 ssrc, bool play, bool loop);
+ virtual bool PlayRingbackTone(bool play, bool loop);
+ virtual bool PressDTMF(int event, bool playout);
+
+ virtual void OnPacketReceived(talk_base::Buffer* packet);
+ virtual void OnRtcpReceived(talk_base::Buffer* packet);
+ virtual void SetSendSsrc(uint32 id);
+ virtual bool SetRtcpCName(const std::string& cname);
+ virtual bool Mute(bool mute);
+ virtual bool SetRecvRtpHeaderExtensions(
+ const std::vector<cricket::RtpHeaderExtension>& extensions) { return false; }
+ virtual bool SetSendRtpHeaderExtensions(
+ const std::vector<cricket::RtpHeaderExtension>& extensions) { return false; }
+ virtual bool SetSendBandwidth(bool autobw, int bps) { return false; }
+ virtual bool GetStats(cricket::VoiceMediaInfo* info);
+
+ virtual void GetLastMediaError(uint32* ssrc,
+ VoiceMediaChannel::Error* error);
+ bool FindSsrc(int channel_num, uint32* ssrc);
+ void OnError(uint32 ssrc, int error);
+ virtual int GetMediaChannelId() { return audio_channel(); }
+
+ protected:
+ int GetLastRtcError() { return engine()->GetLastRtcError(); }
+ int GetChannel(uint32 ssrc);
+ int GetOutputLevel(int channel);
+ bool EnableRtcp(int channel);
+ bool SetPlayout(int channel, bool playout);
+ static uint32 ParseSsrc(const void* data, size_t len, bool rtcp);
+ static Error WebRTCErrorToChannelError(int err_code);
+
+ private:
+
+ typedef std::map<uint32, int> ChannelMap;
+ int channel_options_;
+ bool playout_;
+ cricket::SendFlags send_;
+ ChannelMap mux_channels_; // for multiple sources
+ // mux_channels_ can be read from WebRTC callback thread. Accesses off the
+ // WebRTC thread must be synchronized with edits on the worker thread. Reads
+ // on the worker thread are ok.
+ mutable talk_base::CriticalSection mux_channels_cs_;
+};
+
+} // namespace webrtc
+
+#endif // TALK_APP_WEBRTC_AUDIOMEDIAENGINE_H_
diff --git a/third_party_mods/libjingle/source/talk/app/webrtc_json.cc b/third_party_mods/libjingle/source/talk/app/webrtc_json.cc
new file mode 100644
index 0000000..c5a6781
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/app/webrtc_json.cc
@@ -0,0 +1,434 @@
+/*
+ * libjingle
+ * Copyright 2004--2011, Google Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ * 3. The name of the author may not be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
+ * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
+ * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
+ * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
+ * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+// This file contains all of the JSON helper methods.
+#include "talk/app/webrtc_json.h"
+
+#include <stdio.h>
+#include <string>
+
+#include "talk/base/json.h"
+#include "talk/base/logging.h"
+#include "talk/session/phone/mediasessionclient.h"
+#include "talk/session/phone/codec.h"
+#include "json/json.h"
+
+namespace webrtc {
+
+static const int kIceComponent = 1;
+static const int kIceFoundation = 1;
+
+bool GetConnectionMediator(const Json::Value& value,
+                           std::string& connectionMediator) {
+  if (value.type() != Json::objectValue && value.type() != Json::nullValue) {
+    LOG(LS_WARNING) << "Failed to parse connection mediator value";
+    return false;
+  }
+
+  if (!GetStringFromJsonObject(value, "connectionmediator",
+                               &connectionMediator)) {
+ LOG(LS_WARNING) << "Failed to parse JSON for value: "
+ << value.toStyledString();
+ return false;
+ }
+ return true;
+}
+
+bool GetStunServer(const Json::Value& value, StunServiceDetails& stunServer) {
+ if (value.type() != Json::objectValue && value.type() != Json::nullValue) {
+    LOG(LS_WARNING) << "Failed to parse stun values";
+ return false;
+ }
+
+ Json::Value stun;
+ if (GetValueFromJsonObject(value, "stun_service", &stun)) {
+ if (stun.type() == Json::objectValue) {
+ if (!GetStringFromJsonObject(stun, "host", &stunServer.host) ||
+ !GetStringFromJsonObject(stun, "service", &stunServer.service) ||
+ !GetStringFromJsonObject(stun, "protocol", &stunServer.protocol)) {
+ LOG(LS_WARNING) << "Failed to parse JSON value: "
+ << value.toStyledString();
+ return false;
+ }
+ } else {
+ return false;
+ }
+ }
+  return true;
+}
+
+bool GetTurnServer(const Json::Value& value, std::string& turnServer) {
+ if (value.type() != Json::objectValue && value.type() != Json::nullValue) {
+    LOG(LS_WARNING) << "Failed to parse turn values";
+ return false;
+ }
+
+ Json::Value turn;
+ if (GetValueFromJsonObject(value, "turn_service", &turn)) {
+ if (turn.type() == Json::objectValue) {
+ if (!GetStringFromJsonObject(turn, "host", &turnServer)) {
+ LOG(LS_WARNING) << "Failed to parse JSON value: "
+ << value.toStyledString();
+ return false;
+ }
+ } else {
+ return false;
+ }
+ }
+ return true;
+}
+
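+// Builds a JSON signaling message (a "media" array with one entry per audio
+// and video content) from the session description and candidates, and writes
+// the serialized result to |signaling_message|.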
+bool GetJSONSignalingMessage(
+ const cricket::SessionDescription* sdp,
+ const std::vector<cricket::Candidate>& candidates,
+ std::string* signaling_message) {
+ const cricket::ContentInfo* audio_content = GetFirstAudioContent(sdp);
+ const cricket::ContentInfo* video_content = GetFirstVideoContent(sdp);
+
+ std::vector<Json::Value> media;
+ if (audio_content) {
+ Json::Value value;
+ BuildMediaMessage(audio_content, candidates, false, value);
+ media.push_back(value);
+ }
+
+ if (video_content) {
+ Json::Value value;
+ BuildMediaMessage(video_content, candidates, true, value);
+ media.push_back(value);
+ }
+
+ Json::Value signal;
+ Append(signal, "media", media);
+
+ // now serialize
+ *signaling_message = Serialize(signal);
+ return true;
+}
+
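+// Fills |params| with the label, rtpmap, and candidate attributes for a
+// single audio or video content.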
+bool BuildMediaMessage(
+ const cricket::ContentInfo* content_info,
+ const std::vector<cricket::Candidate>& candidates,
+ bool video,
+ Json::Value& params) {
+
+ if (!content_info) {
+ return false;
+ }
+
+  if (video) {
+    Append(params, "label", 2);  // Video always uses label 2.
+  } else {
+    Append(params, "label", 1);  // Audio always uses label 1.
+  }
+ std::vector<Json::Value> rtpmap;
+
+ if (!BuildRtpMapParams(content_info, video, rtpmap)) {
+ return false;
+ }
+
+ Append(params, "rtpmap", rtpmap);
+
+ Json::Value attributes;
+// Append(attributes, "ice-pwd", candidates.front().password());
+// Append(attributes, "ice-ufrag", candidates.front().username());
+ std::vector<Json::Value> jcandidates;
+
+ if (!BuildAttributes(candidates, video, jcandidates)) {
+ return false;
+ }
+ Append(attributes, "candidate", jcandidates);
+ Append(params, "attributes", attributes);
+ return true;
+}
+
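+// Builds the "rtpmap" entries, one per codec, mapping the payload type id to
+// an "audio/<name>" or "video/<name>" codec string.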
+bool BuildRtpMapParams(const cricket::ContentInfo* content_info,
+ bool video,
+ std::vector<Json::Value>& rtpmap) {
+
+ if (!video) {
+ const cricket::AudioContentDescription* audio_offer =
+ static_cast<const cricket::AudioContentDescription*>(
+ content_info->description);
+
+
+ for (std::vector<cricket::AudioCodec>::const_iterator iter =
+ audio_offer->codecs().begin();
+ iter != audio_offer->codecs().end(); ++iter) {
+
+ Json::Value codec;
+ std::string codec_str = std::string("audio/").append(iter->name);
+ Append(codec, "codec", codec_str);
+ Json::Value codec_id;
+ Append(codec_id, talk_base::ToString(iter->id), codec);
+ rtpmap.push_back(codec_id);
+ }
+ } else {
+ const cricket::VideoContentDescription* video_offer =
+ static_cast<const cricket::VideoContentDescription*>(
+ content_info->description);
+
+
+ for (std::vector<cricket::VideoCodec>::const_iterator iter =
+ video_offer->codecs().begin();
+ iter != video_offer->codecs().end(); ++iter) {
+
+ Json::Value codec;
+ std::string codec_str = std::string("video/").append(iter->name);
+ Append(codec, "codec", codec_str);
+ Json::Value codec_id;
+ Append(codec_id, talk_base::ToString(iter->id), codec);
+ rtpmap.push_back(codec_id);
+ }
+ }
+ return true;
+}
+
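+// Converts the RTP candidates ("rtp" for audio, "video_rtp" for video) into
+// JSON candidate objects.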
+bool BuildAttributes(const std::vector<cricket::Candidate>& candidates,
+ bool video,
+ std::vector<Json::Value>& jcandidates) {
+
+ for (std::vector<cricket::Candidate>::const_iterator iter =
+ candidates.begin(); iter != candidates.end(); ++iter) {
+    if ((video && iter->name() == "video_rtp") ||
+        (!video && iter->name() == "rtp")) {
+ Json::Value candidate;
+ Append(candidate, "component", kIceComponent);
+ Append(candidate, "foundation", kIceFoundation);
+ Append(candidate, "generation", iter->generation());
+ Append(candidate, "proto", iter->protocol());
+ Append(candidate, "priority", iter->preference());
+ Append(candidate, "ip", iter->address().IPAsString());
+ Append(candidate, "port", iter->address().PortAsString());
+ Append(candidate, "type", iter->type());
+ Append(candidate, "name", iter->name());
+ Append(candidate, "network_name", iter->network_name());
+ Append(candidate, "username", iter->username());
+ Append(candidate, "password", iter->password());
+ jcandidates.push_back(candidate);
+ }
+ }
+ return true;
+}
+
+std::string Serialize(const Json::Value& value) {
+ Json::StyledWriter writer;
+ return writer.write(value);
+}
+
+bool Deserialize(const std::string& message, Json::Value& value) {
+ Json::Reader reader;
+ return reader.parse(message, value);
+}
+
+
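+// Parses a JSON signaling message into a newly allocated SessionDescription
+// (returned through |sdp| and owned by the caller) and a list of ICE
+// candidates.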
+bool ParseJSONSignalingMessage(const std::string& signaling_message,
+ cricket::SessionDescription*& sdp,
+ std::vector<cricket::Candidate>& candidates) {
+  ASSERT(!sdp);  // Expect this to be NULL.
+  // First deserialize the message.
+ Json::Value value;
+ if (!Deserialize(signaling_message, value)) {
+ return false;
+ }
+
+ // get media objects
+ std::vector<Json::Value> mlines = ReadValues(value, "media");
+ if (mlines.empty()) {
+ // no m-lines found
+ return false;
+ }
+
+ sdp = new cricket::SessionDescription();
+
+ // get codec information
+ for (size_t i = 0; i < mlines.size(); ++i) {
+ if (mlines[i]["label"].asInt() == 1) {
+ cricket::AudioContentDescription* audio_content =
+ new cricket::AudioContentDescription();
+ ParseAudioCodec(mlines[i], audio_content);
+ audio_content->SortCodecs();
+ sdp->AddContent(cricket::CN_AUDIO, cricket::NS_JINGLE_RTP, audio_content);
+ ParseICECandidates(mlines[i], candidates);
+
+ } else {
+ cricket::VideoContentDescription* video_content =
+ new cricket::VideoContentDescription();
+ ParseVideoCodec(mlines[i], video_content);
+ video_content->SortCodecs();
+ sdp->AddContent(cricket::CN_VIDEO, cricket::NS_JINGLE_RTP, video_content);
+ ParseICECandidates(mlines[i], candidates);
+ }
+ }
+ return true;
+}
+
+bool ParseAudioCodec(Json::Value value,
+ cricket::AudioContentDescription* content) {
+ std::vector<Json::Value> rtpmap(ReadValues(value, "rtpmap"));
+ if (rtpmap.empty())
+ return false;
+
+ for (size_t i = 0; i < rtpmap.size(); ++i) {
+ cricket::AudioCodec codec;
+ std::string pltype = rtpmap[i].begin().memberName();
+ talk_base::FromString(pltype, &codec.id);
+ Json::Value codec_info = rtpmap[i][pltype];
+ std::vector<std::string> tokens;
+ talk_base::split(codec_info["codec"].asString(), '/', &tokens);
+ codec.name = tokens[1];
+ content->AddCodec(codec);
+ }
+
+ return true;
+}
+
+bool ParseVideoCodec(Json::Value value,
+ cricket::VideoContentDescription* content) {
+ std::vector<Json::Value> rtpmap(ReadValues(value, "rtpmap"));
+ if (rtpmap.empty())
+ return false;
+
+ for (size_t i = 0; i < rtpmap.size(); ++i) {
+ cricket::VideoCodec codec;
+ std::string pltype = rtpmap[i].begin().memberName();
+ talk_base::FromString(pltype, &codec.id);
+ Json::Value codec_info = rtpmap[i][pltype];
+ std::vector<std::string> tokens;
+ talk_base::split(codec_info["codec"].asString(), '/', &tokens);
+ codec.name = tokens[1];
+ content->AddCodec(codec);
+ }
+ return true;
+}
+
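+// Reads the candidate objects under "attributes" and converts each one into a
+// cricket::Candidate.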
+bool ParseICECandidates(Json::Value& value,
+ std::vector<cricket::Candidate>& candidates) {
+ Json::Value attributes = ReadValue(value, "attributes");
+ std::string ice_pwd = ReadString(attributes, "ice-pwd");
+ std::string ice_ufrag = ReadString(attributes, "ice-ufrag");
+
+ std::vector<Json::Value> jcandidates = ReadValues(attributes, "candidate");
+ char buffer[64];
+ for (size_t i = 0; i < jcandidates.size(); ++i) {
+ cricket::Candidate cand;
+ std::string str;
+ str = ReadUInt(jcandidates[i], "generation");
+ cand.set_generation_str(str);
+ str = ReadString(jcandidates[i], "proto");
+ cand.set_protocol(str);
+ double priority = ReadDouble(jcandidates[i], "priority");
+#ifdef _DEBUG
+ double as_int = static_cast<int>(priority);
+ ASSERT(as_int == priority);
+#endif
+ sprintf(buffer, "%i", static_cast<int>(priority));
+ str = buffer;
+ cand.set_preference_str(str);
+ talk_base::SocketAddress addr;
+ str = ReadString(jcandidates[i], "ip");
+ addr.SetIP(str);
+ str = ReadString(jcandidates[i], "port");
+ int port; talk_base::FromString(str, &port);
+ addr.SetPort(port);
+ cand.set_address(addr);
+ str = ReadString(jcandidates[i], "type");
+ cand.set_type(str);
+ str = ReadString(jcandidates[i], "name");
+ cand.set_name(str);
+ str = ReadString(jcandidates[i], "network_name");
+ cand.set_network_name(str);
+ str = ReadString(jcandidates[i], "username");
+ cand.set_username(str);
+ str = ReadString(jcandidates[i], "password");
+ cand.set_password(str);
+ candidates.push_back(cand);
+ }
+ return true;
+}
+
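+// Helpers for reading arrays and typed values out of a Json::Value by key.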
+std::vector<Json::Value> ReadValues(
+ Json::Value& value, const std::string& key) {
+ std::vector<Json::Value> objects;
+ for (size_t i = 0; i < value[key].size(); ++i) {
+ objects.push_back(value[key][i]);
+ }
+ return objects;
+}
+
+Json::Value ReadValue(Json::Value& value, const std::string& key) {
+ return value[key];
+}
+
+std::string ReadString(Json::Value& value, const std::string& key) {
+ return value[key].asString();
+}
+
+uint32 ReadUInt(Json::Value& value, const std::string& key) {
+ return value[key].asUInt();
+}
+
+double ReadDouble(Json::Value& value, const std::string& key) {
+ return value[key].asDouble();
+}
+
+// Add values
+void Append(Json::Value& object, const std::string& key, bool value) {
+ object[key] = Json::Value(value);
+}
+
+void Append(Json::Value& object, const std::string& key, char* value) {
+  object[key] = Json::Value(value);
+}
+
+void Append(Json::Value& object, const std::string& key, double value) {
+  object[key] = Json::Value(value);
+}
+
+void Append(Json::Value& object, const std::string& key, float value) {
+  object[key] = Json::Value(value);
+}
+
+void Append(Json::Value& object, const std::string& key, int value) {
+  object[key] = Json::Value(value);
+}
+
+void Append(Json::Value& object, const std::string& key, std::string value) {
+  object[key] = Json::Value(value);
+}
+
+void Append(Json::Value& object, const std::string& key, uint32 value) {
+  object[key] = Json::Value(value);
+}
+
+void Append(Json::Value& object, const std::string& key, Json::Value value) {
+ object[key] = value;
+}
+
+void Append(Json::Value& object,
+            const std::string& key,
+            std::vector<Json::Value>& values) {
+ for (std::vector<Json::Value>::const_iterator iter = values.begin();
+ iter != values.end(); ++iter) {
+ object[key].append(*iter);
+ }
+}
+
+}  // namespace webrtc
diff --git a/third_party_mods/libjingle/source/talk/app/webrtc_json.h b/third_party_mods/libjingle/source/talk/app/webrtc_json.h
new file mode 100644
index 0000000..7ac57a6
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/app/webrtc_json.h
@@ -0,0 +1,116 @@
+/*
+ * libjingle
+ * Copyright 2004--2011, Google Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ * 3. The name of the author may not be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
+ * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
+ * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
+ * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
+ * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef TALK_APP_WEBRTC_WEBRTC_JSON_H_
+#define TALK_APP_WEBRTC_WEBRTC_JSON_H_
+
+#include <string>
+
+#include "json/json.h"
+#include "talk/session/phone/codec.h"
+#include "talk/p2p/base/candidate.h"
+
+namespace Json {
+class Value;
+}
+
+namespace cricket {
+class AudioContentDescription;
+class VideoContentDescription;
+struct ContentInfo;
+class SessionDescription;
+}
+struct StunServiceDetails {
+ std::string host;
+ std::string service;
+ std::string protocol;
+};
+
+namespace webrtc {
+
+bool GetConnectionMediator(const Json::Value& value,
+ std::string& connectionMediator);
+bool GetStunServer(const Json::Value& value, StunServiceDetails& stun);
+bool GetTurnServer(const Json::Value& value, std::string& turnServer);
+bool FromJsonToAVCodec(const Json::Value& value,
+ cricket::AudioContentDescription* audio,
+ cricket::VideoContentDescription* video);
+
+std::vector<Json::Value> ReadValues(Json::Value& value, const std::string& key);
+
+bool BuildMediaMessage(
+ const cricket::ContentInfo* content_info,
+ const std::vector<cricket::Candidate>& candidates,
+ bool video,
+ Json::Value& value);
+
+bool GetJSONSignalingMessage(
+ const cricket::SessionDescription* sdp,
+ const std::vector<cricket::Candidate>& candidates,
+ std::string* signaling_message);
+
+bool BuildRtpMapParams(
+ const cricket::ContentInfo* audio_offer,
+ bool video,
+ std::vector<Json::Value>& rtpmap);
+
+bool BuildAttributes(const std::vector<cricket::Candidate>& candidates,
+ bool video,
+ std::vector<Json::Value>& jcandidates);
+
+std::string Serialize(const Json::Value& value);
+bool Deserialize(const std::string& message, Json::Value& value);
+
+bool ParseJSONSignalingMessage(const std::string& signaling_message,
+ cricket::SessionDescription*& sdp,
+ std::vector<cricket::Candidate>& candidates);
+bool ParseAudioCodec(Json::Value value, cricket::AudioContentDescription* content);
+bool ParseVideoCodec(Json::Value value, cricket::VideoContentDescription* content);
+bool ParseICECandidates(Json::Value& value,
+ std::vector<cricket::Candidate>& candidates);
+Json::Value ReadValue(Json::Value& value, const std::string& key);
+std::string ReadString(Json::Value& value, const std::string& key);
+double ReadDouble(Json::Value& value, const std::string& key);
+uint32 ReadUInt(Json::Value& value, const std::string& key);
+
+// Add values
+void Append(Json::Value& object, const std::string& key, bool value);
+
+void Append(Json::Value& object, const std::string& key, char* value);
+void Append(Json::Value& object, const std::string& key, double value);
+void Append(Json::Value& object, const std::string& key, float value);
+void Append(Json::Value& object, const std::string& key, int value);
+void Append(Json::Value& object, const std::string& key, std::string value);
+void Append(Json::Value& object, const std::string& key, uint32 value);
+void Append(Json::Value& object, const std::string& key, Json::Value value);
+void Append(Json::Value& object,
+            const std::string& key,
+            std::vector<Json::Value>& values);
+
+}  // namespace webrtc
+
+
+#endif // TALK_APP_WEBRTC_WEBRTC_JSON_H_
diff --git a/third_party_mods/libjingle/source/talk/app/webrtc_json_unittest.cc b/third_party_mods/libjingle/source/talk/app/webrtc_json_unittest.cc
new file mode 100644
index 0000000..93aa972
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/app/webrtc_json_unittest.cc
@@ -0,0 +1,77 @@
+/*
+ * libjingle
+ * Copyright 2004--2011, Google Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ * 3. The name of the author may not be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
+ * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
+ * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
+ * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
+ * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <iostream>
+#include <string>
+
+#include "talk/base/gunit.h"
+#include "talk/app/webrtc_json.h"
+
+namespace webrtc {
+
+Json::Value JsonValueFromString(const std::string &json) {
+ Json::Reader reader;
+ Json::Value value;
+
+ EXPECT_TRUE(reader.parse(json, value, false));
+
+ return value;
+}
+
+class WebRTCJsonTest : public testing::Test {
+ public:
+ WebRTCJsonTest() {}
+ ~WebRTCJsonTest() {}
+};
+
+TEST_F(WebRTCJsonTest, TestParseConfig) {
+ Json::Value value(JsonValueFromString(
+      "{"
+ " \"connectionmediator\": \"https://somewhere.example.com/conneg\","
+ " \"stun_service\": { "
+ " \"host\" : \"stun.service.example.com\","
+ " \"service\" : \"stun\","
+ " \"protocol\" : \"udp\""
+ " }"
+ " }"));
+
+ std::string c;
+ EXPECT_TRUE(GetConnectionMediator(value, c));
+ std::cout << " --- connectionmediator --- : " << c << std::endl;
+
+ StunServiceDetails stun;
+ EXPECT_TRUE(GetStunServer(value, stun));
+ std::cout << " --- stun host --- : " << stun.host << std::endl;
+ std::cout << " --- stun service --- : " << stun.service << std::endl;
+ std::cout << " --- stun protocol --- : " << stun.protocol << std::endl;
+}
+
+TEST_F(WebRTCJsonTest, TestLocalBlob) {
+ EXPECT_TRUE(FromSessionDescriptionToJson());
+}
+
+}  // namespace webrtc
diff --git a/third_party_mods/libjingle/source/talk/app/webrtcchannelmanager.cc b/third_party_mods/libjingle/source/talk/app/webrtcchannelmanager.cc
new file mode 100644
index 0000000..8b624fd
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/app/webrtcchannelmanager.cc
@@ -0,0 +1,137 @@
+// Copyright 2011 Google Inc. All Rights Reserved.
+// Author: mallinath@google.com (Mallinath Bareddy)
+
+#include "talk/app/webrtcchannelmanager.h"
+
+namespace webrtc {
+
+struct VideoCaptureDeviceParams : public talk_base::MessageData {
+ VideoCaptureDeviceParams(const std::string& cam_device)
+ : cam_device(cam_device),
+ result(false) {}
+ const std::string cam_device;
+ bool result;
+};
+
+struct RenderParams : public talk_base::MessageData {
+ RenderParams(int channel_id,
+ void* window,
+ unsigned int zOrder,
+ float left,
+ float top,
+ float right,
+ float bottom)
+      : channel_id(channel_id),
+        window(window),
+        zOrder(zOrder),
+        left(left),
+        top(top),
+        right(right),
+        bottom(bottom) {}
+
+ int channel_id;
+ void* window;
+ unsigned int zOrder;
+ float left;
+ float top;
+ float right;
+ float bottom;
+ bool result;
+};
+
+bool WebRtcChannelManager::Init() {
+ return MaybeInit();
+}
+
+cricket::VoiceChannel* WebRtcChannelManager::CreateVoiceChannel(
+ cricket::BaseSession* s, const std::string& content_name, bool rtcp) {
+ return (MaybeInit()) ?
+ ChannelManager::CreateVoiceChannel(s, content_name, rtcp) : NULL;
+}
+
+cricket::VideoChannel* WebRtcChannelManager::CreateVideoChannel(
+ cricket::BaseSession* s, const std::string& content_name, bool rtcp,
+ cricket::VoiceChannel* vc) {
+ return (MaybeInit()) ?
+ ChannelManager::CreateVideoChannel(s, content_name, rtcp, vc) : NULL;
+}
+
+cricket::Soundclip* WebRtcChannelManager::CreateSoundclip() {
+ return (MaybeInit()) ? ChannelManager::CreateSoundclip() : NULL;
+}
+
+void WebRtcChannelManager::DestroyVoiceChannel(cricket::VoiceChannel* vc) {
+  ChannelManager::DestroyVoiceChannel(vc);
+  MaybeTerm();
+}
+
+void WebRtcChannelManager::DestroyVideoChannel(cricket::VideoChannel* vc) {
+  ChannelManager::DestroyVideoChannel(vc);
+  MaybeTerm();
+}
+
+void WebRtcChannelManager::DestroySoundclip(cricket::Soundclip* s) {
+ ChannelManager::DestroySoundclip(s);
+ MaybeTerm();
+}
+
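+// Initializes the underlying ChannelManager on first use.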
+bool WebRtcChannelManager::MaybeInit() {
+ bool ret = initialized();
+ if (!ret) {
+ ret = ChannelManager::Init();
+ }
+ return ret;
+}
+
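+// Terminates the ChannelManager once it has no channels left.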
+void WebRtcChannelManager::MaybeTerm() {
+ if (initialized() && !has_channels()) {
+ Terminate();
+ }
+}
+
+bool WebRtcChannelManager::SetVideoRenderer(int channel_id,
+ void* window,
+ unsigned int zOrder,
+ float left,
+ float top,
+ float right,
+ float bottom) {
+ if (MaybeInit()) {
+ RenderParams params(channel_id, window, zOrder, left, top, right, bottom);
+    return cricket::ChannelManager::Send(MSG_SETRTC_VIDEORENDERER, &params);
+ } else {
+ return false;
+ }
+}
+
+void WebRtcChannelManager::SetVideoRenderer_w(int channel_id,
+ void* window,
+ unsigned int zOrder,
+ float left,
+ float top,
+ float right,
+ float bottom) {
+ ASSERT(worker_thread() == talk_base::Thread::Current());
+ ASSERT(initialized());
+  media_engine()->SetVideoRenderer(channel_id, window, zOrder,
+                                   left, top, right, bottom);
+}
+
+void WebRtcChannelManager::OnMessage(talk_base::Message *message) {
+ talk_base::MessageData* data = message->pdata;
+ switch(message->message_id) {
+ case MSG_SETRTC_VIDEORENDERER: {
+ RenderParams* p = static_cast<RenderParams*>(data);
+ SetVideoRenderer_w(p->channel_id,
+ p->window,
+ p->zOrder,
+ p->left,
+ p->top,
+ p->right,
+ p->bottom);
+ break;
+ }
+ default: {
+ ChannelManager::OnMessage(message);
+ }
+ }
+}
+
+} // namespace webrtc
diff --git a/third_party_mods/libjingle/source/talk/app/webrtcchannelmanager.h b/third_party_mods/libjingle/source/talk/app/webrtcchannelmanager.h
new file mode 100644
index 0000000..b8f15a8
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/app/webrtcchannelmanager.h
@@ -0,0 +1,68 @@
+// Copyright 2011 Google Inc. All Rights Reserved.
+// Author: mallinath@google.com (Mallinath Bareddy)
+
+
+#ifndef TALK_APP_WEBRTC_WEBRTCCHANNELMANAGER_H_
+#define TALK_APP_WEBRTC_WEBRTCCHANNELMANAGER_H_
+
+#include "talk/session/phone/channelmanager.h"
+
+namespace webrtc {
+
+class AudioDeviceModule;
+
+enum {
+ MSG_SETRTC_VIDEORENDERER = 21, // Set internal video renderer
+};
+
+// WebRtcChannelManager automatically takes care of initializing the
+// underlying cricket::ChannelManager and terminates it when it is no
+// longer needed.
+
+class WebRtcChannelManager : public cricket::ChannelManager {
+ public:
+  explicit WebRtcChannelManager(talk_base::Thread* worker_thread)
+ : ChannelManager(worker_thread) {
+ }
+
+ WebRtcChannelManager(cricket::MediaEngine* me, cricket::DeviceManager* dm,
+ talk_base::Thread* worker_thread)
+ : ChannelManager(me, dm, worker_thread) {
+ }
+
+ bool Init();
+ cricket::VoiceChannel* CreateVoiceChannel(
+ cricket::BaseSession* s, const std::string& content_name, bool rtcp);
+ cricket::VideoChannel* CreateVideoChannel(
+ cricket::BaseSession* s, const std::string& content_name, bool rtcp,
+ cricket::VoiceChannel* vc);
+ cricket::Soundclip* CreateSoundclip();
+ void DestroyVoiceChannel(cricket::VoiceChannel* vc);
+ void DestroyVideoChannel(cricket::VideoChannel* vc);
+ void DestroySoundclip(cricket::Soundclip* s);
+
+ bool SetVideoRenderer(int channel_id,
+ void* window,
+ unsigned int zOrder,
+ float left,
+ float top,
+ float right,
+ float bottom);
+
+ private:
+ bool MaybeInit();
+ void MaybeTerm();
+ void SetExternalAdm_w(AudioDeviceModule* external_adm);
+ void SetVideoRenderer_w(int channel_id,
+ void* window,
+ unsigned int zOrder,
+ float left,
+ float top,
+ float right,
+ float bottom);
+ void OnMessage(talk_base::Message *message);
+};
+
+} // namespace webrtc
+
+
+#endif /* TALK_APP_WEBRTC_WEBRTCCHANNELMANAGER_H_ */
diff --git a/third_party_mods/libjingle/source/talk/app/webrtcsession.cc b/third_party_mods/libjingle/source/talk/app/webrtcsession.cc
new file mode 100644
index 0000000..895b665
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/app/webrtcsession.cc
@@ -0,0 +1,39 @@
+/*
+ * libjingle
+ * Copyright 2004--2011, Google Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ * 3. The name of the author may not be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
+ * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
+ * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
+ * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
+ * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "talk/app/webrtcsession.h"
+
+namespace webrtc {
+
+const std::string WebRTCSession::kOutgoingDirection = "s";
+const std::string WebRTCSession::kIncomingDirection = "r";
+//const std::string WebRTCSession::kAudioType = "a";
+//const std::string WebRTCSession::kVideoType = "v";
+//const std::string WebRTCSession::kTestType = "t";
+
+} /* namespace webrtc */
+
diff --git a/third_party_mods/libjingle/source/talk/app/webrtcsession.h b/third_party_mods/libjingle/source/talk/app/webrtcsession.h
new file mode 100644
index 0000000..17a33d6
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/app/webrtcsession.h
@@ -0,0 +1,100 @@
+/*
+ * libjingle
+ * Copyright 2004--2011, Google Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ * 3. The name of the author may not be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
+ * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
+ * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
+ * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
+ * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef TALK_APP_WEBRTC_WEBRTCSESSION_H_
+#define TALK_APP_WEBRTC_WEBRTCSESSION_H_
+
+#include "talk/base/logging.h"
+#include "talk/p2p/base/constants.h"
+#include "talk/p2p/base/session.h"
+
+namespace cricket {
+class PortAllocator;
+}
+
+namespace webrtc {
+class PeerConnection;
+
+class WebRTCSession: public cricket::BaseSession {
+ public:
+ WebRTCSession(const std::string& id, const std::string& direction,
+ cricket::PortAllocator* allocator,
+ PeerConnection* connection,
+ talk_base::Thread* signaling_thread)
+ : BaseSession(signaling_thread),
+ signaling_thread_(signaling_thread),
+ id_(id),
+ incoming_(direction == kIncomingDirection),
+ port_allocator_(allocator),
+ connection_(connection) {
+ BaseSession::sid_ = id;
+ }
+
+ virtual ~WebRTCSession() {
+ }
+
+ virtual bool Initiate() = 0;
+
+ const std::string& id() const { return id_; }
+ //const std::string& type() const { return type_; }
+ bool incoming() const { return incoming_; }
+ cricket::PortAllocator* port_allocator() const { return port_allocator_; }
+
+// static const std::string kAudioType;
+// static const std::string kVideoType;
+ static const std::string kIncomingDirection;
+ static const std::string kOutgoingDirection;
+// static const std::string kTestType;
+ PeerConnection* connection() const { return connection_; }
+
+ protected:
+  // Methods from cricket::BaseSession.
+ virtual bool Accept(const cricket::SessionDescription* sdesc) {
+ return true;
+ }
+ virtual bool Reject(const std::string& reason) {
+ return true;
+ }
+ virtual bool TerminateWithReason(const std::string& reason) {
+ return true;
+ }
+
+ protected:
+ talk_base::Thread* signaling_thread_;
+
+ private:
+ std::string id_;
+ //std::string type_;
+ bool incoming_;
+ cricket::PortAllocator* port_allocator_;
+ PeerConnection* connection_;
+};
+
+} // namespace webrtc
+
+
+#endif /* TALK_APP_WEBRTC_WEBRTCSESSION_H_ */
diff --git a/third_party_mods/libjingle/source/talk/app/webrtcsessionimpl.cc b/third_party_mods/libjingle/source/talk/app/webrtcsessionimpl.cc
new file mode 100644
index 0000000..7c916c9
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/app/webrtcsessionimpl.cc
@@ -0,0 +1,1087 @@
+/*
+ * libjingle
+ * Copyright 2004--2011, Google Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ * 3. The name of the author may not be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
+ * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
+ * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
+ * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
+ * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#include "talk/app/webrtcsessionimpl.h"
+
+#include <string>
+#include <vector>
+
+#include "talk/base/common.h"
+#include "talk/base/json.h"
+#include "talk/base/scoped_ptr.h"
+#include "talk/p2p/base/constants.h"
+#include "talk/p2p/base/sessiondescription.h"
+#include "talk/p2p/base/p2ptransport.h"
+#include "talk/session/phone/mediasessionclient.h"
+#include "talk/session/phone/channel.h"
+#include "talk/session/phone/voicechannel.h"
+#include "talk/session/phone/channelmanager.h"
+#include "talk/app/webrtc_json.h"
+#include "talk/app/webrtcchannelmanager.h"
+#include "talk/app/peerconnection.h"
+#include "talk/app/pc_transport_impl.h"
+
+using namespace cricket;
+
+namespace webrtc {
+
+enum {
+ MSG_RTC_CREATEVIDEOCHANNEL = 1,
+ MSG_RTC_CREATEAUDIOCHANNEL = 2,
+ MSG_RTC_SETSTATE = 3,
+ MSG_RTC_SETVIDEOCAPTURE = 4,
+ MSG_RTC_CANDIDATETIMEOUT = 5,
+ MSG_RTC_SETEXTERNALRENDERER = 6,
+ MSG_RTC_SETRENDERER = 7,
+ MSG_RTC_CHANNELENABLE = 8,
+ MSG_RTC_SIGNALONWRITABLESTATE = 9,
+ MSG_RTC_DESTROYVOICECHANNEL = 10,
+ MSG_RTC_DESTROYVIDEOCHANNEL = 11,
+ MSG_RTC_SENDLOCALDESCRIPTION = 12,
+ MSG_RTC_REMOVESTREAM = 13,
+ MSG_RTC_REMOVEALLSTREAMS = 14,
+ MSG_RTC_ENABLEALLSTREAMS = 15,
+ MSG_RTC_SETSESSIONERROR = 16,
+};
+
+struct CreateChannelParams : public talk_base::MessageData {
+ CreateChannelParams(const std::string& content_name, bool rtcp,
+ cricket::VoiceChannel* voice_channel)
+ : content_name(content_name),
+ rtcp(rtcp),
+ voice_channel(voice_channel),
+ video_channel(NULL) {}
+
+ std::string content_name;
+ bool rtcp;
+ cricket::VoiceChannel* voice_channel;
+ cricket::VideoChannel* video_channel;
+};
+
+struct SetStateParams : public talk_base::MessageData {
+ SetStateParams(int state)
+ : state(state) {}
+ int state;
+ bool result;
+};
+
+struct CaptureParams : public talk_base::MessageData {
+ explicit CaptureParams(bool c) : capture(c), result(CR_FAILURE) {}
+
+ bool capture;
+ CaptureResult result;
+};
+
+struct ExternalRenderParams : public talk_base::MessageData {
+ ExternalRenderParams(const std::string& stream_id,
+ ExternalRenderer* external_renderer)
+ : stream_id(stream_id),
+ external_renderer(external_renderer),
+ result(false) {}
+
+ const std::string stream_id;
+ ExternalRenderer* external_renderer;
+ bool result;
+};
+
+struct RenderParams : public talk_base::MessageData {
+ RenderParams(int channel_id,
+ void* window,
+ unsigned int zOrder,
+ float left,
+ float top,
+ float right,
+ float bottom)
+      : channel_id(channel_id),
+        window(window),
+        zOrder(zOrder),
+        left(left),
+        top(top),
+        right(right),
+        bottom(bottom) {}
+
+ int channel_id;
+ void* window;
+ unsigned int zOrder;
+ float left;
+ float top;
+ float right;
+ float bottom;
+ bool result;
+};
+
+struct ChannelEnableParams : public talk_base::MessageData {
+ ChannelEnableParams(cricket::BaseChannel* channel, bool enable)
+ : channel(channel), enable(enable) {}
+
+ cricket::BaseChannel* channel;
+ bool enable;
+};
+
+static const int kAudioMonitorPollFrequency = 100;
+static const int kMonitorPollFrequency = 1000;
+
+// We allow 30 seconds to establish a connection; beyond that we consider
+// it an error
+static const int kCallSetupTimeout = 30 * 1000;
+// A loss of connectivity is probably due to the Internet connection going
+// down, and it might take a while to come back on wireless networks, so we
+// use a longer timeout for that.
+static const int kCallLostTimeout = 60 * 1000;
+static const uint32 kCandidateTimeoutId = 101;
+
+typedef std::vector<StreamInfo*> StreamMap; // not really a map (vector)
+
+WebRTCSessionImpl::WebRTCSessionImpl(
+ const std::string& id,
+ const std::string& direction,
+ cricket::PortAllocator* allocator,
+ WebRtcChannelManager* channelmgr,
+ PeerConnection* connection,
+ talk_base::Thread* signaling_thread)
+ : WebRTCSession(id, direction, allocator, connection, signaling_thread),
+ channel_manager_(channelmgr),
+ all_writable_(false),
+ muted_(false),
+ camera_muted_(false),
+ setup_timeout_(kCallSetupTimeout),
+ signal_initiated_(false) {
+}
+
+WebRTCSessionImpl::~WebRTCSessionImpl() {
+ if (state_ != STATE_RECEIVEDTERMINATE) {
+ Terminate();
+ }
+}
+
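+// Creates a PC_Transport_Impl for the stream and records it in both the
+// transport channel map and the stream list.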
+bool WebRTCSessionImpl::CreateP2PTransportChannel(const std::string& stream_id,
+ bool video) {
+ PC_Transport_Impl* transport = new PC_Transport_Impl(this);
+ ASSERT(transport != NULL);
+ const std::string name = ((video) ? "video_rtp" : "rtp");
+ if (!transport->Init(name)) {
+ delete transport;
+ return false;
+ }
+
+ ASSERT(transport_channels_.find(name) == transport_channels_.end());
+ transport_channels_[name] = transport;
+
+ StreamInfo* stream_info = new StreamInfo(stream_id);
+ stream_info->transport = transport;
+ stream_info->video = video;
+ streams_.push_back(stream_info);
+
+ return true;
+}
+
+bool WebRTCSessionImpl::CreateVoiceChannel(const std::string& stream_id) {
+ this->SignalVoiceChannel.connect(
+ this, &WebRTCSessionImpl::OnVoiceChannelCreated);
+
+ signaling_thread_->Post(this, MSG_RTC_CREATEAUDIOCHANNEL,
+ new CreateChannelParams(stream_id, false, NULL));
+ return true;
+}
+
+cricket::VoiceChannel* WebRTCSessionImpl::CreateVoiceChannel_w(
+ const std::string& content_name,
+ bool rtcp) {
+ cricket::VoiceChannel* voice_channel = channel_manager_->CreateVoiceChannel(
+ this, content_name, rtcp);
+ return voice_channel;
+}
+
+void WebRTCSessionImpl::OnVoiceChannelCreated(
+ cricket::VoiceChannel* voice_channel,
+ std::string& stream_id) {
+ StreamMap::iterator iter;
+ for (iter = streams_.begin(); iter != streams_.end(); ++iter) {
+ StreamInfo* stream_info = (*iter);
+ if (stream_info->stream_id.compare(stream_id) == 0) {
+ ASSERT(stream_info->channel == NULL);
+ stream_info->channel = voice_channel;
+ stream_info->media_channel =
+ voice_channel->media_channel()->GetMediaChannelId();
+ if (incoming()) {
+        // Change the stream id to audio-<media_channel>.
+        // ^^ The code that does this has been disabled because it prevents
+        // us from finding the stream by name later. Instead, we could store
+        // the channel_id as an int member with stream_info?
+ streams_.erase(iter);
+#if 0
+ stream_info->stream_id.append("-");
+ stream_info->stream_id.append(
+ talk_base::ToString(stream_info->media_channel));
+#endif
+ streams_.push_back(stream_info);
+ connection()->OnAddStream(
+ stream_info->stream_id, stream_info->media_channel, false);
+ } else {
+ connection()->OnRtcMediaChannelCreated(
+ stream_id, stream_info->media_channel, false);
+ }
+ break;
+ }
+ }
+}
+
+bool WebRTCSessionImpl::CreateVideoChannel(const std::string& stream_id) {
+ this->SignalVideoChannel.connect(
+ this, &WebRTCSessionImpl::OnVideoChannelCreated);
+
+ signaling_thread_->Post(this, MSG_RTC_CREATEVIDEOCHANNEL,
+ new CreateChannelParams(stream_id, false, NULL));
+ return true;
+}
+
+cricket::VideoChannel* WebRTCSessionImpl::CreateVideoChannel_w(
+ const std::string& content_name,
+ bool rtcp,
+ cricket::VoiceChannel* voice_channel) {
+ cricket::VideoChannel* video_channel = channel_manager_->CreateVideoChannel(
+ this, content_name, rtcp, voice_channel);
+ return video_channel;
+}
+
+void WebRTCSessionImpl::OnVideoChannelCreated(
+ cricket::VideoChannel* video_channel,
+ std::string& stream_id) {
+ StreamMap::iterator iter;
+ for (iter = streams_.begin(); iter != streams_.end(); ++iter) {
+ StreamInfo* stream_info = (*iter);
+ if (stream_info->stream_id.compare(stream_id) == 0) {
+ ASSERT(stream_info->channel == NULL);
+ stream_info->channel = video_channel;
+ stream_info->media_channel =
+ video_channel->media_channel()->GetMediaChannelId();
+ if (incoming()) {
+        // Change the stream id to video-<media_channel>.
+        // ^^ The code that does this has been disabled because it prevents
+        // us from finding the stream by name later. Instead, we could store
+        // the channel_id as an int member with stream_info?
+ streams_.erase(iter);
+#if 0
+ stream_info->stream_id.append("-");
+ stream_info->stream_id.append(
+ talk_base::ToString(stream_info->media_channel));
+#endif
+ streams_.push_back(stream_info);
+ connection()->OnAddStream(
+ stream_info->stream_id, stream_info->media_channel, true);
+ } else {
+ connection()->OnRtcMediaChannelCreated(
+ stream_id, stream_info->media_channel, true);
+ }
+ break;
+ }
+ }
+}
+
+bool WebRTCSessionImpl::SetVideoRenderer(const std::string& stream_id,
+ ExternalRenderer* external_renderer) {
+  if (signaling_thread_ != talk_base::Thread::Current()) {
+ signaling_thread_->Post(this, MSG_RTC_SETEXTERNALRENDERER,
+ new ExternalRenderParams(stream_id, external_renderer),
+ true);
+ return true;
+ }
+
+ ASSERT(signaling_thread_ == talk_base::Thread::Current());
+
+ bool ret = false;
+ StreamMap::iterator iter;
+ for (iter = streams_.begin(); iter != streams_.end(); ++iter) {
+ StreamInfo* stream_info = (*iter);
+ if (stream_info->stream_id.compare(stream_id) == 0) {
+ ASSERT(stream_info->channel != NULL);
+ ASSERT(stream_info->video);
+ cricket::VideoChannel* channel = static_cast<cricket::VideoChannel*> (
+ stream_info->channel);
+ ret = channel->media_channel()->SetExternalRenderer(0, external_renderer);
+ break;
+ }
+ }
+ return ret;
+}
+
+bool WebRTCSessionImpl::SetVideoRenderer(int channel_id,
+ void* window,
+ unsigned int zOrder,
+ float left,
+ float top,
+ float right,
+ float bottom) {
+ signaling_thread_->Post(this, MSG_RTC_SETRENDERER,
+ new RenderParams(channel_id, window, zOrder, left, top, right, bottom),
+ true);
+ return true;
+}
+
+bool WebRTCSessionImpl::SetVideoRenderer_w(int channel_id,
+ void* window,
+ unsigned int zOrder,
+ float left,
+ float top,
+ float right,
+ float bottom) {
+ ASSERT(signaling_thread_ == talk_base::Thread::Current());
+  return channel_manager_->SetVideoRenderer(channel_id, window, zOrder,
+                                            left, top, right, bottom);
+}
+
+void WebRTCSessionImpl::OnMessage(talk_base::Message* message) {
+ using talk_base::TypedMessageData;
+ talk_base::MessageData* data = message->pdata;
+ switch(message->message_id) {
+ case MSG_RTC_CREATEVIDEOCHANNEL: {
+ CreateChannelParams* p = reinterpret_cast<CreateChannelParams*>(data);
+ p->video_channel =
+ CreateVideoChannel_w(p->content_name, p->rtcp, p->voice_channel);
+ SignalVideoChannel(p->video_channel, p->content_name);
+ delete p;
+ break;
+ }
+ case MSG_RTC_CREATEAUDIOCHANNEL: {
+ CreateChannelParams* p = reinterpret_cast<CreateChannelParams*>(data);
+ p->voice_channel =
+ CreateVoiceChannel_w(p->content_name, p->rtcp);
+ SignalVoiceChannel(p->voice_channel, p->content_name);
+ delete p;
+ break;
+ }
+ case MSG_RTC_DESTROYVOICECHANNEL: {
+ cricket::VoiceChannel* channel =
+ reinterpret_cast<TypedMessageData<cricket::VoiceChannel*>*>(data)
+ ->data();
+ std::string name(channel->content_name());
+ DestroyVoiceChannel_w(channel);
+ delete data;
+ break;
+ }
+ case MSG_RTC_SETSESSIONERROR: {
+ int err = reinterpret_cast<TypedMessageData<int>*>(data)->data();
+ BaseSession::SetError(static_cast<Error>(err));
+ delete data;
+ break;
+ }
+ case MSG_RTC_DESTROYVIDEOCHANNEL: {
+ cricket::VideoChannel* channel =
+ reinterpret_cast<TypedMessageData<cricket::VideoChannel*>*>(data)
+ ->data();
+ std::string name(channel->content_name());
+ DestroyVideoChannel_w(channel);
+ delete data;
+ break;
+ }
+ case MSG_RTC_REMOVESTREAM : {
+ std::string stream_id(
+ reinterpret_cast<TypedMessageData<std::string>*>(data)->data());
+ RemoveStream_w(stream_id);
+ delete data;
+ break;
+ }
+ case MSG_RTC_REMOVEALLSTREAMS : {
+ RemoveAllStreams_w();
+ delete data;
+ break;
+ }
+ case MSG_RTC_ENABLEALLSTREAMS: {
+ EnableAllStreams_w();
+ delete data;
+ break;
+ }
+ case MSG_RTC_SETSTATE : {
+ SetSessionState_w();
+ break;
+ }
+ case MSG_RTC_SETVIDEOCAPTURE : {
+ CaptureParams* p = static_cast<CaptureParams*>(data);
+ p->result = SetVideoCapture_w(p->capture);
+ delete p;
+ break;
+ }
+ case MSG_RTC_SETEXTERNALRENDERER : {
+ ExternalRenderParams* p = static_cast<ExternalRenderParams*> (data);
+ p->result = SetVideoRenderer(p->stream_id, p->external_renderer);
+ delete p;
+ break;
+ }
+ case MSG_RTC_SETRENDERER : {
+ RenderParams* p = static_cast<RenderParams*> (data);
+ p->result = SetVideoRenderer_w(p->channel_id,
+ p->window,
+ p->zOrder,
+ p->left,
+ p->top,
+ p->right,
+ p->bottom);
+ delete p;
+ break;
+ }
+ case MSG_RTC_CHANNELENABLE : {
+ ChannelEnableParams* p = static_cast<ChannelEnableParams*> (data);
+ ChannelEnable_w(p->channel, p->enable);
+ delete p;
+ break;
+ }
+ case MSG_RTC_SIGNALONWRITABLESTATE : {
+ cricket::TransportChannel* channel =
+ reinterpret_cast<TypedMessageData<cricket::TransportChannel*>*>(data)
+ ->data();
+ SignalOnWritableState_w(channel);
+ delete data;
+ break;
+ }
+ case MSG_RTC_CANDIDATETIMEOUT: {
+ break;
+ }
+ case MSG_RTC_SENDLOCALDESCRIPTION : {
+ SendLocalDescription_w();
+ break;
+ }
+ default: {
+ WebRTCSession::OnMessage(message);
+ }
+ }
+}
+
+bool WebRTCSessionImpl::Initiate() {
+ if (streams_.empty()) {
+ // nothing to initiate
+ return false;
+ }
+
+ // Enable all the channels
+ signaling_thread_->Post(this, MSG_RTC_ENABLEALLSTREAMS);
+
+ SetVideoCapture(true);
+ signal_initiated_ = true;
+
+ if (local_candidates_.size() == streams_.size()) {
+ SendLocalDescription();
+ }
+ return true;
+}
+
+void WebRTCSessionImpl::ChannelEnable(cricket::BaseChannel* channel,
+ bool enable) {
+ ASSERT(channel);
+ signaling_thread_->Post(this, MSG_RTC_CHANNELENABLE,
+ new ChannelEnableParams(channel, enable), true);
+}
+
+void WebRTCSessionImpl::ChannelEnable_w(cricket::BaseChannel* channel,
+ bool enable) {
+ if (channel) {
+ channel->Enable(enable);
+ }
+}
+
+void WebRTCSessionImpl::SetSessionState(State state) {
+ session_state_ = state;
+ signaling_thread_->Post(this, MSG_RTC_SETSTATE);
+}
+
+void WebRTCSessionImpl::SetSessionState_w() {
+ SetState(session_state_);
+}
+
+bool WebRTCSessionImpl::SetVideoCapture(bool capture) {
+ signaling_thread_->Post(this, MSG_RTC_SETVIDEOCAPTURE,
+ new CaptureParams(capture), true);
+ return true;
+}
+
+cricket::CaptureResult WebRTCSessionImpl::SetVideoCapture_w(bool capture) {
+ ASSERT(signaling_thread_ == talk_base::Thread::Current());
+ return channel_manager_->SetVideoCapture(capture);
+}
+
+void WebRTCSessionImpl::OnVoiceChannelError(
+ cricket::VoiceChannel* voice_channel, uint32 ssrc,
+ cricket::VoiceMediaChannel::Error error) {
+  // TODO: Report the error to the connection.
+}
+
+void WebRTCSessionImpl::OnVideoChannelError(
+ cricket::VideoChannel* video_channel, uint32 ssrc,
+ cricket::VideoMediaChannel::Error error) {
+  // TODO: Report the error to the connection.
+}
+
+void WebRTCSessionImpl::RemoveStream_w(const std::string& stream_id) {
+ bool found = false;
+ StreamMap::iterator iter;
+ std::string candidate_name;
+ for (iter = streams_.begin(); iter != streams_.end(); ++iter) {
+ StreamInfo* sinfo = (*iter);
+ candidate_name = sinfo->transport->name();
+ if (sinfo->stream_id.compare(stream_id) == 0) {
+ DisableLocalCandidate(candidate_name);
+ if (!sinfo->video) {
+ cricket::VoiceChannel* channel = static_cast<cricket::VoiceChannel*> (
+ sinfo->channel);
+ channel_manager_->DestroyVoiceChannel(channel);
+ } else {
+ cricket::VideoChannel* channel = static_cast<cricket::VideoChannel*> (
+ sinfo->channel);
+ channel_manager_->DestroyVideoChannel(channel);
+ }
+ // channel and transport will be deleted in
+ // DestroyVoiceChannel/DestroyVideoChannel
+ found = true;
+ break;
+ }
+ }
+ if (!found) {
+ LOG(LS_ERROR) << "No streams found for stream id " << stream_id;
+    // TODO: Trigger the onError callback.
+ }
+}
+
+bool WebRTCSessionImpl::RemoveStream(const std::string& stream_id) {
+ bool ret = true;
+  if ((state_ == STATE_RECEIVEDACCEPT) ||
+      (state_ == STATE_SENTACCEPT)) {
+    signaling_thread_->Post(this, MSG_RTC_REMOVESTREAM,
+        new talk_base::TypedMessageData<std::string>(stream_id));
+ } else {
+    LOG(LS_ERROR) << "Invalid session state: " << state_;
+ ret = false;
+ }
+ return ret;
+}
+
+void WebRTCSessionImpl::DisableLocalCandidate(const std::string& name) {
+ for (size_t i = 0; i < local_candidates_.size(); ++i) {
+ if (local_candidates_[i].name().compare(name) == 0) {
+ talk_base::SocketAddress address(local_candidates_[i].address().ip(), 0);
+ local_candidates_[i].set_address(address);
+ }
+ }
+}
+
+void WebRTCSessionImpl::RemoveAllStreams_w() {
+ // First build a list of streams to remove and then remove them.
+ // The reason we do this is that if we remove the streams inside the
+ // loop, a stream might get removed while we're enumerating and the iterator
+ // will become invalid (and we crash).
+ std::vector<std::string> streams_to_remove;
+ StreamMap::iterator iter;
+ for (iter = streams_.begin(); iter != streams_.end(); ++iter)
+ streams_to_remove.push_back((*iter)->stream_id);
+
+ for (std::vector<std::string>::iterator i = streams_to_remove.begin();
+ i != streams_to_remove.end(); ++i)
+ RemoveStream_w(*i);
+
+ SignalOnRemoveStream(this);
+}
+
+void WebRTCSessionImpl::EnableAllStreams_w() {
+ StreamMap::const_iterator i;
+ for (i = streams_.begin(); i != streams_.end(); ++i) {
+ cricket::BaseChannel* channel = (*i)->channel;
+ if (channel)
+ channel->Enable(true);
+ }
+}
+
+void WebRTCSessionImpl::RemoveAllStreams() {
+ signaling_thread_->Post(this, MSG_RTC_REMOVEALLSTREAMS);
+}
+
+bool WebRTCSessionImpl::HasStream(const std::string& stream_id) const {
+ StreamMap::const_iterator iter;
+ for (iter = streams_.begin(); iter != streams_.end(); ++iter) {
+ StreamInfo* sinfo = (*iter);
+ if (stream_id.compare(sinfo->stream_id) == 0) {
+ return true;
+ }
+ }
+ return false;
+}
+
+bool WebRTCSessionImpl::HasStream(bool video) const {
+ StreamMap::const_iterator iter;
+ for (iter = streams_.begin(); iter != streams_.end(); ++iter) {
+ StreamInfo* sinfo = (*iter);
+ if (sinfo->video == video) {
+ return true;
+ }
+ }
+ return false;
+}
+
+bool WebRTCSessionImpl::HasAudioStream() const {
+ return HasStream(false);
+}
+
+bool WebRTCSessionImpl::HasVideoStream() const {
+ return HasStream(true);
+}
+
+void WebRTCSessionImpl::OnRequestSignaling(cricket::Transport* transport) {
+ transport->OnSignalingReady();
+}
+
+cricket::TransportChannel* WebRTCSessionImpl::CreateChannel(
+ const std::string& content_name, const std::string& name) {
+
+  // The channel must already be present in the vector.
+ StreamMap::iterator iter;
+ for (iter = streams_.begin(); iter != streams_.end(); ++iter) {
+ if (content_name.compare((*iter)->stream_id) == 0) {
+ StreamInfo* sinfo = (*iter);
+      // If it's an incoming call, the remote candidates were already
+      // received in the initial signaling message; apply them now.
+ if (incoming() && state_ == STATE_RECEIVEDINITIATE) {
+ // process the remote candidates
+ std::vector<cricket::Candidate>::iterator iter;
+ for (iter = remote_candidates_.begin();
+ iter != remote_candidates_.end(); ++iter) {
+ std::string tname = iter->name();
+ TransportChannelMap::iterator titer = transport_channels_.find(tname);
+ if (titer != transport_channels_.end()) {
+ titer->second->AddRemoteCandidate(*iter);
+ }
+ }
+ }
+ return sinfo->transport->GetP2PChannel();
+ }
+ }
+ return NULL;
+}
+
+cricket::TransportChannel* WebRTCSessionImpl::GetChannel(
+ const std::string& content_name, const std::string& name) {
+ StreamMap::iterator iter;
+ for (iter = streams_.begin(); iter != streams_.end(); ++iter) {
+ if (content_name.compare((*iter)->stream_id) == 0) {
+ PC_Transport_Impl* transport = (*iter)->transport;
+ return transport->GetP2PChannel();
+ }
+ }
+ return NULL;
+}
+
+void WebRTCSessionImpl::DestroyChannel(
+ const std::string& content_name, const std::string& name) {
+ bool found = false;
+ StreamMap::iterator iter;
+ for (iter = streams_.begin(); iter != streams_.end(); ++iter) {
+ if (content_name.compare((*iter)->stream_id) == 0) {
+ PC_Transport_Impl* transport = (*iter)->transport;
+ delete transport;
+ (*iter)->transport = NULL;
+ connection()->OnRemoveStream((*iter)->stream_id, (*iter)->media_channel,
+ (*iter)->video);
+ streams_.erase(iter);
+ found = true;
+ break;
+ }
+ }
+
+ ASSERT(found);
+}
+
+void WebRTCSessionImpl::DestroyVoiceChannel_w(
+ cricket::VoiceChannel* channel) {
+ channel_manager_->DestroyVoiceChannel(channel);
+}
+
+void WebRTCSessionImpl::DestroyVideoChannel_w(
+ cricket::VideoChannel* channel) {
+ channel_manager_->DestroyVideoChannel(channel);
+}
+
+void WebRTCSessionImpl::StartTransportTimeout(int timeout) {
+ talk_base::Thread::Current()->PostDelayed(timeout, this,
+ MSG_RTC_CANDIDATETIMEOUT,
+ NULL);
+}
+
+void WebRTCSessionImpl::ClearTransportTimeout() {
+ //LOG(LS_INFO) << "ClearTransportTimeout";
+ talk_base::Thread::Current()->Clear(this, MSG_RTC_CANDIDATETIMEOUT);
+}
+
+void WebRTCSessionImpl::NotifyTransportState() {
+}
+
+bool WebRTCSessionImpl::OnRemoteDescription(Json::Value& desc) {
+ if ((!incoming() && state() != STATE_SENTINITIATE) ||
+ (incoming() && state() != STATE_INIT)) {
+ LOG(LS_WARNING) << "Invalid session state" ;
+ return false;
+ }
+
+ talk_base::scoped_ptr<cricket::AudioContentDescription> audio(
+ new cricket::AudioContentDescription());
+
+ talk_base::scoped_ptr<cricket::VideoContentDescription> video(
+ new cricket::VideoContentDescription());
+
+  // TODO: Parse the audio and video media descriptions from the JSON value
+  // and call set_remote_description().
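+  //
+  // One possible shape for that step, using the JSON helpers from
+  // talk/base/json.h (the "audio" and "video" keys here are illustrative
+  // assumptions, not a defined wire format):
+  //
+  //   std::string audio_desc;
+  //   if (GetStringFromJsonObject(desc, "audio", &audio_desc)) {
+  //     // ... populate |audio| from audio_desc ...
+  //   }
+  //   std::string video_desc;
+  //   if (GetStringFromJsonObject(desc, "video", &video_desc)) {
+  //     // ... populate |video| from video_desc ...
+  //   }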
+
+ if (incoming()) {
+ SetState(STATE_RECEIVEDINITIATE);
+ }
+ return true;
+}
+
+bool WebRTCSessionImpl::OnInitiateMessage(
+ const cricket::SessionDescription* offer,
+ std::vector<cricket::Candidate>& candidates) {
+ if (!offer) {
+ LOG(LS_ERROR) << "No SessionDescription from peer";
+ return false;
+ }
+
+ set_remote_description(offer);
+ const cricket::SessionDescription* answer = CreateAnswer(offer);
+
+ const cricket::ContentInfo* audio_content = GetFirstAudioContent(answer);
+ const cricket::ContentInfo* video_content = GetFirstVideoContent(answer);
+
+ if (!audio_content && !video_content) {
+    // No usable codec information for either audio or video.
+ set_remote_description(NULL);
+ delete answer;
+ return false;
+ }
+
+ SetSessionState(STATE_RECEIVEDINITIATE);
+
+ bool ret = true;
+ if (audio_content) {
+ ret = !HasAudioStream() &&
+ CreateP2PTransportChannel(audio_content->name, false) &&
+ CreateVoiceChannel(audio_content->name);
+ }
+
+  if (video_content) {
+    ret = ret && !HasVideoStream() &&
+          CreateP2PTransportChannel(video_content->name, true) &&
+          CreateVideoChannel(video_content->name);
+  }
+
+ delete answer;
+
+ if (!ret) {
+ LOG(LS_ERROR) << "Failed to create channel for incoming media stream";
+ return false;
+ }
+
+ // Candidate processing.
+ ASSERT(candidates.size());
+ remote_candidates_.clear();
+ remote_candidates_.insert(remote_candidates_.begin(),
+ candidates.begin(), candidates.end());
+ return true;
+}
+
+bool WebRTCSessionImpl::OnRemoteDescription(
+ const cricket::SessionDescription* rdesc,
+ std::vector<cricket::Candidate>& candidates) {
+
+ if (state() == STATE_SENTACCEPT || state() == STATE_RECEIVEDACCEPT) {
+ return OnRemoteDescriptionUpdate(rdesc, candidates);
+ }
+
+ if ((!incoming()) && (state() != STATE_SENTINITIATE)) {
+    LOG(LS_ERROR) << "Invalid session state";
+ return false;
+ }
+
+// cricket::SessionDescription* answer = new cricket::SessionDescription();
+// const ContentInfo* audio_content = GetFirstAudioContent(rdesc);
+// if (audio_content) {
+// const AudioContentDescription* audio_offer =
+// static_cast<const AudioContentDescription*>(audio_content->description);
+//
+// AudioContentDescription* audio_accept = new AudioContentDescription();
+//
+//
+// for (AudioCodecs::const_iterator theirs = audio_offer->codecs().begin();
+// theirs != audio_offer->codecs().end(); ++theirs) {
+// audio_accept->AddCodec(*theirs);
+// }
+// audio_accept->SortCodecs();
+// answer->AddContent(audio_content->name, audio_content->type, audio_accept);
+// }
+//
+// const ContentInfo* video_content = GetFirstVideoContent(rdesc);
+// if (video_content) {
+// const VideoContentDescription* video_offer =
+// static_cast<const VideoContentDescription*>(video_content->description);
+//
+// VideoContentDescription* video_accept = new VideoContentDescription();
+//
+// for (VideoCodecs::const_iterator theirs = video_offer->codecs().begin();
+// theirs != video_offer->codecs().end(); ++theirs) {
+// video_accept->AddCodec(*theirs);
+// }
+// video_accept->SortCodecs();
+// answer->AddContent(video_content->name, video_content->type, video_accept);
+// }
+
+ // process the remote candidates
+ remote_candidates_.clear();
+ std::vector<cricket::Candidate>::iterator iter;
+ for (iter = candidates.begin(); iter != candidates.end(); ++iter) {
+ std::string tname = iter->name();
+ TransportChannelMap::iterator titer = transport_channels_.find(tname);
+ if (titer != transport_channels_.end()) {
+ remote_candidates_.push_back(*iter);
+ titer->second->AddRemoteCandidate(*iter);
+ }
+ }
+
+ set_remote_description(rdesc);
+ SetSessionState(STATE_RECEIVEDACCEPT);
+ return true;
+}
+
+bool WebRTCSessionImpl::OnRemoteDescriptionUpdate(
+ const cricket::SessionDescription* desc,
+ std::vector<cricket::Candidate>& candidates) {
+  // This is called when the session is in the connected state. In that
+  // state the session expects a signaling message for any stream removed
+  // by the peer.
+  // If a candidate's port is 0, remove the corresponding stream and fire
+  // the OnRemoveStream callback; otherwise leave the stream as it is.
+
+ for (size_t i = 0; i < candidates.size(); ++i) {
+ if (candidates[i].address().port() == 0) {
+ RemoveStreamOnRequest(candidates[i]);
+ }
+ }
+ return true;
+}
+
+void WebRTCSessionImpl::RemoveStreamOnRequest(
+    const cricket::Candidate& candidate) {
+  // 1. Get the transport corresponding to the candidate name.
+  // 2. Get the StreamInfo for the transport found in step 1.
+  // 3. Post a message to destroy the voice/video channel through the
+  //    ChannelManager.
+
+ TransportChannelMap::iterator iter =
+ transport_channels_.find(candidate.name());
+ if (iter == transport_channels_.end()) {
+ return;
+ }
+
+ PC_Transport_Impl* transport = iter->second;
+ std::vector<StreamInfo*>::iterator siter;
+ for (siter = streams_.begin(); siter != streams_.end(); ++siter) {
+ StreamInfo* stream_info = (*siter);
+ if (stream_info->transport == transport) {
+ if (!stream_info->video) {
+ cricket::VoiceChannel* channel = static_cast<cricket::VoiceChannel*> (
+ stream_info->channel);
+ signaling_thread_->Post(this, MSG_RTC_DESTROYVOICECHANNEL,
+ new talk_base::TypedMessageData<cricket::VoiceChannel*>(channel));
+ } else {
+ cricket::VideoChannel* channel = static_cast<cricket::VideoChannel*> (
+ stream_info->channel);
+ signaling_thread_->Post(this, MSG_RTC_DESTROYVIDEOCHANNEL,
+ new talk_base::TypedMessageData<cricket::VideoChannel*>(channel));
+ }
+ break;
+ }
+ }
+}
+
+cricket::SessionDescription* WebRTCSessionImpl::CreateOffer() {
+  cricket::SessionDescription* offer = new cricket::SessionDescription();
+ StreamMap::iterator iter;
+ for (iter = streams_.begin(); iter != streams_.end(); ++iter) {
+ if ((*iter)->video) {
+      // Add video codecs if a video stream has been added.
+ VideoContentDescription* video = new VideoContentDescription();
+ std::vector<cricket::VideoCodec> video_codecs;
+ channel_manager_->GetSupportedVideoCodecs(&video_codecs);
+ for (VideoCodecs::const_iterator codec = video_codecs.begin();
+ codec != video_codecs.end(); ++codec) {
+ video->AddCodec(*codec);
+ }
+
+ video->SortCodecs();
+ offer->AddContent(CN_VIDEO, NS_JINGLE_RTP, video);
+ } else {
+ AudioContentDescription* audio = new AudioContentDescription();
+
+ std::vector<cricket::AudioCodec> audio_codecs;
+ channel_manager_->GetSupportedAudioCodecs(&audio_codecs);
+ for (AudioCodecs::const_iterator codec = audio_codecs.begin();
+ codec != audio_codecs.end(); ++codec) {
+ audio->AddCodec(*codec);
+ }
+
+ audio->SortCodecs();
+ offer->AddContent(CN_AUDIO, NS_JINGLE_RTP, audio);
+ }
+ }
+ return offer;
+}
+
+cricket::SessionDescription* WebRTCSessionImpl::CreateAnswer(
+ const cricket::SessionDescription* offer) {
+ cricket::SessionDescription* answer = new cricket::SessionDescription();
+
+ const ContentInfo* audio_content = GetFirstAudioContent(offer);
+ if (audio_content) {
+ const AudioContentDescription* audio_offer =
+ static_cast<const AudioContentDescription*>(audio_content->description);
+
+ AudioContentDescription* audio_accept = new AudioContentDescription();
+ AudioCodecs audio_codecs;
+ channel_manager_->GetSupportedAudioCodecs(&audio_codecs);
+
+ for (AudioCodecs::const_iterator ours = audio_codecs.begin();
+ ours != audio_codecs.end(); ++ours) {
+ for (AudioCodecs::const_iterator theirs = audio_offer->codecs().begin();
+ theirs != audio_offer->codecs().end(); ++theirs) {
+ if (ours->Matches(*theirs)) {
+ cricket::AudioCodec negotiated(*ours);
+ negotiated.id = theirs->id;
+ audio_accept->AddCodec(negotiated);
+ }
+ }
+ }
+ audio_accept->SortCodecs();
+ answer->AddContent(audio_content->name, audio_content->type, audio_accept);
+ }
+
+ const ContentInfo* video_content = GetFirstVideoContent(offer);
+ if (video_content) {
+ const VideoContentDescription* video_offer =
+ static_cast<const VideoContentDescription*>(video_content->description);
+
+ VideoContentDescription* video_accept = new VideoContentDescription();
+ VideoCodecs video_codecs;
+ channel_manager_->GetSupportedVideoCodecs(&video_codecs);
+
+ for (VideoCodecs::const_iterator ours = video_codecs.begin();
+ ours != video_codecs.end(); ++ours) {
+ for (VideoCodecs::const_iterator theirs = video_offer->codecs().begin();
+ theirs != video_offer->codecs().end(); ++theirs) {
+ if (ours->Matches(*theirs)) {
+ cricket::VideoCodec negotiated(*ours);
+ negotiated.id = theirs->id;
+ video_accept->AddCodec(negotiated);
+ }
+ }
+ }
+ video_accept->SortCodecs();
+ answer->AddContent(video_content->name, video_content->type, video_accept);
+ }
+ return answer;
+}
+
+void WebRTCSessionImpl::OnMute(bool mute) {
+}
+
+void WebRTCSessionImpl::OnCameraMute(bool mute) {
+}
+
+void WebRTCSessionImpl::SetError(Error error) {
+ if (signaling_thread_->IsCurrent()) {
+ BaseSession::SetError(error);
+ } else {
+ signaling_thread_->Post(this, MSG_RTC_SETSESSIONERROR,
+ new talk_base::TypedMessageData<int>(error));
+ }
+}
+
+void WebRTCSessionImpl::OnCandidateReady(const cricket::Candidate& candidate) {
+  local_candidates_.push_back(candidate);
+
+  // For now we use only one candidate from each connection;
+  // PC_Transport_Impl will discard the remaining candidates from
+  // P2PTransportChannel. When this function is called, if the size of
+  // local_candidates_ equals the number of streams (with RTCP disabled),
+  // then send the local session description.
+
+ // TODO(mallinath): Is it correct to check the state variable here for
+ // incoming sessions?
+ if ((signal_initiated_ || state_ == STATE_RECEIVEDINITIATE) &&
+ (local_candidates_.size() == streams_.size())) {
+ SendLocalDescription();
+
+ // On the receiving end, we haven't yet enabled the channels, so after
+ // sending the local description, let's enable the channels.
+ if (!signal_initiated_) {
+      // Enable all the channels now that our local description has been sent.
+ signaling_thread_->Post(this, MSG_RTC_ENABLEALLSTREAMS);
+ }
+ }
+}
+
+void WebRTCSessionImpl::SendLocalDescription() {
+ signaling_thread_->Post(this, MSG_RTC_SENDLOCALDESCRIPTION);
+}
+
+void WebRTCSessionImpl::SendLocalDescription_w() {
+ cricket::SessionDescription* desc;
+ if (incoming() && state_ == STATE_RECEIVEDINITIATE) {
+ desc = CreateAnswer(remote_description_);
+ } else {
+ desc = CreateOffer();
+ }
+ if (desc) {
+ set_local_description(desc);
+ session_state_ = (incoming()) ? STATE_SENTACCEPT : STATE_SENTINITIATE;
+ SetState(session_state_);
+ connection()->OnLocalDescription(desc, local_candidates_);
+ }
+}
+
+void WebRTCSessionImpl::SignalOnWritableState_w(
+ cricket::TransportChannel* channel) {
+ ASSERT(connection()->media_thread() == talk_base::Thread::Current());
+ SignalWritableState(channel);
+}
+
+void WebRTCSessionImpl::OnStateChange(P2PTransportClass::State state,
+ cricket::TransportChannel* channel) {
+ if (P2PTransportClass::STATE_WRITABLE & state) {
+ connection()->media_thread()->Post(
+ this, MSG_RTC_SIGNALONWRITABLESTATE,
+ new talk_base::TypedMessageData<cricket::TransportChannel*>(channel));
+ }
+}
+
+void WebRTCSessionImpl::OnMessageReceived(const char* data, size_t data_size) {
+}
+
+} /* namespace webrtc */
diff --git a/third_party_mods/libjingle/source/talk/app/webrtcsessionimpl.h b/third_party_mods/libjingle/source/talk/app/webrtcsessionimpl.h
new file mode 100644
index 0000000..20a141a
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/app/webrtcsessionimpl.h
@@ -0,0 +1,250 @@
+/*
+ * libjingle
+ * Copyright 2004--2011, Google Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ * 3. The name of the author may not be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
+ * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
+ * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
+ * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
+ * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef TALK_APP_WEBRTC_WEBRTCSESSIONIMPL_H_
+#define TALK_APP_WEBRTC_WEBRTCSESSIONIMPL_H_
+
+#include <string>
+#include <vector>
+
+#include "talk/base/messagehandler.h"
+#include "talk/p2p/base/candidate.h"
+#include "talk/session/phone/channel.h"
+#include "talk/session/phone/mediachannel.h"
+#include "talk/app/pc_transport_impl.h"
+#include "talk/app/webrtcsession.h"
+
+namespace cricket {
+class ChannelManager;
+class Transport;
+class TransportChannel;
+class VoiceChannel;
+class VideoChannel;
+struct ConnectionInfo;
+}
+
+namespace Json {
+class Value;
+}
+
+namespace webrtc {
+
+struct StreamInfo {
+  StreamInfo(const std::string& stream_id)
+ : channel(NULL),
+ transport(NULL),
+ video(false),
+ stream_id(stream_id),
+ media_channel(-1) {}
+
+ StreamInfo()
+ : channel(NULL),
+ transport(NULL),
+ video(false),
+ media_channel(-1) {}
+
+ cricket::BaseChannel* channel;
+  PC_Transport_Impl* transport;  // TODO: Add an RTCP transport channel.
+ bool video;
+ std::string stream_id;
+ int media_channel;
+};
+
+typedef std::vector<cricket::AudioCodec> AudioCodecs;
+typedef std::vector<cricket::VideoCodec> VideoCodecs;
+
+class ExternalRenderer;
+class PeerConnection;
+class WebRtcChannelManager;
+
+class WebRTCSessionImpl : public WebRTCSession {
+ public:
+ WebRTCSessionImpl(const std::string& id,
+ const std::string& direction,
+ cricket::PortAllocator* allocator,
+ WebRtcChannelManager* channelmgr,
+ PeerConnection* connection,
+ talk_base::Thread* signaling_thread);
+
+ ~WebRTCSessionImpl();
+ virtual bool Initiate();
+ virtual bool OnRemoteDescription(Json::Value& desc);
+ virtual bool OnRemoteDescription(const cricket::SessionDescription* sdp,
+ std::vector<cricket::Candidate>& candidates);
+ virtual bool OnInitiateMessage(const cricket::SessionDescription* sdp,
+ std::vector<cricket::Candidate>& candidates);
+ virtual void OnMute(bool mute);
+ virtual void OnCameraMute(bool mute);
+
+ // Override from BaseSession to allow setting errors from other threads
+ // than the signaling thread.
+ virtual void SetError(Error error);
+
+ bool muted() const { return muted_; }
+ bool camera_muted() const { return camera_muted_; }
+
+ bool CreateP2PTransportChannel(const std::string& stream_id, bool video);
+
+ bool CreateVoiceChannel(const std::string& stream_id);
+ bool CreateVideoChannel(const std::string& stream_id);
+ bool RemoveStream(const std::string& stream_id);
+ void RemoveAllStreams();
+
+ // Returns true if we have either a voice or video stream matching this label.
+ bool HasStream(const std::string& label) const;
+ bool HasStream(bool video) const;
+
+ // Returns true if there's one or more audio channels in the session.
+ bool HasAudioStream() const;
+
+ // Returns true if there's one or more video channels in the session.
+ bool HasVideoStream() const;
+
+ void OnCandidateReady(const cricket::Candidate& candidate);
+ void OnStateChange(P2PTransportClass::State state,
+ cricket::TransportChannel* channel);
+ void OnMessageReceived(const char* data, size_t data_size);
+ bool SetVideoRenderer(const std::string& stream_id,
+ ExternalRenderer* external_renderer);
+ bool SetVideoRenderer(int channel_id,
+ void* window,
+ unsigned int zOrder,
+ float left,
+ float top,
+ float right,
+ float bottom);
+ sigslot::signal2<cricket::VideoChannel*, std::string&> SignalVideoChannel;
+ sigslot::signal2<cricket::VoiceChannel*, std::string&> SignalVoiceChannel;
+ sigslot::signal1<WebRTCSessionImpl*> SignalOnRemoveStream;
+
+ void OnVoiceChannelCreated(cricket::VoiceChannel* voice_channel,
+ std::string& stream_id);
+ void OnVideoChannelCreated(cricket::VideoChannel* video_channel,
+ std::string& stream_id);
+
+ void ChannelEnable(cricket::BaseChannel* channel, bool enable);
+
+ std::vector<cricket::Candidate>& local_candidates() {
+ return local_candidates_;
+ }
+
+ private:
+ bool SetVideoRenderer_w(int channel_id,
+ void* window,
+ unsigned int zOrder,
+ float left,
+ float top,
+ float right,
+ float bottom);
+ void ChannelEnable_w(cricket::BaseChannel* channel, bool enable);
+
+ void OnVoiceChannelError(cricket::VoiceChannel* voice_channel, uint32 ssrc,
+ cricket::VoiceMediaChannel::Error error);
+ void OnVideoChannelError(cricket::VideoChannel* video_channel, uint32 ssrc,
+ cricket::VideoMediaChannel::Error error);
+
+ // methods signaled by the transport
+ void OnRequestSignaling(cricket::Transport* transport);
+ void OnCandidatesReady(cricket::Transport* transport,
+ const std::vector<cricket::Candidate>& candidates);
+ void OnWritableState(cricket::Transport* transport);
+
+ // transport-management overrides from cricket::BaseSession
+ virtual cricket::TransportChannel* CreateChannel(
+ const std::string& content_name, const std::string& name);
+ virtual cricket::TransportChannel* GetChannel(
+ const std::string& content_name, const std::string& name);
+ virtual void DestroyChannel(
+ const std::string& content_name, const std::string& name);
+
+ virtual talk_base::Thread* worker_thread() {
+ return NULL;
+ }
+ void SendLocalDescription();
+
+ void UpdateTransportWritableState();
+ bool CheckAllTransportsWritable();
+ void StartTransportTimeout(int timeout);
+ void ClearTransportTimeout();
+ void NotifyTransportState();
+
+ cricket::SessionDescription* CreateOffer();
+ cricket::SessionDescription* CreateAnswer(
+ const cricket::SessionDescription* answer);
+
+  // From talk_base::MessageHandler.
+ virtual void OnMessage(talk_base::Message* message);
+
+ private:
+ typedef std::map<std::string, PC_Transport_Impl*> TransportChannelMap;
+
+ cricket::VideoChannel* CreateVideoChannel_w(
+ const std::string& content_name,
+ bool rtcp,
+ cricket::VoiceChannel* voice_channel);
+
+ cricket::VoiceChannel* CreateVoiceChannel_w(
+ const std::string& content_name,
+ bool rtcp);
+
+ void DestroyVoiceChannel_w(cricket::VoiceChannel* channel);
+ void DestroyVideoChannel_w(cricket::VideoChannel* channel);
+ void SignalOnWritableState_w(cricket::TransportChannel* channel);
+
+ void SetSessionState(State state);
+ void SetSessionState_w();
+ bool SetVideoCapture(bool capture);
+ cricket::CaptureResult SetVideoCapture_w(bool capture);
+ void DisableLocalCandidate(const std::string& name);
+ bool OnRemoteDescriptionUpdate(const cricket::SessionDescription* desc,
+ std::vector<cricket::Candidate>& candidates);
+ void RemoveStreamOnRequest(const cricket::Candidate& candidate);
+ void RemoveStream_w(const std::string& stream_id);
+ void RemoveAllStreams_w();
+
+ void EnableAllStreams_w();
+
+ void SendLocalDescription_w();
+
+ WebRtcChannelManager* channel_manager_;
+ std::vector<StreamInfo*> streams_;
+ TransportChannelMap transport_channels_;
+ bool all_writable_;
+ bool muted_;
+ bool camera_muted_;
+ int setup_timeout_;
+ std::vector<cricket::Candidate> local_candidates_;
+ std::vector<cricket::Candidate> remote_candidates_;
+ State session_state_;
+ bool signal_initiated_;
+};
+
+} /* namespace webrtc */
+
+#endif /* TALK_APP_WEBRTC_WEBRTCSESSIONIMPL_H_ */
diff --git a/third_party_mods/libjingle/source/talk/app/webrtcsessionimpl_unittest.cc b/third_party_mods/libjingle/source/talk/app/webrtcsessionimpl_unittest.cc
new file mode 100644
index 0000000..17e022d
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/app/webrtcsessionimpl_unittest.cc
@@ -0,0 +1,100 @@
+/*
+ * webrtcsessionimpl_unittest.cc
+ *
+ * Created on: Mar 11, 2011
+ * Author: mallinath
+ */
+
+#include "talk/base/gunit.h"
+#include "talk/base/logging.h"
+#include "talk/base/scoped_ptr.h"
+#include "talk/base/sigslot.h"
+#include "talk/app/webrtcsessionimpl.h"
+#include "talk/p2p/client/basicportallocator.h"
+#include "talk/session/phone/channelmanager.h"
+#include "talk/session/phone/fakemediaengine.h"
+#include "talk/session/phone/fakesession.h"
+
+namespace webrtc {
+using talk_base::scoped_ptr;
+
+static const char* kTestSessionId = "1234";
+
+class WebRTCSessionImplForTest : public WebRTCSessionImpl {
+ public:
+ WebRTCSessionImplForTest(const std::string& jid, const std::string& id,
+ const std::string& type,
+ const std::string& direction,
+ cricket::PortAllocator* allocator,
+ cricket::ChannelManager* channelmgr)
+ : WebRTCSessionImpl(NULL, id, type, direction, allocator, channelmgr) {
+
+ }
+
+ ~WebRTCSessionImplForTest() {
+ //Do Nothing
+ }
+
+ virtual cricket::Transport* GetTransport() {
+ return static_cast<cricket::FakeTransport*>(WebRTCSessionImpl::GetTransport());
+ }
+
+ protected:
+ virtual cricket::Transport* CreateTransport() {
+ return new cricket::FakeTransport(talk_base::Thread::Current(), talk_base::Thread::Current());
+ }
+
+};
+
+class WebRTCSessionImplTest : public sigslot::has_slots<>,
+ public testing::Test {
+ public:
+ WebRTCSessionImplTest() {
+ network_mgr_.reset(new talk_base::NetworkManager());
+ port_allocator_.reset(new cricket::BasicPortAllocator(network_mgr_.get()));
+ media_engine_ = new cricket::FakeMediaEngine();
+ channel_mgr_.reset(new cricket::ChannelManager(talk_base::Thread::Current()));
+ channel_mgr_.reset(NULL);
+
+ }
+ ~WebRTCSessionImplTest() {
+
+ }
+
+ void CreateSession(const std::string& jid, const std::string& id,
+ const std::string& type, const std::string& dir) {
+ session_.reset(new WebRTCSessionImplForTest(jid, id, type, dir,
+ port_allocator_.get(),
+ channel_mgr_.get()));
+ }
+ bool InitiateCall(const std::string& jid, const std::string& id,
+ const std::string& type, const std::string& dir) {
+ CreateSession(jid, id, type, dir);
+ bool ret = session_->Initiate();
+ return ret;
+ }
+
+ bool GetCandidates() {
+ return InitiateCall("", kTestSessionId, "t", "s");
+  }
+
+ protected:
+ scoped_ptr<talk_base::NetworkManager> network_mgr_;
+ scoped_ptr<cricket::BasicPortAllocator> port_allocator_;
+ cricket::FakeMediaEngine* media_engine_;
+ scoped_ptr<cricket::ChannelManager> channel_mgr_;
+ scoped_ptr<WebRTCSessionImplForTest> session_;
+
+};
+
+TEST_F(WebRTCSessionImplTest, TestGetCandidatesCall) {
+ EXPECT_TRUE(GetCandidates());
+ EXPECT_EQ(cricket::Session::STATE_INIT, session_->state());
+ EXPECT_EQ(kTestSessionId, session_->id());
+ EXPECT_EQ(WebRTCSession::kTestType, session_->type());
+ EXPECT_FALSE(session_->incoming());
+}
+
+} /* namespace webrtc */
diff --git a/third_party_mods/libjingle/source/talk/base/json.cc b/third_party_mods/libjingle/source/talk/base/json.cc
new file mode 100644
index 0000000..620c0e0
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/base/json.cc
@@ -0,0 +1,217 @@
+/*
+ * libjingle
+ * Copyright 2004--2005, Google Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ * 3. The name of the author may not be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
+ * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
+ * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
+ * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
+ * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "talk/base/json.h"
+
+#include <errno.h>
+
+#include <climits>
+#include <cstdlib>
+#include <sstream>
+
+bool GetStringFromJson(const Json::Value& in, std::string* out) {
+ if (!in.isString()) {
+ std::ostringstream s;
+ if (in.isBool()) {
+ s << std::boolalpha << in.asBool();
+ } else if (in.isInt()) {
+ s << in.asInt();
+ } else if (in.isUInt()) {
+ s << in.asUInt();
+ } else if (in.isDouble()) {
+ s << in.asDouble();
+ } else {
+ return false;
+ }
+ *out = s.str();
+ } else {
+ *out = in.asString();
+ }
+ return true;
+}
+
+bool GetIntFromJson(const Json::Value& in, int* out) {
+ bool ret;
+ if (!in.isString()) {
+ ret = in.isConvertibleTo(Json::intValue);
+ if (ret) {
+ *out = in.asInt();
+ }
+ } else {
+ long val; // NOLINT
+ const char* c_str = in.asCString();
+ char* end_ptr;
+ errno = 0;
+ val = strtol(c_str, &end_ptr, 10); // NOLINT
+ ret = (end_ptr != c_str && *end_ptr == '\0' && !errno &&
+ val >= INT_MIN && val <= INT_MAX);
+ *out = val;
+ }
+ return ret;
+}
+
+bool GetUIntFromJson(const Json::Value& in, unsigned int* out) {
+ bool ret;
+ if (!in.isString()) {
+ ret = in.isConvertibleTo(Json::uintValue);
+ if (ret) {
+ *out = in.asUInt();
+ }
+ } else {
+ unsigned long val; // NOLINT
+ const char* c_str = in.asCString();
+ char* end_ptr;
+ errno = 0;
+ val = strtoul(c_str, &end_ptr, 10); // NOLINT
+ ret = (end_ptr != c_str && *end_ptr == '\0' && !errno &&
+ val <= UINT_MAX);
+ *out = val;
+ }
+ return ret;
+}
+
+bool GetBoolFromJson(const Json::Value& in, bool* out) {
+ bool ret;
+ if (!in.isString()) {
+ ret = in.isConvertibleTo(Json::booleanValue);
+ if (ret) {
+ *out = in.asBool();
+ }
+ } else {
+ if (in.asString() == "true") {
+ *out = true;
+ ret = true;
+ } else if (in.asString() == "false") {
+ *out = false;
+ ret = true;
+ } else {
+ ret = false;
+ }
+ }
+ return ret;
+}
+
+bool GetValueFromJsonArray(const Json::Value& in, size_t n,
+ Json::Value* out) {
+ if (!in.isArray() || !in.isValidIndex(n)) {
+ return false;
+ }
+
+ *out = in[n];
+ return true;
+}
+
+bool GetIntFromJsonArray(const Json::Value& in, size_t n,
+ int* out) {
+ Json::Value x;
+ return GetValueFromJsonArray(in, n, &x) && GetIntFromJson(x, out);
+}
+
+bool GetUIntFromJsonArray(const Json::Value& in, size_t n,
+ unsigned int* out) {
+ Json::Value x;
+ return GetValueFromJsonArray(in, n, &x) && GetUIntFromJson(x, out);
+}
+
+bool GetStringFromJsonArray(const Json::Value& in, size_t n,
+ std::string* out) {
+ Json::Value x;
+ return GetValueFromJsonArray(in, n, &x) && GetStringFromJson(x, out);
+}
+
+bool GetBoolFromJsonArray(const Json::Value& in, size_t n,
+ bool* out) {
+ Json::Value x;
+ return GetValueFromJsonArray(in, n, &x) && GetBoolFromJson(x, out);
+}
+
+bool GetValueFromJsonObject(const Json::Value& in, const std::string& k,
+ Json::Value* out) {
+ if (!in.isObject() || !in.isMember(k)) {
+ return false;
+ }
+
+ *out = in[k];
+ return true;
+}
+
+bool GetIntFromJsonObject(const Json::Value& in, const std::string& k,
+ int* out) {
+ Json::Value x;
+ return GetValueFromJsonObject(in, k, &x) && GetIntFromJson(x, out);
+}
+
+bool GetUIntFromJsonObject(const Json::Value& in, const std::string& k,
+ unsigned int* out) {
+ Json::Value x;
+ return GetValueFromJsonObject(in, k, &x) && GetUIntFromJson(x, out);
+}
+
+bool GetStringFromJsonObject(const Json::Value& in, const std::string& k,
+ std::string* out) {
+ Json::Value x;
+ return GetValueFromJsonObject(in, k, &x) && GetStringFromJson(x, out);
+}
+
+bool GetBoolFromJsonObject(const Json::Value& in, const std::string& k,
+ bool* out) {
+ Json::Value x;
+ return GetValueFromJsonObject(in, k, &x) && GetBoolFromJson(x, out);
+}
+
+Json::Value StringVectorToJsonValue(const std::vector<std::string>& strings) {
+ Json::Value result(Json::arrayValue);
+ for (size_t i = 0; i < strings.size(); ++i) {
+ result.append(Json::Value(strings[i]));
+ }
+ return result;
+}
+
+bool JsonValueToStringVector(const Json::Value& value,
+ std::vector<std::string> *strings) {
+ strings->clear();
+ if (!value.isArray()) {
+ return false;
+ }
+
+ for (size_t i = 0; i < value.size(); ++i) {
+ if (value[i].isString()) {
+ strings->push_back(value[i].asString());
+ } else {
+ return false;
+ }
+ }
+
+ return true;
+}
+
+std::string JsonValueToString(const Json::Value& json) {
+ Json::FastWriter w;
+ std::string value = w.write(json);
+ return value.substr(0, value.size() - 1); // trim trailing newline
+}
diff --git a/third_party_mods/libjingle/source/talk/base/json.h b/third_party_mods/libjingle/source/talk/base/json.h
new file mode 100644
index 0000000..cb8266f
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/base/json.h
@@ -0,0 +1,80 @@
+/*
+ * libjingle
+ * Copyright 2004--2005, Google Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ * 3. The name of the author may not be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
+ * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
+ * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
+ * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
+ * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef TALK_BASE_JSON_H_
+#define TALK_BASE_JSON_H_
+
+#include <string>
+#include <vector>
+
+#include "json/json.h"
+
+// TODO(juberti): Move to talk_base namespace
+
+///////////////////////////////////////////////////////////////////////////////
+// JSON Helpers
+///////////////////////////////////////////////////////////////////////////////
+
+// Robust conversion operators, better than the ones in JsonCpp.
+bool GetIntFromJson(const Json::Value& in, int* out);
+bool GetUIntFromJson(const Json::Value& in, unsigned int* out);
+bool GetStringFromJson(const Json::Value& in, std::string* out);
+bool GetBoolFromJson(const Json::Value& in, bool* out);
+
+// Pull values out of a JSON array.
+bool GetValueFromJsonArray(const Json::Value& in, size_t n,
+ Json::Value* out);
+bool GetIntFromJsonArray(const Json::Value& in, size_t n,
+ int* out);
+bool GetUIntFromJsonArray(const Json::Value& in, size_t n,
+ unsigned int* out);
+bool GetStringFromJsonArray(const Json::Value& in, size_t n,
+ std::string* out);
+bool GetBoolFromJsonArray(const Json::Value& in, size_t n,
+ bool* out);
+
+// Pull values out of a JSON object.
+bool GetValueFromJsonObject(const Json::Value& in, const std::string& k,
+ Json::Value* out);
+bool GetIntFromJsonObject(const Json::Value& in, const std::string& k,
+ int* out);
+bool GetUIntFromJsonObject(const Json::Value& in, const std::string& k,
+ unsigned int* out);
+bool GetStringFromJsonObject(const Json::Value& in, const std::string& k,
+ std::string* out);
+bool GetBoolFromJsonObject(const Json::Value& in, const std::string& k,
+ bool* out);
+
+// Converts vectors of strings to/from JSON arrays.
+Json::Value StringVectorToJsonValue(const std::vector<std::string>& strings);
+bool JsonValueToStringVector(const Json::Value& value,
+ std::vector<std::string> *strings);
+
+// Writes out a Json value as a string.
+std::string JsonValueToString(const Json::Value& json);
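+
+// Example usage (illustrative only; the JSON payload below is an assumed
+// shape, not part of any defined format):
+//
+//   Json::Value msg;
+//   Json::Reader reader;
+//   if (reader.parse("{\"id\": 42, \"label\": \"audio\"}", msg)) {
+//     int id;
+//     std::string label;
+//     if (GetIntFromJsonObject(msg, "id", &id) &&
+//         GetStringFromJsonObject(msg, "label", &label)) {
+//       // Use id and label here.
+//     }
+//   }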
+
+#endif // TALK_BASE_JSON_H_
diff --git a/third_party_mods/libjingle/source/talk/p2p/base/p2ptransportchannel.cc b/third_party_mods/libjingle/source/talk/p2p/base/p2ptransportchannel.cc
new file mode 100644
index 0000000..b4f5406
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/p2p/base/p2ptransportchannel.cc
@@ -0,0 +1,972 @@
+/*
+ * libjingle
+ * Copyright 2004--2005, Google Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ * 3. The name of the author may not be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
+ * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
+ * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
+ * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
+ * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "talk/p2p/base/p2ptransportchannel.h"
+
+#include <set>
+
+#include "talk/base/buffer.h"
+#include "talk/base/common.h"
+#include "talk/base/logging.h"
+#include "talk/p2p/base/common.h"
+
+namespace {
+
+// messages for queuing up work for ourselves
+const uint32 MSG_SORT = 1;
+const uint32 MSG_PING = 2;
+const uint32 MSG_ALLOCATE = 3;
+
+#ifdef PLATFORM_CHROMIUM
+const uint32 MSG_SENDPACKET = 4;
+
+struct SendPacketParams : public talk_base::MessageData {
+ talk_base::Buffer packet;
+};
+#endif
+
+// When the socket is unwritable, we will use 10 Kbps (ignoring IP+UDP headers)
+// for pinging. When the socket is writable, we will use only 1 Kbps because
+// we don't want to degrade the quality on a modem. These numbers should work
+// well on a 28.8K modem, which is the slowest connection on which the voice
+// quality is reasonable at all.
+static const uint32 PING_PACKET_SIZE = 60 * 8;
+static const uint32 WRITABLE_DELAY = 1000 * PING_PACKET_SIZE / 1000; // 480ms
+static const uint32 UNWRITABLE_DELAY = 1000 * PING_PACKET_SIZE / 10000; // 50ms
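+// That is, one 480-bit ping roughly every 480 ms at 1 kbps when writable,
+// and roughly every 48 ms (~50 ms) at 10 kbps when unwritable.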
+
+// If there is a current writable connection, then we will also try hard to
+// make sure it is pinged at this rate.
+// A bit less than 2 * WRITABLE_DELAY.
+static const uint32 MAX_CURRENT_WRITABLE_DELAY = 900;
+
+// The minimum improvement in RTT that justifies a switch.
+static const double kMinImprovement = 10;
+
+// Amount of time that we wait when *losing* writability before we try doing
+// another allocation.
+static const int kAllocateDelay = 1 * 1000; // 1 second
+
+// We will try creating a new allocator from scratch after a delay of this
+// length without becoming writable (or timing out).
+static const int kAllocatePeriod = 20 * 1000; // 20 seconds
+
+cricket::Port::CandidateOrigin GetOrigin(cricket::Port* port,
+ cricket::Port* origin_port) {
+ if (!origin_port)
+ return cricket::Port::ORIGIN_MESSAGE;
+ else if (port == origin_port)
+ return cricket::Port::ORIGIN_THIS_PORT;
+ else
+ return cricket::Port::ORIGIN_OTHER_PORT;
+}
+
+// Compares two connections based only on static information about them.
+int CompareConnectionCandidates(cricket::Connection* a,
+ cricket::Connection* b) {
+ // Combine local and remote preferences
+ ASSERT(a->local_candidate().preference() == a->port()->preference());
+ ASSERT(b->local_candidate().preference() == b->port()->preference());
+ double a_pref = a->local_candidate().preference()
+ * a->remote_candidate().preference();
+ double b_pref = b->local_candidate().preference()
+ * b->remote_candidate().preference();
+
+ // Now check combined preferences. Lower values get sorted last.
+ if (a_pref > b_pref)
+ return 1;
+ if (a_pref < b_pref)
+ return -1;
+
+ // If we're still tied at this point, prefer a younger generation.
+ return (a->remote_candidate().generation() + a->port()->generation()) -
+ (b->remote_candidate().generation() + b->port()->generation());
+}
+
+// Compare two connections based on their writability and static preferences.
+int CompareConnections(cricket::Connection *a, cricket::Connection *b) {
+ // Sort based on write-state. Better states have lower values.
+ if (a->write_state() < b->write_state())
+ return 1;
+ if (a->write_state() > b->write_state())
+ return -1;
+
+ // Compare the candidate information.
+ return CompareConnectionCandidates(a, b);
+}
+
+// Wraps the comparison connection into a less than operator that puts higher
+// priority writable connections first.
+class ConnectionCompare {
+ public:
+ bool operator()(const cricket::Connection *ca,
+ const cricket::Connection *cb) {
+ cricket::Connection* a = const_cast<cricket::Connection*>(ca);
+ cricket::Connection* b = const_cast<cricket::Connection*>(cb);
+
+ // Compare first on writability and static preferences.
+ int cmp = CompareConnections(a, b);
+ if (cmp > 0)
+ return true;
+ if (cmp < 0)
+ return false;
+
+ // Otherwise, sort based on latency estimate.
+ return a->rtt() < b->rtt();
+
+    // Should we bother checking for the connection that last received
+    // data? It would help rendezvous on the connection that is also
+    // receiving packets.
+    //
+    // TODO: Yes, we should definitely do this. The TCP protocol gains
+    // efficiency by being used bidirectionally, as opposed to two separate
+    // unidirectional streams. This test should probably occur before
+    // comparison of local prefs (assuming combined prefs are the same). We
+    // need to be careful, though, not to bounce back and forth with both
+    // sides trying to rendezvous with the other.
+ }
+};
+
+// Determines whether we should switch between two connections, based first on
+// static preferences and then (if those are equal) on latency estimates.
+bool ShouldSwitch(cricket::Connection* a_conn, cricket::Connection* b_conn) {
+ if (a_conn == b_conn)
+ return false;
+
+ if (!a_conn || !b_conn) // don't think the latter should happen
+ return true;
+
+ int prefs_cmp = CompareConnections(a_conn, b_conn);
+ if (prefs_cmp < 0)
+ return true;
+ if (prefs_cmp > 0)
+ return false;
+
+ return b_conn->rtt() <= a_conn->rtt() + kMinImprovement;
+}
+
+} // unnamed namespace
+
+namespace cricket {
+
+P2PTransportChannel::P2PTransportChannel(const std::string &name,
+ const std::string &content_type,
+ P2PTransport* transport,
+ PortAllocator *allocator) :
+ TransportChannelImpl(name, content_type),
+ transport_(transport),
+ allocator_(allocator),
+ worker_thread_(talk_base::Thread::Current()),
+ incoming_only_(false),
+ waiting_for_signaling_(false),
+ error_(0),
+ best_connection_(NULL),
+ pinging_started_(false),
+ sort_dirty_(false),
+ was_writable_(false),
+ was_timed_out_(true) {
+}
+
+P2PTransportChannel::~P2PTransportChannel() {
+ ASSERT(worker_thread_ == talk_base::Thread::Current());
+
+ for (uint32 i = 0; i < allocator_sessions_.size(); ++i)
+ delete allocator_sessions_[i];
+}
+
+// Add the allocator session to our list so that we know which sessions
+// are still active.
+void P2PTransportChannel::AddAllocatorSession(PortAllocatorSession* session) {
+ session->set_generation(static_cast<uint32>(allocator_sessions_.size()));
+ allocator_sessions_.push_back(session);
+
+ // We now only want to apply new candidates that we receive to the ports
+ // created by this new session because these are replacing those of the
+ // previous sessions.
+ ports_.clear();
+
+ session->SignalPortReady.connect(this, &P2PTransportChannel::OnPortReady);
+ session->SignalCandidatesReady.connect(
+ this, &P2PTransportChannel::OnCandidatesReady);
+ session->GetInitialPorts();
+ if (pinging_started_)
+ session->StartGetAllPorts();
+}
+
+// Go into the state of processing candidates, and running in general
+void P2PTransportChannel::Connect() {
+ ASSERT(worker_thread_ == talk_base::Thread::Current());
+
+ // Kick off an allocator session
+ Allocate();
+
+ // Start pinging as the ports come in.
+ thread()->Post(this, MSG_PING);
+}
+
+// Reset the socket, clear up any previous allocations and start over
+void P2PTransportChannel::Reset() {
+ ASSERT(worker_thread_ == talk_base::Thread::Current());
+
+ // Get rid of all the old allocators. This should clean up everything.
+ for (uint32 i = 0; i < allocator_sessions_.size(); ++i)
+ delete allocator_sessions_[i];
+
+ allocator_sessions_.clear();
+ ports_.clear();
+ connections_.clear();
+ best_connection_ = NULL;
+
+ // Forget about all of the candidates we got before.
+ remote_candidates_.clear();
+
+ // Revert to the initial state.
+ set_readable(false);
+ set_writable(false);
+
+ // Reinitialize the rest of our state.
+ waiting_for_signaling_ = false;
+ pinging_started_ = false;
+ sort_dirty_ = false;
+ was_writable_ = false;
+ was_timed_out_ = true;
+
+ // If we allocated before, start a new one now.
+ if (transport_->connect_requested())
+ Allocate();
+
+ // Start pinging as the ports come in.
+ thread()->Clear(this);
+ thread()->Post(this, MSG_PING);
+}
+
+// A new port is available, attempt to make connections for it
+void P2PTransportChannel::OnPortReady(PortAllocatorSession *session,
+ Port* port) {
+ ASSERT(worker_thread_ == talk_base::Thread::Current());
+
+ // Set in-effect options on the new port
+ for (OptionMap::const_iterator it = options_.begin();
+ it != options_.end();
+ ++it) {
+ int val = port->SetOption(it->first, it->second);
+ if (val < 0) {
+ LOG_J(LS_WARNING, port) << "SetOption(" << it->first
+ << ", " << it->second
+ << ") failed: " << port->GetError();
+ }
+ }
+
+ // Remember the ports and candidates, and signal that candidates are ready.
+ // The session will handle this, and send an initiate/accept/modify message
+ // if one is pending.
+
+ ports_.push_back(port);
+ port->SignalUnknownAddress.connect(
+ this, &P2PTransportChannel::OnUnknownAddress);
+ port->SignalDestroyed.connect(this, &P2PTransportChannel::OnPortDestroyed);
+
+ // Attempt to create a connection from this new port to all of the remote
+ // candidates that we were given so far.
+
+ std::vector<RemoteCandidate>::iterator iter;
+ for (iter = remote_candidates_.begin(); iter != remote_candidates_.end();
+ ++iter) {
+ CreateConnection(port, *iter, iter->origin_port(), false);
+ }
+
+ SortConnections();
+}
+
+// A new candidate is available, let listeners know
+void P2PTransportChannel::OnCandidatesReady(
+ PortAllocatorSession *session, const std::vector<Candidate>& candidates) {
+ for (size_t i = 0; i < candidates.size(); ++i) {
+ SignalCandidateReady(this, candidates[i]);
+ }
+}
+
+// Handle stun packets
+void P2PTransportChannel::OnUnknownAddress(
+ Port *port, const talk_base::SocketAddress &address, StunMessage *stun_msg,
+ const std::string &remote_username) {
+ ASSERT(worker_thread_ == talk_base::Thread::Current());
+
+ // Port has received a valid stun packet from an address that no Connection
+ // is currently available for. See if the remote user name is in the remote
+ // candidate list. If it isn't return error to the stun request.
+
+ const Candidate *candidate = NULL;
+ std::vector<RemoteCandidate>::iterator it;
+ for (it = remote_candidates_.begin(); it != remote_candidates_.end(); ++it) {
+ if ((*it).username() == remote_username) {
+ candidate = &(*it);
+ break;
+ }
+ }
+ if (candidate == NULL) {
+ // Don't know about this username, the request is bogus
+ // This sometimes happens if a binding response comes in before the ACCEPT
+ // message. It is totally valid; the retry state machine will try again.
+
+ port->SendBindingErrorResponse(stun_msg, address,
+ STUN_ERROR_STALE_CREDENTIALS, STUN_ERROR_REASON_STALE_CREDENTIALS);
+ delete stun_msg;
+ return;
+ }
+
+ // Check for connectivity to this address. Create connections
+ // to this address across all local ports. First, add this as a new remote
+ // address
+
+ Candidate new_remote_candidate = *candidate;
+ new_remote_candidate.set_address(address);
+ // new_remote_candidate.set_protocol(port->protocol());
+
+ // This remote username exists. Now create connections using this candidate,
+ // and resort
+
+ if (CreateConnections(new_remote_candidate, port, true)) {
+ // Send the pinger a successful stun response.
+ port->SendBindingResponse(stun_msg, address);
+
+ // Update the list of connections since we just added another. We do this
+ // after sending the response since it could (in principle) delete the
+ // connection in question.
+ SortConnections();
+ } else {
+ // Hopefully this won't occur, because changing a destination address
+ // shouldn't cause a new connection to fail
+ ASSERT(false);
+ port->SendBindingErrorResponse(stun_msg, address, STUN_ERROR_SERVER_ERROR,
+ STUN_ERROR_REASON_SERVER_ERROR);
+ }
+
+ delete stun_msg;
+}
+
+void P2PTransportChannel::OnCandidate(const Candidate& candidate) {
+ ASSERT(worker_thread_ == talk_base::Thread::Current());
+
+ // Create connections to this remote candidate.
+ CreateConnections(candidate, NULL, false);
+
+ // Resort the connections list, which may have new elements.
+ SortConnections();
+}
+
+// Creates connections from all of the ports that we care about to the given
+// remote candidate. The return value is true if we created a connection from
+// the origin port.
+bool P2PTransportChannel::CreateConnections(const Candidate &remote_candidate,
+ Port* origin_port,
+ bool readable) {
+ ASSERT(worker_thread_ == talk_base::Thread::Current());
+
+ // Add a new connection for this candidate to every port that allows such a
+ // connection (i.e., if they have compatible protocols) and that does not
+ // already have a connection to an equivalent candidate. We must be careful
+ // to make sure that the origin port is included, even if it was pruned,
+ // since that may be the only port that can create this connection.
+
+ bool created = false;
+
+ std::vector<Port *>::reverse_iterator it;
+ for (it = ports_.rbegin(); it != ports_.rend(); ++it) {
+ if (CreateConnection(*it, remote_candidate, origin_port, readable)) {
+ if (*it == origin_port)
+ created = true;
+ }
+ }
+
+ if ((origin_port != NULL) &&
+ std::find(ports_.begin(), ports_.end(), origin_port) == ports_.end()) {
+ if (CreateConnection(origin_port, remote_candidate, origin_port, readable))
+ created = true;
+ }
+
+ // Remember this remote candidate so that we can add it to future ports.
+ RememberRemoteCandidate(remote_candidate, origin_port);
+
+ return created;
+}
+
+// Setup a connection object for the local and remote candidate combination.
+// And then listen to connection object for changes.
+bool P2PTransportChannel::CreateConnection(Port* port,
+ const Candidate& remote_candidate,
+ Port* origin_port,
+ bool readable) {
+ // Look for an existing connection with this remote address. If one is not
+ // found, then we can create a new connection for this address.
+ Connection* connection = port->GetConnection(remote_candidate.address());
+ if (connection != NULL) {
+ // It is not legal to try to change any of the parameters of an existing
+ // connection; however, the other side can send a duplicate candidate.
+ if (!remote_candidate.IsEquivalent(connection->remote_candidate())) {
+ LOG(INFO) << "Attempt to change a remote candidate";
+ return false;
+ }
+ } else {
+ Port::CandidateOrigin origin = GetOrigin(port, origin_port);
+
+ // Don't create connection if this is a candidate we received in a
+ // message and we are not allowed to make outgoing connections.
+ if (origin == cricket::Port::ORIGIN_MESSAGE && incoming_only_)
+ return false;
+
+ connection = port->CreateConnection(remote_candidate, origin);
+ if (!connection)
+ return false;
+
+ connections_.push_back(connection);
+ connection->SignalReadPacket.connect(
+ this, &P2PTransportChannel::OnReadPacket);
+ connection->SignalStateChange.connect(
+ this, &P2PTransportChannel::OnConnectionStateChange);
+ connection->SignalDestroyed.connect(
+ this, &P2PTransportChannel::OnConnectionDestroyed);
+
+ LOG_J(LS_INFO, this) << "Created connection with origin=" << origin << ", ("
+ << connections_.size() << " total)";
+ }
+
+ // If we are readable, it is because we are creating this in response to a
+ // ping from the other side. This will cause the state to become readable.
+ if (readable)
+ connection->ReceivedPing();
+
+ return true;
+}
+
+// Maintain our remote candidate list, adding this new remote one.
+void P2PTransportChannel::RememberRemoteCandidate(
+ const Candidate& remote_candidate, Port* origin_port) {
+ // Remove any candidates whose generation is older than this one. The
+ // presence of a new generation indicates that the old ones are not useful.
+ uint32 i = 0;
+ while (i < remote_candidates_.size()) {
+ if (remote_candidates_[i].generation() < remote_candidate.generation()) {
+ LOG(INFO) << "Pruning candidate from old generation: "
+ << remote_candidates_[i].address().ToString();
+ remote_candidates_.erase(remote_candidates_.begin() + i);
+ } else {
+ i += 1;
+ }
+ }
+
+ // Make sure this candidate is not a duplicate.
+ for (uint32 i = 0; i < remote_candidates_.size(); ++i) {
+ if (remote_candidates_[i].IsEquivalent(remote_candidate)) {
+ LOG(INFO) << "Duplicate candidate: "
+ << remote_candidate.address().ToString();
+ return;
+ }
+ }
+
+ // Try this candidate for all future ports.
+ remote_candidates_.push_back(RemoteCandidate(remote_candidate, origin_port));
+
+ // We have some candidates from the other side, we are now serious about
+ // this connection. Let's do the StartGetAllPorts thing.
+ if (!pinging_started_) {
+ pinging_started_ = true;
+ for (size_t i = 0; i < allocator_sessions_.size(); ++i) {
+ if (!allocator_sessions_[i]->IsGettingAllPorts())
+ allocator_sessions_[i]->StartGetAllPorts();
+ }
+ }
+}
+
+// Send data to the other side, using our best connection
+int P2PTransportChannel::SendPacket(talk_base::Buffer* packet) {
+#ifdef PLATFORM_CHROMIUM
+  if (worker_thread_ != talk_base::Thread::Current()) {
+    SendPacketParams* params = new SendPacketParams;
+    packet->TransferTo(&params->packet);
+ worker_thread_->Post(this, MSG_SENDPACKET, params);
+ return params->packet.length();
+ }
+#endif
+
+ return SendPacket(packet->data(), packet->length());
+}
+
+// Send data to the other side, using our best connection
+int P2PTransportChannel::SendPacket(const char *data, size_t len) {
+ // This can get called on any thread that is convenient to write from!
+ if (best_connection_ == NULL) {
+ error_ = EWOULDBLOCK;
+ return SOCKET_ERROR;
+ }
+ int sent = best_connection_->Send(data, len);
+ if (sent <= 0) {
+ ASSERT(sent < 0);
+ error_ = best_connection_->GetError();
+ }
+ return sent;
+}
+
+// Begin allocate (or immediately re-allocate, if MSG_ALLOCATE pending)
+void P2PTransportChannel::Allocate() {
+ CancelPendingAllocate();
+  // Time for a new allocator; let's make sure we have a signaling channel
+  // to communicate candidates through first.
+ waiting_for_signaling_ = true;
+ SignalRequestSignaling();
+}
+
+// Cancels the pending allocate, if any.
+void P2PTransportChannel::CancelPendingAllocate() {
+ thread()->Clear(this, MSG_ALLOCATE);
+}
+
+// Monitor connection states
+void P2PTransportChannel::UpdateConnectionStates() {
+ uint32 now = talk_base::Time();
+
+ // We need to copy the list of connections since some may delete themselves
+ // when we call UpdateState.
+ for (uint32 i = 0; i < connections_.size(); ++i)
+ connections_[i]->UpdateState(now);
+}
+
+// Prepare for best candidate sorting
+void P2PTransportChannel::RequestSort() {
+ if (!sort_dirty_) {
+ worker_thread_->Post(this, MSG_SORT);
+ sort_dirty_ = true;
+ }
+}
+
+// Sort the available connections to find the best one. We also monitor
+// the number of available connections and the current state so that we
+// can possibly kick off more allocators (for more connections).
+void P2PTransportChannel::SortConnections() {
+ ASSERT(worker_thread_ == talk_base::Thread::Current());
+
+ // Make sure the connection states are up-to-date since this affects how they
+ // will be sorted.
+ UpdateConnectionStates();
+
+ // Any changes after this point will require a re-sort.
+ sort_dirty_ = false;
+
+ // Get a list of the networks that we are using.
+ std::set<talk_base::Network*> networks;
+ for (uint32 i = 0; i < connections_.size(); ++i)
+ networks.insert(connections_[i]->port()->network());
+
+ // Find the best alternative connection by sorting. It is important to note
+ // that amongst equal preference, writable connections, this will choose the
+ // one whose estimated latency is lowest. So it is the only one that we
+ // need to consider switching to.
+
+ ConnectionCompare cmp;
+ std::stable_sort(connections_.begin(), connections_.end(), cmp);
+ LOG(LS_VERBOSE) << "Sorting available connections:";
+ for (uint32 i = 0; i < connections_.size(); ++i) {
+ LOG(LS_VERBOSE) << connections_[i]->ToString();
+ }
+
+ Connection* top_connection = NULL;
+ if (connections_.size() > 0)
+ top_connection = connections_[0];
+
+ // If necessary, switch to the new choice.
+ if (ShouldSwitch(best_connection_, top_connection))
+ SwitchBestConnectionTo(top_connection);
+
+  // We can prune any connection for which there is a writable connection on
+  // the same network with better or equal preference. We leave those with
+  // better preference just in case they become writable later (at which
+  // point, we would prune out the current best connection). We leave
+  // connections on other networks because they may not be using the same
+  // resources and they may represent very distinct paths over which we can
+  // switch.
+  std::set<talk_base::Network*>::iterator network;
+  for (network = networks.begin(); network != networks.end(); ++network) {
+    Connection* premier = GetBestConnectionOnNetwork(*network);
+    if (!premier || (premier->write_state() != Connection::STATE_WRITABLE))
+      continue;
+
+    for (uint32 i = 0; i < connections_.size(); ++i) {
+      if ((connections_[i] != premier) &&
+          (connections_[i]->port()->network() == *network) &&
+          (CompareConnectionCandidates(premier, connections_[i]) >= 0)) {
+        connections_[i]->Prune();
+      }
+    }
+  }
+
+ // Count the number of connections in the various states.
+
+ int writable = 0;
+ int write_connect = 0;
+ int write_timeout = 0;
+
+ for (uint32 i = 0; i < connections_.size(); ++i) {
+ switch (connections_[i]->write_state()) {
+ case Connection::STATE_WRITABLE:
+ ++writable;
+ break;
+ case Connection::STATE_WRITE_CONNECT:
+ ++write_connect;
+ break;
+ case Connection::STATE_WRITE_TIMEOUT:
+ ++write_timeout;
+ break;
+ default:
+ ASSERT(false);
+ }
+ }
+
+ if (writable > 0) {
+ HandleWritable();
+ } else if (write_connect > 0) {
+ HandleNotWritable();
+ } else {
+ HandleAllTimedOut();
+ }
+
+ // Update the state of this channel. This method is called whenever the
+ // state of any connection changes, so this is a good place to do this.
+ UpdateChannelState();
+
+ // Notify of connection state change
+ SignalConnectionMonitor(this);
+}
+
+// Track the best connection, and let listeners know
+void P2PTransportChannel::SwitchBestConnectionTo(Connection* conn) {
+  // Note: if conn is NULL, the previous best_connection_ has been destroyed,
+  // so don't use it.
+ Connection* old_best_connection = best_connection_;
+ best_connection_ = conn;
+ if (best_connection_) {
+ if (old_best_connection) {
+ LOG_J(LS_INFO, this) << "Previous best connection: "
+ << old_best_connection->ToString();
+ }
+ LOG_J(LS_INFO, this) << "New best connection: "
+ << best_connection_->ToString();
+ SignalRouteChange(this, best_connection_->remote_candidate());
+ } else {
+ LOG_J(LS_INFO, this) << "No best connection";
+ }
+}
+
+void P2PTransportChannel::UpdateChannelState() {
+ // The Handle* functions already set the writable state. We'll just double-
+ // check it here.
+ bool writable = ((best_connection_ != NULL) &&
+ (best_connection_->write_state() ==
+ Connection::STATE_WRITABLE));
+ ASSERT(writable == this->writable());
+ if (writable != this->writable())
+ LOG(LS_ERROR) << "UpdateChannelState: writable state mismatch";
+
+ bool readable = false;
+ for (uint32 i = 0; i < connections_.size(); ++i) {
+ if (connections_[i]->read_state() == Connection::STATE_READABLE)
+ readable = true;
+ }
+ set_readable(readable);
+}
+
+// We checked the status of our connections and we had at least one that
+// was writable, so go into the writable state.
+void P2PTransportChannel::HandleWritable() {
+ //
+ // One or more connections writable!
+ //
+ if (!writable()) {
+ for (uint32 i = 0; i < allocator_sessions_.size(); ++i) {
+ if (allocator_sessions_[i]->IsGettingAllPorts()) {
+ allocator_sessions_[i]->StopGetAllPorts();
+ }
+ }
+
+ // Stop further allocations.
+ CancelPendingAllocate();
+ }
+
+ // We're writable, obviously we aren't timed out
+ was_writable_ = true;
+ was_timed_out_ = false;
+ set_writable(true);
+}
+
+// We checked the status of our connections and we didn't have any that
+// were writable, so go into the connecting state (kick off a new allocator
+// session).
+void P2PTransportChannel::HandleNotWritable() {
+ //
+ // No connections are writable, but at least one is still connecting!
+ //
+ if (was_writable_) {
+ // If we were writable, let's kick off an allocator session immediately
+ was_writable_ = false;
+ Allocate();
+ }
+
+ // We were connecting, obviously not ALL timed out.
+ was_timed_out_ = false;
+ set_writable(false);
+}
+
+// We checked the status of our connections and not only weren't they writable,
+// but they were also timed out; we really need a new allocator.
+void P2PTransportChannel::HandleAllTimedOut() {
+ //
+ // No connections... all are timed out!
+ //
+ if (!was_timed_out_) {
+ // We weren't timed out before, so kick off an allocator now (we'll still
+ // be in the fully timed out state until the allocator actually gives back
+ // new ports)
+ Allocate();
+ }
+
+ // NOTE: we start was_timed_out_ in the true state so that we don't get
+ // another allocator created WHILE we are in the process of building up
+ // our first allocator.
+ was_timed_out_ = true;
+ was_writable_ = false;
+ set_writable(false);
+}
+
+// If the best connection is on the given network, return it; otherwise return
+// the top one for that network in the sorted list.
+Connection* P2PTransportChannel::GetBestConnectionOnNetwork(
+ talk_base::Network* network) {
+ // If the best connection is on this network, then it wins.
+ if (best_connection_ && (best_connection_->port()->network() == network))
+ return best_connection_;
+
+ // Otherwise, we return the top-most in sorted order.
+ for (uint32 i = 0; i < connections_.size(); ++i) {
+ if (connections_[i]->port()->network() == network)
+ return connections_[i];
+ }
+
+ return NULL;
+}
+
+// Handle any queued up requests
+void P2PTransportChannel::OnMessage(talk_base::Message *pmsg) {
+ if (pmsg->message_id == MSG_SORT)
+ OnSort();
+ else if (pmsg->message_id == MSG_PING)
+ OnPing();
+ else if (pmsg->message_id == MSG_ALLOCATE)
+ Allocate();
+#ifdef PLATFORM_CHROMIUM
+ else if (pmsg->message_id == MSG_SENDPACKET) {
+ SendPacketParams* data = static_cast<SendPacketParams*>(pmsg->pdata);
+ SendPacket(&data->packet);
+ delete data; // because it is Posted
+ }
+#endif
+ else
+ ASSERT(false);
+}
+
+// Handle queued up sort request
+void P2PTransportChannel::OnSort() {
+ // Resort the connections based on the new statistics.
+ SortConnections();
+}
+
+// Handle queued up ping request
+void P2PTransportChannel::OnPing() {
+ // Make sure the states of the connections are up-to-date (since this affects
+ // which ones are pingable).
+ UpdateConnectionStates();
+
+ // Find the oldest pingable connection and have it do a ping.
+ Connection* conn = FindNextPingableConnection();
+ if (conn)
+ conn->Ping(talk_base::Time());
+
+ // Post ourselves a message to perform the next ping.
+ uint32 delay = writable() ? WRITABLE_DELAY : UNWRITABLE_DELAY;
+ thread()->PostDelayed(delay, this, MSG_PING);
+}
+
+// Is the connection in a state for us to even consider pinging the other side?
+bool P2PTransportChannel::IsPingable(Connection* conn) {
+ // An unconnected connection cannot be written to at all, so pinging is out
+ // of the question.
+ if (!conn->connected())
+ return false;
+
+ if (writable()) {
+ // If we are writable, then we only want to ping connections that could be
+ // better than this one, i.e., the ones that were not pruned.
+ return (conn->write_state() != Connection::STATE_WRITE_TIMEOUT);
+ } else {
+ // If we are not writable, then we need to try everything that might work.
+ // This includes both connections that do not have write timeout as well as
+ // ones that do not have read timeout. A connection could be readable but
+ // be in write-timeout if we pruned it before. Since the other side is
+ // still pinging it, it very well might still work.
+ return (conn->write_state() != Connection::STATE_WRITE_TIMEOUT) ||
+ (conn->read_state() != Connection::STATE_READ_TIMEOUT);
+ }
+}
+
+// Returns the next pingable connection to ping. This will be the oldest
+// pingable connection unless we have a writable connection that is past the
+// maximum acceptable ping delay.
+Connection* P2PTransportChannel::FindNextPingableConnection() {
+ uint32 now = talk_base::Time();
+ if (best_connection_ &&
+ (best_connection_->write_state() == Connection::STATE_WRITABLE) &&
+ (best_connection_->last_ping_sent()
+ + MAX_CURRENT_WRITABLE_DELAY <= now)) {
+ return best_connection_;
+ }
+
+ Connection* oldest_conn = NULL;
+ uint32 oldest_time = 0xFFFFFFFF;
+ for (uint32 i = 0; i < connections_.size(); ++i) {
+ if (IsPingable(connections_[i])) {
+ if (connections_[i]->last_ping_sent() < oldest_time) {
+ oldest_time = connections_[i]->last_ping_sent();
+ oldest_conn = connections_[i];
+ }
+ }
+ }
+ return oldest_conn;
+}
+
+// Returns the number of "pingable" connections.
+uint32 P2PTransportChannel::NumPingableConnections() {
+ uint32 count = 0;
+ for (uint32 i = 0; i < connections_.size(); ++i) {
+ if (IsPingable(connections_[i]))
+ count += 1;
+ }
+ return count;
+}
+
+// When a connection's state changes, we need to figure out which connection to
+// use as the best one again. It could have become usable, or become unusable.
+void P2PTransportChannel::OnConnectionStateChange(Connection *connection) {
+ ASSERT(worker_thread_ == talk_base::Thread::Current());
+
+ // We have to unroll the stack before doing this because we may be changing
+ // the state of connections while sorting.
+ RequestSort();
+}
+
+// When a connection is removed, edit it out, and then update our best
+// connection.
+void P2PTransportChannel::OnConnectionDestroyed(Connection *connection) {
+ ASSERT(worker_thread_ == talk_base::Thread::Current());
+
+ // Note: the previous best_connection_ may be destroyed by now, so don't
+ // use it.
+
+ // Remove this connection from the list.
+ std::vector<Connection*>::iterator iter =
+ std::find(connections_.begin(), connections_.end(), connection);
+ ASSERT(iter != connections_.end());
+ connections_.erase(iter);
+
+ LOG_J(LS_INFO, this) << "Removed connection ("
+ << static_cast<int>(connections_.size()) << " remaining)";
+
+ // If this is currently the best connection, then we need to pick a new one.
+ // The call to SortConnections will pick a new one. It looks at the current
+ // best connection in order to avoid switching between fairly similar ones.
+ // Since this connection is no longer an option, we can just set best to NULL
+ // and re-choose a best assuming that there was no best connection.
+ if (best_connection_ == connection) {
+ SwitchBestConnectionTo(NULL);
+ RequestSort();
+ }
+}
+
+// When a port is destroyed, remove it from our list of ports to use for
+// connection attempts.
+void P2PTransportChannel::OnPortDestroyed(Port* port) {
+ ASSERT(worker_thread_ == talk_base::Thread::Current());
+
+ // Remove this port from the list (if we didn't drop it already).
+ std::vector<Port*>::iterator iter =
+ std::find(ports_.begin(), ports_.end(), port);
+ if (iter != ports_.end())
+ ports_.erase(iter);
+
+ LOG(INFO) << "Removed port from p2p socket: "
+ << static_cast<int>(ports_.size()) << " remaining";
+}
+
+// When data is available, let listeners know
+void P2PTransportChannel::OnReadPacket(Connection *connection,
+ const char *data, size_t len) {
+ ASSERT(worker_thread_ == talk_base::Thread::Current());
+
+ // Let the client know of an incoming packet
+
+ SignalReadPacket(this, data, len);
+}
+
+// Setting options on ourselves simply means setting options on all of our
+// available port objects.
+int P2PTransportChannel::SetOption(talk_base::Socket::Option opt, int value) {
+ OptionMap::iterator it = options_.find(opt);
+ if (it == options_.end()) {
+ options_.insert(std::make_pair(opt, value));
+ } else if (it->second == value) {
+ return 0;
+ } else {
+ it->second = value;
+ }
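+ // The new value is remembered in options_; the existing ports are updated
+ // below, and (presumably) the cached value can be applied to ports that are
+ // created later.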
+
+ for (uint32 i = 0; i < ports_.size(); ++i) {
+ int val = ports_[i]->SetOption(opt, value);
+ if (val < 0) {
+ // Because this can also occur deferred, there is probably no point in
+ // reporting an error.
+ LOG(WARNING) << "SetOption(" << opt << ", " << value << ") failed: "
+ << ports_[i]->GetError();
+ }
+ }
+ return 0;
+}
+
+// When the signaling channel is ready, we can really kick off the allocator.
+void P2PTransportChannel::OnSignalingReady() {
+ if (waiting_for_signaling_) {
+ waiting_for_signaling_ = false;
+ AddAllocatorSession(allocator_->CreateSession(name(), content_type()));
+ thread()->PostDelayed(kAllocatePeriod, this, MSG_ALLOCATE);
+ }
+}
+
+} // namespace cricket
diff --git a/third_party_mods/libjingle/source/talk/p2p/base/p2ptransportchannel.h b/third_party_mods/libjingle/source/talk/p2p/base/p2ptransportchannel.h
new file mode 100644
index 0000000..0288697
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/p2p/base/p2ptransportchannel.h
@@ -0,0 +1,169 @@
+/*
+ * libjingle
+ * Copyright 2004--2005, Google Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ * 3. The name of the author may not be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
+ * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
+ * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
+ * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
+ * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+// P2PTransportChannel wraps up the state management of the connection between
+// two P2P clients. Clients have candidate ports for connecting, and
+// connections, which are combinations of candidates from each end (Alice and
+// Bob each have candidates; one candidate from Alice and one from Bob are
+// used to make a connection; repeat to make many connections).
+//
+// When all of the available connections become invalid (non-writable), we
+// kick off a process of determining more candidates and more connections.
+//
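+// Illustrative sketch (informal): if Alice's ports produce candidates A1 and
+// A2 and Bob's produce B1, the channel forms the connections A1-B1 and A2-B1,
+// sorts them by preference and estimated latency, and treats the best
+// writable one as the current route.
+//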
+#ifndef TALK_P2P_BASE_P2PTRANSPORTCHANNEL_H_
+#define TALK_P2P_BASE_P2PTRANSPORTCHANNEL_H_
+
+#include <map>
+#include <vector>
+#include <string>
+
+#include "talk/base/sigslot.h"
+#include "talk/p2p/base/candidate.h"
+#include "talk/p2p/base/port.h"
+#include "talk/p2p/base/portallocator.h"
+#include "talk/p2p/base/transport.h"
+#include "talk/p2p/base/transportchannelimpl.h"
+#include "talk/p2p/base/p2ptransport.h"
+
+namespace cricket {
+
+// Adds the port on which the candidate originated.
+class RemoteCandidate : public Candidate {
+ public:
+ RemoteCandidate(const Candidate& c, Port* origin_port)
+ : Candidate(c), origin_port_(origin_port) {}
+
+ Port* origin_port() { return origin_port_; }
+
+ private:
+ Port* origin_port_;
+};
+
+// P2PTransportChannel manages the candidates and connection process to keep
+// two P2P clients connected to each other.
+class P2PTransportChannel : public TransportChannelImpl,
+ public talk_base::MessageHandler {
+ public:
+ P2PTransportChannel(const std::string &name,
+ const std::string &content_type,
+ P2PTransport* transport,
+ PortAllocator *allocator);
+ virtual ~P2PTransportChannel();
+
+ // From TransportChannelImpl:
+ virtual Transport* GetTransport() { return transport_; }
+ virtual void Connect();
+ virtual void Reset();
+ virtual void OnSignalingReady();
+
+ // From TransportChannel:
+ virtual int SendPacket(talk_base::Buffer* packet);
+ virtual int SendPacket(const char *data, size_t len);
+ virtual int SetOption(talk_base::Socket::Option opt, int value);
+ virtual int GetError() { return error_; }
+
+ // This hack is here to allow the SocketMonitor to downcast to the
+ // P2PTransportChannel safely.
+ virtual P2PTransportChannel* GetP2PChannel() { return this; }
+
+ // These are used by the connection monitor.
+ sigslot::signal1<P2PTransportChannel*> SignalConnectionMonitor;
+ const std::vector<Connection *>& connections() const { return connections_; }
+ Connection* best_connection() const { return best_connection_; }
+
+ void set_incoming_only(bool value) { incoming_only_ = value; }
+
+ // Handler for internal messages.
+ virtual void OnMessage(talk_base::Message *pmsg);
+
+ virtual void OnCandidate(const Candidate& candidate);
+
+ private:
+ void Allocate();
+ void CancelPendingAllocate();
+ void UpdateConnectionStates();
+ void RequestSort();
+ void SortConnections();
+ void SwitchBestConnectionTo(Connection* conn);
+ void UpdateChannelState();
+ void HandleWritable();
+ void HandleNotWritable();
+ void HandleAllTimedOut();
+ Connection* GetBestConnectionOnNetwork(talk_base::Network* network);
+ bool CreateConnections(const Candidate &remote_candidate, Port* origin_port,
+ bool readable);
+ bool CreateConnection(Port* port, const Candidate& remote_candidate,
+ Port* origin_port, bool readable);
+ void RememberRemoteCandidate(const Candidate& remote_candidate,
+ Port* origin_port);
+ void OnUnknownAddress(Port *port, const talk_base::SocketAddress &addr,
+ StunMessage *stun_msg,
+ const std::string &remote_username);
+ void OnPortReady(PortAllocatorSession *session, Port* port);
+ void OnCandidatesReady(PortAllocatorSession *session,
+ const std::vector<Candidate>& candidates);
+ void OnConnectionStateChange(Connection *connection);
+ void OnConnectionDestroyed(Connection *connection);
+ void OnPortDestroyed(Port* port);
+ void OnReadPacket(Connection *connection, const char *data, size_t len);
+ void OnSort();
+ void OnPing();
+ bool IsPingable(Connection* conn);
+ Connection* FindNextPingableConnection();
+ uint32 NumPingableConnections();
+ PortAllocatorSession* allocator_session() {
+ return allocator_sessions_.back();
+ }
+ void AddAllocatorSession(PortAllocatorSession* session);
+
+ talk_base::Thread* thread() const { return worker_thread_; }
+
+ P2PTransport* transport_;
+ PortAllocator *allocator_;
+ talk_base::Thread *worker_thread_;
+ bool incoming_only_;
+ bool waiting_for_signaling_;
+ int error_;
+ std::vector<PortAllocatorSession*> allocator_sessions_;
+ std::vector<Port *> ports_;
+ std::vector<Connection *> connections_;
+ Connection *best_connection_;
+ std::vector<RemoteCandidate> remote_candidates_;
+ // Indicates whether pinging has been started.
+ bool pinging_started_;
+ bool sort_dirty_; // indicates whether another sort is needed right now
+ bool was_writable_;
+ bool was_timed_out_;
+ typedef std::map<talk_base::Socket::Option, int> OptionMap;
+ OptionMap options_;
+
+ DISALLOW_EVIL_CONSTRUCTORS(P2PTransportChannel);
+};
+
+} // namespace cricket
+
+#endif // TALK_P2P_BASE_P2PTRANSPORTCHANNEL_H_
diff --git a/third_party_mods/libjingle/source/talk/p2p/base/session.h b/third_party_mods/libjingle/source/talk/p2p/base/session.h
new file mode 100644
index 0000000..b46f438
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/p2p/base/session.h
@@ -0,0 +1,546 @@
+/*
+ * libjingle
+ * Copyright 2004--2005, Google Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ * 3. The name of the author may not be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
+ * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
+ * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
+ * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
+ * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef TALK_P2P_BASE_SESSION_H_
+#define TALK_P2P_BASE_SESSION_H_
+
+#include <list>
+#include <map>
+#include <string>
+#include <vector>
+
+#include "talk/p2p/base/sessionmessages.h"
+#include "talk/p2p/base/sessionmanager.h"
+#include "talk/base/socketaddress.h"
+#include "talk/p2p/base/sessionclient.h"
+#include "talk/p2p/base/parsing.h"
+#include "talk/p2p/base/port.h"
+#include "talk/xmllite/xmlelement.h"
+#include "talk/xmpp/constants.h"
+
+namespace cricket {
+
+class P2PTransportChannel;
+class Transport;
+class TransportChannel;
+class TransportChannelProxy;
+class TransportChannelImpl;
+
+// Used for errors that will send back a specific error message to the
+// remote peer. We add "type" to the errors because it's needed for
+// SignalErrorMessage.
+struct MessageError : ParseError {
+ buzz::QName type;
+
+ // if unset, assume type is a parse error
+ MessageError() : ParseError(), type(buzz::QN_STANZA_BAD_REQUEST) {}
+
+ void SetType(const buzz::QName type) {
+ this->type = type;
+ }
+};
+
+// Used for errors that may be returned by public session methods that
+// can fail.
+// TODO: Use this error in Session::Initiate and
+// Session::Accept.
+struct SessionError : WriteError {
+};
+
+// Bundles a Transport and ChannelMap together. ChannelMap is used to
+// create transport channels before receiving or sending a session
+// initiate, and for speculatively connecting channels. Previously, a
+// session had one ChannelMap and transport. Now, with multiple
+// transports per session, we need multiple ChannelMaps as well.
+class TransportProxy {
+ public:
+ TransportProxy(const std::string& content_name, Transport* transport)
+ : content_name_(content_name),
+ transport_(transport),
+ state_(STATE_INIT),
+ sent_candidates_(false) {}
+ ~TransportProxy();
+
+ std::string content_name() const { return content_name_; }
+ Transport* impl() const { return transport_; }
+ std::string type() const;
+ bool negotiated() const { return state_ == STATE_NEGOTIATED; }
+ const Candidates& sent_candidates() const { return sent_candidates_; }
+
+ TransportChannel* GetChannel(const std::string& name);
+ TransportChannel* CreateChannel(const std::string& name,
+ const std::string& content_type);
+ void DestroyChannel(const std::string& name);
+ void AddSentCandidates(const Candidates& candidates);
+ void ClearSentCandidates() { sent_candidates_.clear(); }
+ void SpeculativelyConnectChannels();
+ void CompleteNegotiation();
+
+ private:
+ enum TransportState {
+ STATE_INIT,
+ STATE_CONNECTING,
+ STATE_NEGOTIATED
+ };
+
+ typedef std::map<std::string, TransportChannelProxy*> ChannelMap;
+
+ TransportChannelProxy* GetProxy(const std::string& name);
+ TransportChannelImpl* GetOrCreateImpl(const std::string& name,
+ const std::string& content_type);
+ void SetProxyImpl(const std::string& name, TransportChannelProxy* proxy);
+
+ std::string content_name_;
+ Transport* transport_;
+ TransportState state_;
+ ChannelMap channels_;
+ Candidates sent_candidates_;
+};
+
+typedef std::map<std::string, TransportProxy*> TransportMap;
+
+// TODO: Consider simplifying the dependency from Voice/VideoChannel
+// on Session. Right now the Channel class requires a BaseSession, but it only
+// uses CreateChannel/DestroyChannel. Perhaps something like a
+// TransportChannelFactory could be hoisted up out of BaseSession, or maybe
+// the transports could be passed in directly.
+
+// A BaseSession manages general session state. This includes negotiation
+// of both the application-level and network-level protocols: the former
+// defines what will be sent and the latter defines how it will be sent. Each
+// network-level protocol is represented by a Transport object. Each Transport
+// participates in the network-level negotiation. The individual streams of
+// packets are represented by TransportChannels. The application-level protocol
+// is represented by SessionDescription objects.
+class BaseSession : public sigslot::has_slots<>,
+ public talk_base::MessageHandler {
+ public:
+ enum State {
+ STATE_INIT = 0,
+ STATE_SENTINITIATE, // sent initiate, waiting for Accept or Reject
+ STATE_RECEIVEDINITIATE, // received an initiate. Call Accept or Reject
+ STATE_SENTACCEPT, // sent accept. begin connecting transport
+ STATE_RECEIVEDACCEPT, // received accept. begin connecting transport
+ STATE_SENTMODIFY, // sent modify, waiting for Accept or Reject
+ STATE_RECEIVEDMODIFY, // received modify, call Accept or Reject
+ STATE_SENTREJECT, // sent reject after receiving initiate
+ STATE_RECEIVEDREJECT, // received reject after sending initiate
+ STATE_SENTREDIRECT, // sent redirect after receiving initiate
+ STATE_SENTTERMINATE, // sent terminate (any time / either side)
+ STATE_RECEIVEDTERMINATE, // received terminate (any time / either side)
+ STATE_INPROGRESS, // session accepted and in progress
+ STATE_DEINIT, // session is being destroyed
+ };
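+ // Typical progressions (informal sketch): an initiator moves INIT ->
+ // SENTINITIATE -> RECEIVEDACCEPT -> INPROGRESS, while a receiver moves
+ // INIT -> RECEIVEDINITIATE -> SENTACCEPT -> INPROGRESS; either side may
+ // later move to SENTTERMINATE or RECEIVEDTERMINATE.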
+
+ enum Error {
+ ERROR_NONE = 0, // no error
+ ERROR_TIME = 1, // no response to signaling
+ ERROR_RESPONSE = 2, // error during signaling
+ ERROR_NETWORK = 3, // network error, could not allocate network resources
+ ERROR_CONTENT = 4, // channel errors in SetLocalContent/SetRemoteContent
+ };
+
+ explicit BaseSession(talk_base::Thread *signaling_thread);
+ virtual ~BaseSession();
+
+ // Updates the state, signaling if necessary.
+ void SetState(State state);
+
+ // Updates the error state, signaling if necessary.
+ virtual void SetError(Error error);
+
+ // Handles messages posted to us.
+ virtual void OnMessage(talk_base::Message *pmsg);
+
+ // Returns the current state of the session. See the enum above for details.
+ // Each time the state changes, we will fire this signal.
+ State state() const { return state_; }
+ sigslot::signal2<BaseSession *, State> SignalState;
+
+ // Returns the last error in the session. See the enum above for details.
+ // Each time an error occurs, we will fire this signal.
+ Error error() const { return error_; }
+ sigslot::signal2<BaseSession *, Error> SignalError;
+
+ sigslot::signal1<TransportChannel*> SignalWritableState;
+ sigslot::signal3<TransportChannel*, const char*, size_t> SignalReadPacket;
+
+ // Creates a new channel with the given names. This method may be called
+ // immediately after creating the session. However, the actual
+ // implementation may not be fixed until transport negotiation completes.
+ // This will usually be called from the worker thread, but that
+ // shouldn't be an issue since the main thread will be blocked in
+ // Send when doing so.
+ virtual TransportChannel* CreateChannel(const std::string& content_name,
+ const std::string& channel_name) = 0;
+
+ // Returns the channel with the given names.
+ virtual TransportChannel* GetChannel(const std::string& content_name,
+ const std::string& channel_name) = 0;
+
+ // Destroys the channel with the given names.
+ // This will usually be called from the worker thread, but that
+ // shouldn't be an issue since the main thread will be blocked in
+ // Send when doing so.
+ virtual void DestroyChannel(const std::string& content_name,
+ const std::string& channel_name) = 0;
+
+ // Invoked when we notice that there is no matching channel on our peer.
+ sigslot::signal2<Session*, const std::string&> SignalChannelGone;
+
+ // Returns the application-level description given by our client.
+ // If we are the recipient, this will be NULL until we send an accept.
+ const SessionDescription* local_description() const {
+ return local_description_;
+ }
+ // Takes ownership of SessionDescription*
+ bool set_local_description(const SessionDescription* sdesc) {
+ if (sdesc != local_description_) {
+ delete local_description_;
+ local_description_ = sdesc;
+ }
+ return true;
+ }
+
+ // Returns the application-level description given by the other client.
+ // If we are the initiator, this will be NULL until we receive an accept.
+ const SessionDescription* remote_description() const {
+ return remote_description_;
+ }
+ // Takes ownership of SessionDescription*
+ bool set_remote_description(const SessionDescription* sdesc) {
+ if (sdesc != remote_description_) {
+ delete remote_description_;
+ remote_description_ = sdesc;
+ }
+ return true;
+ }
+
+ // When we receive an initiate, we create a session in the
+ // RECEIVEDINITIATE state and respond by accepting or rejecting.
+ // Takes ownership of session description.
+ virtual bool Accept(const SessionDescription* sdesc) = 0;
+ virtual bool Reject(const std::string& reason) = 0;
+ bool Terminate() {
+ return TerminateWithReason(STR_TERMINATE_SUCCESS);
+ }
+ virtual bool TerminateWithReason(const std::string& reason) = 0;
+
+ // The worker thread used by the session manager
+ virtual talk_base::Thread *worker_thread() = 0;
+
+ talk_base::Thread *signaling_thread() {
+ return signaling_thread_;
+ }
+
+ // Returns the JID of this client.
+ const std::string& local_name() const { return local_name_; }
+
+ // Returns the JID of the other peer in this session.
+ const std::string& remote_name() const { return remote_name_; }
+
+ // Set the JID of the other peer in this session.
+ // Typically the remote_name_ is set when the session is initiated.
+ // However, sometimes (e.g when a proxy is used) the peer name is
+ // known after the BaseSession has been initiated and it must be updated
+ // explicitly.
+ void set_remote_name(const std::string& name) { remote_name_ = name; }
+
+ const std::string& id() const { return sid_; }
+
+ protected:
+ State state_;
+ Error error_;
+ const SessionDescription* local_description_;
+ const SessionDescription* remote_description_;
+ std::string sid_;
+ // We don't use buzz::Jid because changing to buzz::Jid here has a
+ // cascading effect that requires an enormous number of places to
+ // change to buzz::Jid as well.
+ std::string local_name_;
+ std::string remote_name_;
+ talk_base::Thread *signaling_thread_;
+};
+
+// A specific Session created by the SessionManager, using XMPP for protocol.
+class Session : public BaseSession {
+ public:
+ // Returns the manager that created and owns this session.
+ SessionManager* session_manager() const { return session_manager_; }
+
+ // The worker thread used by the session manager
+ talk_base::Thread *worker_thread() {
+ return session_manager_->worker_thread();
+ }
+
+ // Returns the XML namespace identifying the type of this session.
+ const std::string& content_type() const { return content_type_; }
+
+ // Returns the client that is handling the application data of this session.
+ SessionClient* client() const { return client_; }
+
+ SignalingProtocol current_protocol() const { return current_protocol_; }
+
+ void set_current_protocol(SignalingProtocol protocol) {
+ current_protocol_ = protocol;
+ }
+
+ // Indicates whether we initiated this session.
+ bool initiator() const { return initiator_; }
+
+ const SessionDescription* initiator_description() const {
+ if (initiator_) {
+ return local_description_;
+ } else {
+ return remote_description_;
+ }
+ }
+
+ // Fired whenever we receive a terminate message along with a reason
+ sigslot::signal2<Session*, const std::string&> SignalReceivedTerminateReason;
+
+ void set_allow_local_ips(bool allow);
+
+ // Returns the transport that has been negotiated or NULL if
+ // negotiation is still in progress.
+ Transport* GetTransport(const std::string& content_name);
+
+ // Takes ownership of session description.
+ // TODO: Add an error argument to pass back to the caller.
+ bool Initiate(const std::string& to,
+ const SessionDescription* sdesc);
+
+ // When we receive an initiate, we create a session in the
+ // RECEIVEDINITIATE state and respond by accepting or rejecting.
+ // Takes ownership of session description.
+ // TODO: Add an error argument to pass back to the caller.
+ virtual bool Accept(const SessionDescription* sdesc);
+ virtual bool Reject(const std::string& reason);
+ virtual bool TerminateWithReason(const std::string& reason);
+
+ // The two clients in the session may also send one another
+ // arbitrary XML messages, which are called "info" messages. Sending
+ // takes ownership of the given elements. The signal does not; the
+ // parent element will be deleted after the signal.
+ bool SendInfoMessage(const XmlElements& elems);
+ sigslot::signal2<Session*, const buzz::XmlElement*> SignalInfoMessage;
+
+ // Maps passed to serialization functions.
+ TransportParserMap GetTransportParsers();
+ ContentParserMap GetContentParsers();
+
+ // Creates a new channel with the given names. This method may be called
+ // immediately after creating the session. However, the actual
+ // implementation may not be fixed until transport negotiation completes.
+ virtual TransportChannel* CreateChannel(const std::string& content_name,
+ const std::string& channel_name);
+
+ // Returns the channel with the given names.
+ virtual TransportChannel* GetChannel(const std::string& content_name,
+ const std::string& channel_name);
+
+ // Destroys the channel with the given names.
+ virtual void DestroyChannel(const std::string& content_name,
+ const std::string& channel_name);
+
+ // Updates the error state, signaling if necessary.
+ virtual void SetError(Error error);
+
+ // Handles messages posted to us.
+ virtual void OnMessage(talk_base::Message *pmsg);
+
+ private:
+ // Creates or destroys a session. (These are called only by SessionManager.)
+ Session(SessionManager *session_manager,
+ const std::string& local_name, const std::string& initiator_name,
+ const std::string& sid, const std::string& content_type,
+ SessionClient* client);
+ ~Session();
+
+ // Get a TransportProxy by content_name or transport. NULL if not found.
+ TransportProxy* GetTransportProxy(const std::string& content_name);
+ TransportProxy* GetTransportProxy(const Transport* transport);
+ TransportProxy* GetFirstTransportProxy();
+ // TransportProxy is owned by session. Return proxy just for convenience.
+ TransportProxy* GetOrCreateTransportProxy(const std::string& content_name);
+ // For each transport info, create a transport proxy. Can fail for
+ // incompatible transport types.
+ bool CreateTransportProxies(const TransportInfos& tinfos,
+ SessionError* error);
+ void SpeculativelyConnectAllTransportChannels();
+ bool OnRemoteCandidates(const TransportInfos& tinfos,
+ ParseError* error);
+ // Returns a TransportInfo without candidates for each content name.
+ // Uses the transport_type_ of the session.
+ TransportInfos GetEmptyTransportInfos(const ContentInfos& contents) const;
+
+ // Called when the first channel of a transport begins connecting. We use
+ // this to start a timer, to make sure that the connection completes in a
+ // reasonable amount of time.
+ void OnTransportConnecting(Transport* transport);
+
+ // Called when a transport changes its writable state. We track this to make
+ // sure that the transport becomes writable within a reasonable amount of
+ // time. If this does not occur, we signal an error.
+ void OnTransportWritable(Transport* transport);
+
+ // Called when a transport requests signaling.
+ void OnTransportRequestSignaling(Transport* transport);
+
+ // Called when a transport signals that it has a message to send. Note that
+ // these messages are just the transport part of the stanza; they need to be
+ // wrapped in the appropriate session tags.
+ void OnTransportCandidatesReady(Transport* transport,
+ const Candidates& candidates);
+
+ // Called when a transport signals that it found an error in an incoming
+ // message.
+ void OnTransportSendError(Transport* transport,
+ const buzz::XmlElement* stanza,
+ const buzz::QName& name,
+ const std::string& type,
+ const std::string& text,
+ const buzz::XmlElement* extra_info);
+
+ // Called when we notice that one of our local channels has no peer, so it
+ // should be destroyed.
+ void OnTransportChannelGone(Transport* transport, const std::string& name);
+
+ // When the session needs to send signaling messages, it begins by requesting
+ // signaling. The client should handle this by calling OnSignalingReady once
+ // it is ready to send the messages.
+ // (These are called only by SessionManager.)
+ sigslot::signal1<Session*> SignalRequestSignaling;
+ void OnSignalingReady();
+
+ // Send various kinds of session messages.
+ bool SendInitiateMessage(const SessionDescription* sdesc,
+ SessionError* error);
+ bool SendAcceptMessage(const SessionDescription* sdesc, SessionError* error);
+ bool SendRejectMessage(const std::string& reason, SessionError* error);
+ bool SendTerminateMessage(const std::string& reason, SessionError* error);
+ bool SendTransportInfoMessage(const TransportInfo& tinfo,
+ SessionError* error);
+ bool ResendAllTransportInfoMessages(SessionError* error);
+
+ // Both versions of SendMessage send a message of the given type to
+ // the other client. Can pass either a set of elements or an
+ // "action", which must have a WriteSessionAction method to go along
+ // with it. Sending with an action supports sending a "hybrid"
+ // message. Sending with elements must be sent as Jingle or Gingle.
+
+ // When passing elems, must be either Jingle or Gingle protocol.
+ // Takes ownership of action_elems.
+ bool SendMessage(ActionType type, const XmlElements& action_elems,
+ SessionError* error);
+ // When passing an action, may be Hybrid protocol.
+ template <typename Action>
+ bool SendMessage(ActionType type, const Action& action,
+ SessionError* error);
+
+ // Helper methods to write the session message stanza.
+ template <typename Action>
+ bool WriteActionMessage(ActionType type, const Action& action,
+ buzz::XmlElement* stanza, WriteError* error);
+ template <typename Action>
+ bool WriteActionMessage(SignalingProtocol protocol,
+ ActionType type, const Action& action,
+ buzz::XmlElement* stanza, WriteError* error);
+
+ // Sending messages in hybrid form requires being able to write them
+ // on a per-protocol basis with a common method signature, which all
+ // of these have.
+ bool WriteSessionAction(SignalingProtocol protocol,
+ const SessionInitiate& init,
+ XmlElements* elems, WriteError* error);
+ bool WriteSessionAction(SignalingProtocol protocol,
+ const TransportInfo& tinfo,
+ XmlElements* elems, WriteError* error);
+ bool WriteSessionAction(SignalingProtocol protocol,
+ const SessionTerminate& term,
+ XmlElements* elems, WriteError* error);
+
+ // Sends a message back to the other client indicating that we have received
+ // and accepted their message.
+ void SendAcknowledgementMessage(const buzz::XmlElement* stanza);
+
+ // Once signaling is ready, the session will use this signal to request the
+ // sending of each message. When messages are received by the other client,
+ // they should be handed to OnIncomingMessage.
+ // (These are called only by SessionManager.)
+ sigslot::signal2<Session *, const buzz::XmlElement*> SignalOutgoingMessage;
+ void OnIncomingMessage(const SessionMessage& msg);
+
+ void OnFailedSend(const buzz::XmlElement* orig_stanza,
+ const buzz::XmlElement* error_stanza);
+
+ // Invoked when an error is found in an incoming message. This is translated
+ // into the appropriate XMPP response by SessionManager.
+ sigslot::signal6<BaseSession*,
+ const buzz::XmlElement*,
+ const buzz::QName&,
+ const std::string&,
+ const std::string&,
+ const buzz::XmlElement*> SignalErrorMessage;
+
+ // Handlers for the various types of messages. These functions may take
+ // pointers to the whole stanza or to just the session element.
+ bool OnInitiateMessage(const SessionMessage& msg, MessageError* error);
+ bool OnAcceptMessage(const SessionMessage& msg, MessageError* error);
+ bool OnRejectMessage(const SessionMessage& msg, MessageError* error);
+ bool OnInfoMessage(const SessionMessage& msg);
+ bool OnTerminateMessage(const SessionMessage& msg, MessageError* error);
+ bool OnTransportInfoMessage(const SessionMessage& msg, MessageError* error);
+ bool OnTransportAcceptMessage(const SessionMessage& msg, MessageError* error);
+ bool OnUpdateMessage(const SessionMessage& msg, MessageError* error);
+ bool OnRedirectError(const SessionRedirect& redirect, SessionError* error);
+
+ // Verifies that we are in the appropriate state to receive this message.
+ bool CheckState(State state, MessageError* error);
+
+ SessionManager *session_manager_;
+ bool initiator_;
+ std::string initiator_name_;
+ std::string content_type_;
+ SessionClient* client_;
+ std::string transport_type_;
+ TransportParser* transport_parser_;
+ // This is transport-specific but required so much by unit tests
+ // that it's much easier to put it here.
+ bool allow_local_ips_;
+ TransportMap transports_;
+ // Keeps track of what protocol we are speaking.
+ SignalingProtocol current_protocol_;
+
+ friend class SessionManager; // For access to constructor, destructor,
+ // and signaling related methods.
+};
+
+} // namespace cricket
+
+#endif // TALK_P2P_BASE_SESSION_H_
diff --git a/third_party_mods/libjingle/source/talk/p2p/base/transportchannel.h b/third_party_mods/libjingle/source/talk/p2p/base/transportchannel.h
new file mode 100644
index 0000000..45de275
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/p2p/base/transportchannel.h
@@ -0,0 +1,114 @@
+/*
+ * libjingle
+ * Copyright 2004--2005, Google Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ * 3. The name of the author may not be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
+ * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
+ * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
+ * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
+ * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef TALK_P2P_BASE_TRANSPORTCHANNEL_H_
+#define TALK_P2P_BASE_TRANSPORTCHANNEL_H_
+
+#include <string>
+#include "talk/base/basictypes.h"
+#include "talk/base/sigslot.h"
+#include "talk/base/socket.h"
+
+namespace talk_base {
+class Buffer;
+}
+
+namespace cricket {
+
+class Candidate;
+class P2PTransportChannel;
+
+// A TransportChannel represents one logical stream of packets that are sent
+// between the two sides of a session.
+class TransportChannel: public sigslot::has_slots<> {
+ public:
+ TransportChannel(const std::string& name, const std::string &content_type)
+ : name_(name), content_type_(content_type),
+ readable_(false), writable_(false) {}
+ virtual ~TransportChannel() {}
+
+ // Returns the name of this channel.
+ const std::string& name() const { return name_; }
+ const std::string& content_type() const { return content_type_; }
+
+ // Returns the readable and writable states of this channel. Each time one
+ // of these states changes, a signal is raised. These states are aggregated
+ // by the TransportManager.
+ bool readable() const { return readable_; }
+ bool writable() const { return writable_; }
+ sigslot::signal1<TransportChannel*> SignalReadableState;
+ sigslot::signal1<TransportChannel*> SignalWritableState;
+
+ // Attempts to send the given packet. The return value is < 0 on failure.
+ virtual int SendPacket(talk_base::Buffer* packet) = 0;
+ virtual int SendPacket(const char *data, size_t len) = 0;
+
+ // Sets a socket option on this channel. Note that not all options are
+ // supported by all transport types.
+ virtual int SetOption(talk_base::Socket::Option opt, int value) = 0;
+
+ // Returns the most recent error that occurred on this channel.
+ virtual int GetError() = 0;
+
+ // This hack is here to allow the SocketMonitor to downcast to the
+ // P2PTransportChannel safely.
+ // TODO: Generalize network monitoring.
+ virtual P2PTransportChannel* GetP2PChannel() { return NULL; }
+
+ // Signalled each time a packet is received on this channel.
+ sigslot::signal3<TransportChannel*, const char*, size_t> SignalReadPacket;
+
+ // This signal occurs when there is a change in the way that packets are
+ // being routed, i.e. to a different remote location. The candidate
+ // indicates where and how we are currently sending media.
+ sigslot::signal2<TransportChannel*, const Candidate&> SignalRouteChange;
+
+ // Invoked when the channel is being destroyed.
+ sigslot::signal1<TransportChannel*> SignalDestroyed;
+
+ // Debugging description of this transport channel.
+ std::string ToString() const;
+
+ protected:
+ // Sets the readable state, signaling if necessary.
+ void set_readable(bool readable);
+
+ // Sets the writable state, signaling if necessary.
+ void set_writable(bool writable);
+
+ private:
+ std::string name_;
+ std::string content_type_;
+ bool readable_;
+ bool writable_;
+
+ DISALLOW_EVIL_CONSTRUCTORS(TransportChannel);
+};
+
+} // namespace cricket
+
+#endif // TALK_P2P_BASE_TRANSPORTCHANNEL_H_
diff --git a/third_party_mods/libjingle/source/talk/p2p/base/transportchannelproxy.cc b/third_party_mods/libjingle/source/talk/p2p/base/transportchannelproxy.cc
new file mode 100644
index 0000000..96fd563
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/p2p/base/transportchannelproxy.cc
@@ -0,0 +1,112 @@
+/*
+ * libjingle
+ * Copyright 2004--2005, Google Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ * 3. The name of the author may not be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
+ * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
+ * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
+ * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
+ * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "talk/p2p/base/transportchannelproxy.h"
+#include "talk/base/common.h"
+#include "talk/p2p/base/transport.h"
+#include "talk/p2p/base/transportchannelimpl.h"
+
+namespace cricket {
+
+TransportChannelProxy::TransportChannelProxy(const std::string& name,
+ const std::string& content_type)
+ : TransportChannel(name, content_type), impl_(NULL) {
+}
+
+TransportChannelProxy::~TransportChannelProxy() {
+ if (impl_)
+ impl_->GetTransport()->DestroyChannel(impl_->name());
+}
+
+void TransportChannelProxy::SetImplementation(TransportChannelImpl* impl) {
+ impl_ = impl;
+ impl_->SignalReadableState.connect(
+ this, &TransportChannelProxy::OnReadableState);
+ impl_->SignalWritableState.connect(
+ this, &TransportChannelProxy::OnWritableState);
+ impl_->SignalReadPacket.connect(this, &TransportChannelProxy::OnReadPacket);
+ impl_->SignalRouteChange.connect(this, &TransportChannelProxy::OnRouteChange);
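+ // Apply any options that were requested before an implementation was
+ // available (see SetOption below), then clear the pending list.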
+ for (OptionList::iterator it = pending_options_.begin();
+ it != pending_options_.end();
+ ++it) {
+ impl_->SetOption(it->first, it->second);
+ }
+ pending_options_.clear();
+}
+
+int TransportChannelProxy::SendPacket(talk_base::Buffer* packet) {
+ // Fail if we don't have an impl yet.
+ return (impl_) ? impl_->SendPacket(packet) : -1;
+}
+
+int TransportChannelProxy::SendPacket(const char *data, size_t len) {
+ // Fail if we don't have an impl yet.
+ return (impl_) ? impl_->SendPacket(data, len) : -1;
+}
+
+int TransportChannelProxy::SetOption(talk_base::Socket::Option opt, int value) {
+ if (impl_)
+ return impl_->SetOption(opt, value);
+ pending_options_.push_back(OptionPair(opt, value));
+ return 0;
+}
+
+int TransportChannelProxy::GetError() {
+ ASSERT(impl_ != NULL); // should not be used until channel is writable
+ return impl_->GetError();
+}
+
+P2PTransportChannel* TransportChannelProxy::GetP2PChannel() {
+ if (impl_) {
+ return impl_->GetP2PChannel();
+ }
+ return NULL;
+}
+
+void TransportChannelProxy::OnReadableState(TransportChannel* channel) {
+ ASSERT(channel == impl_);
+ set_readable(impl_->readable());
+}
+
+void TransportChannelProxy::OnWritableState(TransportChannel* channel) {
+ ASSERT(channel == impl_);
+ set_writable(impl_->writable());
+}
+
+void TransportChannelProxy::OnReadPacket(TransportChannel* channel,
+ const char* data, size_t size) {
+ ASSERT(channel == impl_);
+ SignalReadPacket(this, data, size);
+}
+
+void TransportChannelProxy::OnRouteChange(TransportChannel* channel,
+ const Candidate& candidate) {
+ ASSERT(channel == impl_);
+ SignalRouteChange(this, candidate);
+}
+
+} // namespace cricket
diff --git a/third_party_mods/libjingle/source/talk/p2p/base/transportchannelproxy.h b/third_party_mods/libjingle/source/talk/p2p/base/transportchannelproxy.h
new file mode 100644
index 0000000..aa9dffc
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/p2p/base/transportchannelproxy.h
@@ -0,0 +1,84 @@
+/*
+ * libjingle
+ * Copyright 2004--2005, Google Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ * 3. The name of the author may not be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
+ * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
+ * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
+ * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
+ * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef TALK_P2P_BASE_TRANSPORTCHANNELPROXY_H_
+#define TALK_P2P_BASE_TRANSPORTCHANNELPROXY_H_
+
+#include <string>
+#include <vector>
+#include "talk/p2p/base/transportchannel.h"
+
+namespace talk_base {
+class Buffer;
+}
+
+namespace cricket {
+
+class TransportChannelImpl;
+
+// Proxies calls between the client and the transport channel implementation.
+// This is needed because clients are allowed to create channels before the
+// network negotiation is complete. Hence, we create a proxy up front, and
+// when negotiation completes, connect the proxy to the implementation.
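+//
+// Informal note: until SetImplementation is called, SendPacket fails (returns
+// -1) and SetOption calls are queued in pending_options_, to be applied once
+// the implementation is attached.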
+class TransportChannelProxy: public TransportChannel {
+ public:
+ TransportChannelProxy(const std::string& name,
+ const std::string& content_type);
+ virtual ~TransportChannelProxy();
+
+ TransportChannelImpl* impl() { return impl_; }
+
+ // Sets the implementation to which we will proxy.
+ void SetImplementation(TransportChannelImpl* impl);
+
+ // Implementation of the TransportChannel interface. These simply forward to
+ // the implementation.
+ virtual int SendPacket(talk_base::Buffer* packet);
+ virtual int SendPacket(const char *data, size_t len);
+ virtual int SetOption(talk_base::Socket::Option opt, int value);
+ virtual int GetError();
+ virtual P2PTransportChannel* GetP2PChannel();
+
+ private:
+ typedef std::pair<talk_base::Socket::Option, int> OptionPair;
+ typedef std::vector<OptionPair> OptionList;
+ TransportChannelImpl* impl_;
+ OptionList pending_options_;
+
+ // Catch signals from the implementation channel. These just forward to the
+ // client (after updating our state to match).
+ void OnReadableState(TransportChannel* channel);
+ void OnWritableState(TransportChannel* channel);
+ void OnReadPacket(TransportChannel* channel, const char* data, size_t size);
+ void OnRouteChange(TransportChannel* channel, const Candidate& candidate);
+
+ DISALLOW_EVIL_CONSTRUCTORS(TransportChannelProxy);
+};
+
+} // namespace cricket
+
+#endif // TALK_P2P_BASE_TRANSPORTCHANNELPROXY_H_
diff --git a/third_party_mods/libjingle/source/talk/session/phone/channel.cc b/third_party_mods/libjingle/source/talk/session/phone/channel.cc
new file mode 100644
index 0000000..7d27b0c
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/session/phone/channel.cc
@@ -0,0 +1,1259 @@
+/*
+ * libjingle
+ * Copyright 2004--2007, Google Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ * 3. The name of the author may not be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
+ * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
+ * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
+ * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
+ * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "talk/session/phone/channel.h"
+
+#include "talk/base/buffer.h"
+#include "talk/base/byteorder.h"
+#include "talk/base/common.h"
+#include "talk/base/logging.h"
+#include "talk/p2p/base/transportchannel.h"
+#include "talk/session/phone/channelmanager.h"
+#include "talk/session/phone/mediasessionclient.h"
+#include "talk/session/phone/mediasink.h"
+#include "talk/session/phone/rtcpmuxfilter.h"
+#include "talk/session/phone/rtputils.h"
+
+namespace cricket {
+
+struct PacketMessageData : public talk_base::MessageData {
+ talk_base::Buffer packet;
+};
+
+struct VoiceChannelErrorMessageData : public talk_base::MessageData {
+ VoiceChannelErrorMessageData(uint32 in_ssrc,
+ VoiceMediaChannel::Error in_error)
+ : ssrc(in_ssrc),
+ error(in_error) {}
+ uint32 ssrc;
+ VoiceMediaChannel::Error error;
+};
+
+struct VideoChannelErrorMessageData : public talk_base::MessageData {
+ VideoChannelErrorMessageData(uint32 in_ssrc,
+ VideoMediaChannel::Error in_error)
+ : ssrc(in_ssrc),
+ error(in_error) {}
+ uint32 ssrc;
+ VideoMediaChannel::Error error;
+};
+
+static const char* PacketType(bool rtcp) {
+ return (!rtcp) ? "RTP" : "RTCP";
+}
+
+static bool ValidPacket(bool rtcp, const talk_base::Buffer* packet) {
+ // Check the packet size. We could check the header too if needed.
+ return (packet &&
+ packet->length() >= (!rtcp ? kMinRtpPacketLen : kMinRtcpPacketLen) &&
+ packet->length() <= kMaxRtpPacketLen);
+}
+
+BaseChannel::BaseChannel(talk_base::Thread* thread, MediaEngine* media_engine,
+ MediaChannel* media_channel, BaseSession* session,
+ const std::string& content_name,
+ TransportChannel* transport_channel)
+ : worker_thread_(thread),
+ media_engine_(media_engine),
+ session_(session),
+ media_channel_(media_channel),
+ received_media_sink_(NULL),
+ sent_media_sink_(NULL),
+ content_name_(content_name),
+ transport_channel_(transport_channel),
+ rtcp_transport_channel_(NULL),
+ enabled_(false),
+ writable_(false),
+ has_codec_(false),
+ muted_(false) {
+ ASSERT(worker_thread_ == talk_base::Thread::Current());
+ media_channel_->SetInterface(this);
+
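+ // Under PLATFORM_CHROMIUM, the writable-state and read-packet signals are
+ // relayed through the session rather than connected directly to the
+ // transport channel.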
+#ifdef PLATFORM_CHROMIUM
+ session_->SignalWritableState.connect(
+ this, &BaseChannel::OnWritableState);
+ session_->SignalReadPacket.connect(
+ this, &BaseChannel::OnChannelRead);
+#else
+ transport_channel_->SignalWritableState.connect(
+ this, &BaseChannel::OnWritableState);
+ transport_channel_->SignalReadPacket.connect(
+ this, &BaseChannel::OnChannelRead);
+#endif
+
+ LOG(LS_INFO) << "Created channel";
+
+ session->SignalState.connect(this, &BaseChannel::OnSessionState);
+}
+
+BaseChannel::~BaseChannel() {
+ ASSERT(worker_thread_ == talk_base::Thread::Current());
+ StopConnectionMonitor();
+ FlushRtcpMessages(); // Send any outstanding RTCP packets.
+  Clear();  // Discard any outstanding messages or packets.
+ // We must destroy the media channel before the transport channel, otherwise
+ // the media channel may try to send on the dead transport channel. NULLing
+ // is not an effective strategy since the sends will come on another thread.
+ delete media_channel_;
+ set_rtcp_transport_channel(NULL);
+ if (transport_channel_ != NULL)
+ session_->DestroyChannel(content_name_, transport_channel_->name());
+ LOG(LS_INFO) << "Destroyed channel";
+}
+
+bool BaseChannel::Enable(bool enable) {
+ // Can be called from thread other than worker thread
+ Send(enable ? MSG_ENABLE : MSG_DISABLE);
+ return true;
+}
+
+bool BaseChannel::Mute(bool mute) {
+ // Can be called from thread other than worker thread
+ Send(mute ? MSG_MUTE : MSG_UNMUTE);
+ return true;
+}
+
+bool BaseChannel::RemoveStream(uint32 ssrc) {
+ StreamMessageData data(ssrc, 0);
+ Send(MSG_REMOVESTREAM, &data);
+ return true;
+}
+
+bool BaseChannel::SetRtcpCName(const std::string& cname) {
+ SetRtcpCNameData data(cname);
+ Send(MSG_SETRTCPCNAME, &data);
+ return data.result;
+}
+
+bool BaseChannel::SetLocalContent(const MediaContentDescription* content,
+ ContentAction action) {
+ SetContentData data(content, action);
+ Send(MSG_SETLOCALCONTENT, &data);
+ return data.result;
+}
+
+bool BaseChannel::SetRemoteContent(const MediaContentDescription* content,
+ ContentAction action) {
+ SetContentData data(content, action);
+ Send(MSG_SETREMOTECONTENT, &data);
+ return data.result;
+}
+
+bool BaseChannel::SetMaxSendBandwidth(int max_bandwidth) {
+ SetBandwidthData data(max_bandwidth);
+ Send(MSG_SETMAXSENDBANDWIDTH, &data);
+ return data.result;
+}
+
+void BaseChannel::StartConnectionMonitor(int cms) {
+ socket_monitor_.reset(new SocketMonitor(transport_channel_,
+ worker_thread(),
+ talk_base::Thread::Current()));
+ socket_monitor_->SignalUpdate.connect(
+ this, &BaseChannel::OnConnectionMonitorUpdate);
+ socket_monitor_->Start(cms);
+}
+
+void BaseChannel::StopConnectionMonitor() {
+ if (socket_monitor_.get()) {
+ socket_monitor_->Stop();
+ socket_monitor_.reset();
+ }
+}
+
+void BaseChannel::set_rtcp_transport_channel(TransportChannel* channel) {
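+  // Swapping in a new RTCP transport destroys the old channel and hooks the
+  // writable/read-packet signals up to the new one (if any).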
+ if (rtcp_transport_channel_ != channel) {
+ if (rtcp_transport_channel_) {
+ session_->DestroyChannel(content_name_, rtcp_transport_channel_->name());
+ }
+ rtcp_transport_channel_ = channel;
+ if (rtcp_transport_channel_) {
+ rtcp_transport_channel_->SignalWritableState.connect(
+ this, &BaseChannel::OnWritableState);
+ rtcp_transport_channel_->SignalReadPacket.connect(
+ this, &BaseChannel::OnChannelRead);
+ }
+ }
+}
+
+bool BaseChannel::SendPacket(talk_base::Buffer* packet) {
+ return SendPacket(false, packet);
+}
+
+bool BaseChannel::SendRtcp(talk_base::Buffer* packet) {
+ return SendPacket(true, packet);
+}
+
+int BaseChannel::SetOption(SocketType type, talk_base::Socket::Option opt,
+ int value) {
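+  // Note: ST_RTCP assumes an RTCP transport channel has been set.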
+ switch (type) {
+ case ST_RTP: return transport_channel_->SetOption(opt, value);
+ case ST_RTCP: return rtcp_transport_channel_->SetOption(opt, value);
+ default: return -1;
+ }
+}
+
+void BaseChannel::OnWritableState(TransportChannel* channel) {
+#ifdef PLATFORM_CHROMIUM
+  // Since the session issues this signal, it may fire for channels other
+  // than ours; ignore those.
+ if (channel != transport_channel_ && channel != rtcp_transport_channel_) {
+ return;
+ }
+#else
+ ASSERT(channel == transport_channel_ || channel == rtcp_transport_channel_);
+#endif
+  if (transport_channel_->writable() &&
+      (!rtcp_transport_channel_ || rtcp_transport_channel_->writable())) {
+ ChannelWritable_w();
+ } else {
+ ChannelNotWritable_w();
+ }
+}
+
+void BaseChannel::OnChannelRead(TransportChannel* channel,
+ const char* data, size_t len) {
+ // OnChannelRead gets called from P2PSocket; now pass data to MediaEngine
+ ASSERT(worker_thread_ == talk_base::Thread::Current());
+#ifdef PLATFORM_CHROMIUM
+ if (channel != transport_channel_ && channel != rtcp_transport_channel_) {
+ return;
+ }
+#endif
+
+ // When using RTCP multiplexing we might get RTCP packets on the RTP
+ // transport. We feed RTP traffic into the demuxer to determine if it is RTCP.
+ bool rtcp = PacketIsRtcp(channel, data, len);
+ talk_base::Buffer packet(data, len);
+ HandlePacket(rtcp, &packet);
+}
+
+bool BaseChannel::PacketIsRtcp(const TransportChannel* channel,
+ const char* data, size_t len) {
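+  // A packet is RTCP if it arrived on the RTCP transport, or if RTCP mux is
+  // in use and the demuxer classifies it as RTCP.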
+ return (channel == rtcp_transport_channel_ ||
+ rtcp_mux_filter_.DemuxRtcp(data, len));
+}
+
+bool BaseChannel::SendPacket(bool rtcp, talk_base::Buffer* packet) {
+ // SendPacket gets called from MediaEngine, typically on an encoder thread.
+ // If the thread is not our worker thread, we will post to our worker
+ // so that the real work happens on our worker. This avoids us having to
+ // synchronize access to all the pieces of the send path, including
+ // SRTP and the inner workings of the transport channels.
+ // The only downside is that we can't return a proper failure code if
+ // needed. Since UDP is unreliable anyway, this should be a non-issue.
+ if (talk_base::Thread::Current() != worker_thread_) {
+ // Avoid a copy by transferring the ownership of the packet data.
+ int message_id = (!rtcp) ? MSG_RTPPACKET : MSG_RTCPPACKET;
+ PacketMessageData* data = new PacketMessageData;
+ packet->TransferTo(&data->packet);
+ worker_thread_->Post(this, message_id, data);
+ return true;
+ }
+
+ // Make sure we have a place to send this packet before doing anything.
+ // (We might get RTCP packets that we don't intend to send.)
+ // If we've negotiated RTCP mux, send RTCP over the RTP transport.
+ TransportChannel* channel = (!rtcp || rtcp_mux_filter_.IsActive()) ?
+ transport_channel_ : rtcp_transport_channel_;
+ if (!channel) {
+ return false;
+ }
+
+  // Protect ourselves against malformed data.
+ if (!ValidPacket(rtcp, packet)) {
+ LOG(LS_ERROR) << "Dropping outgoing " << content_name_ << " "
+ << PacketType(rtcp) << " packet: wrong size="
+ << packet->length();
+ return false;
+ }
+
+ // Push the packet down to the media sink.
+ // Need to do this before protecting the packet.
+ {
+ talk_base::CritScope cs(&sink_critical_section_);
+ if (sent_media_sink_) {
+ if (!rtcp) {
+ sent_media_sink_->OnRtpPacket(packet->data(), packet->length());
+ } else {
+ sent_media_sink_->OnRtcpPacket(packet->data(), packet->length());
+ }
+ }
+ }
+
+ // Protect if needed.
+ if (srtp_filter_.IsActive()) {
+ bool res;
+ char* data = packet->data();
+ int len = packet->length();
+ if (!rtcp) {
+ res = srtp_filter_.ProtectRtp(data, len, packet->capacity(), &len);
+ if (!res) {
+ int seq_num = -1;
+ uint32 ssrc = 0;
+ GetRtpSeqNum(data, len, &seq_num);
+ GetRtpSsrc(data, len, &ssrc);
+ LOG(LS_ERROR) << "Failed to protect " << content_name_
+ << " RTP packet: size=" << len
+ << ", seqnum=" << seq_num << ", SSRC=" << ssrc;
+ return false;
+ }
+ } else {
+ res = srtp_filter_.ProtectRtcp(data, len, packet->capacity(), &len);
+ if (!res) {
+ int type = -1;
+ GetRtcpType(data, len, &type);
+ LOG(LS_ERROR) << "Failed to protect " << content_name_
+ << " RTCP packet: size=" << len << ", type=" << type;
+ return false;
+ }
+ }
+
+ // Update the length of the packet now that we've added the auth tag.
+ packet->SetLength(len);
+ }
+
+ // Bon voyage.
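+  // A short write is treated as a send failure.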
+  return (channel->SendPacket(packet) ==
+          static_cast<int>(packet->length()));
+}
+
+void BaseChannel::HandlePacket(bool rtcp, talk_base::Buffer* packet) {
+  // Protect ourselves against malformed data.
+ if (!ValidPacket(rtcp, packet)) {
+ LOG(LS_ERROR) << "Dropping incoming " << content_name_ << " "
+ << PacketType(rtcp) << " packet: wrong size="
+ << packet->length();
+ return;
+ }
+
+ // Unprotect the packet, if needed.
+ if (srtp_filter_.IsActive()) {
+ char* data = packet->data();
+ int len = packet->length();
+ bool res;
+ if (!rtcp) {
+ res = srtp_filter_.UnprotectRtp(data, len, &len);
+ if (!res) {
+ int seq_num = -1;
+ uint32 ssrc = 0;
+ GetRtpSeqNum(data, len, &seq_num);
+ GetRtpSsrc(data, len, &ssrc);
+ LOG(LS_ERROR) << "Failed to unprotect " << content_name_
+ << " RTP packet: size=" << len
+ << ", seqnum=" << seq_num << ", SSRC=" << ssrc;
+ return;
+ }
+ } else {
+ res = srtp_filter_.UnprotectRtcp(data, len, &len);
+ if (!res) {
+ int type = -1;
+ GetRtcpType(data, len, &type);
+ LOG(LS_ERROR) << "Failed to unprotect " << content_name_
+ << " RTCP packet: size=" << len << ", type=" << type;
+ return;
+ }
+ }
+
+ packet->SetLength(len);
+ }
+
+ // Push it down to the media channel.
+ if (!rtcp) {
+ media_channel_->OnPacketReceived(packet);
+ } else {
+ media_channel_->OnRtcpReceived(packet);
+ }
+
+ // Push it down to the media sink.
+ {
+ talk_base::CritScope cs(&sink_critical_section_);
+ if (received_media_sink_) {
+ if (!rtcp) {
+ received_media_sink_->OnRtpPacket(packet->data(), packet->length());
+ } else {
+ received_media_sink_->OnRtcpPacket(packet->data(), packet->length());
+ }
+ }
+ }
+}
+
+void BaseChannel::OnSessionState(BaseSession* session,
+ BaseSession::State state) {
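+  // Apply the session's local/remote description to this channel as an offer
+  // or answer, depending on how far signaling has progressed.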
+ const MediaContentDescription* content = NULL;
+ switch (state) {
+ case Session::STATE_SENTINITIATE:
+ content = GetFirstContent(session->local_description());
+ if (content && !SetLocalContent(content, CA_OFFER)) {
+ LOG(LS_ERROR) << "Failure in SetLocalContent with CA_OFFER";
+ session->SetError(BaseSession::ERROR_CONTENT);
+ }
+ break;
+ case Session::STATE_SENTACCEPT:
+ content = GetFirstContent(session->local_description());
+ if (content && !SetLocalContent(content, CA_ANSWER)) {
+ LOG(LS_ERROR) << "Failure in SetLocalContent with CA_ANSWER";
+ session->SetError(BaseSession::ERROR_CONTENT);
+ }
+ break;
+ case Session::STATE_RECEIVEDINITIATE:
+ content = GetFirstContent(session->remote_description());
+ if (content && !SetRemoteContent(content, CA_OFFER)) {
+ LOG(LS_ERROR) << "Failure in SetRemoteContent with CA_OFFER";
+ session->SetError(BaseSession::ERROR_CONTENT);
+ }
+ break;
+ case Session::STATE_RECEIVEDACCEPT:
+ content = GetFirstContent(session->remote_description());
+ if (content && !SetRemoteContent(content, CA_ANSWER)) {
+ LOG(LS_ERROR) << "Failure in SetRemoteContent with CA_ANSWER";
+ session->SetError(BaseSession::ERROR_CONTENT);
+ }
+ break;
+ default:
+ break;
+ }
+}
+
+void BaseChannel::EnableMedia_w() {
+ ASSERT(worker_thread_ == talk_base::Thread::Current());
+ if (enabled_)
+ return;
+
+ LOG(LS_INFO) << "Channel enabled";
+ enabled_ = true;
+ ChangeState();
+}
+
+void BaseChannel::DisableMedia_w() {
+ ASSERT(worker_thread_ == talk_base::Thread::Current());
+ if (!enabled_)
+ return;
+
+ LOG(LS_INFO) << "Channel disabled";
+ enabled_ = false;
+ ChangeState();
+}
+
+void BaseChannel::MuteMedia_w() {
+ ASSERT(worker_thread_ == talk_base::Thread::Current());
+ if (muted_)
+ return;
+
+ if (media_channel()->Mute(true)) {
+ LOG(LS_INFO) << "Channel muted";
+ muted_ = true;
+ }
+}
+
+void BaseChannel::UnmuteMedia_w() {
+ ASSERT(worker_thread_ == talk_base::Thread::Current());
+ if (!muted_)
+ return;
+
+ if (media_channel()->Mute(false)) {
+ LOG(LS_INFO) << "Channel unmuted";
+ muted_ = false;
+ }
+}
+
+void BaseChannel::ChannelWritable_w() {
+ ASSERT(worker_thread_ == talk_base::Thread::Current());
+ if (writable_)
+ return;
+ LOG(LS_INFO) << "Channel socket writable ("
+ << transport_channel_->name().c_str() << ")";
+ writable_ = true;
+ ChangeState();
+}
+
+void BaseChannel::ChannelNotWritable_w() {
+ ASSERT(worker_thread_ == talk_base::Thread::Current());
+ if (!writable_)
+ return;
+
+ LOG(LS_INFO) << "Channel socket not writable ("
+ << transport_channel_->name().c_str() << ")";
+ writable_ = false;
+ ChangeState();
+}
+
+// Sets the maximum video bandwidth for automatic bandwidth adjustment.
+bool BaseChannel::SetMaxSendBandwidth_w(int max_bandwidth) {
+ return media_channel()->SetSendBandwidth(true, max_bandwidth);
+}
+
+bool BaseChannel::SetRtcpCName_w(const std::string& cname) {
+ return media_channel()->SetRtcpCName(cname);
+}
+
+bool BaseChannel::SetSrtp_w(const std::vector<CryptoParams>& cryptos,
+ ContentAction action, ContentSource src) {
+ bool ret;
+ if (action == CA_OFFER) {
+ ret = srtp_filter_.SetOffer(cryptos, src);
+ } else if (action == CA_ANSWER) {
+ ret = srtp_filter_.SetAnswer(cryptos, src);
+ } else {
+ // CA_UPDATE, no crypto params.
+ ret = true;
+ }
+ return ret;
+}
+
+bool BaseChannel::SetRtcpMux_w(bool enable, ContentAction action,
+ ContentSource src) {
+ bool ret;
+ if (action == CA_OFFER) {
+ ret = rtcp_mux_filter_.SetOffer(enable, src);
+ } else if (action == CA_ANSWER) {
+ ret = rtcp_mux_filter_.SetAnswer(enable, src);
+ if (ret && rtcp_mux_filter_.IsActive()) {
+ // We activated RTCP mux, close down the RTCP transport.
+ set_rtcp_transport_channel(NULL);
+ // If the RTP transport is already writable, then so are we.
+ if (transport_channel_->writable()) {
+ ChannelWritable_w();
+ }
+ }
+ } else {
+ // CA_UPDATE, no RTCP mux info.
+ ret = true;
+ }
+ return ret;
+}
+
+void BaseChannel::OnMessage(talk_base::Message *pmsg) {
+ switch (pmsg->message_id) {
+ case MSG_ENABLE:
+ EnableMedia_w();
+ break;
+ case MSG_DISABLE:
+ DisableMedia_w();
+ break;
+
+ case MSG_MUTE:
+ MuteMedia_w();
+ break;
+ case MSG_UNMUTE:
+ UnmuteMedia_w();
+ break;
+
+ case MSG_SETRTCPCNAME: {
+ SetRtcpCNameData* data = static_cast<SetRtcpCNameData*>(pmsg->pdata);
+ data->result = SetRtcpCName_w(data->cname);
+ break;
+ }
+
+ case MSG_SETLOCALCONTENT: {
+ SetContentData* data = static_cast<SetContentData*>(pmsg->pdata);
+ data->result = SetLocalContent_w(data->content, data->action);
+ break;
+ }
+ case MSG_SETREMOTECONTENT: {
+ SetContentData* data = static_cast<SetContentData*>(pmsg->pdata);
+ data->result = SetRemoteContent_w(data->content, data->action);
+ break;
+ }
+
+ case MSG_REMOVESTREAM: {
+ StreamMessageData* data = static_cast<StreamMessageData*>(pmsg->pdata);
+ RemoveStream_w(data->ssrc1);
+ break;
+ }
+
+ case MSG_SETMAXSENDBANDWIDTH: {
+ SetBandwidthData* data = static_cast<SetBandwidthData*>(pmsg->pdata);
+ data->result = SetMaxSendBandwidth_w(data->value);
+ break;
+ }
+
+ case MSG_RTPPACKET:
+ case MSG_RTCPPACKET: {
+ PacketMessageData* data = static_cast<PacketMessageData*>(pmsg->pdata);
+ SendPacket(pmsg->message_id == MSG_RTCPPACKET, &data->packet);
+ delete data; // because it is Posted
+ break;
+ }
+ }
+}
+
+void BaseChannel::Send(uint32 id, talk_base::MessageData *pdata) {
+ worker_thread_->Send(this, id, pdata);
+}
+
+void BaseChannel::Post(uint32 id, talk_base::MessageData *pdata) {
+ worker_thread_->Post(this, id, pdata);
+}
+
+void BaseChannel::PostDelayed(int cmsDelay, uint32 id,
+ talk_base::MessageData *pdata) {
+ worker_thread_->PostDelayed(cmsDelay, this, id, pdata);
+}
+
+void BaseChannel::Clear(uint32 id, talk_base::MessageList* removed) {
+ worker_thread_->Clear(this, id, removed);
+}
+
+void BaseChannel::FlushRtcpMessages() {
+ // Flush all remaining RTCP messages. This should only be called in
+ // destructor.
+ ASSERT(talk_base::Thread::Current() == worker_thread_);
+ talk_base::MessageList rtcp_messages;
+ Clear(MSG_RTCPPACKET, &rtcp_messages);
+ for (talk_base::MessageList::iterator it = rtcp_messages.begin();
+ it != rtcp_messages.end(); ++it) {
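+    // Send() dispatches synchronously; OnMessage() frees the posted data.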
+ Send(MSG_RTCPPACKET, it->pdata);
+ }
+}
+
+VoiceChannel::VoiceChannel(talk_base::Thread* thread,
+ MediaEngine* media_engine,
+ VoiceMediaChannel* media_channel,
+ BaseSession* session,
+ const std::string& content_name,
+ bool rtcp)
+ : BaseChannel(thread, media_engine, media_channel, session, content_name,
+ session->CreateChannel(content_name, "rtp")),
+ received_media_(false) {
+ if (rtcp) {
+ set_rtcp_transport_channel(session->CreateChannel(content_name, "rtcp"));
+ }
+ // Can't go in BaseChannel because certain session states will
+ // trigger pure virtual functions, such as GetFirstContent().
+ OnSessionState(session, session->state());
+
+ media_channel->SignalMediaError.connect(
+ this, &VoiceChannel::OnVoiceChannelError);
+ srtp_filter()->SignalSrtpError.connect(
+ this, &VoiceChannel::OnSrtpError);
+}
+
+VoiceChannel::~VoiceChannel() {
+ StopAudioMonitor();
+ StopMediaMonitor();
+  // This can't be done in the base class, since it calls a virtual method.
+ DisableMedia_w();
+}
+
+bool VoiceChannel::AddStream(uint32 ssrc) {
+ StreamMessageData data(ssrc, 0);
+ Send(MSG_ADDSTREAM, &data);
+ return true;
+}
+
+bool VoiceChannel::SetRingbackTone(const void* buf, int len) {
+ SetRingbackToneMessageData data(buf, len);
+ Send(MSG_SETRINGBACKTONE, &data);
+ return data.result;
+}
+
+// TODO: Handle early media the right way. We should get an explicit
+// ringing message telling us to start playing local ringback, which we cancel
+// if any early media actually arrives. For now, we do the opposite, which is
+// to wait 1 second for early media, and start playing local ringback if none
+// arrives.
+void VoiceChannel::SetEarlyMedia(bool enable) {
+ if (enable) {
+ // Start the early media timeout
+ PostDelayed(kEarlyMediaTimeout, MSG_EARLYMEDIATIMEOUT);
+ } else {
+ // Stop the timeout if currently going.
+ Clear(MSG_EARLYMEDIATIMEOUT);
+ }
+}
+
+bool VoiceChannel::PlayRingbackTone(uint32 ssrc, bool play, bool loop) {
+ PlayRingbackToneMessageData data(ssrc, play, loop);
+ Send(MSG_PLAYRINGBACKTONE, &data);
+ return data.result;
+}
+
+bool VoiceChannel::PressDTMF(int digit, bool playout) {
+ DtmfMessageData data(digit, playout);
+ Send(MSG_PRESSDTMF, &data);
+ return data.result;
+}
+
+void VoiceChannel::StartMediaMonitor(int cms) {
+ media_monitor_.reset(new VoiceMediaMonitor(media_channel(), worker_thread(),
+ talk_base::Thread::Current()));
+ media_monitor_->SignalUpdate.connect(
+ this, &VoiceChannel::OnMediaMonitorUpdate);
+ media_monitor_->Start(cms);
+}
+
+void VoiceChannel::StopMediaMonitor() {
+ if (media_monitor_.get()) {
+ media_monitor_->Stop();
+ media_monitor_->SignalUpdate.disconnect(this);
+ media_monitor_.reset();
+ }
+}
+
+void VoiceChannel::StartAudioMonitor(int cms) {
+ audio_monitor_.reset(new AudioMonitor(this, talk_base::Thread::Current()));
+  audio_monitor_->SignalUpdate.connect(
+      this, &VoiceChannel::OnAudioMonitorUpdate);
+ audio_monitor_->Start(cms);
+}
+
+void VoiceChannel::StopAudioMonitor() {
+ if (audio_monitor_.get()) {
+ audio_monitor_->Stop();
+ audio_monitor_.reset();
+ }
+}
+
+int VoiceChannel::GetInputLevel_w() {
+ return media_engine()->GetInputLevel();
+}
+
+int VoiceChannel::GetOutputLevel_w() {
+ return media_channel()->GetOutputLevel();
+}
+
+void VoiceChannel::GetActiveStreams_w(AudioInfo::StreamList* actives) {
+ media_channel()->GetActiveStreams(actives);
+}
+
+void VoiceChannel::OnChannelRead(TransportChannel* channel,
+ const char* data, size_t len) {
+ BaseChannel::OnChannelRead(channel, data, len);
+
+ // Set a flag when we've received an RTP packet. If we're waiting for early
+ // media, this will disable the timeout.
+ if (!received_media_ && !PacketIsRtcp(channel, data, len)) {
+ received_media_ = true;
+ }
+}
+
+void VoiceChannel::ChangeState() {
+  // Render incoming data if we are the active call; we receive data on the
+  // default channel and on multiplexed streams.
+ bool recv = enabled();
+ if (!media_channel()->SetPlayout(recv)) {
+ SendLastMediaError();
+ }
+
+  // Send outgoing data if we are the active call, have the remote party's
+  // codec, and have a writable transport; we only send data on the default
+  // channel.
+ bool send = enabled() && has_codec() && writable();
+ SendFlags send_flag = send ? SEND_MICROPHONE : SEND_NOTHING;
+ if (!media_channel()->SetSend(send_flag)) {
+ LOG(LS_ERROR) << "Failed to SetSend " << send_flag << " on voice channel";
+ SendLastMediaError();
+ }
+
+ LOG(LS_INFO) << "Changing voice state, recv=" << recv << " send=" << send;
+}
+
+const MediaContentDescription* VoiceChannel::GetFirstContent(
+ const SessionDescription* sdesc) {
+ const ContentInfo* cinfo = GetFirstAudioContent(sdesc);
+ if (cinfo == NULL)
+ return NULL;
+
+ return static_cast<const MediaContentDescription*>(cinfo->description);
+}
+
+bool VoiceChannel::SetLocalContent_w(const MediaContentDescription* content,
+ ContentAction action) {
+ ASSERT(worker_thread() == talk_base::Thread::Current());
+ LOG(LS_INFO) << "Setting local voice description";
+
+ const AudioContentDescription* audio =
+ static_cast<const AudioContentDescription*>(content);
+ ASSERT(audio != NULL);
+
+ bool ret;
+ if (audio->ssrc_set()) {
+ media_channel()->SetSendSsrc(audio->ssrc());
+ LOG(LS_INFO) << "Set send ssrc for audio: " << audio->ssrc();
+ }
+ // set SRTP
+ ret = SetSrtp_w(audio->cryptos(), action, CS_LOCAL);
+
+ // set RTCP mux
+ if (ret)
+ ret = SetRtcpMux_w(audio->rtcp_mux(), action, CS_LOCAL);
+
+ // set payload type and config for voice codecs
+ if (ret)
+ ret = media_channel()->SetRecvCodecs(audio->codecs());
+
+ // set header extensions
+ if (ret && audio->rtp_header_extensions_set()) {
+ ret = media_channel()->SetRecvRtpHeaderExtensions(
+ audio->rtp_header_extensions());
+ }
+
+ return ret;
+}
+
+bool VoiceChannel::SetRemoteContent_w(const MediaContentDescription* content,
+ ContentAction action) {
+ ASSERT(worker_thread() == talk_base::Thread::Current());
+ LOG(LS_INFO) << "Setting remote voice description";
+
+ const AudioContentDescription* audio =
+ static_cast<const AudioContentDescription*>(content);
+ ASSERT(audio != NULL);
+
+ // set SRTP
+ bool ret = SetSrtp_w(audio->cryptos(), action, CS_REMOTE);
+ // set RTCP mux
+ if (ret) {
+ ret = SetRtcpMux_w(audio->rtcp_mux(), action, CS_REMOTE);
+ }
+ // set codecs and payload types
+ if (ret) {
+ ret = media_channel()->SetSendCodecs(audio->codecs());
+ }
+ // set header extensions
+ if (ret && audio->rtp_header_extensions_set()) {
+ ret = media_channel()->SetSendRtpHeaderExtensions(
+ audio->rtp_header_extensions());
+ }
+
+ int audio_options = 0;
+ if (audio->conference_mode()) {
+ audio_options |= OPT_CONFERENCE;
+ }
+ if (!media_channel()->SetOptions(audio_options)) {
+ // Log an error on failure, but don't abort the call.
+ LOG(LS_ERROR) << "Failed to set voice channel options";
+ }
+
+ // update state
+ if (ret) {
+ set_has_codec(true);
+ ChangeState();
+ }
+ return ret;
+}
+
+void VoiceChannel::AddStream_w(uint32 ssrc) {
+ ASSERT(worker_thread() == talk_base::Thread::Current());
+ media_channel()->AddStream(ssrc);
+}
+
+void VoiceChannel::RemoveStream_w(uint32 ssrc) {
+ media_channel()->RemoveStream(ssrc);
+}
+
+bool VoiceChannel::SetRingbackTone_w(const void* buf, int len) {
+ ASSERT(worker_thread() == talk_base::Thread::Current());
+ return media_channel()->SetRingbackTone(static_cast<const char*>(buf), len);
+}
+
+bool VoiceChannel::PlayRingbackTone_w(uint32 ssrc, bool play, bool loop) {
+ ASSERT(worker_thread() == talk_base::Thread::Current());
+ if (play) {
+ LOG(LS_INFO) << "Playing ringback tone, loop=" << loop;
+ } else {
+ LOG(LS_INFO) << "Stopping ringback tone";
+ }
+ return media_channel()->PlayRingbackTone(ssrc, play, loop);
+}
+
+void VoiceChannel::HandleEarlyMediaTimeout() {
+ // This occurs on the main thread, not the worker thread.
+ if (!received_media_) {
+ LOG(LS_INFO) << "No early media received before timeout";
+ SignalEarlyMediaTimeout(this);
+ }
+}
+
+bool VoiceChannel::PressDTMF_w(int digit, bool playout) {
+ if (!enabled() || !writable()) {
+ return false;
+ }
+
+ return media_channel()->PressDTMF(digit, playout);
+}
+
+void VoiceChannel::OnMessage(talk_base::Message *pmsg) {
+ switch (pmsg->message_id) {
+ case MSG_ADDSTREAM: {
+ StreamMessageData* data = static_cast<StreamMessageData*>(pmsg->pdata);
+ AddStream_w(data->ssrc1);
+ break;
+ }
+ case MSG_SETRINGBACKTONE: {
+ SetRingbackToneMessageData* data =
+ static_cast<SetRingbackToneMessageData*>(pmsg->pdata);
+ data->result = SetRingbackTone_w(data->buf, data->len);
+ break;
+ }
+ case MSG_PLAYRINGBACKTONE: {
+ PlayRingbackToneMessageData* data =
+ static_cast<PlayRingbackToneMessageData*>(pmsg->pdata);
+ data->result = PlayRingbackTone_w(data->ssrc, data->play, data->loop);
+ break;
+ }
+ case MSG_EARLYMEDIATIMEOUT:
+ HandleEarlyMediaTimeout();
+ break;
+ case MSG_PRESSDTMF: {
+ DtmfMessageData* data = static_cast<DtmfMessageData*>(pmsg->pdata);
+ data->result = PressDTMF_w(data->digit, data->playout);
+ break;
+ }
+ case MSG_CHANNEL_ERROR: {
+ VoiceChannelErrorMessageData* data =
+ static_cast<VoiceChannelErrorMessageData*>(pmsg->pdata);
+ SignalMediaError(this, data->ssrc, data->error);
+ delete data;
+ break;
+ }
+
+ default:
+ BaseChannel::OnMessage(pmsg);
+ break;
+ }
+}
+
+void VoiceChannel::OnConnectionMonitorUpdate(
+ SocketMonitor* monitor, const std::vector<ConnectionInfo>& infos) {
+ SignalConnectionMonitor(this, infos);
+}
+
+void VoiceChannel::OnMediaMonitorUpdate(
+ VoiceMediaChannel* media_channel, const VoiceMediaInfo& info) {
+ ASSERT(media_channel == this->media_channel());
+ SignalMediaMonitor(this, info);
+}
+
+void VoiceChannel::OnAudioMonitorUpdate(AudioMonitor* monitor,
+ const AudioInfo& info) {
+ SignalAudioMonitor(this, info);
+}
+
+void VoiceChannel::OnVoiceChannelError(
+ uint32 ssrc, VoiceMediaChannel::Error error) {
+ VoiceChannelErrorMessageData *data = new VoiceChannelErrorMessageData(
+ ssrc, error);
+ signaling_thread()->Post(this, MSG_CHANNEL_ERROR, data);
+}
+
+void VoiceChannel::OnSrtpError(uint32 ssrc, SrtpFilter::Mode mode,
+ SrtpFilter::Error error) {
+ switch (error) {
+ case SrtpFilter::ERROR_FAIL:
+ OnVoiceChannelError(ssrc, (mode == SrtpFilter::PROTECT) ?
+ VoiceMediaChannel::ERROR_REC_SRTP_ERROR :
+ VoiceMediaChannel::ERROR_PLAY_SRTP_ERROR);
+ break;
+ case SrtpFilter::ERROR_AUTH:
+ OnVoiceChannelError(ssrc, (mode == SrtpFilter::PROTECT) ?
+ VoiceMediaChannel::ERROR_REC_SRTP_AUTH_FAILED :
+ VoiceMediaChannel::ERROR_PLAY_SRTP_AUTH_FAILED);
+ break;
+ case SrtpFilter::ERROR_REPLAY:
+      // Only the receiving channel should see this error.
+ ASSERT(mode == SrtpFilter::UNPROTECT);
+ OnVoiceChannelError(ssrc, VoiceMediaChannel::ERROR_PLAY_SRTP_REPLAY);
+ break;
+ default:
+ break;
+ }
+}
+
+VideoChannel::VideoChannel(talk_base::Thread* thread,
+ MediaEngine* media_engine,
+ VideoMediaChannel* media_channel,
+ BaseSession* session,
+ const std::string& content_name,
+ bool rtcp,
+ VoiceChannel* voice_channel)
+ : BaseChannel(thread, media_engine, media_channel, session, content_name,
+ session->CreateChannel(content_name, "video_rtp")),
+ voice_channel_(voice_channel), renderer_(NULL) {
+ if (rtcp) {
+ set_rtcp_transport_channel(
+ session->CreateChannel(content_name, "video_rtcp"));
+ }
+ // Can't go in BaseChannel because certain session states will
+  // trigger pure virtual functions, such as GetFirstContent().
+ OnSessionState(session, session->state());
+
+ media_channel->SignalMediaError.connect(
+ this, &VideoChannel::OnVideoChannelError);
+ srtp_filter()->SignalSrtpError.connect(
+ this, &VideoChannel::OnSrtpError);
+}
+
+void VoiceChannel::SendLastMediaError() {
+ uint32 ssrc;
+ VoiceMediaChannel::Error error;
+ media_channel()->GetLastMediaError(&ssrc, &error);
+ SignalMediaError(this, ssrc, error);
+}
+
+VideoChannel::~VideoChannel() {
+ StopMediaMonitor();
+  // This can't be done in the base class, since it calls a virtual method.
+ DisableMedia_w();
+}
+
+bool VideoChannel::AddStream(uint32 ssrc, uint32 voice_ssrc) {
+ StreamMessageData data(ssrc, voice_ssrc);
+ Send(MSG_ADDSTREAM, &data);
+ return true;
+}
+
+bool VideoChannel::SetRenderer(uint32 ssrc, VideoRenderer* renderer) {
+ RenderMessageData data(ssrc, renderer);
+ Send(MSG_SETRENDERER, &data);
+ return true;
+}
+
+bool VideoChannel::SendIntraFrame() {
+  Send(MSG_SENDINTRAFRAME);
+  return true;
+}
+
+bool VideoChannel::RequestIntraFrame() {
+ Send(MSG_REQUESTINTRAFRAME);
+ return true;
+}
+
+void VideoChannel::ChangeState() {
+  // Render incoming data if we are the active call; we receive data on the
+  // default channel and on multiplexed streams.
+ bool recv = enabled();
+ if (!media_channel()->SetRender(recv)) {
+ LOG(LS_ERROR) << "Failed to SetRender on video channel";
+ // TODO: Report error back to server.
+ }
+
+  // Send outgoing data if we are the active call, have the remote party's
+  // codec, and have a writable transport; we only send data on the default
+  // channel.
+ bool send = enabled() && has_codec() && writable();
+ if (!media_channel()->SetSend(send)) {
+ LOG(LS_ERROR) << "Failed to SetSend on video channel";
+ // TODO: Report error back to server.
+ }
+
+ LOG(LS_INFO) << "Changing video state, recv=" << recv << " send=" << send;
+}
+
+void VideoChannel::StartMediaMonitor(int cms) {
+ media_monitor_.reset(new VideoMediaMonitor(media_channel(), worker_thread(),
+ talk_base::Thread::Current()));
+ media_monitor_->SignalUpdate.connect(
+ this, &VideoChannel::OnMediaMonitorUpdate);
+ media_monitor_->Start(cms);
+}
+
+void VideoChannel::StopMediaMonitor() {
+ if (media_monitor_.get()) {
+ media_monitor_->Stop();
+ media_monitor_.reset();
+ }
+}
+
+const MediaContentDescription* VideoChannel::GetFirstContent(
+ const SessionDescription* sdesc) {
+ const ContentInfo* cinfo = GetFirstVideoContent(sdesc);
+ if (cinfo == NULL)
+ return NULL;
+
+ return static_cast<const MediaContentDescription*>(cinfo->description);
+}
+
+bool VideoChannel::SetLocalContent_w(const MediaContentDescription* content,
+ ContentAction action) {
+ ASSERT(worker_thread() == talk_base::Thread::Current());
+ LOG(LS_INFO) << "Setting local video description";
+
+ const VideoContentDescription* video =
+ static_cast<const VideoContentDescription*>(content);
+ ASSERT(video != NULL);
+
+ bool ret;
+ if (video->ssrc_set()) {
+ media_channel()->SetSendSsrc(video->ssrc());
+ LOG(LS_INFO) << "Set send ssrc for video: " << video->ssrc();
+ }
+ // set SRTP
+ ret = SetSrtp_w(video->cryptos(), action, CS_LOCAL);
+
+ // set RTCP mux
+ if (ret)
+ ret = SetRtcpMux_w(video->rtcp_mux(), action, CS_LOCAL);
+
+ // set payload types and config for receiving video
+ if (ret)
+ ret = media_channel()->SetRecvCodecs(video->codecs());
+
+ if (ret && video->rtp_header_extensions_set()) {
+ ret = media_channel()->SetRecvRtpHeaderExtensions(
+ video->rtp_header_extensions());
+ }
+
+ return ret;
+}
+
+bool VideoChannel::SetRemoteContent_w(const MediaContentDescription* content,
+ ContentAction action) {
+ ASSERT(worker_thread() == talk_base::Thread::Current());
+ LOG(LS_INFO) << "Setting remote video description";
+
+ const VideoContentDescription* video =
+ static_cast<const VideoContentDescription*>(content);
+ ASSERT(video != NULL);
+
+ // set SRTP
+ bool ret = SetSrtp_w(video->cryptos(), action, CS_REMOTE);
+ // set RTCP mux
+ if (ret) {
+ ret = SetRtcpMux_w(video->rtcp_mux(), action, CS_REMOTE);
+ }
+
+ // Set the send codecs before we can tweak bandwidth parameters.
+ // Otherwise the send_codec in the media channel won't be initialized
+ // and we can't set the bandwidth.
+ if (ret) {
+ ret = media_channel()->SetSendCodecs(video->codecs());
+ }
+
+ // Set video bandwidth parameters.
+ if (ret) {
+ int bandwidth_bps = video->bandwidth();
+ bool auto_bandwidth = (bandwidth_bps == kAutoBandwidth);
+ // Ignore errors from SetSendBandwidth.
+ // TODO(mallinath): SetSendCodec has already been called, so this call
+ // may fail.
+ /*ret = */media_channel()->SetSendBandwidth(auto_bandwidth, bandwidth_bps);
+ }
+ // set header extensions
+ if (ret && video->rtp_header_extensions_set()) {
+ ret = media_channel()->SetSendRtpHeaderExtensions(
+ video->rtp_header_extensions());
+ }
+ if (ret) {
+ set_has_codec(true);
+ ChangeState();
+ }
+ return ret;
+}
+
+void VideoChannel::AddStream_w(uint32 ssrc, uint32 voice_ssrc) {
+ media_channel()->AddStream(ssrc, voice_ssrc);
+}
+
+void VideoChannel::RemoveStream_w(uint32 ssrc) {
+ media_channel()->RemoveStream(ssrc);
+}
+
+void VideoChannel::SetRenderer_w(uint32 ssrc, VideoRenderer* renderer) {
+ media_channel()->SetRenderer(ssrc, renderer);
+}
+
+void VideoChannel::OnMessage(talk_base::Message *pmsg) {
+ switch (pmsg->message_id) {
+ case MSG_ADDSTREAM: {
+ StreamMessageData* data = static_cast<StreamMessageData*>(pmsg->pdata);
+ AddStream_w(data->ssrc1, data->ssrc2);
+ break;
+ }
+ case MSG_SETRENDERER: {
+ RenderMessageData* data = static_cast<RenderMessageData*>(pmsg->pdata);
+ SetRenderer_w(data->ssrc, data->renderer);
+ break;
+ }
+ case MSG_SENDINTRAFRAME:
+ SendIntraFrame_w();
+ break;
+ case MSG_REQUESTINTRAFRAME:
+ RequestIntraFrame_w();
+ break;
+ case MSG_CHANNEL_ERROR: {
+ const VideoChannelErrorMessageData* data =
+ static_cast<VideoChannelErrorMessageData*>(pmsg->pdata);
+ SignalMediaError(this, data->ssrc, data->error);
+ delete data;
+ break;
+ }
+ default:
+ BaseChannel::OnMessage(pmsg);
+ break;
+ }
+}
+
+void VideoChannel::OnConnectionMonitorUpdate(
+ SocketMonitor *monitor, const std::vector<ConnectionInfo> &infos) {
+ SignalConnectionMonitor(this, infos);
+}
+
+void VideoChannel::OnMediaMonitorUpdate(
+ VideoMediaChannel* media_channel, const VideoMediaInfo &info) {
+ ASSERT(media_channel == this->media_channel());
+ SignalMediaMonitor(this, info);
+}
+
+void VideoChannel::OnVideoChannelError(uint32 ssrc,
+ VideoMediaChannel::Error error) {
+ VideoChannelErrorMessageData* data = new VideoChannelErrorMessageData(
+ ssrc, error);
+ signaling_thread()->Post(this, MSG_CHANNEL_ERROR, data);
+}
+
+void VideoChannel::OnSrtpError(uint32 ssrc, SrtpFilter::Mode mode,
+ SrtpFilter::Error error) {
+ switch (error) {
+ case SrtpFilter::ERROR_FAIL:
+ OnVideoChannelError(ssrc, (mode == SrtpFilter::PROTECT) ?
+ VideoMediaChannel::ERROR_REC_SRTP_ERROR :
+ VideoMediaChannel::ERROR_PLAY_SRTP_ERROR);
+ break;
+ case SrtpFilter::ERROR_AUTH:
+ OnVideoChannelError(ssrc, (mode == SrtpFilter::PROTECT) ?
+ VideoMediaChannel::ERROR_REC_SRTP_AUTH_FAILED :
+ VideoMediaChannel::ERROR_PLAY_SRTP_AUTH_FAILED);
+ break;
+ case SrtpFilter::ERROR_REPLAY:
+      // Only the receiving channel should see this error.
+ ASSERT(mode == SrtpFilter::UNPROTECT);
+ // TODO: Turn on the signaling of replay error once we have
+ // switched to the new mechanism for doing video retransmissions.
+ // OnVideoChannelError(ssrc, VideoMediaChannel::ERROR_PLAY_SRTP_REPLAY);
+ break;
+ default:
+ break;
+ }
+}
+
+} // namespace cricket
diff --git a/third_party_mods/libjingle/source/talk/session/phone/channelmanager.cc b/third_party_mods/libjingle/source/talk/session/phone/channelmanager.cc
new file mode 100644
index 0000000..1c76f02
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/session/phone/channelmanager.cc
@@ -0,0 +1,798 @@
+/*
+ * libjingle
+ * Copyright 2004--2008, Google Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ * 3. The name of the author may not be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
+ * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
+ * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
+ * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
+ * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "talk/session/phone/channelmanager.h"
+
+#ifdef HAVE_CONFIG_H
+#include <config.h>
+#endif
+
+#include <algorithm>
+
+#include "talk/base/common.h"
+#include "talk/base/logging.h"
+#include "talk/base/sigslotrepeater.h"
+#include "talk/base/stringencode.h"
+#include "talk/session/phone/mediaengine.h"
+#include "talk/session/phone/soundclip.h"
+
+namespace cricket {
+
+enum {
+ MSG_CREATEVOICECHANNEL = 1,
+ MSG_DESTROYVOICECHANNEL = 2,
+ MSG_SETAUDIOOPTIONS = 3,
+ MSG_GETOUTPUTVOLUME = 4,
+ MSG_SETOUTPUTVOLUME = 5,
+ MSG_SETLOCALMONITOR = 6,
+ MSG_SETVOICELOGGING = 7,
+ MSG_CREATEVIDEOCHANNEL = 11,
+ MSG_DESTROYVIDEOCHANNEL = 12,
+ MSG_SETVIDEOOPTIONS = 13,
+ MSG_SETLOCALRENDERER = 14,
+ MSG_SETDEFAULTVIDEOENCODERCONFIG = 15,
+ MSG_SETVIDEOLOGGING = 16,
+ MSG_CREATESOUNDCLIP = 17,
+ MSG_DESTROYSOUNDCLIP = 18,
+ MSG_CAMERASTARTED = 19,
+ MSG_SETVIDEOCAPTURE = 20,
+};
+
+struct CreationParams : public talk_base::MessageData {
+ CreationParams(BaseSession* session, const std::string& content_name,
+ bool rtcp, VoiceChannel* voice_channel)
+ : session(session),
+ content_name(content_name),
+ rtcp(rtcp),
+ voice_channel(voice_channel),
+ video_channel(NULL) {}
+ BaseSession* session;
+ std::string content_name;
+ bool rtcp;
+ VoiceChannel* voice_channel;
+ VideoChannel* video_channel;
+};
+
+struct AudioOptions : public talk_base::MessageData {
+ AudioOptions(int o, const Device* in, const Device* out)
+ : options(o), in_device(in), out_device(out) {}
+ int options;
+ const Device* in_device;
+ const Device* out_device;
+ bool result;
+};
+
+struct VolumeLevel : public talk_base::MessageData {
+ VolumeLevel() : level(-1), result(false) {}
+ explicit VolumeLevel(int l) : level(l), result(false) {}
+ int level;
+ bool result;
+};
+
+struct VideoOptions : public talk_base::MessageData {
+ explicit VideoOptions(const Device* d) : cam_device(d), result(false) {}
+ const Device* cam_device;
+ bool result;
+};
+
+struct DefaultVideoEncoderConfig : public talk_base::MessageData {
+ explicit DefaultVideoEncoderConfig(const VideoEncoderConfig& c)
+ : config(c), result(false) {}
+ VideoEncoderConfig config;
+ bool result;
+};
+
+struct LocalMonitor : public talk_base::MessageData {
+ explicit LocalMonitor(bool e) : enable(e), result(false) {}
+ bool enable;
+ bool result;
+};
+
+struct LocalRenderer : public talk_base::MessageData {
+ explicit LocalRenderer(VideoRenderer* r) : renderer(r), result(false) {}
+ VideoRenderer* renderer;
+ bool result;
+};
+
+struct LoggingOptions : public talk_base::MessageData {
+ explicit LoggingOptions(int lev, const char* f) : level(lev), filter(f) {}
+ int level;
+ std::string filter;
+};
+
+struct CaptureParams : public talk_base::MessageData {
+ explicit CaptureParams(bool c) : capture(c), result(CR_FAILURE) {}
+
+ bool capture;
+ CaptureResult result;
+};
+
+ChannelManager::ChannelManager(talk_base::Thread* worker_thread)
+ : media_engine_(MediaEngine::Create()),
+ device_manager_(new DeviceManager()),
+ initialized_(false),
+ main_thread_(talk_base::Thread::Current()),
+ worker_thread_(worker_thread),
+ audio_in_device_(DeviceManager::kDefaultDeviceName),
+ audio_out_device_(DeviceManager::kDefaultDeviceName),
+ audio_options_(MediaEngine::DEFAULT_AUDIO_OPTIONS),
+ local_renderer_(NULL),
+ capturing_(false),
+ monitoring_(false) {
+ Construct();
+}
+
+ChannelManager::ChannelManager(MediaEngine* me, DeviceManager* dm,
+ talk_base::Thread* worker_thread)
+ : media_engine_(me),
+ device_manager_(dm),
+ initialized_(false),
+ main_thread_(talk_base::Thread::Current()),
+ worker_thread_(worker_thread),
+ audio_in_device_(DeviceManager::kDefaultDeviceName),
+ audio_out_device_(DeviceManager::kDefaultDeviceName),
+ audio_options_(MediaEngine::DEFAULT_AUDIO_OPTIONS),
+ local_renderer_(NULL),
+ capturing_(false),
+ monitoring_(false) {
+ Construct();
+}
+
+void ChannelManager::Construct() {
+ // Init the device manager immediately, and set up our default video device.
+ SignalDevicesChange.repeat(device_manager_->SignalDevicesChange);
+ device_manager_->Init();
+ // Set camera_device_ to the name of the default video capturer.
+ SetVideoOptions(DeviceManager::kDefaultDeviceName);
+
+ // Camera is started asynchronously, request callbacks when startup
+ // completes to be able to forward them to the rendering manager.
+ media_engine_->SignalVideoCaptureResult.connect(
+ this, &ChannelManager::OnVideoCaptureResult);
+}
+
+ChannelManager::~ChannelManager() {
+ if (initialized_)
+ Terminate();
+}
+
+int ChannelManager::GetCapabilities() {
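+  // Effective capabilities are the intersection (bitwise AND) of what the
+  // media engine and the device manager report.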
+ return media_engine_->GetCapabilities() & device_manager_->GetCapabilities();
+}
+
+void ChannelManager::GetSupportedAudioCodecs(
+ std::vector<AudioCodec>* codecs) const {
+ codecs->clear();
+
+ for (std::vector<AudioCodec>::const_iterator it =
+ media_engine_->audio_codecs().begin();
+ it != media_engine_->audio_codecs().end(); ++it) {
+ codecs->push_back(*it);
+ }
+}
+
+void ChannelManager::GetSupportedVideoCodecs(
+ std::vector<VideoCodec>* codecs) const {
+ codecs->clear();
+
+ std::vector<VideoCodec>::const_iterator it;
+ for (it = media_engine_->video_codecs().begin();
+ it != media_engine_->video_codecs().end(); ++it) {
+ codecs->push_back(*it);
+ }
+}
+
+bool ChannelManager::Init() {
+ ASSERT(!initialized_);
+ if (initialized_) {
+ return false;
+ }
+
+ ASSERT(worker_thread_ != NULL);
+ if (worker_thread_ && worker_thread_->started()) {
+ if (media_engine_->Init()) {
+ initialized_ = true;
+ // Now that we're initialized, apply any stored preferences. A preferred
+      // device might have been unplugged. In this case, we fall back to the
+      // default device but keep the user preferences. The preferences are
+      // changed only when the JavaScript front end changes them.
+ const std::string preferred_audio_in_device = audio_in_device_;
+ const std::string preferred_audio_out_device = audio_out_device_;
+ const std::string preferred_camera_device = camera_device_;
+ Device device;
+ if (!device_manager_->GetAudioInputDevice(audio_in_device_, &device)) {
+ LOG(LS_WARNING) << "The preferred microphone '" << audio_in_device_
+ << "' is unavailable. Fall back to the default.";
+ audio_in_device_ = DeviceManager::kDefaultDeviceName;
+ }
+ if (!device_manager_->GetAudioOutputDevice(audio_out_device_, &device)) {
+ LOG(LS_WARNING) << "The preferred speaker '" << audio_out_device_
+ << "' is unavailable. Fall back to the default.";
+ audio_out_device_ = DeviceManager::kDefaultDeviceName;
+ }
+ if (!device_manager_->GetVideoCaptureDevice(camera_device_, &device)) {
+ if (!camera_device_.empty()) {
+ LOG(LS_WARNING) << "The preferred camera '" << camera_device_
+ << "' is unavailable. Fall back to the default.";
+ }
+ camera_device_ = DeviceManager::kDefaultDeviceName;
+ }
+
+ if (!SetAudioOptions(audio_in_device_, audio_out_device_,
+ audio_options_)) {
+ LOG(LS_WARNING) << "Failed to SetAudioOptions with"
+ << " microphone: " << audio_in_device_
+ << " speaker: " << audio_out_device_
+ << " options: " << audio_options_;
+ }
+ if (!SetVideoOptions(camera_device_) && !camera_device_.empty()) {
+ LOG(LS_WARNING) << "Failed to SetVideoOptions with camera: "
+ << camera_device_;
+ }
+
+ // Restore the user preferences.
+ audio_in_device_ = preferred_audio_in_device;
+ audio_out_device_ = preferred_audio_out_device;
+ camera_device_ = preferred_camera_device;
+
+ // Now apply the default video codec that has been set earlier.
+ if (default_video_encoder_config_.max_codec.id != 0) {
+ SetDefaultVideoEncoderConfig(default_video_encoder_config_);
+ }
+ // And the local renderer.
+ if (local_renderer_) {
+ SetLocalRenderer(local_renderer_);
+ }
+ }
+  }
+ return initialized_;
+}
+
+void ChannelManager::Terminate() {
+ ASSERT(initialized_);
+ if (!initialized_) {
+ return;
+ }
+
+ // Need to destroy the voice/video channels
+ while (!video_channels_.empty()) {
+ DestroyVideoChannel_w(video_channels_.back());
+ }
+ while (!voice_channels_.empty()) {
+ DestroyVoiceChannel_w(voice_channels_.back());
+ }
+ while (!soundclips_.empty()) {
+ DestroySoundclip_w(soundclips_.back());
+ }
+
+ media_engine_->Terminate();
+ initialized_ = false;
+}
+
+VoiceChannel* ChannelManager::CreateVoiceChannel(
+ BaseSession* session, const std::string& content_name, bool rtcp) {
+ CreationParams params(session, content_name, rtcp, NULL);
+  return (Send(MSG_CREATEVOICECHANNEL, &params)) ? params.voice_channel : NULL;
+}
+
+VoiceChannel* ChannelManager::CreateVoiceChannel_w(
+ BaseSession* session, const std::string& content_name, bool rtcp) {
+ talk_base::CritScope cs(&crit_);
+
+  // It is OK to allocate this from a thread other than the worker thread.
+ ASSERT(initialized_);
+ VoiceMediaChannel* media_channel = media_engine_->CreateChannel();
+ if (media_channel == NULL)
+ return NULL;
+
+ VoiceChannel* voice_channel = new VoiceChannel(
+ worker_thread_, media_engine_.get(), media_channel,
+ session, content_name, rtcp);
+ voice_channels_.push_back(voice_channel);
+ return voice_channel;
+}
+
+void ChannelManager::DestroyVoiceChannel(VoiceChannel* voice_channel) {
+ if (voice_channel) {
+ talk_base::TypedMessageData<VoiceChannel *> data(voice_channel);
+ Send(MSG_DESTROYVOICECHANNEL, &data);
+ }
+}
+
+void ChannelManager::DestroyVoiceChannel_w(VoiceChannel* voice_channel) {
+ talk_base::CritScope cs(&crit_);
+ // Destroy voice channel.
+ ASSERT(initialized_);
+ VoiceChannels::iterator it = std::find(voice_channels_.begin(),
+ voice_channels_.end(), voice_channel);
+ ASSERT(it != voice_channels_.end());
+ if (it == voice_channels_.end())
+ return;
+
+ voice_channels_.erase(it);
+ delete voice_channel;
+}
+
+VideoChannel* ChannelManager::CreateVideoChannel(
+ BaseSession* session, const std::string& content_name, bool rtcp,
+ VoiceChannel* voice_channel) {
+ CreationParams params(session, content_name, rtcp, voice_channel);
+  return (Send(MSG_CREATEVIDEOCHANNEL, &params)) ? params.video_channel : NULL;
+}
+
+VideoChannel* ChannelManager::CreateVideoChannel_w(
+ BaseSession* session, const std::string& content_name, bool rtcp,
+ VoiceChannel* voice_channel) {
+ talk_base::CritScope cs(&crit_);
+
+  // It is OK to allocate this from a thread other than the worker thread.
+ ASSERT(initialized_);
+  // voice_channel can be NULL in case of NullVoiceEngine.
+  VideoMediaChannel* media_channel = media_engine_->CreateVideoChannel(
+      voice_channel ? voice_channel->media_channel() : NULL);
+ if (media_channel == NULL)
+ return NULL;
+
+ VideoChannel* video_channel = new VideoChannel(
+ worker_thread_, media_engine_.get(), media_channel,
+ session, content_name, rtcp, voice_channel);
+ video_channels_.push_back(video_channel);
+ return video_channel;
+}
+
+void ChannelManager::DestroyVideoChannel(VideoChannel* video_channel) {
+ if (video_channel) {
+ talk_base::TypedMessageData<VideoChannel *> data(video_channel);
+ Send(MSG_DESTROYVIDEOCHANNEL, &data);
+ }
+}
+
+void ChannelManager::DestroyVideoChannel_w(VideoChannel *video_channel) {
+ talk_base::CritScope cs(&crit_);
+  // Destroy video channel.
+ ASSERT(initialized_);
+ VideoChannels::iterator it = std::find(video_channels_.begin(),
+ video_channels_.end(), video_channel);
+ if (it == video_channels_.end())
+ return;
+
+ video_channels_.erase(it);
+ delete video_channel;
+}
+
+Soundclip* ChannelManager::CreateSoundclip() {
+ talk_base::TypedMessageData<Soundclip*> data(NULL);
+ Send(MSG_CREATESOUNDCLIP, &data);
+ return data.data();
+}
+
+Soundclip* ChannelManager::CreateSoundclip_w() {
+ talk_base::CritScope cs(&crit_);
+
+ ASSERT(initialized_);
+ ASSERT(worker_thread_ == talk_base::Thread::Current());
+
+ SoundclipMedia* soundclip_media = media_engine_->CreateSoundclip();
+ if (!soundclip_media) {
+ return NULL;
+ }
+
+ Soundclip* soundclip = new Soundclip(worker_thread_, soundclip_media);
+ soundclips_.push_back(soundclip);
+ return soundclip;
+}
+
+void ChannelManager::DestroySoundclip(Soundclip* soundclip) {
+ if (soundclip) {
+ talk_base::TypedMessageData<Soundclip*> data(soundclip);
+ Send(MSG_DESTROYSOUNDCLIP, &data);
+ }
+}
+
+void ChannelManager::DestroySoundclip_w(Soundclip* soundclip) {
+ talk_base::CritScope cs(&crit_);
+ // Destroy soundclip.
+ ASSERT(initialized_);
+ Soundclips::iterator it = std::find(soundclips_.begin(),
+ soundclips_.end(), soundclip);
+ ASSERT(it != soundclips_.end());
+ if (it == soundclips_.end())
+ return;
+
+ soundclips_.erase(it);
+ delete soundclip;
+}
+
+bool ChannelManager::GetAudioOptions(std::string* in_name,
+ std::string* out_name, int* opts) {
+ *in_name = audio_in_device_;
+ *out_name = audio_out_device_;
+ *opts = audio_options_;
+ return true;
+}
+
+bool ChannelManager::SetAudioOptions(const std::string& in_name,
+ const std::string& out_name, int opts) {
+ // Get device ids from DeviceManager.
+ Device in_dev, out_dev;
+ if (!device_manager_->GetAudioInputDevice(in_name, &in_dev)) {
+ LOG(LS_WARNING) << "Failed to GetAudioInputDevice: " << in_name;
+ return false;
+ }
+ if (!device_manager_->GetAudioOutputDevice(out_name, &out_dev)) {
+ LOG(LS_WARNING) << "Failed to GetAudioOutputDevice: " << out_name;
+ return false;
+ }
+
+ // If we're initialized, pass the settings to the media engine.
+ bool ret = true;
+ if (initialized_) {
+ AudioOptions options(opts, &in_dev, &out_dev);
+ ret = (Send(MSG_SETAUDIOOPTIONS, &options) && options.result);
+ }
+
+ // If all worked well, save the values for use in GetAudioOptions.
+ if (ret) {
+ audio_options_ = opts;
+ audio_in_device_ = in_name;
+ audio_out_device_ = out_name;
+ }
+ return ret;
+}
+
+bool ChannelManager::SetAudioOptions_w(int opts, const Device* in_dev,
+ const Device* out_dev) {
+ ASSERT(worker_thread_ == talk_base::Thread::Current());
+ ASSERT(initialized_);
+
+ // Set audio options
+ bool ret = media_engine_->SetAudioOptions(opts);
+
+ // Set the audio devices
+ if (ret) {
+ talk_base::CritScope cs(&crit_);
+ ret = media_engine_->SetSoundDevices(in_dev, out_dev);
+ }
+
+ return ret;
+}
+
+bool ChannelManager::GetOutputVolume(int* level) {
+ VolumeLevel volume;
+ if (!Send(MSG_GETOUTPUTVOLUME, &volume) || !volume.result) {
+ return false;
+ }
+
+ *level = volume.level;
+ return true;
+}
+
+bool ChannelManager::GetOutputVolume_w(int* level) {
+ ASSERT(worker_thread_ == talk_base::Thread::Current());
+ ASSERT(initialized_);
+ return media_engine_->GetOutputVolume(level);
+}
+
+bool ChannelManager::SetOutputVolume(int level) {
+ VolumeLevel volume(level);
+ return (Send(MSG_SETOUTPUTVOLUME, &volume) && volume.result);
+}
+
+bool ChannelManager::SetOutputVolume_w(int level) {
+ ASSERT(worker_thread_ == talk_base::Thread::Current());
+ ASSERT(initialized_);
+ return media_engine_->SetOutputVolume(level);
+}
+
+bool ChannelManager::GetVideoOptions(std::string* cam_name) {
+ *cam_name = camera_device_;
+ return true;
+}
+
+bool ChannelManager::SetVideoOptions(const std::string& cam_name) {
+ Device device;
+ if (!device_manager_->GetVideoCaptureDevice(cam_name, &device)) {
+ if (!cam_name.empty()) {
+ LOG(LS_WARNING) << "Device manager can't find camera: " << cam_name;
+ }
+ return false;
+ }
+
+ // If we're running, tell the media engine about it.
+ bool ret = true;
+ if (initialized_) {
+ VideoOptions options(&device);
+ ret = (Send(MSG_SETVIDEOOPTIONS, &options) && options.result);
+ }
+
+ // If everything worked, retain the name of the selected camera.
+ if (ret) {
+ camera_device_ = device.name;
+ }
+ return ret;
+}
+
+bool ChannelManager::SetVideoOptions_w(const Device* cam_device) {
+ ASSERT(worker_thread_ == talk_base::Thread::Current());
+ ASSERT(initialized_);
+
+ // Set the video input device
+ return media_engine_->SetVideoCaptureDevice(cam_device);
+}
+
+bool ChannelManager::SetDefaultVideoEncoderConfig(const VideoEncoderConfig& c) {
+ bool ret = true;
+ if (initialized_) {
+ DefaultVideoEncoderConfig config(c);
+ ret = Send(MSG_SETDEFAULTVIDEOENCODERCONFIG, &config) && config.result;
+ }
+ if (ret) {
+ default_video_encoder_config_ = c;
+ }
+ return ret;
+}
+
+bool ChannelManager::SetDefaultVideoEncoderConfig_w(
+ const VideoEncoderConfig& c) {
+ ASSERT(worker_thread_ == talk_base::Thread::Current());
+ ASSERT(initialized_);
+ return media_engine_->SetDefaultVideoEncoderConfig(c);
+}
+
+bool ChannelManager::SetLocalMonitor(bool enable) {
+ LocalMonitor monitor(enable);
+ bool ret = Send(MSG_SETLOCALMONITOR, &monitor) && monitor.result;
+ if (ret) {
+ monitoring_ = enable;
+ }
+ return ret;
+}
+
+bool ChannelManager::SetLocalMonitor_w(bool enable) {
+ ASSERT(worker_thread_ == talk_base::Thread::Current());
+ ASSERT(initialized_);
+ return media_engine_->SetLocalMonitor(enable);
+}
+
+bool ChannelManager::SetLocalRenderer(VideoRenderer* renderer) {
+ bool ret = true;
+ if (initialized_) {
+ LocalRenderer local(renderer);
+ ret = (Send(MSG_SETLOCALRENDERER, &local) && local.result);
+ }
+ if (ret) {
+ local_renderer_ = renderer;
+ }
+ return ret;
+}
+
+bool ChannelManager::SetLocalRenderer_w(VideoRenderer* renderer) {
+ ASSERT(worker_thread_ == talk_base::Thread::Current());
+ ASSERT(initialized_);
+ return media_engine_->SetLocalRenderer(renderer);
+}
+
+CaptureResult ChannelManager::SetVideoCapture(bool capture) {
+ bool ret;
+ CaptureParams capture_params(capture);
+ ret = (Send(MSG_SETVIDEOCAPTURE, &capture_params) &&
+ (capture_params.result != CR_FAILURE));
+ if (ret) {
+ capturing_ = capture;
+ }
+ return capture_params.result;
+}
+
+CaptureResult ChannelManager::SetVideoCapture_w(bool capture) {
+ ASSERT(worker_thread_ == talk_base::Thread::Current());
+ ASSERT(initialized_);
+ return media_engine_->SetVideoCapture(capture);
+}
+
+void ChannelManager::SetVoiceLogging(int level, const char* filter) {
+ SetMediaLogging(false, level, filter);
+}
+
+void ChannelManager::SetVideoLogging(int level, const char* filter) {
+ SetMediaLogging(true, level, filter);
+}
+
+void ChannelManager::SetMediaLogging(bool video, int level,
+ const char* filter) {
+ // Can be called before initialization; in this case, the worker function
+ // is simply called on the main thread.
+ if (initialized_) {
+ LoggingOptions options(level, filter);
+ Send((video) ? MSG_SETVIDEOLOGGING : MSG_SETVOICELOGGING, &options);
+ } else {
+ SetMediaLogging_w(video, level, filter);
+ }
+}
+
+void ChannelManager::SetMediaLogging_w(bool video, int level,
+ const char* filter) {
+ // Can be called before initialization
+ ASSERT(worker_thread_ == talk_base::Thread::Current() || !initialized_);
+ if (video) {
+ media_engine_->SetVideoLogging(level, filter);
+ } else {
+ media_engine_->SetVoiceLogging(level, filter);
+ }
+}
+
+bool ChannelManager::Send(uint32 id, talk_base::MessageData* data) {
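+  // Blocks until the worker thread has processed the message.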
+ if (!worker_thread_ || !initialized_) return false;
+ worker_thread_->Send(this, id, data);
+ return true;
+}
+
+void ChannelManager::OnVideoCaptureResult(CaptureResult result) {
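+  // Repost to the main thread so SignalVideoCaptureResult fires there.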
+ capturing_ = result == CR_SUCCESS;
+ main_thread_->Post(this, MSG_CAMERASTARTED,
+ new talk_base::TypedMessageData<CaptureResult>(result));
+}
+
+void ChannelManager::OnMessage(talk_base::Message* message) {
+ talk_base::MessageData* data = message->pdata;
+ switch (message->message_id) {
+ case MSG_CREATEVOICECHANNEL: {
+ CreationParams* p = static_cast<CreationParams*>(data);
+ p->voice_channel =
+ CreateVoiceChannel_w(p->session, p->content_name, p->rtcp);
+ break;
+ }
+ case MSG_DESTROYVOICECHANNEL: {
+ VoiceChannel* p = static_cast<talk_base::TypedMessageData<VoiceChannel*>*>
+ (data)->data();
+ DestroyVoiceChannel_w(p);
+ break;
+ }
+ case MSG_CREATEVIDEOCHANNEL: {
+ CreationParams* p = static_cast<CreationParams*>(data);
+ p->video_channel = CreateVideoChannel_w(p->session, p->content_name,
+ p->rtcp, p->voice_channel);
+ break;
+ }
+ case MSG_DESTROYVIDEOCHANNEL: {
+ VideoChannel* p = static_cast<talk_base::TypedMessageData<VideoChannel*>*>
+ (data)->data();
+ DestroyVideoChannel_w(p);
+ break;
+ }
+ case MSG_CREATESOUNDCLIP: {
+ talk_base::TypedMessageData<Soundclip*> *p =
+ static_cast<talk_base::TypedMessageData<Soundclip*>*>(data);
+ p->data() = CreateSoundclip_w();
+ break;
+ }
+ case MSG_DESTROYSOUNDCLIP: {
+ talk_base::TypedMessageData<Soundclip*> *p =
+ static_cast<talk_base::TypedMessageData<Soundclip*>*>(data);
+ DestroySoundclip_w(p->data());
+ break;
+ }
+ case MSG_SETAUDIOOPTIONS: {
+ AudioOptions* p = static_cast<AudioOptions*>(data);
+ p->result = SetAudioOptions_w(p->options,
+ p->in_device, p->out_device);
+ break;
+ }
+ case MSG_GETOUTPUTVOLUME: {
+ VolumeLevel* p = static_cast<VolumeLevel*>(data);
+ p->result = GetOutputVolume_w(&p->level);
+ break;
+ }
+ case MSG_SETOUTPUTVOLUME: {
+ VolumeLevel* p = static_cast<VolumeLevel*>(data);
+ p->result = SetOutputVolume_w(p->level);
+ break;
+ }
+ case MSG_SETLOCALMONITOR: {
+ LocalMonitor* p = static_cast<LocalMonitor*>(data);
+ p->result = SetLocalMonitor_w(p->enable);
+ break;
+ }
+ case MSG_SETVIDEOOPTIONS: {
+ VideoOptions* p = static_cast<VideoOptions*>(data);
+ p->result = SetVideoOptions_w(p->cam_device);
+ break;
+ }
+ case MSG_SETDEFAULTVIDEOENCODERCONFIG: {
+ DefaultVideoEncoderConfig* p =
+ static_cast<DefaultVideoEncoderConfig*>(data);
+ p->result = SetDefaultVideoEncoderConfig_w(p->config);
+ break;
+ }
+ case MSG_SETLOCALRENDERER: {
+ LocalRenderer* p = static_cast<LocalRenderer*>(data);
+ p->result = SetLocalRenderer_w(p->renderer);
+ break;
+ }
+ case MSG_SETVIDEOCAPTURE: {
+ CaptureParams* p = static_cast<CaptureParams*>(data);
+ p->result = SetVideoCapture_w(p->capture);
+ break;
+ }
+ case MSG_SETVOICELOGGING:
+ case MSG_SETVIDEOLOGGING: {
+ LoggingOptions* p = static_cast<LoggingOptions*>(data);
+ bool video = (message->message_id == MSG_SETVIDEOLOGGING);
+ SetMediaLogging_w(video, p->level, p->filter.c_str());
+ break;
+ }
+ case MSG_CAMERASTARTED: {
+ talk_base::TypedMessageData<CaptureResult>* data =
+ static_cast<talk_base::TypedMessageData<CaptureResult>*>(
+ message->pdata);
+ SignalVideoCaptureResult(data->data());
+ delete data;
+ break;
+ }
+ }
+}
+
+static void GetDeviceNames(const std::vector<Device>& devs,
+ std::vector<std::string>* names) {
+ names->clear();
+ for (size_t i = 0; i < devs.size(); ++i) {
+ names->push_back(devs[i].name);
+ }
+}
+
+bool ChannelManager::GetAudioInputDevices(std::vector<std::string>* names) {
+ names->clear();
+ std::vector<Device> devs;
+ bool ret = device_manager_->GetAudioInputDevices(&devs);
+ if (ret)
+ GetDeviceNames(devs, names);
+
+ return ret;
+}
+
+bool ChannelManager::GetAudioOutputDevices(std::vector<std::string>* names) {
+ names->clear();
+ std::vector<Device> devs;
+ bool ret = device_manager_->GetAudioOutputDevices(&devs);
+ if (ret)
+ GetDeviceNames(devs, names);
+
+ return ret;
+}
+
+bool ChannelManager::GetVideoCaptureDevices(std::vector<std::string>* names) {
+ names->clear();
+ std::vector<Device> devs;
+ bool ret = device_manager_->GetVideoCaptureDevices(&devs);
+ if (ret)
+ GetDeviceNames(devs, names);
+
+ return ret;
+}
+
+} // namespace cricket
diff --git a/third_party_mods/libjingle/source/talk/session/phone/channelmanager.h b/third_party_mods/libjingle/source/talk/session/phone/channelmanager.h
new file mode 100644
index 0000000..bfdf5d5
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/session/phone/channelmanager.h
@@ -0,0 +1,208 @@
+/*
+ * libjingle
+ * Copyright 2004--2008, Google Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ * 3. The name of the author may not be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
+ * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
+ * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
+ * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
+ * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef TALK_SESSION_PHONE_CHANNELMANAGER_H_
+#define TALK_SESSION_PHONE_CHANNELMANAGER_H_
+
+#include <string>
+#include <vector>
+
+#include "talk/base/criticalsection.h"
+#include "talk/base/sigslotrepeater.h"
+#include "talk/base/thread.h"
+#include "talk/p2p/base/session.h"
+#include "talk/session/phone/voicechannel.h"
+#include "talk/session/phone/mediaengine.h"
+#include "talk/session/phone/devicemanager.h"
+
+namespace cricket {
+
+class Soundclip;
+class VoiceChannel;
+
+// ChannelManager allows the MediaEngine to run on a separate thread, and takes
+// care of marshalling calls between threads. It also creates and keeps track of
+// voice and video channels; by doing so, it can temporarily pause all the
+// channels when a new audio or video device is chosen. The voice and video
+// channels are stored in separate vectors, to easily allow operations on just
+// voice or just video channels.
+// ChannelManager also allows the application to discover what devices it has
+// using the DeviceManager.
+class ChannelManager : public talk_base::MessageHandler,
+ public sigslot::has_slots<> {
+ public:
+ // Creates the channel manager, and specifies the worker thread to use.
+ explicit ChannelManager(talk_base::Thread* worker);
+ // For testing purposes. Allows the media engine and dev manager to be mocks.
+ // The ChannelManager takes ownership of these objects.
+ ChannelManager(MediaEngine* me, DeviceManager* dm, talk_base::Thread* worker);
+ ~ChannelManager();
+
+ // Accessors for the worker thread, allowing it to be set after construction,
+ // but before Init. set_worker_thread will return false if called after Init.
+ talk_base::Thread* worker_thread() const { return worker_thread_; }
+ bool set_worker_thread(talk_base::Thread* thread) {
+ if (initialized_) return false;
+ worker_thread_ = thread;
+ return true;
+ }
+
+ // Gets capabilities. Can be called prior to starting the media engine.
+ int GetCapabilities();
+
+ // Retrieves the list of supported audio & video codec types.
+ // Can be called before starting the media engine.
+ void GetSupportedAudioCodecs(std::vector<AudioCodec>* codecs) const;
+ void GetSupportedVideoCodecs(std::vector<VideoCodec>* codecs) const;
+
+ // Indicates whether the media engine is started.
+ bool initialized() const { return initialized_; }
+ // Starts up the media engine.
+ bool Init();
+ // TODO: Remove this temporary API once Flute is updated.
+ bool Init(talk_base::Thread* thread) {
+ return set_worker_thread(thread) && Init();
+ }
+ // Shuts down the media engine.
+ void Terminate();
+
+ // The operations below all occur on the worker thread.
+
+ // Creates a voice channel, to be associated with the specified session.
+ VoiceChannel* CreateVoiceChannel(
+ BaseSession* session, const std::string& content_name, bool rtcp);
+ // Destroys a voice channel created with the Create API.
+ void DestroyVoiceChannel(VoiceChannel* voice_channel);
+ // Creates a video channel, synced with the specified voice channel, and
+ // associated with the specified session.
+ VideoChannel* CreateVideoChannel(
+ BaseSession* session, const std::string& content_name, bool rtcp,
+ VoiceChannel* voice_channel);
+ // Destroys a video channel created with the Create API.
+ void DestroyVideoChannel(VideoChannel* video_channel);
+
+ // Creates a soundclip.
+ Soundclip* CreateSoundclip();
+ // Destroys a soundclip created with the Create API.
+ void DestroySoundclip(Soundclip* soundclip);
+
+ // Indicates whether any channels exist.
+ bool has_channels() const {
+ return (!voice_channels_.empty() || !video_channels_.empty() ||
+ !soundclips_.empty());
+ }
+
+ // Configures the audio and video devices.
+ bool GetAudioOptions(std::string* wave_in_device,
+ std::string* wave_out_device, int* opts);
+ bool SetAudioOptions(const std::string& wave_in_device,
+ const std::string& wave_out_device, int opts);
+ bool GetOutputVolume(int* level);
+ bool SetOutputVolume(int level);
+ bool GetVideoOptions(std::string* cam_device);
+ bool SetVideoOptions(const std::string& cam_device);
+ bool SetDefaultVideoEncoderConfig(const VideoEncoderConfig& config);
+
+ // Starts/stops the local microphone and enables polling of the input level.
+ bool SetLocalMonitor(bool enable);
+ bool monitoring() const { return monitoring_; }
+  // Sets the local renderer where the local camera will be rendered.
+ bool SetLocalRenderer(VideoRenderer* renderer);
+ // Starts and stops the local camera and renders it to the local renderer.
+ CaptureResult SetVideoCapture(bool capture);
+ bool capturing() const { return capturing_; }
+
+  // Configures the logging output of the media engine(s).
+ void SetVoiceLogging(int level, const char* filter);
+ void SetVideoLogging(int level, const char* filter);
+
+ // The operations below occur on the main thread.
+
+ bool GetAudioInputDevices(std::vector<std::string>* names);
+ bool GetAudioOutputDevices(std::vector<std::string>* names);
+ bool GetVideoCaptureDevices(std::vector<std::string>* names);
+ sigslot::repeater0<> SignalDevicesChange;
+ sigslot::signal1<CaptureResult> SignalVideoCaptureResult;
+
+ protected:
+ bool Send(uint32 id, talk_base::MessageData* pdata);
+ void OnMessage(talk_base::Message *message);
+ MediaEngine* media_engine() { return media_engine_.get(); }
+
+ private:
+ typedef std::vector<VoiceChannel*> VoiceChannels;
+ typedef std::vector<VideoChannel*> VideoChannels;
+ typedef std::vector<Soundclip*> Soundclips;
+
+ void Construct();
+ VoiceChannel* CreateVoiceChannel_w(
+ BaseSession* session, const std::string& content_name, bool rtcp);
+ void DestroyVoiceChannel_w(VoiceChannel* voice_channel);
+ VideoChannel* CreateVideoChannel_w(
+ BaseSession* session, const std::string& content_name, bool rtcp,
+ VoiceChannel* voice_channel);
+ void DestroyVideoChannel_w(VideoChannel* video_channel);
+ Soundclip* CreateSoundclip_w();
+ void DestroySoundclip_w(Soundclip* soundclip);
+ bool SetAudioOptions_w(int opts, const Device* in_dev,
+ const Device* out_dev);
+ bool GetOutputVolume_w(int* level);
+ bool SetOutputVolume_w(int level);
+ bool SetLocalMonitor_w(bool enable);
+ bool SetVideoOptions_w(const Device* cam_device);
+ bool SetDefaultVideoEncoderConfig_w(const VideoEncoderConfig& config);
+ bool SetLocalRenderer_w(VideoRenderer* renderer);
+ CaptureResult SetVideoCapture_w(bool capture);
+ void SetMediaLogging(bool video, int level, const char* filter);
+ void SetMediaLogging_w(bool video, int level, const char* filter);
+ void OnVideoCaptureResult(CaptureResult result);
+
+ talk_base::CriticalSection crit_;
+ talk_base::scoped_ptr<MediaEngine> media_engine_;
+ talk_base::scoped_ptr<DeviceManager> device_manager_;
+ bool initialized_;
+ talk_base::Thread* main_thread_;
+ talk_base::Thread* worker_thread_;
+
+ VoiceChannels voice_channels_;
+ VideoChannels video_channels_;
+ Soundclips soundclips_;
+
+ std::string audio_in_device_;
+ std::string audio_out_device_;
+ int audio_options_;
+ std::string camera_device_;
+ VideoEncoderConfig default_video_encoder_config_;
+ VideoRenderer* local_renderer_;
+
+ bool capturing_;
+ bool monitoring_;
+};
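+
+// Example usage (editor's illustrative sketch, not part of the original
+// patch): a typical start-up sequence. `session` is assumed to be a
+// BaseSession owned by the application's signaling code.
+//
+//   talk_base::Thread worker;
+//   worker.Start();
+//   cricket::ChannelManager manager(&worker);
+//   if (manager.Init()) {
+//     cricket::VoiceChannel* voice =
+//         manager.CreateVoiceChannel(session, "audio", true);
+//     cricket::VideoChannel* video =
+//         manager.CreateVideoChannel(session, "video", true, voice);
+//     // ... run the call ...
+//     manager.DestroyVideoChannel(video);
+//     manager.DestroyVoiceChannel(voice);
+//     manager.Terminate();
+//   }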
+
+} // namespace cricket
+
+#endif // TALK_SESSION_PHONE_CHANNELMANAGER_H_
diff --git a/third_party_mods/libjingle/source/talk/session/phone/devicemanager.cc b/third_party_mods/libjingle/source/talk/session/phone/devicemanager.cc
new file mode 100644
index 0000000..6eca88f
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/session/phone/devicemanager.cc
@@ -0,0 +1,1028 @@
+/*
+ * libjingle
+ * Copyright 2004--2011, Google Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ * 3. The name of the author may not be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
+ * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
+ * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
+ * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
+ * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "talk/session/phone/devicemanager.h"
+
+#if WIN32
+#include <atlbase.h>
+#include <dbt.h>
+#include <strmif.h> // must come before ks.h
+#include <mmsystem.h>
+#include <ks.h>
+#include <ksmedia.h>
+#include <mmdeviceapi.h>
+#include <functiondiscoverykeys_devpkey.h>
+#include <uuids.h>
+#include "talk/base/win32.h" // ToUtf8
+#include "talk/base/win32window.h"
+
+// PKEY_AudioEndpoint_GUID isn't included in uuid.lib and we don't want
+// to define INITGUID in order to define all the uuids in this object file
+// as it will conflict with uuid.lib (multiply defined symbols).
+// So our workaround is to define this one missing symbol here manually.
+EXTERN_C const PROPERTYKEY PKEY_AudioEndpoint_GUID = { {
+ 0x1da5d803, 0xd492, 0x4edd, {
+ 0x8c, 0x23, 0xe0, 0xc0, 0xff, 0xee, 0x7f, 0x0e
+ } }, 4
+};
+
+#elif OSX
+#include <CoreAudio/CoreAudio.h>
+#include <QuickTime/QuickTime.h>
+#elif LINUX
+#include <libudev.h>
+#include <unistd.h>
+#include "talk/base/linux.h"
+#include "talk/base/fileutils.h"
+#include "talk/base/pathutils.h"
+#include "talk/base/physicalsocketserver.h"
+#include "talk/base/stream.h"
+#include "talk/session/phone/libudevsymboltable.h"
+#include "talk/session/phone/v4llookup.h"
+#if defined(LINUX_SOUND_USED)
+#include "talk/sound/platformsoundsystem.h"
+#include "talk/sound/platformsoundsystemfactory.h"
+#include "talk/sound/sounddevicelocator.h"
+#include "talk/sound/soundsysteminterface.h"
+#endif
+#endif
+
+#include "talk/base/logging.h"
+#include "talk/base/stringutils.h"
+#include "talk/session/phone/mediaengine.h"
+
+namespace cricket {
+// Initialize to empty string.
+const std::string DeviceManager::kDefaultDeviceName;
+
+#ifdef PLATFORM_CHROMIUM
+class DeviceWatcher {
+ public:
+ explicit DeviceWatcher(DeviceManager* dm);
+ bool Start();
+ void Stop();
+};
+#elif defined(WIN32)
+class DeviceWatcher : public talk_base::Win32Window {
+ public:
+ explicit DeviceWatcher(DeviceManager* dm);
+ bool Start();
+ void Stop();
+
+ private:
+ HDEVNOTIFY Register(REFGUID guid);
+ void Unregister(HDEVNOTIFY notify);
+ virtual bool OnMessage(UINT msg, WPARAM wp, LPARAM lp, LRESULT& result);
+
+ DeviceManager* manager_;
+ HDEVNOTIFY audio_notify_;
+ HDEVNOTIFY video_notify_;
+};
+#elif defined(LINUX)
+class DeviceWatcher : private talk_base::Dispatcher {
+ public:
+ explicit DeviceWatcher(DeviceManager* dm);
+ bool Start();
+ void Stop();
+
+ private:
+ virtual uint32 GetRequestedEvents();
+ virtual void OnPreEvent(uint32 ff);
+ virtual void OnEvent(uint32 ff, int err);
+ virtual int GetDescriptor();
+ virtual bool IsDescriptorClosed();
+
+ DeviceManager* manager_;
+ LibUDevSymbolTable libudev_;
+ struct udev* udev_;
+ struct udev_monitor* udev_monitor_;
+ bool registered_;
+};
+#define LATE(sym) LATESYM_GET(LibUDevSymbolTable, &libudev_, sym)
+#elif defined(OSX)
+class DeviceWatcher {
+ public:
+ explicit DeviceWatcher(DeviceManager* dm);
+ bool Start();
+ void Stop();
+ private:
+ DeviceManager* manager_;
+ void* impl_;
+};
+#endif
+
+#if !defined(LINUX) && !defined(IOS)
+static bool ShouldDeviceBeIgnored(const std::string& device_name);
+#endif
+#ifndef OSX
+static bool GetVideoDevices(std::vector<Device>* out);
+#endif
+#if WIN32
+static const wchar_t kFriendlyName[] = L"FriendlyName";
+static const wchar_t kDevicePath[] = L"DevicePath";
+static const char kUsbDevicePathPrefix[] = "\\\\?\\usb";
+static bool GetDevices(const CLSID& catid, std::vector<Device>* out);
+static bool GetCoreAudioDevices(bool input, std::vector<Device>* devs);
+static bool GetWaveDevices(bool input, std::vector<Device>* devs);
+#elif OSX
+static const int kVideoDeviceOpenAttempts = 3;
+static const UInt32 kAudioDeviceNameLength = 64;
+// Obj-C functions defined in devicemanager-mac.mm
+extern void* CreateDeviceWatcherCallback(DeviceManager* dm);
+extern void ReleaseDeviceWatcherCallback(void* impl);
+extern bool GetQTKitVideoDevices(std::vector<Device>* out);
+static bool GetAudioDeviceIDs(bool inputs, std::vector<AudioDeviceID>* out);
+static bool GetAudioDeviceName(AudioDeviceID id, bool input, std::string* out);
+#endif
+
+DeviceManager::DeviceManager()
+ : initialized_(false),
+#if defined(WIN32)
+ need_couninitialize_(false),
+#endif
+ watcher_(new DeviceWatcher(this))
+#ifdef LINUX_SOUND_USED
+ , sound_system_(new PlatformSoundSystemFactory())
+#endif
+ {
+}
+
+DeviceManager::~DeviceManager() {
+ if (initialized_) {
+ Terminate();
+ }
+ delete watcher_;
+}
+
+bool DeviceManager::Init() {
+ if (!initialized_) {
+#if defined(WIN32) && !defined(PLATFORM_CHROMIUM)
+ HRESULT hr = CoInitializeEx(NULL, COINIT_MULTITHREADED);
+ need_couninitialize_ = SUCCEEDED(hr);
+ if (FAILED(hr)) {
+ LOG(LS_ERROR) << "CoInitialize failed, hr=" << hr;
+ if (hr != RPC_E_CHANGED_MODE) {
+ return false;
+ }
+ }
+#endif
+ if (!watcher_->Start()) {
+ return false;
+ }
+ initialized_ = true;
+ }
+ return true;
+}
+
+void DeviceManager::Terminate() {
+ if (initialized_) {
+ watcher_->Stop();
+#if defined(WIN32) && !defined(PLATFORM_CHROMIUM)
+ if (need_couninitialize_) {
+ CoUninitialize();
+ need_couninitialize_ = false;
+ }
+#endif
+ initialized_ = false;
+ }
+}
+
+int DeviceManager::GetCapabilities() {
+ std::vector<Device> devices;
+ int caps = MediaEngine::VIDEO_RECV;
+ if (GetAudioInputDevices(&devices) && !devices.empty()) {
+ caps |= MediaEngine::AUDIO_SEND;
+ }
+ if (GetAudioOutputDevices(&devices) && !devices.empty()) {
+ caps |= MediaEngine::AUDIO_RECV;
+ }
+ if (GetVideoCaptureDevices(&devices) && !devices.empty()) {
+ caps |= MediaEngine::VIDEO_SEND;
+ }
+ return caps;
+}
+
+bool DeviceManager::GetAudioInputDevices(std::vector<Device>* devices) {
+ return GetAudioDevicesByPlatform(true, devices);
+}
+
+bool DeviceManager::GetAudioOutputDevices(std::vector<Device>* devices) {
+ return GetAudioDevicesByPlatform(false, devices);
+}
+
+bool DeviceManager::GetAudioInputDevice(const std::string& name, Device* out) {
+ return GetAudioDevice(true, name, out);
+}
+
+bool DeviceManager::GetAudioOutputDevice(const std::string& name, Device* out) {
+ return GetAudioDevice(false, name, out);
+}
+
+#ifdef OSX
+static bool FilterDevice(const Device& d) {
+ return ShouldDeviceBeIgnored(d.name);
+}
+#endif
+
+bool DeviceManager::GetVideoCaptureDevices(std::vector<Device>* devices) {
+ devices->clear();
+#ifdef PLATFORM_CHROMIUM
+ devices->push_back(Device("", -1));
+ return true;
+#elif OSX
+ if (GetQTKitVideoDevices(devices)) {
+ // Now filter out any known incompatible devices
+ devices->erase(remove_if(devices->begin(), devices->end(), FilterDevice),
+ devices->end());
+ return true;
+ }
+ return false;
+#else
+ return GetVideoDevices(devices);
+#endif
+}
+
+bool DeviceManager::GetDefaultVideoCaptureDevice(Device* device) {
+ bool ret = false;
+#ifdef PLATFORM_CHROMIUM
+ *device = Device("", -1);
+ ret = true;
+#elif WIN32
+ // If there are multiple capture devices, we want the first USB one.
+ // This avoids issues with defaulting to virtual cameras or grabber cards.
+ std::vector<Device> devices;
+ ret = (GetVideoDevices(&devices) && !devices.empty());
+ if (ret) {
+ *device = devices[0];
+ for (size_t i = 0; i < devices.size(); ++i) {
+ if (strnicmp(devices[i].id.c_str(), kUsbDevicePathPrefix,
+ ARRAY_SIZE(kUsbDevicePathPrefix) - 1) == 0) {
+ *device = devices[i];
+ break;
+ }
+ }
+ }
+#else
+ // We just return the first device.
+ std::vector<Device> devices;
+ ret = (GetVideoCaptureDevices(&devices) && !devices.empty());
+ if (ret) {
+ *device = devices[0];
+ }
+#endif
+ return ret;
+}
+
+bool DeviceManager::GetVideoCaptureDevice(const std::string& name,
+ Device* out) {
+ // If the name is empty, return the default device.
+ if (name.empty() || name == kDefaultDeviceName) {
+ return GetDefaultVideoCaptureDevice(out);
+ }
+
+ std::vector<Device> devices;
+ if (!GetVideoCaptureDevices(&devices)) {
+ return false;
+ }
+
+#ifdef PLATFORM_CHROMIUM
+ *out = Device(name, name);
+ return true;
+#else
+ for (std::vector<Device>::const_iterator it = devices.begin();
+ it != devices.end(); ++it) {
+ if (name == it->name) {
+ *out = *it;
+ return true;
+ }
+ }
+#endif
+
+ return false;
+}
+
+bool DeviceManager::GetAudioDevice(bool is_input, const std::string& name,
+ Device* out) {
+ // If the name is empty, return the default device id.
+ if (name.empty() || name == kDefaultDeviceName) {
+ *out = Device(name, -1);
+ return true;
+ }
+
+ std::vector<Device> devices;
+ bool ret = is_input ? GetAudioInputDevices(&devices) :
+ GetAudioOutputDevices(&devices);
+ if (ret) {
+ ret = false;
+ for (size_t i = 0; i < devices.size(); ++i) {
+ if (devices[i].name == name) {
+ *out = devices[i];
+ ret = true;
+ break;
+ }
+ }
+ }
+ return ret;
+}
+
+bool DeviceManager::GetAudioDevicesByPlatform(bool input,
+ std::vector<Device>* devs) {
+ devs->clear();
+#ifdef PLATFORM_CHROMIUM
+ devs->push_back(Device("", -1));
+ return true;
+#elif defined(LINUX_SOUND_USED)
+ if (!sound_system_.get()) {
+ return false;
+ }
+ SoundSystemInterface::SoundDeviceLocatorList list;
+ bool success;
+ if (input) {
+ success = sound_system_->EnumerateCaptureDevices(&list);
+ } else {
+ success = sound_system_->EnumeratePlaybackDevices(&list);
+ }
+ if (!success) {
+ LOG(LS_ERROR) << "Can't enumerate devices";
+ sound_system_.release();
+ return false;
+ }
+ // We have to start the index at 1 because GIPS VoiceEngine puts the default
+ // device at index 0, but Enumerate(Capture|Playback)Devices does not include
+ // a locator for the default device.
+ int index = 1;
+ for (SoundSystemInterface::SoundDeviceLocatorList::iterator i = list.begin();
+ i != list.end();
+ ++i, ++index) {
+ devs->push_back(Device((*i)->name(), index));
+ }
+ SoundSystemInterface::ClearSoundDeviceLocatorList(&list);
+ sound_system_.release();
+ return true;
+
+#elif defined(WIN32)
+ if (talk_base::IsWindowsVistaOrLater()) {
+ return GetCoreAudioDevices(input, devs);
+ } else {
+ return GetWaveDevices(input, devs);
+ }
+
+#elif defined(OSX)
+ std::vector<AudioDeviceID> dev_ids;
+ bool ret = GetAudioDeviceIDs(input, &dev_ids);
+ if (ret) {
+ for (size_t i = 0; i < dev_ids.size(); ++i) {
+ std::string name;
+ if (GetAudioDeviceName(dev_ids[i], input, &name)) {
+ devs->push_back(Device(name, dev_ids[i]));
+ }
+ }
+ }
+ return ret;
+
+#else
+ return false;
+#endif
+}
+
+#if defined(PLATFORM_CHROMIUM)
+DeviceWatcher::DeviceWatcher(DeviceManager* manager) {
+}
+
+bool DeviceWatcher::Start() {
+ return true;
+}
+
+void DeviceWatcher::Stop() {
+}
+
+#elif defined(WIN32)
+bool GetVideoDevices(std::vector<Device>* devices) {
+ return GetDevices(CLSID_VideoInputDeviceCategory, devices);
+}
+
+bool GetDevices(const CLSID& catid, std::vector<Device>* devices) {
+ HRESULT hr;
+
+ // CComPtr is a scoped pointer that will be auto released when going
+ // out of scope. CoUninitialize must not be called before the
+ // release.
+ CComPtr<ICreateDevEnum> sys_dev_enum;
+ CComPtr<IEnumMoniker> cam_enum;
+ if (FAILED(hr = sys_dev_enum.CoCreateInstance(CLSID_SystemDeviceEnum)) ||
+ FAILED(hr = sys_dev_enum->CreateClassEnumerator(catid, &cam_enum, 0))) {
+ LOG(LS_ERROR) << "Failed to create device enumerator, hr=" << hr;
+ return false;
+ }
+
+  // Only enumerate devices if CreateClassEnumerator returns S_OK. If there are
+  // no devices available, S_FALSE will be returned, but cam_enum will be NULL.
+ if (hr == S_OK) {
+ CComPtr<IMoniker> mk;
+ while (cam_enum->Next(1, &mk, NULL) == S_OK) {
+ CComPtr<IPropertyBag> bag;
+ if (SUCCEEDED(mk->BindToStorage(NULL, NULL,
+ __uuidof(bag), reinterpret_cast<void**>(&bag)))) {
+ CComVariant name, path;
+ std::string name_str, path_str;
+ if (SUCCEEDED(bag->Read(kFriendlyName, &name, 0)) &&
+ name.vt == VT_BSTR) {
+ name_str = talk_base::ToUtf8(name.bstrVal);
+ if (!ShouldDeviceBeIgnored(name_str)) {
+ // Get the device id if one exists.
+ if (SUCCEEDED(bag->Read(kDevicePath, &path, 0)) &&
+ path.vt == VT_BSTR) {
+ path_str = talk_base::ToUtf8(path.bstrVal);
+ }
+
+ devices->push_back(Device(name_str, path_str));
+ }
+ }
+ }
+ mk = NULL;
+ }
+ }
+
+ return true;
+}
+
+HRESULT GetStringProp(IPropertyStore* bag, PROPERTYKEY key, std::string* out) {
+ out->clear();
+ PROPVARIANT var;
+ PropVariantInit(&var);
+
+ HRESULT hr = bag->GetValue(key, &var);
+ if (SUCCEEDED(hr)) {
+ if (var.pwszVal)
+ *out = talk_base::ToUtf8(var.pwszVal);
+ else
+ hr = E_FAIL;
+ }
+
+ PropVariantClear(&var);
+ return hr;
+}
+
+// Adapted from http://msdn.microsoft.com/en-us/library/dd370812(v=VS.85).aspx
+HRESULT CricketDeviceFromImmDevice(IMMDevice* device, Device* out) {
+ CComPtr<IPropertyStore> props;
+
+ HRESULT hr = device->OpenPropertyStore(STGM_READ, &props);
+ if (FAILED(hr)) {
+ return hr;
+ }
+
+ // Get the endpoint's name and id.
+ std::string name, guid;
+ hr = GetStringProp(props, PKEY_Device_FriendlyName, &name);
+ if (SUCCEEDED(hr)) {
+ hr = GetStringProp(props, PKEY_AudioEndpoint_GUID, &guid);
+
+ if (SUCCEEDED(hr)) {
+ out->name = name;
+ out->id = guid;
+ }
+ }
+ return hr;
+}
+
+bool GetCoreAudioDevices(bool input, std::vector<Device>* devs) {
+ HRESULT hr = S_OK;
+ CComPtr<IMMDeviceEnumerator> enumerator;
+
+ hr = CoCreateInstance(__uuidof(MMDeviceEnumerator), NULL, CLSCTX_ALL,
+ __uuidof(IMMDeviceEnumerator), reinterpret_cast<void**>(&enumerator));
+ if (SUCCEEDED(hr)) {
+ CComPtr<IMMDeviceCollection> devices;
+ hr = enumerator->EnumAudioEndpoints((input ? eCapture : eRender),
+ DEVICE_STATE_ACTIVE, &devices);
+ if (SUCCEEDED(hr)) {
+ unsigned int count;
+ hr = devices->GetCount(&count);
+
+ if (SUCCEEDED(hr)) {
+ for (unsigned int i = 0; i < count; i++) {
+ CComPtr<IMMDevice> device;
+
+ // Get pointer to endpoint number i.
+ hr = devices->Item(i, &device);
+ if (FAILED(hr)) {
+ break;
+ }
+
+ Device dev;
+ hr = CricketDeviceFromImmDevice(device, &dev);
+ if (SUCCEEDED(hr)) {
+ devs->push_back(dev);
+ } else {
+ LOG(LS_WARNING) << "Unable to query IMM Device, skipping. HR="
+ << hr;
+ hr = S_FALSE;
+ }
+ }
+ }
+ }
+ }
+
+ if (!SUCCEEDED(hr)) {
+ LOG(LS_WARNING) << "GetCoreAudioDevices failed with hr " << hr;
+ return false;
+ }
+ return true;
+}
+
+bool GetWaveDevices(bool input, std::vector<Device>* devs) {
+  // Note that we don't use the System Device Enumerator interface here since it
+ // adds lots of pseudo-devices to the list, such as DirectSound and Wave
+ // variants of the same device.
+ if (input) {
+ int num_devs = waveInGetNumDevs();
+ for (int i = 0; i < num_devs; ++i) {
+ WAVEINCAPS caps;
+ if (waveInGetDevCaps(i, &caps, sizeof(caps)) == MMSYSERR_NOERROR &&
+ caps.wChannels > 0) {
+ devs->push_back(Device(talk_base::ToUtf8(caps.szPname),
+ talk_base::ToString(i)));
+ }
+ }
+ } else {
+ int num_devs = waveOutGetNumDevs();
+ for (int i = 0; i < num_devs; ++i) {
+ WAVEOUTCAPS caps;
+ if (waveOutGetDevCaps(i, &caps, sizeof(caps)) == MMSYSERR_NOERROR &&
+ caps.wChannels > 0) {
+ devs->push_back(Device(talk_base::ToUtf8(caps.szPname), i));
+ }
+ }
+ }
+ return true;
+}
+
+DeviceWatcher::DeviceWatcher(DeviceManager* manager)
+ : manager_(manager), audio_notify_(NULL), video_notify_(NULL) {
+}
+
+bool DeviceWatcher::Start() {
+ if (!Create(NULL, _T("libjingle DeviceWatcher Window"),
+ 0, 0, 0, 0, 0, 0)) {
+ return false;
+ }
+
+ audio_notify_ = Register(KSCATEGORY_AUDIO);
+ if (!audio_notify_) {
+ Stop();
+ return false;
+ }
+
+ video_notify_ = Register(KSCATEGORY_VIDEO);
+ if (!video_notify_) {
+ Stop();
+ return false;
+ }
+
+ return true;
+}
+
+void DeviceWatcher::Stop() {
+ UnregisterDeviceNotification(video_notify_);
+ video_notify_ = NULL;
+ UnregisterDeviceNotification(audio_notify_);
+ audio_notify_ = NULL;
+ Destroy();
+}
+
+HDEVNOTIFY DeviceWatcher::Register(REFGUID guid) {
+ DEV_BROADCAST_DEVICEINTERFACE dbdi;
+ dbdi.dbcc_size = sizeof(dbdi);
+ dbdi.dbcc_devicetype = DBT_DEVTYP_DEVICEINTERFACE;
+ dbdi.dbcc_classguid = guid;
+ dbdi.dbcc_name[0] = '\0';
+ return RegisterDeviceNotification(handle(), &dbdi,
+ DEVICE_NOTIFY_WINDOW_HANDLE);
+}
+
+void DeviceWatcher::Unregister(HDEVNOTIFY handle) {
+ UnregisterDeviceNotification(handle);
+}
+
+bool DeviceWatcher::OnMessage(UINT uMsg, WPARAM wParam, LPARAM lParam,
+ LRESULT& result) {
+ if (uMsg == WM_DEVICECHANGE) {
+ if (wParam == DBT_DEVICEARRIVAL ||
+ wParam == DBT_DEVICEREMOVECOMPLETE) {
+ DEV_BROADCAST_DEVICEINTERFACE* dbdi =
+ reinterpret_cast<DEV_BROADCAST_DEVICEINTERFACE*>(lParam);
+ if (dbdi->dbcc_classguid == KSCATEGORY_AUDIO ||
+ dbdi->dbcc_classguid == KSCATEGORY_VIDEO) {
+ manager_->OnDevicesChange();
+ }
+ }
+ result = 0;
+ return true;
+ }
+
+ return false;
+}
+#elif defined(OSX)
+static bool GetAudioDeviceIDs(bool input,
+ std::vector<AudioDeviceID>* out_dev_ids) {
+ UInt32 propsize;
+ OSErr err = AudioHardwareGetPropertyInfo(kAudioHardwarePropertyDevices,
+ &propsize, NULL);
+ if (0 != err) {
+ LOG(LS_ERROR) << "Couldn't get information about property, "
+ << "so no device list acquired.";
+ return false;
+ }
+
+ size_t num_devices = propsize / sizeof(AudioDeviceID);
+ talk_base::scoped_array<AudioDeviceID> device_ids(
+ new AudioDeviceID[num_devices]);
+
+ err = AudioHardwareGetProperty(kAudioHardwarePropertyDevices,
+ &propsize, device_ids.get());
+ if (0 != err) {
+ LOG(LS_ERROR) << "Failed to get device ids, "
+ << "so no device listing acquired.";
+ return false;
+ }
+
+ for (size_t i = 0; i < num_devices; ++i) {
+ AudioDeviceID an_id = device_ids[i];
+    // Find out the number of streams for this direction (input/output) on
+    // this device; we'll ignore any device that has none.
+ err = AudioDeviceGetPropertyInfo(an_id, 0, input,
+ kAudioDevicePropertyStreams,
+ &propsize, NULL);
+ if (0 == err) {
+ unsigned num_channels = propsize / sizeof(AudioStreamID);
+ if (0 < num_channels) {
+ out_dev_ids->push_back(an_id);
+ }
+ } else {
+ LOG(LS_ERROR) << "No property info for stream property for device id "
+                    << an_id << " (is_input == " << input
+ << "), so not including it in the list.";
+ }
+ }
+
+ return true;
+}
+
+static bool GetAudioDeviceName(AudioDeviceID id,
+ bool input,
+ std::string* out_name) {
+ UInt32 nameLength = kAudioDeviceNameLength;
+ char name[kAudioDeviceNameLength + 1];
+ OSErr err = AudioDeviceGetProperty(id, 0, input,
+ kAudioDevicePropertyDeviceName,
+ &nameLength, name);
+ if (0 != err) {
+ LOG(LS_ERROR) << "No name acquired for device id " << id;
+ return false;
+ }
+
+ *out_name = name;
+ return true;
+}
+
+DeviceWatcher::DeviceWatcher(DeviceManager* manager)
+ : manager_(manager), impl_(NULL) {
+}
+
+bool DeviceWatcher::Start() {
+ if (!impl_) {
+ impl_ = CreateDeviceWatcherCallback(manager_);
+ }
+ return impl_ != NULL;
+}
+
+void DeviceWatcher::Stop() {
+ if (impl_) {
+ ReleaseDeviceWatcherCallback(impl_);
+ impl_ = NULL;
+ }
+}
+
+#elif defined(LINUX)
+static const std::string kVideoMetaPathK2_4("/proc/video/dev/");
+static const std::string kVideoMetaPathK2_6("/sys/class/video4linux/");
+
+enum MetaType { M2_4, M2_6, NONE };
+
+static void ScanDeviceDirectory(const std::string& devdir,
+ std::vector<Device>* devices) {
+ talk_base::scoped_ptr<talk_base::DirectoryIterator> directoryIterator(
+ talk_base::Filesystem::IterateDirectory());
+
+ if (directoryIterator->Iterate(talk_base::Pathname(devdir))) {
+ do {
+ std::string filename = directoryIterator->Name();
+ std::string device_name = devdir + filename;
+ if (!directoryIterator->IsDots()) {
+ if (filename.find("video") == 0 &&
+ V4LLookup::IsV4L2Device(device_name)) {
+ devices->push_back(Device(device_name, device_name));
+ }
+ }
+ } while (directoryIterator->Next());
+ }
+}
+
+static std::string GetVideoDeviceNameK2_6(const std::string& device_meta_path) {
+ std::string device_name;
+
+ talk_base::scoped_ptr<talk_base::FileStream> device_meta_stream(
+ talk_base::Filesystem::OpenFile(device_meta_path, "r"));
+
+ if (device_meta_stream.get() != NULL) {
+ if (device_meta_stream->ReadLine(&device_name) != talk_base::SR_SUCCESS) {
+ LOG(LS_ERROR) << "Failed to read V4L2 device meta " << device_meta_path;
+ }
+ device_meta_stream->Close();
+ }
+
+ return device_name;
+}
+
+static std::string Trim(const std::string& s, const std::string& drop = " \t") {
+ std::string::size_type first = s.find_first_not_of(drop);
+ std::string::size_type last = s.find_last_not_of(drop);
+
+ if (first == std::string::npos || last == std::string::npos)
+ return std::string("");
+
+ return s.substr(first, last - first + 1);
+}
+
+static std::string GetVideoDeviceNameK2_4(const std::string& device_meta_path) {
+ talk_base::ConfigParser::MapVector all_values;
+
+ talk_base::ConfigParser config_parser;
+ talk_base::FileStream* file_stream =
+ talk_base::Filesystem::OpenFile(device_meta_path, "r");
+
+ if (file_stream == NULL) return "";
+
+ config_parser.Attach(file_stream);
+ config_parser.Parse(&all_values);
+
+ for (talk_base::ConfigParser::MapVector::iterator i = all_values.begin();
+ i != all_values.end(); ++i) {
+ talk_base::ConfigParser::SimpleMap::iterator device_name_i =
+ i->find("name");
+
+ if (device_name_i != i->end()) {
+ return device_name_i->second;
+ }
+ }
+
+ return "";
+}
+
+static std::string GetVideoDeviceName(MetaType meta,
+ const std::string& device_file_name) {
+ std::string device_meta_path;
+ std::string device_name;
+ std::string meta_file_path;
+
+ if (meta == M2_6) {
+ meta_file_path = kVideoMetaPathK2_6 + device_file_name + "/name";
+
+ LOG(LS_INFO) << "Trying " + meta_file_path;
+ device_name = GetVideoDeviceNameK2_6(meta_file_path);
+
+ if (device_name.empty()) {
+ meta_file_path = kVideoMetaPathK2_6 + device_file_name + "/model";
+
+ LOG(LS_INFO) << "Trying " << meta_file_path;
+ device_name = GetVideoDeviceNameK2_6(meta_file_path);
+ }
+ } else {
+ meta_file_path = kVideoMetaPathK2_4 + device_file_name;
+ LOG(LS_INFO) << "Trying " << meta_file_path;
+ device_name = GetVideoDeviceNameK2_4(meta_file_path);
+ }
+
+ if (device_name.empty()) {
+ device_name = "/dev/" + device_file_name;
+ LOG(LS_ERROR)
+ << "Device name not found, defaulting to device path " << device_name;
+ }
+
+ LOG(LS_INFO) << "Name for " << device_file_name << " is " << device_name;
+
+ return Trim(device_name);
+}
+
+static void ScanV4L2Devices(std::vector<Device>* devices) {
+  LOG(LS_INFO) << "Enumerating V4L2 devices";
+
+ MetaType meta;
+ std::string metadata_dir;
+
+ talk_base::scoped_ptr<talk_base::DirectoryIterator> directoryIterator(
+ talk_base::Filesystem::IterateDirectory());
+
+  // Try to guess the kernel version.
+ if (directoryIterator->Iterate(kVideoMetaPathK2_6)) {
+ meta = M2_6;
+ metadata_dir = kVideoMetaPathK2_6;
+ } else if (directoryIterator->Iterate(kVideoMetaPathK2_4)) {
+ meta = M2_4;
+ metadata_dir = kVideoMetaPathK2_4;
+ } else {
+ meta = NONE;
+ }
+
+ if (meta != NONE) {
+ LOG(LS_INFO) << "V4L2 device metadata found at " << metadata_dir;
+
+ do {
+ std::string filename = directoryIterator->Name();
+
+ if (filename.find("video") == 0) {
+ std::string device_path = "/dev/" + filename;
+
+ if (V4LLookup::IsV4L2Device(device_path)) {
+ devices->push_back(
+ Device(GetVideoDeviceName(meta, filename), device_path));
+ }
+ }
+ } while (directoryIterator->Next());
+ } else {
+ LOG(LS_ERROR) << "Unable to detect v4l2 metadata directory";
+ }
+
+ if (devices->size() == 0) {
+ LOG(LS_INFO) << "Plan B. Scanning all video devices in /dev directory";
+ ScanDeviceDirectory("/dev/", devices);
+ }
+
+ LOG(LS_INFO) << "Total V4L2 devices found : " << devices->size();
+}
+
+static bool GetVideoDevices(std::vector<Device>* devices) {
+ ScanV4L2Devices(devices);
+ return true;
+}
+
+DeviceWatcher::DeviceWatcher(DeviceManager* dm)
+ : manager_(dm), udev_(NULL), udev_monitor_(NULL), registered_(false) {}
+
+bool DeviceWatcher::Start() {
+ // We deliberately return true in the failure paths here because libudev is
+ // not a critical component of a Linux system so it may not be present/usable,
+ // and we don't want to halt DeviceManager initialization in such a case.
+ if (!libudev_.Load()) {
+ LOG(LS_WARNING) << "libudev not present/usable; DeviceWatcher disabled";
+ return true;
+ }
+ udev_ = LATE(udev_new)();
+ if (!udev_) {
+ LOG_ERR(LS_ERROR) << "udev_new()";
+ return true;
+ }
+ // The second argument here is the event source. It can be either "kernel" or
+ // "udev", but "udev" is the only correct choice. Apps listen on udev and the
+ // udev daemon in turn listens on the kernel.
+ udev_monitor_ = LATE(udev_monitor_new_from_netlink)(udev_, "udev");
+ if (!udev_monitor_) {
+ LOG_ERR(LS_ERROR) << "udev_monitor_new_from_netlink()";
+ return true;
+ }
+ // We only listen for changes in the video devices. Audio devices are more or
+ // less unimportant because receiving device change notifications really only
+ // matters for broadcasting updated send/recv capabilities based on whether
+ // there is at least one device available, and almost all computers have at
+ // least one audio device. Also, PulseAudio device notifications don't come
+ // from the udev daemon, they come from the PulseAudio daemon, so we'd only
+ // want to listen for audio device changes from udev if using ALSA. For
+ // simplicity, we don't bother with any audio stuff at all.
+ if (LATE(udev_monitor_filter_add_match_subsystem_devtype)(udev_monitor_,
+ "video4linux",
+ NULL) < 0) {
+ LOG_ERR(LS_ERROR) << "udev_monitor_filter_add_match_subsystem_devtype()";
+ return true;
+ }
+ if (LATE(udev_monitor_enable_receiving)(udev_monitor_) < 0) {
+ LOG_ERR(LS_ERROR) << "udev_monitor_enable_receiving()";
+ return true;
+ }
+ static_cast<talk_base::PhysicalSocketServer*>(
+ talk_base::Thread::Current()->socketserver())->Add(this);
+ registered_ = true;
+ return true;
+}
+
+void DeviceWatcher::Stop() {
+ if (registered_) {
+ static_cast<talk_base::PhysicalSocketServer*>(
+ talk_base::Thread::Current()->socketserver())->Remove(this);
+ registered_ = false;
+ }
+ if (udev_monitor_) {
+ LATE(udev_monitor_unref)(udev_monitor_);
+ udev_monitor_ = NULL;
+ }
+ if (udev_) {
+ LATE(udev_unref)(udev_);
+ udev_ = NULL;
+ }
+ libudev_.Unload();
+}
+
+uint32 DeviceWatcher::GetRequestedEvents() {
+ return talk_base::DE_READ;
+}
+
+void DeviceWatcher::OnPreEvent(uint32 ff) {
+ // Nothing to do.
+}
+
+void DeviceWatcher::OnEvent(uint32 ff, int err) {
+ udev_device* device = LATE(udev_monitor_receive_device)(udev_monitor_);
+ if (!device) {
+ // Probably the socket connection to the udev daemon was terminated (perhaps
+ // the daemon crashed or is being restarted?).
+ LOG_ERR(LS_WARNING) << "udev_monitor_receive_device()";
+ // Stop listening to avoid potential livelock (an fd with EOF in it is
+ // always considered readable).
+ static_cast<talk_base::PhysicalSocketServer*>(
+ talk_base::Thread::Current()->socketserver())->Remove(this);
+ registered_ = false;
+ return;
+ }
+ // Else we read the device successfully.
+
+ // Since we already have our own filesystem-based device enumeration code, we
+ // simply re-enumerate rather than inspecting the device event.
+ LATE(udev_device_unref)(device);
+ manager_->OnDevicesChange();
+}
+
+int DeviceWatcher::GetDescriptor() {
+ return LATE(udev_monitor_get_fd)(udev_monitor_);
+}
+
+bool DeviceWatcher::IsDescriptorClosed() {
+ // If it is closed then we will just get an error in
+ // udev_monitor_receive_device and unregister, so we don't need to check for
+ // it separately.
+ return false;
+}
+
+#endif
+
+// TODO: Try to get hold of a copy of Final Cut to understand why we
+// crash while scanning their components on OS X.
+#if !defined(LINUX) && !defined(IOS)
+static bool ShouldDeviceBeIgnored(const std::string& device_name) {
+ static const char* const kFilteredDevices[] = {
+ "Google Camera Adapter", // Our own magiccams
+#ifdef WIN32
+ "Asus virtual Camera", // Bad Asus desktop virtual cam
+ "Bluetooth Video", // Bad Sony viao bluetooth sharing driver
+#elif OSX
+ "DVCPRO HD", // Final cut
+ "Sonix SN9C201p", // Crashes in OpenAComponent and CloseComponent
+#endif
+ };
+
+ for (int i = 0; i < ARRAY_SIZE(kFilteredDevices); ++i) {
+ if (strnicmp(device_name.c_str(), kFilteredDevices[i],
+ strlen(kFilteredDevices[i])) == 0) {
+ LOG(LS_INFO) << "Ignoring device " << device_name;
+ return true;
+ }
+ }
+ return false;
+}
+#endif
+
+}  // namespace cricket
diff --git a/third_party_mods/libjingle/source/talk/session/phone/devicemanager.h b/third_party_mods/libjingle/source/talk/session/phone/devicemanager.h
new file mode 100644
index 0000000..3dd1916
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/session/phone/devicemanager.h
@@ -0,0 +1,110 @@
+/*
+ * libjingle
+ * Copyright 2004--2008, Google Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ * 3. The name of the author may not be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
+ * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
+ * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
+ * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
+ * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef TALK_SESSION_PHONE_DEVICEMANAGER_H_
+#define TALK_SESSION_PHONE_DEVICEMANAGER_H_
+
+#include <string>
+#include <vector>
+
+#include "talk/base/sigslot.h"
+#include "talk/base/stringencode.h"
+#ifdef LINUX_SOUND_USED
+#include "talk/sound/soundsystemfactory.h"
+#endif
+
+namespace cricket {
+
+class DeviceWatcher;
+
+// Used to represent an audio or video capture or render device.
+class Device {
+ public:
+ Device() {}
+ Device(const std::string& first, int second)
+ : name(first),
+ id(talk_base::ToString(second)) {
+ }
+ Device(const std::string& first, const std::string& second)
+ : name(first), id(second) {}
+
+ std::string name;
+ std::string id;
+};
+
+// DeviceManager manages the audio and video devices on the system.
+// Methods are virtual to allow for easy stubbing/mocking in tests.
+class DeviceManager {
+ public:
+ DeviceManager();
+ virtual ~DeviceManager();
+
+ // Initialization
+ virtual bool Init();
+ virtual void Terminate();
+ bool initialized() const { return initialized_; }
+
+ // Capabilities
+ virtual int GetCapabilities();
+
+ // Device enumeration
+ virtual bool GetAudioInputDevices(std::vector<Device>* devices);
+ virtual bool GetAudioOutputDevices(std::vector<Device>* devices);
+
+ bool GetAudioInputDevice(const std::string& name, Device* out);
+ bool GetAudioOutputDevice(const std::string& name, Device* out);
+
+ virtual bool GetVideoCaptureDevices(std::vector<Device>* devs);
+ bool GetVideoCaptureDevice(const std::string& name, Device* out);
+
+ sigslot::signal0<> SignalDevicesChange;
+
+ void OnDevicesChange() { SignalDevicesChange(); }
+
+ static const std::string kDefaultDeviceName;
+
+ protected:
+ virtual bool GetAudioDevice(bool is_input, const std::string& name,
+ Device* out);
+ virtual bool GetDefaultVideoCaptureDevice(Device* device);
+
+ private:
+ bool GetAudioDevicesByPlatform(bool input, std::vector<Device>* devs);
+
+ bool initialized_;
+#ifdef WIN32
+ bool need_couninitialize_;
+#endif
+ DeviceWatcher* watcher_;
+#ifdef LINUX_SOUND_USED
+ SoundSystemHandle sound_system_;
+#endif
+};
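+
+// Example usage (editor's illustrative sketch, not part of the original
+// patch): enumerating the video capture devices and resolving one by name.
+//
+//   cricket::DeviceManager manager;
+//   if (manager.Init()) {
+//     std::vector<cricket::Device> cams;
+//     if (manager.GetVideoCaptureDevices(&cams) && !cams.empty()) {
+//       cricket::Device chosen;
+//       manager.GetVideoCaptureDevice(cams[0].name, &chosen);
+//     }
+//     manager.Terminate();
+//   }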
+
+} // namespace cricket
+
+#endif // TALK_SESSION_PHONE_DEVICEMANAGER_H_
diff --git a/third_party_mods/libjingle/source/talk/session/phone/filemediaengine.h b/third_party_mods/libjingle/source/talk/session/phone/filemediaengine.h
new file mode 100644
index 0000000..2ece53a
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/session/phone/filemediaengine.h
@@ -0,0 +1,221 @@
+// libjingle
+// Copyright 2004--2005, Google Inc.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// 1. Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// 2. Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// 3. The name of the author may not be used to endorse or promote products
+// derived from this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
+// WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
+// EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
+// OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+// WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
+// OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
+// ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+#ifndef TALK_SESSION_PHONE_FILEMEDIAENGINE_H_
+#define TALK_SESSION_PHONE_FILEMEDIAENGINE_H_
+
+#include <string>
+#include <vector>
+
+#include "talk/base/scoped_ptr.h"
+#include "talk/session/phone/codec.h"
+#include "talk/session/phone/mediachannel.h"
+#include "talk/session/phone/mediaengine.h"
+
+namespace talk_base {
+class StreamInterface;
+}
+
+namespace cricket {
+
+// A media engine contains a capturer, an encoder, and a sender on the sending
+// side, and a receiver, a decoder, and a renderer on the receiving side.
+// FileMediaEngine simulates the capturer and the encoder via an input RTP dump
+// stream, and simulates the decoder and the renderer via an output RTP dump
+// stream. Depending on the parameters of the constructor, FileMediaEngine can
+// act as a file voice engine, a file video engine, or both. Currently, we use
+// only the RTP dump packets. TODO: Enable RTCP packets.
+class FileMediaEngine : public MediaEngine {
+ public:
+ FileMediaEngine() {}
+ virtual ~FileMediaEngine() {}
+
+ // Set the file name of the input or output RTP dump for voice or video.
+ // Should be called before the channel is created.
+ void set_voice_input_filename(const std::string& filename) {
+ voice_input_filename_ = filename;
+ }
+ void set_voice_output_filename(const std::string& filename) {
+ voice_output_filename_ = filename;
+ }
+ void set_video_input_filename(const std::string& filename) {
+ video_input_filename_ = filename;
+ }
+ void set_video_output_filename(const std::string& filename) {
+ video_output_filename_ = filename;
+ }
+
+  // Should be called before audio_codecs() and video_codecs() are called. We
+  // need to set the voice and video codecs; otherwise, Jingle initiation will
+  // fail.
+ void set_voice_codecs(const std::vector<AudioCodec>& codecs) {
+ voice_codecs_ = codecs;
+ }
+ void set_video_codecs(const std::vector<VideoCodec>& codecs) {
+ video_codecs_ = codecs;
+ }
+
+ // Implement pure virtual methods of MediaEngine.
+ virtual bool Init() { return true; }
+ virtual void Terminate() {}
+ virtual int GetCapabilities();
+ virtual VoiceMediaChannel* CreateChannel();
+ virtual VideoMediaChannel* CreateVideoChannel(VoiceMediaChannel* voice_ch);
+ virtual SoundclipMedia* CreateSoundclip() { return NULL; }
+ virtual bool SetAudioOptions(int options) { return true; }
+ virtual bool SetVideoOptions(int options) { return true; }
+ virtual bool SetDefaultVideoEncoderConfig(const VideoEncoderConfig& config) {
+ return true;
+ }
+ virtual bool SetSoundDevices(const Device* in_dev, const Device* out_dev) {
+ return true;
+ }
+ virtual bool SetVideoCaptureDevice(const Device* cam_device) { return true; }
+ virtual bool GetOutputVolume(int* level) { *level = 0; return true; }
+ virtual bool SetOutputVolume(int level) { return true; }
+ virtual int GetInputLevel() { return 0; }
+ virtual bool SetLocalMonitor(bool enable) { return true; }
+ virtual bool SetLocalRenderer(VideoRenderer* renderer) { return true; }
+ // TODO: control channel send?
+ virtual CaptureResult SetVideoCapture(bool capture) { return CR_SUCCESS; }
+ virtual const std::vector<AudioCodec>& audio_codecs() {
+ return voice_codecs_;
+ }
+ virtual const std::vector<VideoCodec>& video_codecs() {
+ return video_codecs_;
+ }
+ virtual bool FindAudioCodec(const AudioCodec& codec) { return true; }
+ virtual bool FindVideoCodec(const VideoCodec& codec) { return true; }
+ virtual void SetVoiceLogging(int min_sev, const char* filter) {}
+ virtual void SetVideoLogging(int min_sev, const char* filter) {}
+
+ private:
+ std::string voice_input_filename_;
+ std::string voice_output_filename_;
+ std::string video_input_filename_;
+ std::string video_output_filename_;
+ std::vector<AudioCodec> voice_codecs_;
+ std::vector<VideoCodec> video_codecs_;
+
+ DISALLOW_COPY_AND_ASSIGN(FileMediaEngine);
+};
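+
+// Example usage (editor's illustrative sketch, not part of the original
+// patch): configuring FileMediaEngine to drive a voice call from RTP dump
+// files. The file names and the AudioCodec constructor arguments shown here
+// are assumptions for illustration only.
+//
+//   cricket::FileMediaEngine* engine = new cricket::FileMediaEngine;
+//   engine->set_voice_input_filename("voice_in.rtpdump");
+//   engine->set_voice_output_filename("voice_out.rtpdump");
+//   std::vector<cricket::AudioCodec> codecs;
+//   codecs.push_back(cricket::AudioCodec(0, "PCMU", 8000, 64000, 1, 0));
+//   engine->set_voice_codecs(codecs);
+//   // The engine would then be handed to a ChannelManager, which takes
+//   // ownership of it.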
+
+class RtpSenderReceiver; // Forward declaration. Defined in the .cc file.
+
+class FileVoiceChannel : public VoiceMediaChannel {
+ public:
+ FileVoiceChannel(const std::string& in_file, const std::string& out_file);
+ virtual ~FileVoiceChannel();
+
+ // Implement pure virtual methods of VoiceMediaChannel.
+ virtual bool SetRecvCodecs(const std::vector<AudioCodec>& codecs) {
+ return true;
+ }
+ virtual bool SetSendCodecs(const std::vector<AudioCodec>& codecs);
+ virtual bool SetRecvRtpHeaderExtensions(
+ const std::vector<RtpHeaderExtension>& extensions) {
+ return true;
+ }
+ virtual bool SetSendRtpHeaderExtensions(
+ const std::vector<RtpHeaderExtension>& extensions) {
+ return true;
+ }
+ virtual bool SetPlayout(bool playout) { return true; }
+ virtual bool SetSend(SendFlags flag);
+ virtual bool AddStream(uint32 ssrc) { return true; }
+ virtual bool RemoveStream(uint32 ssrc) { return true; }
+ virtual bool GetActiveStreams(AudioInfo::StreamList* actives) { return true; }
+ virtual int GetOutputLevel() { return 0; }
+ virtual bool SetRingbackTone(const char* buf, int len) { return true; }
+ virtual bool PlayRingbackTone(uint32 ssrc, bool play, bool loop) {
+ return true;
+ }
+ virtual bool PressDTMF(int event, bool playout) { return true; }
+ virtual bool GetStats(VoiceMediaInfo* info) { return true; }
+
+ // Implement pure virtual methods of MediaChannel.
+ virtual void OnPacketReceived(talk_base::Buffer* packet);
+ virtual void OnRtcpReceived(talk_base::Buffer* packet) {}
+ virtual void SetSendSsrc(uint32 id) {} // TODO: change RTP packet?
+ virtual bool SetRtcpCName(const std::string& cname) { return true; }
+ virtual bool Mute(bool on) { return false; }
+ virtual bool SetSendBandwidth(bool autobw, int bps) { return true; }
+ virtual bool SetOptions(int options) { return true; }
+ virtual int GetMediaChannelId() { return -1; }
+
+ private:
+ talk_base::scoped_ptr<RtpSenderReceiver> rtp_sender_receiver_;
+ DISALLOW_COPY_AND_ASSIGN(FileVoiceChannel);
+};
+
+class FileVideoChannel : public VideoMediaChannel {
+ public:
+ FileVideoChannel(const std::string& in_file, const std::string& out_file);
+ virtual ~FileVideoChannel();
+
+ // Implement pure virtual methods of VideoMediaChannel.
+ virtual bool SetRecvCodecs(const std::vector<VideoCodec>& codecs) {
+ return true;
+ }
+ virtual bool SetSendCodecs(const std::vector<VideoCodec>& codecs);
+ virtual bool SetRecvRtpHeaderExtensions(
+ const std::vector<RtpHeaderExtension>& extensions) {
+ return true;
+ }
+ virtual bool SetSendRtpHeaderExtensions(
+ const std::vector<RtpHeaderExtension>& extensions) {
+ return true;
+ }
+ virtual bool SetRender(bool render) { return true; }
+ virtual bool SetSend(bool send);
+ virtual bool AddStream(uint32 ssrc, uint32 voice_ssrc) { return true; }
+ virtual bool RemoveStream(uint32 ssrc) { return true; }
+ virtual bool SetRenderer(uint32 ssrc, VideoRenderer* renderer) {
+ return true;
+ }
+ virtual bool SetExternalRenderer(uint32 ssrc, void* renderer) {
+ return true;
+ }
+ virtual bool GetStats(VideoMediaInfo* info) { return true; }
+ virtual bool SendIntraFrame() { return false; }
+ virtual bool RequestIntraFrame() { return false; }
+
+ // Implement pure virtual methods of MediaChannel.
+ virtual void OnPacketReceived(talk_base::Buffer* packet);
+ virtual void OnRtcpReceived(talk_base::Buffer* packet) {}
+ virtual void SetSendSsrc(uint32 id) {} // TODO: change RTP packet?
+ virtual bool SetRtcpCName(const std::string& cname) { return true; }
+ virtual bool Mute(bool on) { return false; }
+ virtual bool SetSendBandwidth(bool autobw, int bps) { return true; }
+ virtual bool SetOptions(int options) { return true; }
+ virtual int GetMediaChannelId() { return -1; }
+
+ private:
+ talk_base::scoped_ptr<RtpSenderReceiver> rtp_sender_receiver_;
+ DISALLOW_COPY_AND_ASSIGN(FileVideoChannel);
+};
+
+} // namespace cricket
+
+#endif // TALK_SESSION_PHONE_FILEMEDIAENGINE_H_
diff --git a/third_party_mods/libjingle/source/talk/session/phone/mediachannel.h b/third_party_mods/libjingle/source/talk/session/phone/mediachannel.h
new file mode 100644
index 0000000..7062761
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/session/phone/mediachannel.h
@@ -0,0 +1,501 @@
+/*
+ * libjingle
+ * Copyright 2004--2010, Google Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ * 3. The name of the author may not be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
+ * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
+ * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
+ * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
+ * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef TALK_SESSION_PHONE_MEDIACHANNEL_H_
+#define TALK_SESSION_PHONE_MEDIACHANNEL_H_
+
+#include <string>
+#include <vector>
+
+#include "talk/base/basictypes.h"
+#include "talk/base/sigslot.h"
+#include "talk/base/socket.h"
+#include "talk/session/phone/codec.h"
+// TODO: re-evaluate this include
+#include "talk/session/phone/audiomonitor.h"
+
+namespace talk_base {
+class Buffer;
+}
+
+namespace flute {
+class MagicCamVideoRenderer;
+}
+
+namespace cricket {
+
+const int kMinRtpHeaderExtensionId = 1;
+const int kMaxRtpHeaderExtensionId = 255;
+
+struct RtpHeaderExtension {
+ RtpHeaderExtension(const std::string& u, int i) : uri(u), id(i) {}
+ std::string uri;
+ int id;
+ // TODO: SendRecv direction;
+};
+
+enum VoiceMediaChannelOptions {
+ OPT_CONFERENCE = 0x10000, // tune the audio stream for conference mode
+
+};
+
+enum VideoMediaChannelOptions {
+ OPT_INTERPOLATE = 0x10000 // Increase the output framerate by 2x by
+ // interpolating frames
+};
+
+class MediaChannel : public sigslot::has_slots<> {
+ public:
+ class NetworkInterface {
+ public:
+ enum SocketType { ST_RTP, ST_RTCP };
+ virtual bool SendPacket(talk_base::Buffer* packet) = 0;
+ virtual bool SendRtcp(talk_base::Buffer* packet) = 0;
+ virtual int SetOption(SocketType type, talk_base::Socket::Option opt,
+ int option) = 0;
+ virtual ~NetworkInterface() {}
+ };
+
+ MediaChannel() : network_interface_(NULL) {}
+ virtual ~MediaChannel() {}
+
+  // Gets/sets the abstract interface class for sending RTP/RTCP data.
+ NetworkInterface *network_interface() { return network_interface_; }
+ virtual void SetInterface(NetworkInterface *iface) {
+ network_interface_ = iface;
+ }
+
+  // Called when an RTP packet is received.
+ virtual void OnPacketReceived(talk_base::Buffer* packet) = 0;
+  // Called when an RTCP packet is received.
+ virtual void OnRtcpReceived(talk_base::Buffer* packet) = 0;
+ // Sets the SSRC to be used for outgoing data.
+ virtual void SetSendSsrc(uint32 id) = 0;
+ // Set the CNAME of RTCP
+ virtual bool SetRtcpCName(const std::string& cname) = 0;
+ // Mutes the channel.
+ virtual bool Mute(bool on) = 0;
+
+  // Sets the RTP header extensions and IDs to use when receiving/sending RTP.
+ virtual bool SetRecvRtpHeaderExtensions(
+ const std::vector<RtpHeaderExtension>& extensions) = 0;
+ virtual bool SetSendRtpHeaderExtensions(
+ const std::vector<RtpHeaderExtension>& extensions) = 0;
+ // Sets the rate control to use when sending data.
+ virtual bool SetSendBandwidth(bool autobw, int bps) = 0;
+ // Sets the media options to use.
+ virtual bool SetOptions(int options) = 0;
+  // Gets the RTC media channel id.
+ virtual int GetMediaChannelId() = 0;
+
+ protected:
+ NetworkInterface *network_interface_;
+};
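To make the transport contract concrete, here is a minimal sketch of a NetworkInterface such as a test harness might install through SetInterface(); it discards traffic instead of writing to sockets, and is not part of the patch.

// Illustrative only; a real implementation forwards the buffers to the
// RTP and RTCP transport sockets.
class DiscardingNetworkInterface
    : public cricket::MediaChannel::NetworkInterface {
 public:
  virtual bool SendPacket(talk_base::Buffer* packet) { return true; }
  virtual bool SendRtcp(talk_base::Buffer* packet) { return true; }
  virtual int SetOption(SocketType type, talk_base::Socket::Option opt,
                        int option) { return 0; }
};
// Installed on any concrete channel: channel->SetInterface(&iface);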
+
+enum SendFlags {
+ SEND_NOTHING,
+ SEND_RINGBACKTONE,
+ SEND_MICROPHONE
+};
+
+struct VoiceSenderInfo {
+ uint32 ssrc;
+ int bytes_sent;
+ int packets_sent;
+ int packets_lost;
+ float fraction_lost;
+ int ext_seqnum;
+ int rtt_ms;
+ int jitter_ms;
+ int audio_level;
+};
+
+struct VoiceReceiverInfo {
+ uint32 ssrc;
+ int bytes_rcvd;
+ int packets_rcvd;
+ int packets_lost;
+ float fraction_lost;
+ int ext_seqnum;
+ int jitter_ms;
+ int jitter_buffer_ms;
+ int jitter_buffer_preferred_ms;
+ int delay_estimate_ms;
+ int audio_level;
+};
+
+struct VideoSenderInfo {
+ uint32 ssrc;
+ int bytes_sent;
+ int packets_sent;
+ int packets_cached;
+ int packets_lost;
+ float fraction_lost;
+ int firs_rcvd;
+ int nacks_rcvd;
+ int rtt_ms;
+ int frame_width;
+ int frame_height;
+ int framerate_input;
+ int framerate_sent;
+ int nominal_bitrate;
+ int preferred_bitrate;
+};
+
+struct VideoReceiverInfo {
+ uint32 ssrc;
+ int bytes_rcvd;
+ // vector<int> layer_bytes_rcvd;
+ int packets_rcvd;
+ int packets_lost;
+ int packets_concealed;
+ float fraction_lost;
+ int firs_sent;
+ int nacks_sent;
+ int frame_width;
+ int frame_height;
+ int framerate_rcvd;
+ int framerate_decoded;
+ int framerate_output;
+};
+
+struct BandwidthEstimationInfo {
+ int available_send_bandwidth;
+ int available_recv_bandwidth;
+ int target_enc_bitrate;
+ int actual_enc_bitrate;
+ int retransmit_bitrate;
+ int transmit_bitrate;
+ int bucket_delay;
+};
+
+struct VoiceMediaInfo {
+ void Clear() {
+ senders.clear();
+ receivers.clear();
+ }
+ std::vector<VoiceSenderInfo> senders;
+ std::vector<VoiceReceiverInfo> receivers;
+};
+
+struct VideoMediaInfo {
+ void Clear() {
+ senders.clear();
+ receivers.clear();
+ bw_estimations.clear();
+ }
+ std::vector<VideoSenderInfo> senders;
+ std::vector<VideoReceiverInfo> receivers;
+ std::vector<BandwidthEstimationInfo> bw_estimations;
+};
+
+class VoiceMediaChannel : public MediaChannel {
+ public:
+ enum Error {
+ ERROR_NONE = 0, // No error.
+ ERROR_OTHER, // Other errors.
+ ERROR_REC_DEVICE_OPEN_FAILED = 100, // Could not open mic.
+ ERROR_REC_DEVICE_MUTED, // Mic was muted by OS.
+ ERROR_REC_DEVICE_SILENT, // No background noise picked up.
+ ERROR_REC_DEVICE_SATURATION, // Mic input is clipping.
+ ERROR_REC_DEVICE_REMOVED, // Mic was removed while active.
+ ERROR_REC_RUNTIME_ERROR, // Processing is encountering errors.
+ ERROR_REC_SRTP_ERROR, // Generic SRTP failure.
+ ERROR_REC_SRTP_AUTH_FAILED, // Failed to authenticate packets.
+ ERROR_REC_TYPING_NOISE_DETECTED, // Typing noise is detected.
+ ERROR_PLAY_DEVICE_OPEN_FAILED = 200, // Could not open playout.
+ ERROR_PLAY_DEVICE_MUTED, // Playout muted by OS.
+ ERROR_PLAY_DEVICE_REMOVED, // Playout removed while active.
+ ERROR_PLAY_RUNTIME_ERROR, // Errors in voice processing.
+ ERROR_PLAY_SRTP_ERROR, // Generic SRTP failure.
+ ERROR_PLAY_SRTP_AUTH_FAILED, // Failed to authenticate packets.
+ ERROR_PLAY_SRTP_REPLAY, // Packet replay detected.
+ };
+
+ VoiceMediaChannel() {}
+ virtual ~VoiceMediaChannel() {}
+ // Sets the codecs/payload types to be used for incoming media.
+ virtual bool SetRecvCodecs(const std::vector<AudioCodec>& codecs) = 0;
+ // Sets the codecs/payload types to be used for outgoing media.
+ virtual bool SetSendCodecs(const std::vector<AudioCodec>& codecs) = 0;
+ // Starts or stops playout of received audio.
+ virtual bool SetPlayout(bool playout) = 0;
+ // Starts or stops sending (and potentially capture) of local audio.
+ virtual bool SetSend(SendFlags flag) = 0;
+ // Adds a new receive-only stream with the specified SSRC.
+ virtual bool AddStream(uint32 ssrc) = 0;
+ // Removes a stream added with AddStream.
+ virtual bool RemoveStream(uint32 ssrc) = 0;
+ // Gets current energy levels for all incoming streams.
+ virtual bool GetActiveStreams(AudioInfo::StreamList* actives) = 0;
+  // Gets the current energy level for the outgoing stream.
+ virtual int GetOutputLevel() = 0;
+ // Specifies a ringback tone to be played during call setup.
+ virtual bool SetRingbackTone(const char *buf, int len) = 0;
+  // Plays or stops the aforementioned ringback tone.
+ virtual bool PlayRingbackTone(uint32 ssrc, bool play, bool loop) = 0;
+  // Sends an out-of-band DTMF signal using the specified event.
+ virtual bool PressDTMF(int event, bool playout) = 0;
+ // Gets quality stats for the channel.
+ virtual bool GetStats(VoiceMediaInfo* info) = 0;
+ // Gets last reported error for this media channel.
+ virtual void GetLastMediaError(uint32* ssrc,
+ VoiceMediaChannel::Error* error) {
+ ASSERT(error != NULL);
+ *error = ERROR_NONE;
+ }
+ // Signal errors from MediaChannel. Arguments are:
+ // ssrc(uint32), and error(VoiceMediaChannel::Error).
+ sigslot::signal2<uint32, VoiceMediaChannel::Error> SignalMediaError;
+};
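The interface above is purely abstract; concrete channels come from a media engine. The following sketch, assuming a codec list negotiated elsewhere and an engine-provided channel, shows the usual bring-up order for two-way audio.

// Sketch of typical VoiceMediaChannel use (error handling omitted).
void StartTwoWayAudio(cricket::VoiceMediaChannel* channel,
                      const std::vector<cricket::AudioCodec>& codecs,
                      uint32 remote_ssrc) {
  channel->SetRecvCodecs(codecs);               // what we accept
  channel->SetSendCodecs(codecs);               // what we send
  channel->AddStream(remote_ssrc);              // receive-only remote stream
  channel->SetPlayout(true);                    // start rendering remote audio
  channel->SetSend(cricket::SEND_MICROPHONE);   // start capture and send
}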
+
+// Represents a YUV420 (a.k.a. I420) video frame.
+class VideoFrame {
+ friend class flute::MagicCamVideoRenderer;
+
+ public:
+ VideoFrame() : rendered_(false) {}
+
+ virtual ~VideoFrame() {}
+
+ virtual size_t GetWidth() const = 0;
+ virtual size_t GetHeight() const = 0;
+ virtual const uint8 *GetYPlane() const = 0;
+ virtual const uint8 *GetUPlane() const = 0;
+ virtual const uint8 *GetVPlane() const = 0;
+ virtual uint8 *GetYPlane() = 0;
+ virtual uint8 *GetUPlane() = 0;
+ virtual uint8 *GetVPlane() = 0;
+ virtual int32 GetYPitch() const = 0;
+ virtual int32 GetUPitch() const = 0;
+ virtual int32 GetVPitch() const = 0;
+
+ // For retrieving the aspect ratio of each pixel. Usually this is 1x1, but
+ // the aspect_ratio_idc parameter of H.264 can specify non-square pixels.
+ virtual size_t GetPixelWidth() const = 0;
+ virtual size_t GetPixelHeight() const = 0;
+
+ // TODO: Add a fourcc format here and probably combine VideoFrame
+ // with CapturedFrame.
+ virtual int64 GetElapsedTime() const = 0;
+ virtual int64 GetTimeStamp() const = 0;
+ virtual void SetElapsedTime(int64 elapsed_time) = 0;
+ virtual void SetTimeStamp(int64 time_stamp) = 0;
+
+ // Make a copy of the frame. The frame buffer itself may not be copied,
+ // in which case both the current and new VideoFrame will share a single
+ // reference-counted frame buffer.
+ virtual VideoFrame *Copy() const = 0;
+
+ // Writes the frame into the given frame buffer, provided that it is of
+ // sufficient size. Returns the frame's actual size, regardless of whether
+ // it was written or not (like snprintf). If there is insufficient space,
+ // nothing is written.
+ virtual size_t CopyToBuffer(uint8 *buffer, size_t size) const = 0;
+
+ // Converts the I420 data to RGB of a certain type such as ARGB and ABGR.
+ // Returns the frame's actual size, regardless of whether it was written or
+ // not (like snprintf). Parameters size and pitch_rgb are in units of bytes.
+ // If there is insufficient space, nothing is written.
+ virtual size_t ConvertToRgbBuffer(uint32 to_fourcc, uint8 *buffer,
+ size_t size, size_t pitch_rgb) const = 0;
+
+ // Writes the frame into the given planes, stretched to the given width and
+ // height. The parameter "interpolate" controls whether to interpolate or just
+ // take the nearest-point. The parameter "crop" controls whether to crop this
+ // frame to the aspect ratio of the given dimensions before stretching.
+ virtual void StretchToPlanes(uint8 *y, uint8 *u, uint8 *v,
+ int32 pitchY, int32 pitchU, int32 pitchV,
+ size_t width, size_t height,
+ bool interpolate, bool crop) const = 0;
+
+ // Writes the frame into the given frame buffer, stretched to the given width
+ // and height, provided that it is of sufficient size. Returns the frame's
+ // actual size, regardless of whether it was written or not (like snprintf).
+ // If there is insufficient space, nothing is written. The parameter
+ // "interpolate" controls whether to interpolate or just take the
+ // nearest-point. The parameter "crop" controls whether to crop this frame to
+ // the aspect ratio of the given dimensions before stretching.
+ virtual size_t StretchToBuffer(size_t w, size_t h, uint8 *buffer, size_t size,
+ bool interpolate, bool crop) const = 0;
+
+ // Writes the frame into the target VideoFrame, stretched to the size of that
+ // frame. The parameter "interpolate" controls whether to interpolate or just
+ // take the nearest-point. The parameter "crop" controls whether to crop this
+ // frame to the aspect ratio of the target frame before stretching.
+ virtual void StretchToFrame(VideoFrame *target, bool interpolate,
+ bool crop) const = 0;
+
+ // Stretches the frame to the given size, creating a new VideoFrame object to
+ // hold it. The parameter "interpolate" controls whether to interpolate or
+ // just take the nearest-point. The parameter "crop" controls whether to crop
+ // this frame to the aspect ratio of the given dimensions before stretching.
+ virtual VideoFrame *Stretch(size_t w, size_t h, bool interpolate,
+ bool crop) const = 0;
+
+ // Size of an I420 image of given dimensions when stored as a frame buffer.
+ static size_t SizeOf(size_t w, size_t h) {
+ return w * h + ((w + 1) / 2) * ((h + 1) / 2) * 2;
+ }
+
+ protected:
+ // The frame needs to be rendered to magiccam only once.
+ // TODO: Remove this flag once magiccam rendering is fully replaced
+ // by client3d rendering.
+ mutable bool rendered_;
+};
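SizeOf() above encodes the I420 layout: a full-resolution Y plane plus two chroma planes at half resolution in each dimension, so SizeOf(640, 480) = 640*480 + 320*240*2 = 460800 bytes. The sketch below, assuming "frame" is any concrete VideoFrame, also shows the snprintf-style grow-and-retry use of CopyToBuffer().

// Illustrative sizing and copy of one frame into a caller-owned buffer.
void CopyFrame(const cricket::VideoFrame* frame, std::vector<uint8>* buffer) {
  buffer->resize(cricket::VideoFrame::SizeOf(frame->GetWidth(),
                                             frame->GetHeight()));
  size_t needed = frame->CopyToBuffer(&(*buffer)[0], buffer->size());
  if (needed > buffer->size()) {   // e.g. a subclass with extra padding
    buffer->resize(needed);
    frame->CopyToBuffer(&(*buffer)[0], buffer->size());
  }
}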
+
+// Simple subclass for use in mocks.
+class NullVideoFrame : public VideoFrame {
+ public:
+ virtual size_t GetWidth() const { return 0; }
+ virtual size_t GetHeight() const { return 0; }
+ virtual const uint8 *GetYPlane() const { return NULL; }
+ virtual const uint8 *GetUPlane() const { return NULL; }
+ virtual const uint8 *GetVPlane() const { return NULL; }
+ virtual uint8 *GetYPlane() { return NULL; }
+ virtual uint8 *GetUPlane() { return NULL; }
+ virtual uint8 *GetVPlane() { return NULL; }
+ virtual int32 GetYPitch() const { return 0; }
+ virtual int32 GetUPitch() const { return 0; }
+ virtual int32 GetVPitch() const { return 0; }
+
+ virtual size_t GetPixelWidth() const { return 1; }
+ virtual size_t GetPixelHeight() const { return 1; }
+ virtual int64 GetElapsedTime() const { return 0; }
+ virtual int64 GetTimeStamp() const { return 0; }
+ virtual void SetElapsedTime(int64 elapsed_time) {}
+ virtual void SetTimeStamp(int64 time_stamp) {}
+
+ virtual VideoFrame *Copy() const {
+ return NULL;
+ }
+
+ virtual size_t CopyToBuffer(uint8 *buffer, size_t size) const {
+ return 0;
+ }
+
+ virtual size_t ConvertToRgbBuffer(uint32 to_fourcc, uint8 *buffer,
+ size_t size, size_t pitch_rgb) const {
+ return 0;
+ }
+
+ virtual void StretchToPlanes(uint8 *y, uint8 *u, uint8 *v,
+ int32 pitchY, int32 pitchU, int32 pitchV,
+ size_t width, size_t height,
+ bool interpolate, bool crop) const {
+ }
+
+ virtual size_t StretchToBuffer(size_t w, size_t h, uint8 *buffer, size_t size,
+ bool interpolate, bool crop) const {
+ return 0;
+ }
+
+ virtual void StretchToFrame(VideoFrame *target, bool interpolate,
+ bool crop) const {
+ }
+
+ virtual VideoFrame *Stretch(size_t w, size_t h, bool interpolate,
+ bool crop) const {
+ return NULL;
+ }
+};
+
+// Abstract interface for rendering VideoFrames.
+class VideoRenderer {
+ public:
+ virtual ~VideoRenderer() {}
+ // Called when the video has changed size.
+ virtual bool SetSize(int width, int height, int reserved) = 0;
+ // Called when a new frame is available for display.
+ virtual bool RenderFrame(const VideoFrame *frame) = 0;
+};
+
+// Simple implementation for use in tests.
+class NullVideoRenderer : public VideoRenderer {
+ virtual bool SetSize(int width, int height, int reserved) {
+ return true;
+ }
+ // Called when a new frame is available for display.
+ virtual bool RenderFrame(const VideoFrame *frame) {
+ return true;
+ }
+};
+
+class VideoMediaChannel : public MediaChannel {
+ public:
+ enum Error {
+ ERROR_NONE = 0, // No error.
+ ERROR_OTHER, // Other errors.
+ ERROR_REC_DEVICE_OPEN_FAILED = 100, // Could not open camera.
+ ERROR_REC_DEVICE_NO_DEVICE, // No camera.
+    ERROR_REC_DEVICE_IN_USE,             // Device is already in use.
+ ERROR_REC_DEVICE_REMOVED, // Device is removed.
+ ERROR_REC_SRTP_ERROR, // Generic sender SRTP failure.
+ ERROR_REC_SRTP_AUTH_FAILED, // Failed to authenticate packets.
+ ERROR_PLAY_SRTP_ERROR = 200, // Generic receiver SRTP failure.
+ ERROR_PLAY_SRTP_AUTH_FAILED, // Failed to authenticate packets.
+ ERROR_PLAY_SRTP_REPLAY, // Packet replay detected.
+ };
+
+ VideoMediaChannel() { renderer_ = NULL; }
+ virtual ~VideoMediaChannel() {}
+ // Sets the codecs/payload types to be used for incoming media.
+ virtual bool SetRecvCodecs(const std::vector<VideoCodec> &codecs) = 0;
+ // Sets the codecs/payload types to be used for outgoing media.
+ virtual bool SetSendCodecs(const std::vector<VideoCodec> &codecs) = 0;
+ // Starts or stops playout of received video.
+ virtual bool SetRender(bool render) = 0;
+ // Starts or stops transmission (and potentially capture) of local video.
+ virtual bool SetSend(bool send) = 0;
+ // Adds a new receive-only stream with the specified SSRC.
+ virtual bool AddStream(uint32 ssrc, uint32 voice_ssrc) = 0;
+ // Removes a stream added with AddStream.
+ virtual bool RemoveStream(uint32 ssrc) = 0;
+ // Sets the renderer object to be used for the specified stream.
+ // If SSRC is 0, the renderer is used for the 'default' stream.
+ virtual bool SetRenderer(uint32 ssrc, VideoRenderer* renderer) = 0;
+ // Sets the renderer object to be used for the specified stream.
+ // If SSRC is 0, the renderer is used for the 'default' stream.
+ virtual bool SetExternalRenderer(uint32 ssrc, void* renderer) = 0;
+ // Gets quality stats for the channel.
+ virtual bool GetStats(VideoMediaInfo* info) = 0;
+
+ // Send an intra frame to the receivers.
+ virtual bool SendIntraFrame() = 0;
+  // Request each of the remote senders to send an intra frame.
+ virtual bool RequestIntraFrame() = 0;
+
+ sigslot::signal2<uint32, Error> SignalMediaError;
+
+ protected:
+ VideoRenderer *renderer_;
+};
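A receive-side sketch in the same hedged spirit: the channel and renderer are created elsewhere, and the SSRCs come from signaling.

// Illustrative wiring of one remote video stream to a renderer.
void StartRemoteVideo(cricket::VideoMediaChannel* channel,
                      cricket::VideoRenderer* renderer,
                      uint32 video_ssrc, uint32 voice_ssrc) {
  channel->AddStream(video_ssrc, voice_ssrc);   // pair with audio for lip sync
  channel->SetRenderer(video_ssrc, renderer);   // decoded frames go here
  channel->SetRender(true);                     // start playout
  channel->RequestIntraFrame();                 // ask senders for a key frame
}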
+
+} // namespace cricket
+
+#endif // TALK_SESSION_PHONE_MEDIACHANNEL_H_
diff --git a/third_party_mods/libjingle/source/talk/session/phone/mediaengine.cc b/third_party_mods/libjingle/source/talk/session/phone/mediaengine.cc
new file mode 100644
index 0000000..b6a8728
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/session/phone/mediaengine.cc
@@ -0,0 +1,58 @@
+//
+// libjingle
+// Copyright 2004--2007, Google Inc.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are met:
+//
+// 1. Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// 2. Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+// 3. The name of the author may not be used to endorse or promote products
+// derived from this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
+// WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
+// EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
+// OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+// WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
+// OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
+// ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+//
+
+#ifdef HAVE_WEBRTC
+#include "talk/app/voicemediaengine.h"
+#include "talk/app/videomediaengine.h"
+#endif
+#include "talk/session/phone/mediaengine.h"
+#ifdef HAVE_LINPHONE
+#include "talk/session/phone/linphonemediaengine.h"
+#endif
+
+
+namespace cricket {
+
+#ifdef HAVE_WEBRTC
+template<>
+CompositeMediaEngine<webrtc::RtcVoiceEngine, webrtc::RtcVideoEngine>
+ ::CompositeMediaEngine() : video_(&voice_) {
+}
+MediaEngine* MediaEngine::Create() {
+ return new CompositeMediaEngine<webrtc::RtcVoiceEngine,
+ webrtc::RtcVideoEngine>();
+}
+#else
+MediaEngine* MediaEngine::Create() {
+#ifdef HAVE_LINPHONE
+ return new LinphoneMediaEngine("", "");
+#else
+ return new NullMediaEngine();
+#endif
+}
+#endif
+}  // namespace cricket
diff --git a/third_party_mods/libjingle/source/talk/session/phone/mediaengine.h b/third_party_mods/libjingle/source/talk/session/phone/mediaengine.h
new file mode 100644
index 0000000..05f0821
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/session/phone/mediaengine.h
@@ -0,0 +1,328 @@
+/*
+ * libjingle
+ * Copyright 2004--2007, Google Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ * 3. The name of the author may not be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
+ * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
+ * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
+ * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
+ * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef TALK_SESSION_PHONE_MEDIAENGINE_H_
+#define TALK_SESSION_PHONE_MEDIAENGINE_H_
+
+#ifdef OSX
+#include <CoreAudio/CoreAudio.h>
+#endif
+
+#include <string>
+#include <vector>
+
+#include "talk/base/sigslotrepeater.h"
+#include "talk/session/phone/codec.h"
+#include "talk/session/phone/devicemanager.h"
+#include "talk/session/phone/mediachannel.h"
+#include "talk/session/phone/videocommon.h"
+
+namespace cricket {
+
+// A class for playing out soundclips.
+class SoundclipMedia {
+ public:
+ enum SoundclipFlags {
+ SF_LOOP = 1,
+ };
+
+ virtual ~SoundclipMedia() {}
+
+ // Plays a sound out to the speakers with the given audio stream. The stream
+ // must be 16-bit little-endian 16 kHz PCM. If a stream is already playing
+ // on this SoundclipMedia, it is stopped. If clip is NULL, nothing is played.
+ // Returns whether it was successful.
+ virtual bool PlaySound(const char *clip, int len, int flags) = 0;
+};
+
+// MediaEngine is an abstraction of a media engine which can be subclassed
+// to support different media componentry backends. It supports voice and
+// video operations in the same class to facilitate proper synchronization
+// between both media types.
+class MediaEngine {
+ public:
+ // TODO: Move this to a global location (also used in DeviceManager)
+ // Capabilities of the media engine.
+ enum Capabilities {
+ AUDIO_RECV = 1 << 0,
+ AUDIO_SEND = 1 << 1,
+ VIDEO_RECV = 1 << 2,
+ VIDEO_SEND = 1 << 3,
+ };
+
+ // Bitmask flags for options that may be supported by the media engine
+ // implementation
+ enum AudioOptions {
+ ECHO_CANCELLATION = 1 << 0,
+ AUTO_GAIN_CONTROL = 1 << 1,
+ DEFAULT_AUDIO_OPTIONS = ECHO_CANCELLATION | AUTO_GAIN_CONTROL
+ };
+ enum VideoOptions {
+ };
+
+ virtual ~MediaEngine() {}
+ static MediaEngine* Create();
+
+ // Initialization
+ // Starts the engine.
+ virtual bool Init() = 0;
+ // Shuts down the engine.
+ virtual void Terminate() = 0;
+ // Returns what the engine is capable of, as a set of Capabilities, above.
+ virtual int GetCapabilities() = 0;
+
+ // MediaChannel creation
+ // Creates a voice media channel. Returns NULL on failure.
+ virtual VoiceMediaChannel *CreateChannel() = 0;
+ // Creates a video media channel, paired with the specified voice channel.
+ // Returns NULL on failure.
+ virtual VideoMediaChannel *CreateVideoChannel(
+ VoiceMediaChannel* voice_media_channel) = 0;
+
+ // Creates a soundclip object for playing sounds on. Returns NULL on failure.
+ virtual SoundclipMedia *CreateSoundclip() = 0;
+
+ // Configuration
+ // Sets global audio options. "options" are from AudioOptions, above.
+ virtual bool SetAudioOptions(int options) = 0;
+ // Sets global video options. "options" are from VideoOptions, above.
+ virtual bool SetVideoOptions(int options) = 0;
+ // Sets the default (maximum) codec/resolution and encoder option to capture
+ // and encode video.
+ virtual bool SetDefaultVideoEncoderConfig(const VideoEncoderConfig& config)
+ = 0;
+
+ // Device selection
+ // TODO: Add method for selecting the soundclip device.
+ virtual bool SetSoundDevices(const Device* in_device,
+ const Device* out_device) = 0;
+ virtual bool SetVideoCaptureDevice(const Device* cam_device) = 0;
+ virtual bool SetVideoRenderer(int channel_id,
+ void* window,
+ unsigned int zOrder,
+ float left,
+ float top,
+ float right,
+ float bottom) = 0;
+
+ // Device configuration
+ // Gets the current speaker volume, as a value between 0 and 255.
+ virtual bool GetOutputVolume(int* level) = 0;
+ // Sets the current speaker volume, as a value between 0 and 255.
+ virtual bool SetOutputVolume(int level) = 0;
+
+ // Local monitoring
+ // Gets the current microphone level, as a value between 0 and 10.
+ virtual int GetInputLevel() = 0;
+ // Starts or stops the local microphone. Useful if local mic info is needed
+ // prior to a call being connected; the mic will be started automatically
+ // when a VoiceMediaChannel starts sending.
+ virtual bool SetLocalMonitor(bool enable) = 0;
+ // Installs a callback for raw frames from the local camera.
+ virtual bool SetLocalRenderer(VideoRenderer* renderer) = 0;
+ // Starts/stops local camera.
+ virtual CaptureResult SetVideoCapture(bool capture) = 0;
+
+ virtual const std::vector<AudioCodec>& audio_codecs() = 0;
+ virtual const std::vector<VideoCodec>& video_codecs() = 0;
+
+ // Logging control
+ virtual void SetVoiceLogging(int min_sev, const char* filter) = 0;
+ virtual void SetVideoLogging(int min_sev, const char* filter) = 0;
+
+ sigslot::repeater1<CaptureResult> SignalVideoCaptureResult;
+};
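A hedged bring-up sketch for the engine interface: Create() picks the backend (the HAVE_WEBRTC specialization appears in mediaengine.cc earlier in this patch), Init() must succeed before channels are created, and the capability bits gate what the application may attempt.

// Illustrative only; ownership and teardown are simplified.
bool StartEngine(cricket::MediaEngine* engine) {
  if (!engine->Init())
    return false;
  if ((engine->GetCapabilities() & cricket::MediaEngine::AUDIO_SEND) == 0) {
    engine->Terminate();
    return false;                     // this build cannot send audio
  }
  engine->SetAudioOptions(cricket::MediaEngine::DEFAULT_AUDIO_OPTIONS);
  cricket::VoiceMediaChannel* voice = engine->CreateChannel();
  cricket::VideoMediaChannel* video = engine->CreateVideoChannel(voice);
  // ... run the call, then delete the channels before Terminate() ...
  delete video;
  delete voice;
  engine->Terminate();
  return true;
}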
+
+// CompositeMediaEngine constructs a MediaEngine from separate
+// voice and video engine classes.
+template<class VOICE, class VIDEO>
+class CompositeMediaEngine : public MediaEngine {
+ public:
+ CompositeMediaEngine() {}
+ virtual ~CompositeMediaEngine() {}
+ virtual bool Init() {
+ if (!voice_.Init())
+ return false;
+ if (!video_.Init()) {
+ voice_.Terminate();
+ return false;
+ }
+ SignalVideoCaptureResult.repeat(video_.SignalCaptureResult);
+ return true;
+ }
+ virtual void Terminate() {
+ video_.Terminate();
+ voice_.Terminate();
+ }
+
+ virtual int GetCapabilities() {
+ return (voice_.GetCapabilities() | video_.GetCapabilities());
+ }
+ virtual VoiceMediaChannel *CreateChannel() {
+ return voice_.CreateChannel();
+ }
+ virtual VideoMediaChannel *CreateVideoChannel(VoiceMediaChannel* channel) {
+ return video_.CreateChannel(channel);
+ }
+ virtual SoundclipMedia *CreateSoundclip() {
+ return voice_.CreateSoundclip();
+ }
+
+ virtual bool SetAudioOptions(int o) {
+ return voice_.SetOptions(o);
+ }
+ virtual bool SetVideoOptions(int o) {
+ return video_.SetOptions(o);
+ }
+ virtual bool SetDefaultVideoEncoderConfig(const VideoEncoderConfig& config) {
+ return video_.SetDefaultEncoderConfig(config);
+ }
+
+ virtual bool SetSoundDevices(const Device* in_device,
+ const Device* out_device) {
+ return voice_.SetDevices(in_device, out_device);
+ }
+
+ virtual bool SetVideoCaptureDevice(const Device* cam_device) {
+ return video_.SetCaptureDevice(cam_device);
+ }
+
+ virtual bool SetVideoRenderer(int channel_id,
+ void* window,
+ unsigned int zOrder,
+ float left,
+ float top,
+ float right,
+ float bottom) {
+ return video_.SetVideoRenderer(channel_id,
+ window,
+ zOrder,
+ left,
+ top,
+ right,
+ bottom);
+ }
+
+ virtual bool GetOutputVolume(int* level) {
+ return voice_.GetOutputVolume(level);
+ }
+
+ virtual bool SetOutputVolume(int level) {
+ return voice_.SetOutputVolume(level);
+ }
+
+ virtual int GetInputLevel() {
+ return voice_.GetInputLevel();
+ }
+ virtual bool SetLocalMonitor(bool enable) {
+ return voice_.SetLocalMonitor(enable);
+ }
+ virtual bool SetLocalRenderer(VideoRenderer* renderer) {
+ return video_.SetLocalRenderer(renderer);
+ }
+ virtual CaptureResult SetVideoCapture(bool capture) {
+ return video_.SetCapture(capture);
+ }
+
+ virtual const std::vector<AudioCodec>& audio_codecs() {
+ return voice_.codecs();
+ }
+ virtual const std::vector<VideoCodec>& video_codecs() {
+ return video_.codecs();
+ }
+
+ virtual void SetVoiceLogging(int min_sev, const char* filter) {
+ return voice_.SetLogging(min_sev, filter);
+ }
+ virtual void SetVideoLogging(int min_sev, const char* filter) {
+ return video_.SetLogging(min_sev, filter);
+ }
+
+ protected:
+ VOICE voice_;
+ VIDEO video_;
+};
+
+// NullVoiceEngine can be used with CompositeMediaEngine in the case where only
+// a video engine is desired.
+class NullVoiceEngine {
+ public:
+ bool Init() { return true; }
+ void Terminate() {}
+ int GetCapabilities() { return 0; }
+ // If you need this to return an actual channel, use FakeMediaEngine instead.
+ VoiceMediaChannel* CreateChannel() {
+ return NULL;
+ }
+ SoundclipMedia* CreateSoundclip() {
+ return NULL;
+ }
+ bool SetOptions(int opts) { return true; }
+ bool SetDevices(const Device* in_device, const Device* out_device) {
+ return true;
+ }
+ bool GetOutputVolume(int* level) { *level = 0; return true; }
+ bool SetOutputVolume(int level) { return true; }
+ int GetInputLevel() { return 0; }
+ bool SetLocalMonitor(bool enable) { return true; }
+ const std::vector<AudioCodec>& codecs() { return codecs_; }
+ void SetLogging(int min_sev, const char* filter) {}
+ private:
+ std::vector<AudioCodec> codecs_;
+};
+
+// NullVideoEngine can be used with CompositeMediaEngine in the case where only
+// a voice engine is desired.
+class NullVideoEngine {
+ public:
+ bool Init() { return true; }
+ void Terminate() {}
+ int GetCapabilities() { return 0; }
+ // If you need this to return an actual channel, use FakeMediaEngine instead.
+ VideoMediaChannel* CreateChannel(
+ VoiceMediaChannel* voice_media_channel) {
+ return NULL;
+ }
+ bool SetOptions(int opts) { return true; }
+ bool SetDefaultEncoderConfig(const VideoEncoderConfig& config) {
+ return true;
+ }
+ bool SetCaptureDevice(const Device* cam_device) { return true; }
+ bool SetLocalRenderer(VideoRenderer* renderer) { return true; }
+ CaptureResult SetCapture(bool capture) { return CR_SUCCESS; }
+ const std::vector<VideoCodec>& codecs() { return codecs_; }
+ void SetLogging(int min_sev, const char* filter) {}
+ sigslot::signal1<CaptureResult> SignalCaptureResult;
+ private:
+ std::vector<VideoCodec> codecs_;
+};
+
+typedef CompositeMediaEngine<NullVoiceEngine, NullVideoEngine> NullMediaEngine;
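As the comments above note, the null engines exist so that a single real backend can be composed with a no-op counterpart. A hypothetical video-only engine would look like the sketch below, where "MyVideoEngine" is a placeholder for any class exposing the members the template calls (Init, Terminate, GetCapabilities, CreateChannel, SetOptions, codecs(), SetLogging, SignalCaptureResult).

// Hypothetical composition; MyVideoEngine is not a real class in this tree.
typedef cricket::CompositeMediaEngine<cricket::NullVoiceEngine, MyVideoEngine>
    VideoOnlyMediaEngine;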
+
+} // namespace cricket
+
+#endif // TALK_SESSION_PHONE_MEDIAENGINE_H_
diff --git a/third_party_mods/libjingle/source/talk/session/phone/mediamessages.cc b/third_party_mods/libjingle/source/talk/session/phone/mediamessages.cc
new file mode 100644
index 0000000..b1e9b76
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/session/phone/mediamessages.cc
@@ -0,0 +1,242 @@
+/*
+ * libjingle
+ * Copyright 2010, Google Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ * 3. The name of the author may not be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
+ * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
+ * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
+ * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
+ * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "talk/session/phone/mediamessages.h"
+
+#include "talk/base/stringencode.h"
+#include "talk/p2p/base/constants.h"
+#include "talk/session/phone/mediasessionclient.h"
+#include "talk/xmllite/xmlelement.h"
+
+namespace cricket {
+
+const NamedSource* GetFirstSourceByNick(const NamedSources& sources,
+ const std::string& nick) {
+ for (NamedSources::const_iterator source = sources.begin();
+ source != sources.end(); ++source) {
+ if (source->nick == nick) {
+ return &*source;
+ }
+ }
+ return NULL;
+}
+
+const NamedSource* GetSourceBySsrc(const NamedSources& sources, uint32 ssrc) {
+ for (NamedSources::const_iterator source = sources.begin();
+ source != sources.end(); ++source) {
+ if (source->ssrc == ssrc) {
+ return &*source;
+ }
+ }
+ return NULL;
+}
+
+const NamedSource* MediaSources::GetFirstAudioSourceByNick(
+ const std::string& nick) {
+ return GetFirstSourceByNick(audio, nick);
+}
+
+const NamedSource* MediaSources::GetFirstVideoSourceByNick(
+ const std::string& nick) {
+ return GetFirstSourceByNick(video, nick);
+}
+
+const NamedSource* MediaSources::GetAudioSourceBySsrc(uint32 ssrc) {
+ return GetSourceBySsrc(audio, ssrc);
+}
+
+const NamedSource* MediaSources::GetVideoSourceBySsrc(uint32 ssrc) {
+ return GetSourceBySsrc(video, ssrc);
+}
+
+// NOTE: There is no check here for duplicate sources, so check before
+// adding.
+void AddSource(NamedSources* sources, const NamedSource& source) {
+ sources->push_back(source);
+}
+
+void MediaSources::AddAudioSource(const NamedSource& source) {
+ AddSource(&audio, source);
+}
+
+void MediaSources::AddVideoSource(const NamedSource& source) {
+ AddSource(&video, source);
+}
+
+void RemoveSourceBySsrc(NamedSources* sources, uint32 ssrc) {
+ for (NamedSources::iterator source = sources->begin();
+ source != sources->end(); ) {
+ if (source->ssrc == ssrc) {
+ source = sources->erase(source);
+ } else {
+ ++source;
+ }
+ }
+}
+
+void MediaSources::RemoveAudioSourceBySsrc(uint32 ssrc) {
+ RemoveSourceBySsrc(&audio, ssrc);
+}
+
+void MediaSources::RemoveVideoSourceBySsrc(uint32 ssrc) {
+ RemoveSourceBySsrc(&video, ssrc);
+}
+
+bool ParseSsrc(const std::string& string, uint32* ssrc) {
+ return talk_base::FromString(string, ssrc);
+}
+
+bool ParseSsrc(const buzz::XmlElement* element, uint32* ssrc) {
+ if (element == NULL) {
+ return false;
+ }
+ return ParseSsrc(element->BodyText(), ssrc);
+}
+
+bool ParseNamedSource(const buzz::XmlElement* source_elem,
+ NamedSource* named_source,
+ ParseError* error) {
+ named_source->nick = source_elem->Attr(QN_JINGLE_DRAFT_SOURCE_NICK);
+ if (named_source->nick.empty()) {
+ return BadParse("Missing or invalid nick.", error);
+ }
+
+ named_source->name = source_elem->Attr(QN_JINGLE_DRAFT_SOURCE_NAME);
+ named_source->usage = source_elem->Attr(QN_JINGLE_DRAFT_SOURCE_USAGE);
+ named_source->removed =
+ (STR_JINGLE_DRAFT_SOURCE_STATE_REMOVED ==
+ source_elem->Attr(QN_JINGLE_DRAFT_SOURCE_STATE));
+
+ const buzz::XmlElement* ssrc_elem =
+ source_elem->FirstNamed(QN_JINGLE_DRAFT_SOURCE_SSRC);
+ if (ssrc_elem != NULL && !ssrc_elem->BodyText().empty()) {
+ uint32 ssrc;
+ if (!ParseSsrc(ssrc_elem->BodyText(), &ssrc)) {
+ return BadParse("Missing or invalid ssrc.", error);
+ }
+ named_source->SetSsrc(ssrc);
+ }
+
+ return true;
+}
+
+bool IsSourcesNotify(const buzz::XmlElement* action_elem) {
+ return action_elem->FirstNamed(QN_JINGLE_DRAFT_NOTIFY) != NULL;
+}
+
+bool ParseSourcesNotify(const buzz::XmlElement* action_elem,
+ const SessionDescription* session_description,
+ MediaSources* sources,
+ ParseError* error) {
+ for (const buzz::XmlElement* notify_elem
+ = action_elem->FirstNamed(QN_JINGLE_DRAFT_NOTIFY);
+ notify_elem != NULL;
+ notify_elem = notify_elem->NextNamed(QN_JINGLE_DRAFT_NOTIFY)) {
+ std::string content_name = notify_elem->Attr(QN_JINGLE_DRAFT_CONTENT_NAME);
+ for (const buzz::XmlElement* source_elem
+ = notify_elem->FirstNamed(QN_JINGLE_DRAFT_SOURCE);
+ source_elem != NULL;
+ source_elem = source_elem->NextNamed(QN_JINGLE_DRAFT_SOURCE)) {
+ NamedSource named_source;
+ if (!ParseNamedSource(source_elem, &named_source, error)) {
+ return false;
+ }
+
+ if (session_description == NULL) {
+        return BadParse("no session description for content name: " +
+                        content_name, error);
+ }
+ const ContentInfo* content =
+ FindContentInfoByName(session_description->contents(), content_name);
+ if (content == NULL) {
+ return BadParse("unknown content name: " + content_name, error);
+ }
+
+ if (IsAudioContent(content)) {
+ sources->audio.push_back(named_source);
+ } else if (IsVideoContent(content)) {
+ sources->video.push_back(named_source);
+ }
+ }
+ }
+
+ return true;
+}
+
+buzz::XmlElement* CreateViewElem(const std::string& name,
+ const std::string& type) {
+ buzz::XmlElement* view_elem =
+ new buzz::XmlElement(QN_JINGLE_DRAFT_VIEW, true);
+ view_elem->AddAttr(QN_JINGLE_DRAFT_CONTENT_NAME, name);
+ view_elem->SetAttr(QN_JINGLE_DRAFT_VIEW_TYPE, type);
+ return view_elem;
+}
+
+buzz::XmlElement* CreateVideoViewElem(const std::string& content_name,
+ const std::string& type) {
+ return CreateViewElem(content_name, type);
+}
+
+buzz::XmlElement* CreateNoneVideoViewElem(const std::string& content_name) {
+ return CreateVideoViewElem(content_name, STR_JINGLE_DRAFT_VIEW_TYPE_NONE);
+}
+
+buzz::XmlElement* CreateStaticVideoViewElem(const std::string& content_name,
+ const StaticVideoView& view) {
+ buzz::XmlElement* view_elem =
+ CreateVideoViewElem(content_name, STR_JINGLE_DRAFT_VIEW_TYPE_STATIC);
+ AddXmlAttr(view_elem, QN_JINGLE_DRAFT_VIEW_SSRC, view.ssrc);
+
+ buzz::XmlElement* params_elem = new buzz::XmlElement(
+ QN_JINGLE_DRAFT_VIEW_PARAMS);
+ AddXmlAttr(params_elem, QN_JINGLE_DRAFT_VIEW_PARAMS_WIDTH, view.width);
+ AddXmlAttr(params_elem, QN_JINGLE_DRAFT_VIEW_PARAMS_HEIGHT, view.height);
+ AddXmlAttr(params_elem, QN_JINGLE_DRAFT_VIEW_PARAMS_FRAMERATE,
+ view.framerate);
+ AddXmlAttr(params_elem, QN_JINGLE_DRAFT_VIEW_PARAMS_PREFERENCE,
+ view.preference);
+ view_elem->AddElement(params_elem);
+
+ return view_elem;
+}
+
+bool WriteViewRequest(const std::string& content_name,
+ const ViewRequest& request,
+ XmlElements* elems,
+ WriteError* error) {
+ if (request.static_video_views.size() == 0) {
+ elems->push_back(CreateNoneVideoViewElem(content_name));
+ } else {
+ for (StaticVideoViews::const_iterator view =
+ request.static_video_views.begin();
+ view != request.static_video_views.end(); ++view) {
+ elems->push_back(CreateStaticVideoViewElem(content_name, *view));
+ }
+ }
+ return true;
+}
+
+} // namespace cricket
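A hedged caller's-eye sketch of WriteViewRequest above; the XmlElements and WriteError types are assumed to come from talk/p2p/base/parsing.h (pulled in via mediamessages.h), and the content name and dimensions are made up.

// Illustrative only: request a 320x240, 15 fps view of one remote source.
void RequestSmallView(uint32 remote_ssrc, cricket::XmlElements* elems) {
  cricket::ViewRequest request;
  request.static_video_views.push_back(
      cricket::StaticVideoView(remote_ssrc, 320, 240, 15));
  cricket::WriteError error;
  cricket::WriteViewRequest("video", request, elems, &error);
}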
diff --git a/third_party_mods/libjingle/source/talk/session/phone/mediamessages.h b/third_party_mods/libjingle/source/talk/session/phone/mediamessages.h
new file mode 100644
index 0000000..58e8793
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/session/phone/mediamessages.h
@@ -0,0 +1,106 @@
+/*
+ * libjingle
+ * Copyright 2010, Google Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ * 3. The name of the author may not be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
+ * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
+ * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
+ * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
+ * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef TALK_SESSION_PHONE_MEDIAMESSAGES_H_
+#define TALK_SESSION_PHONE_MEDIAMESSAGES_H_
+
+#include <string>
+#include <vector>
+#include "talk/base/basictypes.h"
+#include "talk/p2p/base/parsing.h"
+#include "talk/p2p/base/sessiondescription.h"
+
+namespace cricket {
+
+struct NamedSource {
+ NamedSource() : ssrc(0), ssrc_set(false), removed(false) {}
+
+ void SetSsrc(uint32 ssrc) {
+ this->ssrc = ssrc;
+ this->ssrc_set = true;
+ }
+
+ std::string nick;
+ std::string name;
+ std::string usage;
+ uint32 ssrc;
+ bool ssrc_set;
+ bool removed;
+};
+
+typedef std::vector<NamedSource> NamedSources;
+
+class MediaSources {
+ public:
+ const NamedSource* GetAudioSourceBySsrc(uint32 ssrc);
+ const NamedSource* GetVideoSourceBySsrc(uint32 ssrc);
+  // TODO: Remove once all senders use explicit removal by ssrc.
+ const NamedSource* GetFirstAudioSourceByNick(const std::string& nick);
+ const NamedSource* GetFirstVideoSourceByNick(const std::string& nick);
+ void AddAudioSource(const NamedSource& source);
+ void AddVideoSource(const NamedSource& source);
+ void RemoveAudioSourceBySsrc(uint32 ssrc);
+ void RemoveVideoSourceBySsrc(uint32 ssrc);
+ NamedSources audio;
+ NamedSources video;
+};
+
+struct StaticVideoView {
+ StaticVideoView(uint32 ssrc, int width, int height, int framerate)
+ : ssrc(ssrc),
+ width(width),
+ height(height),
+ framerate(framerate),
+ preference(0) {}
+
+ uint32 ssrc;
+ int width;
+ int height;
+ int framerate;
+ int preference;
+};
+
+typedef std::vector<StaticVideoView> StaticVideoViews;
+
+struct ViewRequest {
+ StaticVideoViews static_video_views;
+};
+
+bool WriteViewRequest(const std::string& content_name,
+ const ViewRequest& view,
+ XmlElements* elems,
+ WriteError* error);
+
+bool IsSourcesNotify(const buzz::XmlElement* action_elem);
+// The session_description is needed to map content_name => media type.
+bool ParseSourcesNotify(const buzz::XmlElement* action_elem,
+ const SessionDescription* session_description,
+ MediaSources* sources,
+ ParseError* error);
+} // namespace cricket
+
+#endif // TALK_SESSION_PHONE_MEDIAMESSAGES_H_
diff --git a/third_party_mods/libjingle/source/talk/session/phone/mediasessionclient.h b/third_party_mods/libjingle/source/talk/session/phone/mediasessionclient.h
new file mode 100644
index 0000000..07fc258
--- /dev/null
+++ b/third_party_mods/libjingle/source/talk/session/phone/mediasessionclient.h
@@ -0,0 +1,289 @@
+/*
+ * libjingle
+ * Copyright 2004--2005, Google Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ * 3. The name of the author may not be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
+ * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
+ * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
+ * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
+ * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef TALK_SESSION_PHONE_MEDIASESSIONCLIENT_H_
+#define TALK_SESSION_PHONE_MEDIASESSIONCLIENT_H_
+
+#include <string>
+#include <vector>
+#include <map>
+#include <algorithm>
+#include "talk/session/phone/call.h"
+#include "talk/session/phone/channelmanager.h"
+#include "talk/session/phone/cryptoparams.h"
+#include "talk/base/sigslot.h"
+#include "talk/base/sigslotrepeater.h"
+#include "talk/base/messagequeue.h"
+#include "talk/base/thread.h"
+#include "talk/p2p/base/sessionmanager.h"
+#include "talk/p2p/base/session.h"
+#include "talk/p2p/base/sessionclient.h"
+#include "talk/p2p/base/sessiondescription.h"
+
+namespace cricket {
+
+class Call;
+class SessionDescription;
+typedef std::vector<AudioCodec> AudioCodecs;
+typedef std::vector<VideoCodec> VideoCodecs;
+
+// SEC_ENABLED and SEC_REQUIRED should only be used if the session
+// was negotiated over TLS, to protect the inline crypto material
+// exchange.
+// SEC_DISABLED: No crypto in outgoing offer and answer. Fail any
+// offer with crypto required.
+// SEC_ENABLED: Crypto in outgoing offer and answer. Fail any offer
+// with unsupported required crypto. Crypto set but not
+// required in outgoing offer.
+// SEC_REQUIRED: Crypto in outgoing offer and answer with
+// required='true'. Fail any offer with no or
+// unsupported crypto (implicit crypto required='true'
+// in the offer.)
+enum SecureMediaPolicy {SEC_DISABLED, SEC_ENABLED, SEC_REQUIRED};
+
+const int kAutoBandwidth = -1;
+
+struct CallOptions {
+ CallOptions() :
+ is_video(false),
+ is_muc(false),
+ video_bandwidth(kAutoBandwidth) {
+ }
+
+ bool is_video;
+ bool is_muc;
+ // bps. -1 == auto.
+ int video_bandwidth;
+};
+
+class MediaSessionClient: public SessionClient, public sigslot::has_slots<> {
+ public:
+
+ MediaSessionClient(const buzz::Jid& jid, SessionManager *manager);
+ // Alternative constructor, allowing injection of media_engine
+ // and device_manager.
+ MediaSessionClient(const buzz::Jid& jid, SessionManager *manager,
+ MediaEngine* media_engine, DeviceManager* device_manager);
+ ~MediaSessionClient();
+
+ const buzz::Jid &jid() const { return jid_; }
+ SessionManager* session_manager() const { return session_manager_; }
+ ChannelManager* channel_manager() const { return channel_manager_; }
+
+ int GetCapabilities() { return channel_manager_->GetCapabilities(); }
+
+ Call *CreateCall();
+ void DestroyCall(Call *call);
+
+ Call *GetFocus();
+ void SetFocus(Call *call);
+
+ void JoinCalls(Call *call_to_join, Call *call);
+
+ bool GetAudioInputDevices(std::vector<std::string>* names) {
+ return channel_manager_->GetAudioInputDevices(names);
+ }
+ bool GetAudioOutputDevices(std::vector<std::string>* names) {
+ return channel_manager_->GetAudioOutputDevices(names);
+ }
+ bool GetVideoCaptureDevices(std::vector<std::string>* names) {
+ return channel_manager_->GetVideoCaptureDevices(names);
+ }
+
+ bool SetAudioOptions(const std::string& in_name, const std::string& out_name,
+ int opts) {
+ return channel_manager_->SetAudioOptions(in_name, out_name, opts);
+ }
+ bool SetOutputVolume(int level) {
+ return channel_manager_->SetOutputVolume(level);
+ }
+ bool SetVideoOptions(const std::string& cam_device) {
+ return channel_manager_->SetVideoOptions(cam_device);
+ }
+
+ sigslot::signal2<Call *, Call *> SignalFocus;
+ sigslot::signal1<Call *> SignalCallCreate;
+ sigslot::signal1<Call *> SignalCallDestroy;
+ sigslot::repeater0<> SignalDevicesChange;
+
+ SessionDescription* CreateOffer(const CallOptions& options);
+ SessionDescription* CreateAnswer(const SessionDescription* offer,
+ const CallOptions& options);
+
+ SecureMediaPolicy secure() const { return secure_; }
+ void set_secure(SecureMediaPolicy s) { secure_ = s; }
+
+ private:
+ void Construct();
+ void OnSessionCreate(Session *session, bool received_initiate);
+ void OnSessionState(BaseSession *session, BaseSession::State state);
+ void OnSessionDestroy(Session *session);
+ virtual bool ParseContent(SignalingProtocol protocol,
+ const buzz::XmlElement* elem,
+ const ContentDescription** content,
+ ParseError* error);
+ virtual bool WriteContent(SignalingProtocol protocol,
+ const ContentDescription* content,
+ buzz::XmlElement** elem,
+ WriteError* error);
+ Session *CreateSession(Call *call);
+
+ buzz::Jid jid_;
+ SessionManager* session_manager_;
+ Call *focus_call_;
+ ChannelManager *channel_manager_;
+ std::map<uint32, Call *> calls_;
+ std::map<std::string, Call *> session_map_;
+ SecureMediaPolicy secure_;
+ friend class Call;
+};
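A hedged sketch of the offer API above for a video call; the returned SessionDescription is handed to session setup elsewhere in the stack, and "client" is an already-constructed MediaSessionClient.

// Illustrative only; ownership of the returned description follows the
// conventions of the calling code.
cricket::SessionDescription* MakeVideoOffer(
    cricket::MediaSessionClient* client) {
  cricket::CallOptions options;
  options.is_video = true;                            // include a video content
  options.video_bandwidth = cricket::kAutoBandwidth;  // let the engine decide
  return client->CreateOffer(options);
}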
+
+enum MediaType {
+ MEDIA_TYPE_AUDIO,
+ MEDIA_TYPE_VIDEO
+};
+
+class MediaContentDescription : public ContentDescription {
+ public:
+ MediaContentDescription()
+ : ssrc_(0),
+ ssrc_set_(false),
+ rtcp_mux_(false),
+ bandwidth_(kAutoBandwidth),
+ crypto_required_(false),
+ rtp_header_extensions_set_(false) {
+ }
+
+ virtual MediaType type() const = 0;
+
+ uint32 ssrc() const { return ssrc_; }
+ bool ssrc_set() const { return ssrc_set_; }
+ void set_ssrc(uint32 ssrc) {
+ ssrc_ = ssrc;
+ ssrc_set_ = true;
+ }
+
+ bool rtcp_mux() const { return rtcp_mux_; }
+ void set_rtcp_mux(bool mux) { rtcp_mux_ = mux; }
+
+ int bandwidth() const { return bandwidth_; }
+ void set_bandwidth(int bandwidth) { bandwidth_ = bandwidth; }
+
+ const std::vector<CryptoParams>& cryptos() const { return cryptos_; }
+ void AddCrypto(const CryptoParams& params) {
+ cryptos_.push_back(params);
+ }
+ bool crypto_required() const { return crypto_required_; }
+ void set_crypto_required(bool crypto) {
+ crypto_required_ = crypto;
+ }
+
+ const std::vector<RtpHeaderExtension>& rtp_header_extensions() const {
+ return rtp_header_extensions_;
+ }
+ void AddRtpHeaderExtension(const RtpHeaderExtension& ext) {
+ rtp_header_extensions_.push_back(ext);
+ rtp_header_extensions_set_ = true;
+ }
+ void ClearRtpHeaderExtensions() {
+ rtp_header_extensions_.clear();
+ rtp_header_extensions_set_ = true;
+ }
+ // We can't always tell if an empty list of header extensions is
+ // because the other side doesn't support them, or just isn't hooked up to
+ // signal them. For now we assume an empty list means no signaling, but
+ // provide the ClearRtpHeaderExtensions method to allow "no support" to be
+ // clearly indicated (i.e. when derived from other information).
+ bool rtp_header_extensions_set() const {
+ return rtp_header_extensions_set_;
+ }
+
+ protected:
+ uint32 ssrc_;
+ bool ssrc_set_;
+ bool rtcp_mux_;
+ int bandwidth_;
+ std::vector<CryptoParams> cryptos_;
+ bool crypto_required_;
+ std::vector<RtpHeaderExtension> rtp_header_extensions_;
+ bool rtp_header_extensions_set_;
+};
+
+template <class C>
+class MediaContentDescriptionImpl : public MediaContentDescription {
+ public:
+ struct PreferenceSort {
+ bool operator()(C a, C b) { return a.preference > b.preference; }
+ };
+
+ const std::vector<C>& codecs() const { return codecs_; }
+ void AddCodec(const C& codec) {
+ codecs_.push_back(codec);
+ }
+ void SortCodecs() {
+ std::sort(codecs_.begin(), codecs_.end(), PreferenceSort());
+ }
+
+ private:
+ std::vector<C> codecs_;
+};
+
+class AudioContentDescription : public MediaContentDescriptionImpl<AudioCodec> {
+ public:
+ AudioContentDescription() :
+ conference_mode_(false) {}
+
+ virtual MediaType type() const { return MEDIA_TYPE_AUDIO; }
+
+ bool conference_mode() const { return conference_mode_; }
+ void set_conference_mode(bool enable) {
+ conference_mode_ = enable;
+ }
+
+ const std::string &lang() const { return lang_; }
+ void set_lang(const std::string &lang) { lang_ = lang; }
+
+
+ private:
+ bool conference_mode_;
+ std::string lang_;
+};
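A hedged sketch of populating the codec container above; the AudioCodec field names are assumed from talk/session/phone/codec.h, which is not part of this hunk, and the id/preference values are illustrative.

// Illustrative only; SortCodecs() orders by descending preference, so the
// wideband codec ends up ahead of PCMU. Field names assumed from codec.h.
void DescribeAudio(cricket::AudioContentDescription* audio) {
  cricket::AudioCodec pcmu;
  pcmu.id = 0; pcmu.name = "PCMU"; pcmu.clockrate = 8000;
  pcmu.preference = 1;
  cricket::AudioCodec isac;
  isac.id = 103; isac.name = "ISAC"; isac.clockrate = 16000;
  isac.preference = 2;
  audio->AddCodec(pcmu);
  audio->AddCodec(isac);
  audio->SortCodecs();
}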
+
+class VideoContentDescription : public MediaContentDescriptionImpl<VideoCodec> {
+ public:
+ virtual MediaType type() const { return MEDIA_TYPE_VIDEO; }
+};
+
+// Convenience functions.
+bool IsAudioContent(const ContentInfo* content);
+bool IsVideoContent(const ContentInfo* content);
+const ContentInfo* GetFirstAudioContent(const SessionDescription* sdesc);
+const ContentInfo* GetFirstVideoContent(const SessionDescription* sdesc);
+
+} // namespace cricket
+
+#endif // TALK_SESSION_PHONE_MEDIASESSIONCLIENT_H_
diff --git a/third_party_mods/libvpx/libvpx.gyp b/third_party_mods/libvpx/libvpx.gyp
new file mode 100644
index 0000000..25633b8
--- /dev/null
+++ b/third_party_mods/libvpx/libvpx.gyp
@@ -0,0 +1,254 @@
+# Copyright (c) 2010 The Chromium Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+{
+ 'targets': [
+ {
+ 'target_name': 'libvpx',
+ 'type': 'static_library',
+ # Don't build yasm from source on Windows
+ 'conditions': [
+ ['OS!="win"', {
+ 'dependencies': [
+ '../yasm/yasm.gyp:yasm#host',
+ ],
+ },
+ ],
+ ],
+ 'variables': {
+ 'shared_generated_dir':
+ '<(SHARED_INTERMEDIATE_DIR)/third_party/libvpx',
+ 'yasm_path': '<(PRODUCT_DIR)/yasm',
+ 'yasm_flags': [
+ '-I', 'config/<(OS)/<(target_arch)',
+ '-I', '.'
+ ],
+ 'conditions': [
+ ['OS!="win"', {
+ 'asm_obj_dir':
+ '<(shared_generated_dir)',
+ 'obj_file_ending':
+ 'o',
+ },
+ {
+ 'asm_obj_dir':
+ 'asm',
+ 'obj_file_ending':
+ 'obj',
+ 'yasm_path': '../yasm/binaries/win/yasm.exe',
+ }
+ ],
+ ['target_arch=="ia32"', {
+ 'conditions': [
+ ['OS=="linux"', {
+ 'yasm_flags': [
+ '-felf32',
+ ],
+ },
+ ],
+ ['OS=="mac"', {
+ 'yasm_flags': [
+ '-fmacho32',
+ ],
+ },
+ ],
+ ['OS=="win"', {
+ 'yasm_flags': [
+ '-fwin32',
+ ],
+ },
+ ],
+ ],
+ 'yasm_flags': [
+ '-m', 'x86',
+ ],
+ },
+ ],
+ ['target_arch=="x64"', {
+ 'conditions': [
+ ['OS=="linux"', {
+ 'yasm_flags': [
+ '-felf64',
+ ],
+ },
+ ],
+ ['OS=="mac"', {
+ 'yasm_flags': [
+ '-fmacho64',
+ ],
+ },
+ ],
+ ['OS=="win"', {
+ 'yasm_flags': [
+              '-fwin64',
+ ],
+ },
+ ],
+ ],
+ 'yasm_flags': [
+ '-m', 'amd64',
+ ],
+ },
+ ],
+ ],
+ },
+ 'include_dirs': [
+ 'config/<(OS)/<(target_arch)',
+ 'build',
+ '.',
+ 'vp8/common',
+ 'vp8/decoder',
+ 'vp8/encoder',
+ ],
+ 'rules': [
+ {
+ 'rule_name': 'assemble',
+ 'extension': 'asm',
+ 'inputs': [ '<(yasm_path)', ],
+ 'outputs': [
+ '<(asm_obj_dir)/<(RULE_INPUT_ROOT).<(obj_file_ending)',
+ ],
+ 'action': [
+ '<(yasm_path)',
+ '<@(yasm_flags)',
+ '-o', '<(asm_obj_dir)/<(RULE_INPUT_ROOT).<(obj_file_ending)',
+ '<(RULE_INPUT_PATH)',
+ ],
+ 'process_outputs_as_sources': 1,
+ 'message': 'Build libvpx yasm build <(RULE_INPUT_PATH).',
+ },
+ ],
+
+ 'sources': [
+ 'vpx/src/vpx_decoder.c',
+ 'vpx/src/vpx_decoder_compat.c',
+ 'vpx/src/vpx_encoder.c',
+ 'vpx/src/vpx_codec.c',
+ 'vpx/src/vpx_image.c',
+ 'vpx_mem/vpx_mem.c',
+ 'vpx_scale/generic/vpxscale.c',
+ 'vpx_scale/generic/yv12config.c',
+ 'vpx_scale/generic/yv12extend.c',
+ 'vpx_scale/generic/scalesystemdependant.c',
+ 'vpx_scale/generic/gen_scalers.c',
+ 'vp8/common/alloccommon.c',
+ 'vp8/common/blockd.c',
+ 'vp8/common/debugmodes.c',
+ 'vp8/common/entropy.c',
+ 'vp8/common/entropymode.c',
+ 'vp8/common/entropymv.c',
+ 'vp8/common/extend.c',
+ 'vp8/common/filter.c',
+ 'vp8/common/findnearmv.c',
+ 'vp8/common/generic/systemdependent.c',
+ 'vp8/common/idctllm.c',
+ 'vp8/common/invtrans.c',
+ 'vp8/common/loopfilter.c',
+ 'vp8/common/loopfilter_filters.c',
+ 'vp8/common/mbpitch.c',
+ 'vp8/common/modecont.c',
+ 'vp8/common/modecontext.c',
+ 'vp8/common/postproc.c',
+ 'vp8/common/quant_common.c',
+ 'vp8/common/recon.c',
+ 'vp8/common/reconinter.c',
+ 'vp8/common/reconintra.c',
+ 'vp8/common/reconintra4x4.c',
+ 'vp8/common/setupintrarecon.c',
+ 'vp8/common/swapyv12buffer.c',
+ 'vp8/common/textblit.c',
+ 'vp8/common/treecoder.c',
+ 'vp8/common/x86/x86_systemdependent.c',
+ 'vp8/common/x86/vp8_asm_stubs.c',
+ 'vp8/common/x86/loopfilter_x86.c',
+ 'vp8/vp8_cx_iface.c',
+ 'vp8/encoder/bitstream.c',
+ 'vp8/encoder/boolhuff.c',
+ 'vp8/encoder/dct.c',
+ 'vp8/encoder/encodeframe.c',
+ 'vp8/encoder/encodeintra.c',
+ 'vp8/encoder/encodemb.c',
+ 'vp8/encoder/encodemv.c',
+ 'vp8/encoder/ethreading.c',
+ 'vp8/encoder/firstpass.c',
+ 'vp8/encoder/generic/csystemdependent.c',
+ 'vp8/encoder/mcomp.c',
+ 'vp8/encoder/modecosts.c',
+ 'vp8/encoder/onyx_if.c',
+ 'vp8/encoder/pickinter.c',
+ 'vp8/encoder/picklpf.c',
+ 'vp8/encoder/psnr.c',
+ 'vp8/encoder/quantize.c',
+ 'vp8/encoder/ratectrl.c',
+ 'vp8/encoder/rdopt.c',
+ 'vp8/encoder/sad_c.c',
+ 'vp8/encoder/segmentation.c',
+ 'vp8/encoder/tokenize.c',
+ 'vp8/encoder/treewriter.c',
+ 'vp8/encoder/variance_c.c',
+ 'vp8/encoder/temporal_filter.c',
+ 'vp8/encoder/x86/x86_csystemdependent.c',
+ 'vp8/encoder/x86/variance_mmx.c',
+ 'vp8/encoder/x86/variance_sse2.c',
+ 'vp8/vp8_dx_iface.c',
+ 'vp8/decoder/dboolhuff.c',
+ 'vp8/decoder/decodemv.c',
+ 'vp8/decoder/decodframe.c',
+ 'vp8/decoder/dequantize.c',
+ 'vp8/decoder/detokenize.c',
+ 'vp8/decoder/generic/dsystemdependent.c',
+ 'vp8/decoder/onyxd_if.c',
+ 'vp8/decoder/threading.c',
+ 'vp8/decoder/idct_blk.c',
+ 'vp8/decoder/reconintra_mt.c',
+ 'vp8/decoder/x86/x86_dsystemdependent.c',
+ 'vp8/decoder/x86/idct_blk_mmx.c',
+ 'vp8/decoder/x86/idct_blk_sse2.c',
+ 'vpx_ports/x86_cpuid.c',
+ # Yasm inputs.
+ 'vp8/common/x86/idctllm_mmx.asm',
+ 'vp8/common/x86/idctllm_sse2.asm',
+ 'vp8/common/x86/iwalsh_mmx.asm',
+ 'vp8/common/x86/iwalsh_sse2.asm',
+ 'vp8/common/x86/loopfilter_mmx.asm',
+ 'vp8/common/x86/loopfilter_sse2.asm',
+ 'vp8/common/x86/postproc_mmx.asm',
+ 'vp8/common/x86/postproc_sse2.asm',
+ 'vp8/common/x86/recon_mmx.asm',
+ 'vp8/common/x86/recon_sse2.asm',
+ 'vp8/common/x86/subpixel_mmx.asm',
+ 'vp8/common/x86/subpixel_sse2.asm',
+ 'vp8/common/x86/subpixel_ssse3.asm',
+ 'vp8/decoder/x86/dequantize_mmx.asm',
+ 'vp8/encoder/x86/dct_mmx.asm',
+ 'vp8/encoder/x86/dct_sse2.asm',
+ 'vp8/encoder/x86/encodeopt.asm',
+ 'vp8/encoder/x86/fwalsh_sse2.asm',
+ 'vp8/encoder/x86/quantize_mmx.asm',
+ 'vp8/encoder/x86/quantize_sse2.asm',
+ 'vp8/encoder/x86/quantize_ssse3.asm',
+ 'vp8/encoder/x86/sad_mmx.asm',
+ 'vp8/encoder/x86/sad_sse2.asm',
+ 'vp8/encoder/x86/sad_sse3.asm',
+ 'vp8/encoder/x86/sad_sse4.asm',
+ 'vp8/encoder/x86/sad_ssse3.asm',
+ 'vp8/encoder/x86/subtract_mmx.asm',
+ 'vp8/encoder/x86/subtract_sse2.asm',
+ 'vp8/encoder/x86/temporal_filter_apply_sse2.asm',
+ 'vp8/encoder/x86/variance_impl_mmx.asm',
+ 'vp8/encoder/x86/variance_impl_sse2.asm',
+ 'vpx_ports/emms.asm',
+ 'vpx_ports/x86_abi_support.asm',
+
+ # Generated by ./configure and checked in.
+ 'config/<(OS)/<(target_arch)/vpx_config.c',
+ ]
+ }
+ ]
+}
+
+# Local Variables:
+# tab-width:2
+# indent-tabs-mode:nil
+# End:
+# vim: set expandtab tabstop=2 shiftwidth=2:
diff --git a/third_party_mods/libvpx/source/config/android/vpx_config.c b/third_party_mods/libvpx/source/config/android/vpx_config.c
new file mode 100644
index 0000000..ad2775b
--- /dev/null
+++ b/third_party_mods/libvpx/source/config/android/vpx_config.c
@@ -0,0 +1,2 @@
+static const char* const cfg = "--target=generic-gnu";
+const char *vpx_codec_build_config(void) {return cfg;}
diff --git a/third_party_mods/libvpx/source/config/android/vpx_config.h b/third_party_mods/libvpx/source/config/android/vpx_config.h
new file mode 100644
index 0000000..598d215
--- /dev/null
+++ b/third_party_mods/libvpx/source/config/android/vpx_config.h
@@ -0,0 +1,83 @@
+/* This file automatically generated by configure. Do not edit! */
+#define INLINE
+#define FORCEINLINE
+#define RESTRICT
+
+#if defined(__arm__)
+
+#define ARCH_ARM 1
+#define HAVE_ARMV5TE 0
+#else
+#define ARCH_ARM 0
+#endif
+
+#if defined(__ARM_HAVE_NEON)
+#define HAVE_ARMV7 1
+#else
+#define HAVE_ARMV7 0
+#endif
+
+#if defined(__ARM_HAVE_ARMV6)
+#define HAVE_ARMV6 1
+#else
+#define HAVE_ARMV6 0
+#endif
+
+#define ARCH_MIPS 0
+#define ARCH_X86 0
+#define ARCH_X86_64 0
+#define ARCH_PPC32 0
+#define ARCH_PPC64 0
+
+#define HAVE_IWMMXT 0
+#define HAVE_IWMMXT2 0
+#define HAVE_MIPS32 0
+#define HAVE_MMX 0
+#define HAVE_SSE 0
+#define HAVE_SSE2 0
+#define HAVE_SSE3 0
+#define HAVE_SSSE3 0
+#define HAVE_ALTIVEC 0
+#define HAVE_VPX_PORTS 1
+#define HAVE_STDINT_H 1
+#define HAVE_ALT_TREE_LAYOUT 0
+#define HAVE_PTHREAD_H 1
+#define HAVE_SYS_MMAN_H 1
+#define CONFIG_EXTERNAL_BUILD 0
+#define CONFIG_INSTALL_DOCS 0
+#define CONFIG_INSTALL_BINS 1
+#define CONFIG_INSTALL_LIBS 1
+#define CONFIG_INSTALL_SRCS 1
+#define CONFIG_DEBUG 0
+#define CONFIG_GPROF 0
+#define CONFIG_GCOV 0
+#define CONFIG_RVCT 0
+#define CONFIG_GCC 0
+#define CONFIG_MSVS 0
+#define CONFIG_PIC 0
+#define CONFIG_BIG_ENDIAN 0
+#define CONFIG_CODEC_SRCS 0
+#define CONFIG_DEBUG_LIBS 0
+#define CONFIG_FAST_UNALIGNED 1
+#define CONFIG_MEM_MANAGER 0
+#define CONFIG_MEM_TRACKER 0
+#define CONFIG_MEM_CHECKS 0
+#define CONFIG_MD5 1
+#define CONFIG_DEQUANT_TOKENS 0
+#define CONFIG_DC_RECON 0
+#define CONFIG_NEW_TOKENS 0
+#define CONFIG_EVAL_LIMIT 0
+#define CONFIG_RUNTIME_CPU_DETECT 0
+#define CONFIG_POSTPROC 0
+#define CONFIG_POSTPROC_GENERIC 0
+#define CONFIG_OS_SUPPORT 1
+#define CONFIG_MULTITHREAD 1
+#define CONFIG_PSNR 0
+#define CONFIG_VP8_ENCODER 1
+#define CONFIG_VP8_DECODER 1
+#define CONFIG_VP8 1
+#define CONFIG_ENCODERS 1
+#define CONFIG_DECODERS 1
+#define CONFIG_STATIC_MSVCRT 0
+#define CONFIG_SPATIAL_RESAMPLING 1
+#define CONFIG_REALTIME_ONLY 0
diff --git a/third_party_mods/libvpx/source/config/android/vpx_version.h b/third_party_mods/libvpx/source/config/android/vpx_version.h
new file mode 100644
index 0000000..1d8ba96
--- /dev/null
+++ b/third_party_mods/libvpx/source/config/android/vpx_version.h
@@ -0,0 +1,7 @@
+#define VERSION_MAJOR 0
+#define VERSION_MINOR 9
+#define VERSION_PATCH 6
+#define VERSION_EXTRA ""
+#define VERSION_PACKED ((VERSION_MAJOR<<16)|(VERSION_MINOR<<8)|(VERSION_PATCH))
+#define VERSION_STRING_NOSP "v0.9.6"
+#define VERSION_STRING " v0.9.6"
diff --git a/third_party_mods/libvpx/source/config/linux/ia32/vpx_config.asm b/third_party_mods/libvpx/source/config/linux/ia32/vpx_config.asm
new file mode 100644
index 0000000..74276ad
--- /dev/null
+++ b/third_party_mods/libvpx/source/config/linux/ia32/vpx_config.asm
@@ -0,0 +1,62 @@
+ARCH_ARM equ 0
+ARCH_MIPS equ 0
+ARCH_X86 equ 1
+ARCH_X86_64 equ 0
+ARCH_PPC32 equ 0
+ARCH_PPC64 equ 0
+HAVE_ARMV5TE equ 0
+HAVE_ARMV6 equ 0
+HAVE_ARMV7 equ 0
+HAVE_IWMMXT equ 0
+HAVE_IWMMXT2 equ 0
+HAVE_MIPS32 equ 0
+HAVE_MMX equ 1
+HAVE_SSE equ 1
+HAVE_SSE2 equ 1
+HAVE_SSE3 equ 1
+HAVE_SSSE3 equ 1
+HAVE_SSE4_1 equ 1
+HAVE_ALTIVEC equ 0
+HAVE_VPX_PORTS equ 1
+HAVE_STDINT_H equ 1
+HAVE_ALT_TREE_LAYOUT equ 0
+HAVE_PTHREAD_H equ 1
+HAVE_SYS_MMAN_H equ 1
+CONFIG_EXTERNAL_BUILD equ 0
+CONFIG_INSTALL_DOCS equ 0
+CONFIG_INSTALL_BINS equ 1
+CONFIG_INSTALL_LIBS equ 1
+CONFIG_INSTALL_SRCS equ 0
+CONFIG_DEBUG equ 0
+CONFIG_GPROF equ 0
+CONFIG_GCOV equ 0
+CONFIG_RVCT equ 0
+CONFIG_GCC equ 1
+CONFIG_MSVS equ 0
+CONFIG_PIC equ 1
+CONFIG_BIG_ENDIAN equ 0
+CONFIG_CODEC_SRCS equ 0
+CONFIG_DEBUG_LIBS equ 0
+CONFIG_FAST_UNALIGNED equ 1
+CONFIG_MEM_MANAGER equ 0
+CONFIG_MEM_TRACKER equ 0
+CONFIG_MEM_CHECKS equ 0
+CONFIG_MD5 equ 1
+CONFIG_DEQUANT_TOKENS equ 0
+CONFIG_DC_RECON equ 0
+CONFIG_RUNTIME_CPU_DETECT equ 1
+CONFIG_POSTPROC equ 1
+CONFIG_MULTITHREAD equ 1
+CONFIG_PSNR equ 0
+CONFIG_VP8_ENCODER equ 1
+CONFIG_VP8_DECODER equ 1
+CONFIG_VP8 equ 1
+CONFIG_ENCODERS equ 1
+CONFIG_DECODERS equ 1
+CONFIG_STATIC_MSVCRT equ 0
+CONFIG_SPATIAL_RESAMPLING equ 1
+CONFIG_REALTIME_ONLY equ 0
+CONFIG_SHARED equ 0
+CONFIG_SMALL equ 0
+CONFIG_POSTPROC_VISUALIZER equ 0
+CONFIG_OS_SUPPORT equ 1
diff --git a/third_party_mods/libvpx/source/config/linux/ia32/vpx_config.c b/third_party_mods/libvpx/source/config/linux/ia32/vpx_config.c
new file mode 100644
index 0000000..b851fac
--- /dev/null
+++ b/third_party_mods/libvpx/source/config/linux/ia32/vpx_config.c
@@ -0,0 +1,2 @@
+static const char* const cfg = "--target=x86-linux-gcc --enable-pic --disable-install-docs --disable-install-srcs --disable-examples --disable-psnr";
+const char *vpx_codec_build_config(void) {return cfg;}
diff --git a/third_party_mods/libvpx/source/config/linux/ia32/vpx_config.h b/third_party_mods/libvpx/source/config/linux/ia32/vpx_config.h
new file mode 100644
index 0000000..1a49989
--- /dev/null
+++ b/third_party_mods/libvpx/source/config/linux/ia32/vpx_config.h
@@ -0,0 +1,64 @@
+/* This file automatically generated by configure. Do not edit! */
+#define RESTRICT
+#define ARCH_ARM 0
+#define ARCH_MIPS 0
+#define ARCH_X86 1
+#define ARCH_X86_64 0
+#define ARCH_PPC32 0
+#define ARCH_PPC64 0
+#define HAVE_ARMV5TE 0
+#define HAVE_ARMV6 0
+#define HAVE_ARMV7 0
+#define HAVE_IWMMXT 0
+#define HAVE_IWMMXT2 0
+#define HAVE_MIPS32 0
+#define HAVE_MMX 1
+#define HAVE_SSE 1
+#define HAVE_SSE2 1
+#define HAVE_SSE3 1
+#define HAVE_SSSE3 1
+#define HAVE_SSE4_1 1
+#define HAVE_ALTIVEC 0
+#define HAVE_VPX_PORTS 1
+#define HAVE_STDINT_H 1
+#define HAVE_ALT_TREE_LAYOUT 0
+#define HAVE_PTHREAD_H 1
+#define HAVE_SYS_MMAN_H 1
+#define CONFIG_EXTERNAL_BUILD 0
+#define CONFIG_INSTALL_DOCS 0
+#define CONFIG_INSTALL_BINS 1
+#define CONFIG_INSTALL_LIBS 1
+#define CONFIG_INSTALL_SRCS 0
+#define CONFIG_DEBUG 0
+#define CONFIG_GPROF 0
+#define CONFIG_GCOV 0
+#define CONFIG_RVCT 0
+#define CONFIG_GCC 1
+#define CONFIG_MSVS 0
+#define CONFIG_PIC 1
+#define CONFIG_BIG_ENDIAN 0
+#define CONFIG_CODEC_SRCS 0
+#define CONFIG_DEBUG_LIBS 0
+#define CONFIG_FAST_UNALIGNED 1
+#define CONFIG_MEM_MANAGER 0
+#define CONFIG_MEM_TRACKER 0
+#define CONFIG_MEM_CHECKS 0
+#define CONFIG_MD5 1
+#define CONFIG_DEQUANT_TOKENS 0
+#define CONFIG_DC_RECON 0
+#define CONFIG_RUNTIME_CPU_DETECT 1
+#define CONFIG_POSTPROC 1
+#define CONFIG_MULTITHREAD 1
+#define CONFIG_PSNR 0
+#define CONFIG_VP8_ENCODER 1
+#define CONFIG_VP8_DECODER 1
+#define CONFIG_VP8 1
+#define CONFIG_ENCODERS 1
+#define CONFIG_DECODERS 1
+#define CONFIG_STATIC_MSVCRT 0
+#define CONFIG_SPATIAL_RESAMPLING 1
+#define CONFIG_REALTIME_ONLY 0
+#define CONFIG_SHARED 0
+#define CONFIG_SMALL 0
+#define CONFIG_POSTPROC_VISUALIZER 0
+#define CONFIG_OS_SUPPORT 1
diff --git a/third_party_mods/libvpx/source/config/linux/ia32/vpx_version.h b/third_party_mods/libvpx/source/config/linux/ia32/vpx_version.h
new file mode 100644
index 0000000..1d8ba96
--- /dev/null
+++ b/third_party_mods/libvpx/source/config/linux/ia32/vpx_version.h
@@ -0,0 +1,7 @@
+#define VERSION_MAJOR 0
+#define VERSION_MINOR 9
+#define VERSION_PATCH 6
+#define VERSION_EXTRA ""
+#define VERSION_PACKED ((VERSION_MAJOR<<16)|(VERSION_MINOR<<8)|(VERSION_PATCH))
+#define VERSION_STRING_NOSP "v0.9.6"
+#define VERSION_STRING " v0.9.6"
diff --git a/third_party_mods/libvpx/source/config/linux/x64/vpx_config.asm b/third_party_mods/libvpx/source/config/linux/x64/vpx_config.asm
new file mode 100644
index 0000000..6d5f859
--- /dev/null
+++ b/third_party_mods/libvpx/source/config/linux/x64/vpx_config.asm
@@ -0,0 +1,62 @@
+ARCH_ARM equ 0
+ARCH_MIPS equ 0
+ARCH_X86 equ 0
+ARCH_X86_64 equ 1
+ARCH_PPC32 equ 0
+ARCH_PPC64 equ 0
+HAVE_ARMV5TE equ 0
+HAVE_ARMV6 equ 0
+HAVE_ARMV7 equ 0
+HAVE_IWMMXT equ 0
+HAVE_IWMMXT2 equ 0
+HAVE_MIPS32 equ 0
+HAVE_MMX equ 1
+HAVE_SSE equ 1
+HAVE_SSE2 equ 1
+HAVE_SSE3 equ 1
+HAVE_SSSE3 equ 1
+HAVE_SSE4_1 equ 1
+HAVE_ALTIVEC equ 0
+HAVE_VPX_PORTS equ 1
+HAVE_STDINT_H equ 1
+HAVE_ALT_TREE_LAYOUT equ 0
+HAVE_PTHREAD_H equ 1
+HAVE_SYS_MMAN_H equ 1
+CONFIG_EXTERNAL_BUILD equ 0
+CONFIG_INSTALL_DOCS equ 0
+CONFIG_INSTALL_BINS equ 1
+CONFIG_INSTALL_LIBS equ 1
+CONFIG_INSTALL_SRCS equ 0
+CONFIG_DEBUG equ 0
+CONFIG_GPROF equ 0
+CONFIG_GCOV equ 0
+CONFIG_RVCT equ 0
+CONFIG_GCC equ 1
+CONFIG_MSVS equ 0
+CONFIG_PIC equ 1
+CONFIG_BIG_ENDIAN equ 0
+CONFIG_CODEC_SRCS equ 0
+CONFIG_DEBUG_LIBS equ 0
+CONFIG_FAST_UNALIGNED equ 1
+CONFIG_MEM_MANAGER equ 0
+CONFIG_MEM_TRACKER equ 0
+CONFIG_MEM_CHECKS equ 0
+CONFIG_MD5 equ 1
+CONFIG_DEQUANT_TOKENS equ 0
+CONFIG_DC_RECON equ 0
+CONFIG_RUNTIME_CPU_DETECT equ 1
+CONFIG_POSTPROC equ 1
+CONFIG_MULTITHREAD equ 1
+CONFIG_PSNR equ 0
+CONFIG_VP8_ENCODER equ 1
+CONFIG_VP8_DECODER equ 1
+CONFIG_VP8 equ 1
+CONFIG_ENCODERS equ 1
+CONFIG_DECODERS equ 1
+CONFIG_STATIC_MSVCRT equ 0
+CONFIG_SPATIAL_RESAMPLING equ 1
+CONFIG_REALTIME_ONLY equ 0
+CONFIG_SHARED equ 0
+CONFIG_SMALL equ 0
+CONFIG_POSTPROC_VISUALIZER equ 0
+CONFIG_OS_SUPPORT equ 1
diff --git a/third_party_mods/libvpx/source/config/linux/x64/vpx_config.c b/third_party_mods/libvpx/source/config/linux/x64/vpx_config.c
new file mode 100644
index 0000000..8b9b21a
--- /dev/null
+++ b/third_party_mods/libvpx/source/config/linux/x64/vpx_config.c
@@ -0,0 +1,2 @@
+static const char* const cfg = "--target=x86_64-linux-gcc --enable-pic --disable-install-docs --disable-install-srcs --disable-examples --disable-psnr";
+const char *vpx_codec_build_config(void) {return cfg;}
diff --git a/third_party_mods/libvpx/source/config/linux/x64/vpx_config.h b/third_party_mods/libvpx/source/config/linux/x64/vpx_config.h
new file mode 100644
index 0000000..1e7662b
--- /dev/null
+++ b/third_party_mods/libvpx/source/config/linux/x64/vpx_config.h
@@ -0,0 +1,64 @@
+/* This file automatically generated by configure. Do not edit! */
+#define RESTRICT
+#define ARCH_ARM 0
+#define ARCH_MIPS 0
+#define ARCH_X86 0
+#define ARCH_X86_64 1
+#define ARCH_PPC32 0
+#define ARCH_PPC64 0
+#define HAVE_ARMV5TE 0
+#define HAVE_ARMV6 0
+#define HAVE_ARMV7 0
+#define HAVE_IWMMXT 0
+#define HAVE_IWMMXT2 0
+#define HAVE_MIPS32 0
+#define HAVE_MMX 1
+#define HAVE_SSE 1
+#define HAVE_SSE2 1
+#define HAVE_SSE3 1
+#define HAVE_SSSE3 1
+#define HAVE_SSE4_1 1
+#define HAVE_ALTIVEC 0
+#define HAVE_VPX_PORTS 1
+#define HAVE_STDINT_H 1
+#define HAVE_ALT_TREE_LAYOUT 0
+#define HAVE_PTHREAD_H 1
+#define HAVE_SYS_MMAN_H 1
+#define CONFIG_EXTERNAL_BUILD 0
+#define CONFIG_INSTALL_DOCS 0
+#define CONFIG_INSTALL_BINS 1
+#define CONFIG_INSTALL_LIBS 1
+#define CONFIG_INSTALL_SRCS 0
+#define CONFIG_DEBUG 0
+#define CONFIG_GPROF 0
+#define CONFIG_GCOV 0
+#define CONFIG_RVCT 0
+#define CONFIG_GCC 1
+#define CONFIG_MSVS 0
+#define CONFIG_PIC 1
+#define CONFIG_BIG_ENDIAN 0
+#define CONFIG_CODEC_SRCS 0
+#define CONFIG_DEBUG_LIBS 0
+#define CONFIG_FAST_UNALIGNED 1
+#define CONFIG_MEM_MANAGER 0
+#define CONFIG_MEM_TRACKER 0
+#define CONFIG_MEM_CHECKS 0
+#define CONFIG_MD5 1
+#define CONFIG_DEQUANT_TOKENS 0
+#define CONFIG_DC_RECON 0
+#define CONFIG_RUNTIME_CPU_DETECT 1
+#define CONFIG_POSTPROC 1
+#define CONFIG_MULTITHREAD 1
+#define CONFIG_PSNR 0
+#define CONFIG_VP8_ENCODER 1
+#define CONFIG_VP8_DECODER 1
+#define CONFIG_VP8 1
+#define CONFIG_ENCODERS 1
+#define CONFIG_DECODERS 1
+#define CONFIG_STATIC_MSVCRT 0
+#define CONFIG_SPATIAL_RESAMPLING 1
+#define CONFIG_REALTIME_ONLY 0
+#define CONFIG_SHARED 0
+#define CONFIG_SMALL 0
+#define CONFIG_POSTPROC_VISUALIZER 0
+#define CONFIG_OS_SUPPORT 1
diff --git a/third_party_mods/libvpx/source/config/linux/x64/vpx_version.h b/third_party_mods/libvpx/source/config/linux/x64/vpx_version.h
new file mode 100644
index 0000000..1d8ba96
--- /dev/null
+++ b/third_party_mods/libvpx/source/config/linux/x64/vpx_version.h
@@ -0,0 +1,7 @@
+#define VERSION_MAJOR 0
+#define VERSION_MINOR 9
+#define VERSION_PATCH 6
+#define VERSION_EXTRA ""
+#define VERSION_PACKED ((VERSION_MAJOR<<16)|(VERSION_MINOR<<8)|(VERSION_PATCH))
+#define VERSION_STRING_NOSP "v0.9.6"
+#define VERSION_STRING " v0.9.6"
diff --git a/third_party_mods/libvpx/source/config/mac/ia32/vpx_config.asm b/third_party_mods/libvpx/source/config/mac/ia32/vpx_config.asm
new file mode 100644
index 0000000..74276ad
--- /dev/null
+++ b/third_party_mods/libvpx/source/config/mac/ia32/vpx_config.asm
@@ -0,0 +1,62 @@
+ARCH_ARM equ 0
+ARCH_MIPS equ 0
+ARCH_X86 equ 1
+ARCH_X86_64 equ 0
+ARCH_PPC32 equ 0
+ARCH_PPC64 equ 0
+HAVE_ARMV5TE equ 0
+HAVE_ARMV6 equ 0
+HAVE_ARMV7 equ 0
+HAVE_IWMMXT equ 0
+HAVE_IWMMXT2 equ 0
+HAVE_MIPS32 equ 0
+HAVE_MMX equ 1
+HAVE_SSE equ 1
+HAVE_SSE2 equ 1
+HAVE_SSE3 equ 1
+HAVE_SSSE3 equ 1
+HAVE_SSE4_1 equ 1
+HAVE_ALTIVEC equ 0
+HAVE_VPX_PORTS equ 1
+HAVE_STDINT_H equ 1
+HAVE_ALT_TREE_LAYOUT equ 0
+HAVE_PTHREAD_H equ 1
+HAVE_SYS_MMAN_H equ 1
+CONFIG_EXTERNAL_BUILD equ 0
+CONFIG_INSTALL_DOCS equ 0
+CONFIG_INSTALL_BINS equ 1
+CONFIG_INSTALL_LIBS equ 1
+CONFIG_INSTALL_SRCS equ 0
+CONFIG_DEBUG equ 0
+CONFIG_GPROF equ 0
+CONFIG_GCOV equ 0
+CONFIG_RVCT equ 0
+CONFIG_GCC equ 1
+CONFIG_MSVS equ 0
+CONFIG_PIC equ 1
+CONFIG_BIG_ENDIAN equ 0
+CONFIG_CODEC_SRCS equ 0
+CONFIG_DEBUG_LIBS equ 0
+CONFIG_FAST_UNALIGNED equ 1
+CONFIG_MEM_MANAGER equ 0
+CONFIG_MEM_TRACKER equ 0
+CONFIG_MEM_CHECKS equ 0
+CONFIG_MD5 equ 1
+CONFIG_DEQUANT_TOKENS equ 0
+CONFIG_DC_RECON equ 0
+CONFIG_RUNTIME_CPU_DETECT equ 1
+CONFIG_POSTPROC equ 1
+CONFIG_MULTITHREAD equ 1
+CONFIG_PSNR equ 0
+CONFIG_VP8_ENCODER equ 1
+CONFIG_VP8_DECODER equ 1
+CONFIG_VP8 equ 1
+CONFIG_ENCODERS equ 1
+CONFIG_DECODERS equ 1
+CONFIG_STATIC_MSVCRT equ 0
+CONFIG_SPATIAL_RESAMPLING equ 1
+CONFIG_REALTIME_ONLY equ 0
+CONFIG_SHARED equ 0
+CONFIG_SMALL equ 0
+CONFIG_POSTPROC_VISUALIZER equ 0
+CONFIG_OS_SUPPORT equ 1
diff --git a/third_party_mods/libvpx/source/config/mac/ia32/vpx_config.c b/third_party_mods/libvpx/source/config/mac/ia32/vpx_config.c
new file mode 100644
index 0000000..9d5fe81
--- /dev/null
+++ b/third_party_mods/libvpx/source/config/mac/ia32/vpx_config.c
@@ -0,0 +1,2 @@
+static const char* const cfg = "--target=x86-darwin9-gcc --enable-pic --disable-install-docs --disable-install-srcs --disable-examples --disable-psnr";
+const char *vpx_codec_build_config(void) {return cfg;}
diff --git a/third_party_mods/libvpx/source/config/mac/ia32/vpx_config.h b/third_party_mods/libvpx/source/config/mac/ia32/vpx_config.h
new file mode 100644
index 0000000..1a49989
--- /dev/null
+++ b/third_party_mods/libvpx/source/config/mac/ia32/vpx_config.h
@@ -0,0 +1,64 @@
+/* This file automatically generated by configure. Do not edit! */
+#define RESTRICT
+#define ARCH_ARM 0
+#define ARCH_MIPS 0
+#define ARCH_X86 1
+#define ARCH_X86_64 0
+#define ARCH_PPC32 0
+#define ARCH_PPC64 0
+#define HAVE_ARMV5TE 0
+#define HAVE_ARMV6 0
+#define HAVE_ARMV7 0
+#define HAVE_IWMMXT 0
+#define HAVE_IWMMXT2 0
+#define HAVE_MIPS32 0
+#define HAVE_MMX 1
+#define HAVE_SSE 1
+#define HAVE_SSE2 1
+#define HAVE_SSE3 1
+#define HAVE_SSSE3 1
+#define HAVE_SSE4_1 1
+#define HAVE_ALTIVEC 0
+#define HAVE_VPX_PORTS 1
+#define HAVE_STDINT_H 1
+#define HAVE_ALT_TREE_LAYOUT 0
+#define HAVE_PTHREAD_H 1
+#define HAVE_SYS_MMAN_H 1
+#define CONFIG_EXTERNAL_BUILD 0
+#define CONFIG_INSTALL_DOCS 0
+#define CONFIG_INSTALL_BINS 1
+#define CONFIG_INSTALL_LIBS 1
+#define CONFIG_INSTALL_SRCS 0
+#define CONFIG_DEBUG 0
+#define CONFIG_GPROF 0
+#define CONFIG_GCOV 0
+#define CONFIG_RVCT 0
+#define CONFIG_GCC 1
+#define CONFIG_MSVS 0
+#define CONFIG_PIC 1
+#define CONFIG_BIG_ENDIAN 0
+#define CONFIG_CODEC_SRCS 0
+#define CONFIG_DEBUG_LIBS 0
+#define CONFIG_FAST_UNALIGNED 1
+#define CONFIG_MEM_MANAGER 0
+#define CONFIG_MEM_TRACKER 0
+#define CONFIG_MEM_CHECKS 0
+#define CONFIG_MD5 1
+#define CONFIG_DEQUANT_TOKENS 0
+#define CONFIG_DC_RECON 0
+#define CONFIG_RUNTIME_CPU_DETECT 1
+#define CONFIG_POSTPROC 1
+#define CONFIG_MULTITHREAD 1
+#define CONFIG_PSNR 0
+#define CONFIG_VP8_ENCODER 1
+#define CONFIG_VP8_DECODER 1
+#define CONFIG_VP8 1
+#define CONFIG_ENCODERS 1
+#define CONFIG_DECODERS 1
+#define CONFIG_STATIC_MSVCRT 0
+#define CONFIG_SPATIAL_RESAMPLING 1
+#define CONFIG_REALTIME_ONLY 0
+#define CONFIG_SHARED 0
+#define CONFIG_SMALL 0
+#define CONFIG_POSTPROC_VISUALIZER 0
+#define CONFIG_OS_SUPPORT 1
diff --git a/third_party_mods/libvpx/source/config/mac/ia32/vpx_version.h b/third_party_mods/libvpx/source/config/mac/ia32/vpx_version.h
new file mode 100644
index 0000000..1d8ba96
--- /dev/null
+++ b/third_party_mods/libvpx/source/config/mac/ia32/vpx_version.h
@@ -0,0 +1,7 @@
+#define VERSION_MAJOR 0
+#define VERSION_MINOR 9
+#define VERSION_PATCH 6
+#define VERSION_EXTRA ""
+#define VERSION_PACKED ((VERSION_MAJOR<<16)|(VERSION_MINOR<<8)|(VERSION_PATCH))
+#define VERSION_STRING_NOSP "v0.9.6"
+#define VERSION_STRING " v0.9.6"
diff --git a/third_party_mods/libvpx/source/config/mac/x64/vpx_config.asm b/third_party_mods/libvpx/source/config/mac/x64/vpx_config.asm
new file mode 100644
index 0000000..6d5f859
--- /dev/null
+++ b/third_party_mods/libvpx/source/config/mac/x64/vpx_config.asm
@@ -0,0 +1,62 @@
+ARCH_ARM equ 0
+ARCH_MIPS equ 0
+ARCH_X86 equ 0
+ARCH_X86_64 equ 1
+ARCH_PPC32 equ 0
+ARCH_PPC64 equ 0
+HAVE_ARMV5TE equ 0
+HAVE_ARMV6 equ 0
+HAVE_ARMV7 equ 0
+HAVE_IWMMXT equ 0
+HAVE_IWMMXT2 equ 0
+HAVE_MIPS32 equ 0
+HAVE_MMX equ 1
+HAVE_SSE equ 1
+HAVE_SSE2 equ 1
+HAVE_SSE3 equ 1
+HAVE_SSSE3 equ 1
+HAVE_SSE4_1 equ 1
+HAVE_ALTIVEC equ 0
+HAVE_VPX_PORTS equ 1
+HAVE_STDINT_H equ 1
+HAVE_ALT_TREE_LAYOUT equ 0
+HAVE_PTHREAD_H equ 1
+HAVE_SYS_MMAN_H equ 1
+CONFIG_EXTERNAL_BUILD equ 0
+CONFIG_INSTALL_DOCS equ 0
+CONFIG_INSTALL_BINS equ 1
+CONFIG_INSTALL_LIBS equ 1
+CONFIG_INSTALL_SRCS equ 0
+CONFIG_DEBUG equ 0
+CONFIG_GPROF equ 0
+CONFIG_GCOV equ 0
+CONFIG_RVCT equ 0
+CONFIG_GCC equ 1
+CONFIG_MSVS equ 0
+CONFIG_PIC equ 1
+CONFIG_BIG_ENDIAN equ 0
+CONFIG_CODEC_SRCS equ 0
+CONFIG_DEBUG_LIBS equ 0
+CONFIG_FAST_UNALIGNED equ 1
+CONFIG_MEM_MANAGER equ 0
+CONFIG_MEM_TRACKER equ 0
+CONFIG_MEM_CHECKS equ 0
+CONFIG_MD5 equ 1
+CONFIG_DEQUANT_TOKENS equ 0
+CONFIG_DC_RECON equ 0
+CONFIG_RUNTIME_CPU_DETECT equ 1
+CONFIG_POSTPROC equ 1
+CONFIG_MULTITHREAD equ 1
+CONFIG_PSNR equ 0
+CONFIG_VP8_ENCODER equ 1
+CONFIG_VP8_DECODER equ 1
+CONFIG_VP8 equ 1
+CONFIG_ENCODERS equ 1
+CONFIG_DECODERS equ 1
+CONFIG_STATIC_MSVCRT equ 0
+CONFIG_SPATIAL_RESAMPLING equ 1
+CONFIG_REALTIME_ONLY equ 0
+CONFIG_SHARED equ 0
+CONFIG_SMALL equ 0
+CONFIG_POSTPROC_VISUALIZER equ 0
+CONFIG_OS_SUPPORT equ 1
diff --git a/third_party_mods/libvpx/source/config/mac/x64/vpx_config.c b/third_party_mods/libvpx/source/config/mac/x64/vpx_config.c
new file mode 100644
index 0000000..769a3fa
--- /dev/null
+++ b/third_party_mods/libvpx/source/config/mac/x64/vpx_config.c
@@ -0,0 +1,2 @@
+static const char* const cfg = "--target=x86_64-darwin10-gcc --enable-pic --disable-install-docs --disable-install-srcs --disable-examples --disable-psnr";
+const char *vpx_codec_build_config(void) {return cfg;}
diff --git a/third_party_mods/libvpx/source/config/mac/x64/vpx_config.h b/third_party_mods/libvpx/source/config/mac/x64/vpx_config.h
new file mode 100644
index 0000000..1e7662b
--- /dev/null
+++ b/third_party_mods/libvpx/source/config/mac/x64/vpx_config.h
@@ -0,0 +1,64 @@
+/* This file automatically generated by configure. Do not edit! */
+#define RESTRICT
+#define ARCH_ARM 0
+#define ARCH_MIPS 0
+#define ARCH_X86 0
+#define ARCH_X86_64 1
+#define ARCH_PPC32 0
+#define ARCH_PPC64 0
+#define HAVE_ARMV5TE 0
+#define HAVE_ARMV6 0
+#define HAVE_ARMV7 0
+#define HAVE_IWMMXT 0
+#define HAVE_IWMMXT2 0
+#define HAVE_MIPS32 0
+#define HAVE_MMX 1
+#define HAVE_SSE 1
+#define HAVE_SSE2 1
+#define HAVE_SSE3 1
+#define HAVE_SSSE3 1
+#define HAVE_SSE4_1 1
+#define HAVE_ALTIVEC 0
+#define HAVE_VPX_PORTS 1
+#define HAVE_STDINT_H 1
+#define HAVE_ALT_TREE_LAYOUT 0
+#define HAVE_PTHREAD_H 1
+#define HAVE_SYS_MMAN_H 1
+#define CONFIG_EXTERNAL_BUILD 0
+#define CONFIG_INSTALL_DOCS 0
+#define CONFIG_INSTALL_BINS 1
+#define CONFIG_INSTALL_LIBS 1
+#define CONFIG_INSTALL_SRCS 0
+#define CONFIG_DEBUG 0
+#define CONFIG_GPROF 0
+#define CONFIG_GCOV 0
+#define CONFIG_RVCT 0
+#define CONFIG_GCC 1
+#define CONFIG_MSVS 0
+#define CONFIG_PIC 1
+#define CONFIG_BIG_ENDIAN 0
+#define CONFIG_CODEC_SRCS 0
+#define CONFIG_DEBUG_LIBS 0
+#define CONFIG_FAST_UNALIGNED 1
+#define CONFIG_MEM_MANAGER 0
+#define CONFIG_MEM_TRACKER 0
+#define CONFIG_MEM_CHECKS 0
+#define CONFIG_MD5 1
+#define CONFIG_DEQUANT_TOKENS 0
+#define CONFIG_DC_RECON 0
+#define CONFIG_RUNTIME_CPU_DETECT 1
+#define CONFIG_POSTPROC 1
+#define CONFIG_MULTITHREAD 1
+#define CONFIG_PSNR 0
+#define CONFIG_VP8_ENCODER 1
+#define CONFIG_VP8_DECODER 1
+#define CONFIG_VP8 1
+#define CONFIG_ENCODERS 1
+#define CONFIG_DECODERS 1
+#define CONFIG_STATIC_MSVCRT 0
+#define CONFIG_SPATIAL_RESAMPLING 1
+#define CONFIG_REALTIME_ONLY 0
+#define CONFIG_SHARED 0
+#define CONFIG_SMALL 0
+#define CONFIG_POSTPROC_VISUALIZER 0
+#define CONFIG_OS_SUPPORT 1
diff --git a/third_party_mods/libvpx/source/config/mac/x64/vpx_version.h b/third_party_mods/libvpx/source/config/mac/x64/vpx_version.h
new file mode 100644
index 0000000..1d8ba96
--- /dev/null
+++ b/third_party_mods/libvpx/source/config/mac/x64/vpx_version.h
@@ -0,0 +1,7 @@
+#define VERSION_MAJOR 0
+#define VERSION_MINOR 9
+#define VERSION_PATCH 6
+#define VERSION_EXTRA ""
+#define VERSION_PACKED ((VERSION_MAJOR<<16)|(VERSION_MINOR<<8)|(VERSION_PATCH))
+#define VERSION_STRING_NOSP "v0.9.6"
+#define VERSION_STRING " v0.9.6"
diff --git a/third_party_mods/libvpx/source/config/win/ia32/vpx_config.asm b/third_party_mods/libvpx/source/config/win/ia32/vpx_config.asm
new file mode 100644
index 0000000..14467af
--- /dev/null
+++ b/third_party_mods/libvpx/source/config/win/ia32/vpx_config.asm
@@ -0,0 +1,62 @@
+ARCH_ARM equ 0
+ARCH_MIPS equ 0
+ARCH_X86 equ 1
+ARCH_X86_64 equ 0
+ARCH_PPC32 equ 0
+ARCH_PPC64 equ 0
+HAVE_ARMV5TE equ 0
+HAVE_ARMV6 equ 0
+HAVE_ARMV7 equ 0
+HAVE_IWMMXT equ 0
+HAVE_IWMMXT2 equ 0
+HAVE_MIPS32 equ 0
+HAVE_MMX equ 1
+HAVE_SSE equ 1
+HAVE_SSE2 equ 1
+HAVE_SSE3 equ 1
+HAVE_SSSE3 equ 1
+HAVE_SSE4_1 equ 1
+HAVE_ALTIVEC equ 0
+HAVE_VPX_PORTS equ 1
+HAVE_STDINT_H equ 0
+HAVE_ALT_TREE_LAYOUT equ 0
+HAVE_PTHREAD_H equ 0
+HAVE_SYS_MMAN_H equ 0
+CONFIG_EXTERNAL_BUILD equ 1
+CONFIG_INSTALL_DOCS equ 0
+CONFIG_INSTALL_BINS equ 1
+CONFIG_INSTALL_LIBS equ 1
+CONFIG_INSTALL_SRCS equ 0
+CONFIG_DEBUG equ 0
+CONFIG_GPROF equ 0
+CONFIG_GCOV equ 0
+CONFIG_RVCT equ 0
+CONFIG_GCC equ 0
+CONFIG_MSVS equ 1
+CONFIG_PIC equ 1
+CONFIG_BIG_ENDIAN equ 0
+CONFIG_CODEC_SRCS equ 0
+CONFIG_DEBUG_LIBS equ 0
+CONFIG_FAST_UNALIGNED equ 1
+CONFIG_MEM_MANAGER equ 0
+CONFIG_MEM_TRACKER equ 0
+CONFIG_MEM_CHECKS equ 0
+CONFIG_MD5 equ 1
+CONFIG_DEQUANT_TOKENS equ 0
+CONFIG_DC_RECON equ 0
+CONFIG_RUNTIME_CPU_DETECT equ 1
+CONFIG_POSTPROC equ 1
+CONFIG_MULTITHREAD equ 1
+CONFIG_PSNR equ 0
+CONFIG_VP8_ENCODER equ 1
+CONFIG_VP8_DECODER equ 1
+CONFIG_VP8 equ 1
+CONFIG_ENCODERS equ 1
+CONFIG_DECODERS equ 1
+CONFIG_STATIC_MSVCRT equ 0
+CONFIG_SPATIAL_RESAMPLING equ 1
+CONFIG_REALTIME_ONLY equ 0
+CONFIG_SHARED equ 0
+CONFIG_SMALL equ 0
+CONFIG_POSTPROC_VISUALIZER equ 0
+CONFIG_OS_SUPPORT equ 1
diff --git a/third_party_mods/libvpx/source/config/win/ia32/vpx_config.c b/third_party_mods/libvpx/source/config/win/ia32/vpx_config.c
new file mode 100644
index 0000000..aad4027
--- /dev/null
+++ b/third_party_mods/libvpx/source/config/win/ia32/vpx_config.c
@@ -0,0 +1,2 @@
+static const char* const cfg = "--target=x86-win32-vs8 --enable-pic --disable-install-docs --disable-install-srcs --disable-examples --disable-psnr";
+const char *vpx_codec_build_config(void) {return cfg;}
diff --git a/third_party_mods/libvpx/source/config/win/ia32/vpx_config.h b/third_party_mods/libvpx/source/config/win/ia32/vpx_config.h
new file mode 100644
index 0000000..8365b1b
--- /dev/null
+++ b/third_party_mods/libvpx/source/config/win/ia32/vpx_config.h
@@ -0,0 +1,64 @@
+/* This file automatically generated by configure. Do not edit! */
+#define RESTRICT
+#define ARCH_ARM 0
+#define ARCH_MIPS 0
+#define ARCH_X86 1
+#define ARCH_X86_64 0
+#define ARCH_PPC32 0
+#define ARCH_PPC64 0
+#define HAVE_ARMV5TE 0
+#define HAVE_ARMV6 0
+#define HAVE_ARMV7 0
+#define HAVE_IWMMXT 0
+#define HAVE_IWMMXT2 0
+#define HAVE_MIPS32 0
+#define HAVE_MMX 1
+#define HAVE_SSE 1
+#define HAVE_SSE2 1
+#define HAVE_SSE3 1
+#define HAVE_SSSE3 1
+#define HAVE_SSE4_1 1
+#define HAVE_ALTIVEC 0
+#define HAVE_VPX_PORTS 1
+#define HAVE_STDINT_H 0
+#define HAVE_ALT_TREE_LAYOUT 0
+#define HAVE_PTHREAD_H 0
+#define HAVE_SYS_MMAN_H 0
+#define CONFIG_EXTERNAL_BUILD 1
+#define CONFIG_INSTALL_DOCS 0
+#define CONFIG_INSTALL_BINS 1
+#define CONFIG_INSTALL_LIBS 1
+#define CONFIG_INSTALL_SRCS 0
+#define CONFIG_DEBUG 0
+#define CONFIG_GPROF 0
+#define CONFIG_GCOV 0
+#define CONFIG_RVCT 0
+#define CONFIG_GCC 0
+#define CONFIG_MSVS 1
+#define CONFIG_PIC 1
+#define CONFIG_BIG_ENDIAN 0
+#define CONFIG_CODEC_SRCS 0
+#define CONFIG_DEBUG_LIBS 0
+#define CONFIG_FAST_UNALIGNED 1
+#define CONFIG_MEM_MANAGER 0
+#define CONFIG_MEM_TRACKER 0
+#define CONFIG_MEM_CHECKS 0
+#define CONFIG_MD5 1
+#define CONFIG_DEQUANT_TOKENS 0
+#define CONFIG_DC_RECON 0
+#define CONFIG_RUNTIME_CPU_DETECT 1
+#define CONFIG_POSTPROC 1
+#define CONFIG_MULTITHREAD 1
+#define CONFIG_PSNR 0
+#define CONFIG_VP8_ENCODER 1
+#define CONFIG_VP8_DECODER 1
+#define CONFIG_VP8 1
+#define CONFIG_ENCODERS 1
+#define CONFIG_DECODERS 1
+#define CONFIG_STATIC_MSVCRT 0
+#define CONFIG_SPATIAL_RESAMPLING 1
+#define CONFIG_REALTIME_ONLY 0
+#define CONFIG_SHARED 0
+#define CONFIG_SMALL 0
+#define CONFIG_POSTPROC_VISUALIZER 0
+#define CONFIG_OS_SUPPORT 1
diff --git a/third_party_mods/libvpx/source/config/win/ia32/vpx_version.h b/third_party_mods/libvpx/source/config/win/ia32/vpx_version.h
new file mode 100644
index 0000000..1d8ba96
--- /dev/null
+++ b/third_party_mods/libvpx/source/config/win/ia32/vpx_version.h
@@ -0,0 +1,7 @@
+#define VERSION_MAJOR 0
+#define VERSION_MINOR 9
+#define VERSION_PATCH 6
+#define VERSION_EXTRA ""
+#define VERSION_PACKED ((VERSION_MAJOR<<16)|(VERSION_MINOR<<8)|(VERSION_PATCH))
+#define VERSION_STRING_NOSP "v0.9.6"
+#define VERSION_STRING " v0.9.6"
diff --git a/third_party_mods/libvpx/source/config/win/x64/vpx_config.asm b/third_party_mods/libvpx/source/config/win/x64/vpx_config.asm
new file mode 100644
index 0000000..d5e198a
--- /dev/null
+++ b/third_party_mods/libvpx/source/config/win/x64/vpx_config.asm
@@ -0,0 +1,62 @@
+ARCH_ARM equ 0
+ARCH_MIPS equ 0
+ARCH_X86 equ 0
+ARCH_X86_64 equ 1
+ARCH_PPC32 equ 0
+ARCH_PPC64 equ 0
+HAVE_ARMV5TE equ 0
+HAVE_ARMV6 equ 0
+HAVE_ARMV7 equ 0
+HAVE_IWMMXT equ 0
+HAVE_IWMMXT2 equ 0
+HAVE_MIPS32 equ 0
+HAVE_MMX equ 1
+HAVE_SSE equ 1
+HAVE_SSE2 equ 1
+HAVE_SSE3 equ 1
+HAVE_SSSE3 equ 1
+HAVE_SSE4_1 equ 1
+HAVE_ALTIVEC equ 0
+HAVE_VPX_PORTS equ 1
+HAVE_STDINT_H equ 0
+HAVE_ALT_TREE_LAYOUT equ 0
+HAVE_PTHREAD_H equ 0
+HAVE_SYS_MMAN_H equ 0
+CONFIG_EXTERNAL_BUILD equ 1
+CONFIG_INSTALL_DOCS equ 0
+CONFIG_INSTALL_BINS equ 1
+CONFIG_INSTALL_LIBS equ 1
+CONFIG_INSTALL_SRCS equ 0
+CONFIG_DEBUG equ 0
+CONFIG_GPROF equ 0
+CONFIG_GCOV equ 0
+CONFIG_RVCT equ 0
+CONFIG_GCC equ 0
+CONFIG_MSVS equ 1
+CONFIG_PIC equ 1
+CONFIG_BIG_ENDIAN equ 0
+CONFIG_CODEC_SRCS equ 0
+CONFIG_DEBUG_LIBS equ 0
+CONFIG_FAST_UNALIGNED equ 1
+CONFIG_MEM_MANAGER equ 0
+CONFIG_MEM_TRACKER equ 0
+CONFIG_MEM_CHECKS equ 0
+CONFIG_MD5 equ 1
+CONFIG_DEQUANT_TOKENS equ 0
+CONFIG_DC_RECON equ 0
+CONFIG_RUNTIME_CPU_DETECT equ 1
+CONFIG_POSTPROC equ 1
+CONFIG_MULTITHREAD equ 1
+CONFIG_PSNR equ 0
+CONFIG_VP8_ENCODER equ 1
+CONFIG_VP8_DECODER equ 1
+CONFIG_VP8 equ 1
+CONFIG_ENCODERS equ 1
+CONFIG_DECODERS equ 1
+CONFIG_STATIC_MSVCRT equ 0
+CONFIG_SPATIAL_RESAMPLING equ 1
+CONFIG_REALTIME_ONLY equ 0
+CONFIG_SHARED equ 0
+CONFIG_SMALL equ 0
+CONFIG_POSTPROC_VISUALIZER equ 0
+CONFIG_OS_SUPPORT equ 1
diff --git a/third_party_mods/libvpx/source/config/win/x64/vpx_config.c b/third_party_mods/libvpx/source/config/win/x64/vpx_config.c
new file mode 100644
index 0000000..36e5407
--- /dev/null
+++ b/third_party_mods/libvpx/source/config/win/x64/vpx_config.c
@@ -0,0 +1,2 @@
+static const char* const cfg = "--target=x86_64-win64-vs8 --enable-pic --disable-install-docs --disable-install-srcs --disable-examples --disable-psnr";
+const char *vpx_codec_build_config(void) {return cfg;}
diff --git a/third_party_mods/libvpx/source/config/win/x64/vpx_config.h b/third_party_mods/libvpx/source/config/win/x64/vpx_config.h
new file mode 100644
index 0000000..7186e12
--- /dev/null
+++ b/third_party_mods/libvpx/source/config/win/x64/vpx_config.h
@@ -0,0 +1,64 @@
+/* This file automatically generated by configure. Do not edit! */
+#define RESTRICT
+#define ARCH_ARM 0
+#define ARCH_MIPS 0
+#define ARCH_X86 0
+#define ARCH_X86_64 1
+#define ARCH_PPC32 0
+#define ARCH_PPC64 0
+#define HAVE_ARMV5TE 0
+#define HAVE_ARMV6 0
+#define HAVE_ARMV7 0
+#define HAVE_IWMMXT 0
+#define HAVE_IWMMXT2 0
+#define HAVE_MIPS32 0
+#define HAVE_MMX 1
+#define HAVE_SSE 1
+#define HAVE_SSE2 1
+#define HAVE_SSE3 1
+#define HAVE_SSSE3 1
+#define HAVE_SSE4_1 1
+#define HAVE_ALTIVEC 0
+#define HAVE_VPX_PORTS 1
+#define HAVE_STDINT_H 0
+#define HAVE_ALT_TREE_LAYOUT 0
+#define HAVE_PTHREAD_H 0
+#define HAVE_SYS_MMAN_H 0
+#define CONFIG_EXTERNAL_BUILD 1
+#define CONFIG_INSTALL_DOCS 0
+#define CONFIG_INSTALL_BINS 1
+#define CONFIG_INSTALL_LIBS 1
+#define CONFIG_INSTALL_SRCS 0
+#define CONFIG_DEBUG 0
+#define CONFIG_GPROF 0
+#define CONFIG_GCOV 0
+#define CONFIG_RVCT 0
+#define CONFIG_GCC 0
+#define CONFIG_MSVS 1
+#define CONFIG_PIC 1
+#define CONFIG_BIG_ENDIAN 0
+#define CONFIG_CODEC_SRCS 0
+#define CONFIG_DEBUG_LIBS 0
+#define CONFIG_FAST_UNALIGNED 1
+#define CONFIG_MEM_MANAGER 0
+#define CONFIG_MEM_TRACKER 0
+#define CONFIG_MEM_CHECKS 0
+#define CONFIG_MD5 1
+#define CONFIG_DEQUANT_TOKENS 0
+#define CONFIG_DC_RECON 0
+#define CONFIG_RUNTIME_CPU_DETECT 1
+#define CONFIG_POSTPROC 1
+#define CONFIG_MULTITHREAD 1
+#define CONFIG_PSNR 0
+#define CONFIG_VP8_ENCODER 1
+#define CONFIG_VP8_DECODER 1
+#define CONFIG_VP8 1
+#define CONFIG_ENCODERS 1
+#define CONFIG_DECODERS 1
+#define CONFIG_STATIC_MSVCRT 0
+#define CONFIG_SPATIAL_RESAMPLING 1
+#define CONFIG_REALTIME_ONLY 0
+#define CONFIG_SHARED 0
+#define CONFIG_SMALL 0
+#define CONFIG_POSTPROC_VISUALIZER 0
+#define CONFIG_OS_SUPPORT 1
diff --git a/third_party_mods/libvpx/source/config/win/x64/vpx_version.h b/third_party_mods/libvpx/source/config/win/x64/vpx_version.h
new file mode 100644
index 0000000..1d8ba96
--- /dev/null
+++ b/third_party_mods/libvpx/source/config/win/x64/vpx_version.h
@@ -0,0 +1,7 @@
+#define VERSION_MAJOR 0
+#define VERSION_MINOR 9
+#define VERSION_PATCH 6
+#define VERSION_EXTRA ""
+#define VERSION_PACKED ((VERSION_MAJOR<<16)|(VERSION_MINOR<<8)|(VERSION_PATCH))
+#define VERSION_STRING_NOSP "v0.9.6"
+#define VERSION_STRING " v0.9.6"
diff --git a/third_party_mods/mslpl/LICENSE b/third_party_mods/mslpl/LICENSE
new file mode 100644
index 0000000..9a3d932
--- /dev/null
+++ b/third_party_mods/mslpl/LICENSE
@@ -0,0 +1,64 @@
+This license governs use of code marked as "sample" or "example" available on
+this web site without a license agreement, as provided under the section above
+titled "NOTICE SPECIFIC TO SOFTWARE AVAILABLE ON THIS WEB SITE." If you use
+such code (the "software"), you accept this license. If you do not accept the
+license, do not use the software.
+
+1. Definitions
+
+The terms "reproduce," "reproduction," "derivative works," and "distribution"
+have the same meaning here as under U.S. copyright law.
+
+A "contribution" is the original software, or any additions or changes to the
+software.
+
+A "contributor" is any person that distributes its contribution under this
+license.
+
+"Licensed patents" are a contributor's patent claims that read directly on its
+contribution.
+
+2. Grant of Rights
+
+(A) Copyright Grant - Subject to the terms of this license, including the
+license conditions and limitations in section 3, each contributor grants you a
+non-exclusive, worldwide, royalty-free copyright license to reproduce its
+contribution, prepare derivative works of its contribution, and distribute its
+contribution or any derivative works that you create.
+
+(B) Patent Grant - Subject to the terms of this license, including the license
+conditions and limitations in section 3, each contributor grants you a
+non-exclusive, worldwide, royalty-free license under its licensed patents to
+make, have made, use, sell, offer for sale, import, and/or otherwise dispose
+of its contribution in the software or derivative works of the contribution in
+the software.
+
+3. Conditions and Limitations
+
+(A) No Trademark License- This license does not grant you rights to use any
+contributors' name, logo, or trademarks.
+
+(B) If you bring a patent claim against any contributor over patents that you
+claim are infringed by the software, your patent license from such contributor
+to the software ends automatically.
+
+(C) If you distribute any portion of the software, you must retain all
+copyright, patent, trademark, and attribution notices that are present in the
+software.
+
+(D) If you distribute any portion of the software in source code form, you may
+do so only under this license by including a complete copy of this license
+with your distribution. If you distribute any portion of the software in
+compiled or object code form, you may only do so under a license that complies
+with this license.
+
+(E) The software is licensed "as-is." You bear the risk of using it. The
+contributors give no express warranties, guarantees or conditions. You may
+have additional consumer rights under your local laws which this license
+cannot change. To the extent permitted under your local laws, the contributors
+exclude the implied warranties of merchantability, fitness for a particular
+purpose and non-infringement.
+
+(F) Platform Limitation - The licenses granted in sections 2(A) and 2(B)
+extend only to the software or derivative works that you create that run on a
+Microsoft Windows operating system product.