commit    26bace63e509647b7c742fe86e3521697ee5bbec
author    Eric Anderson <ejona@google.com>  Mon Jun 13 08:49:22 2016 -0700
committer Eric Anderson <ejona@google.com>  Tue Jun 14 09:55:18 2016 -0700
tree      6be41c99690dc14c30dc91260f1d900ec8fa4b33
parent    56a2938830eb2c00f4f010318bdc56ca3af7263d
core: Improve fail fast status messages

Don't wrap the Status when a fail-fast RPC is already queued when the transport fails, since that Status is directly responsible for the failing of the RPC. For future fail-fast RPCs, use the saved status as a cause to make debugging easier. This came out of #1330, where the unnecessary nesting of the status codes just added noise.
gRPC-Java works with JDK 6. TLS usage typically requires Java 8, or the Play Services Dynamic Security Provider on Android. Please see the Security Readme.
Download the JARs. Or for Maven, add to your pom.xml:

```xml
<dependency>
  <groupId>io.grpc</groupId>
  <artifactId>grpc-all</artifactId>
  <version>0.14.0</version>
</dependency>
```
Or for Gradle, add to your dependencies:
```gradle
compile 'io.grpc:grpc-all:0.14.0'
```
For the Android client, depend only on the sub-projects you need, such as:
```gradle
compile 'io.grpc:grpc-okhttp:0.14.0'
compile 'io.grpc:grpc-protobuf-nano:0.14.0'
compile 'io.grpc:grpc-stub:0.14.0'
```
Development snapshots are available in Sonatype's snapshot repository.
For protobuf-based codegen, you can put your proto files in the src/main/proto and src/test/proto directories along with an appropriate plugin.
For protobuf-based codegen integrated with the Maven build system, you can use protobuf-maven-plugin:
```xml
<build>
  <extensions>
    <extension>
      <groupId>kr.motd.maven</groupId>
      <artifactId>os-maven-plugin</artifactId>
      <version>1.4.1.Final</version>
    </extension>
  </extensions>
  <plugins>
    <plugin>
      <groupId>org.xolstice.maven.plugins</groupId>
      <artifactId>protobuf-maven-plugin</artifactId>
      <version>0.5.0</version>
      <configuration>
        <!--
          The version of protoc must match protobuf-java. If you don't depend on
          protobuf-java directly, you will be transitively depending on the
          protobuf-java version that grpc depends on.
        -->
        <protocArtifact>com.google.protobuf:protoc:3.0.0-beta-2:exe:${os.detected.classifier}</protocArtifact>
        <pluginId>grpc-java</pluginId>
        <pluginArtifact>io.grpc:protoc-gen-grpc-java:0.14.0:exe:${os.detected.classifier}</pluginArtifact>
      </configuration>
      <executions>
        <execution>
          <goals>
            <goal>compile</goal>
            <goal>compile-custom</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```
For protobuf-based codegen integrated with the Gradle build system, you can use protobuf-gradle-plugin:
```gradle
apply plugin: 'java'
apply plugin: 'com.google.protobuf'

buildscript {
  repositories {
    mavenCentral()
  }
  dependencies {
    // ASSUMES GRADLE 2.12 OR HIGHER. Use plugin version 0.7.5 with earlier
    // gradle versions
    classpath 'com.google.protobuf:protobuf-gradle-plugin:0.7.7'
  }
}

protobuf {
  protoc {
    // The version of protoc must match protobuf-java. If you don't depend on
    // protobuf-java directly, you will be transitively depending on the
    // protobuf-java version that grpc depends on.
    artifact = "com.google.protobuf:protoc:3.0.0-beta-2"
  }
  plugins {
    grpc {
      artifact = 'io.grpc:protoc-gen-grpc-java:0.14.0'
    }
  }
  generateProtoTasks {
    all()*.plugins {
      grpc {}
    }
  }
}
```
If you are making changes to gRPC-Java, see the compiling instructions.
Here's a quick readers' guide to the code to help folks get started. At a high level there are three distinct layers to the library: Stub, Channel & Transport.
The Stub layer is what is exposed to most developers and provides type-safe bindings to whatever datamodel/IDL/interface you are adapting. gRPC comes with a plugin to the protocol-buffers compiler that generates Stub interfaces out of .proto files, but bindings to other datamodels/IDLs should be trivial to add and are welcome.
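As a rough sketch of what using the Stub layer looks like, the snippet below builds a channel and makes a unary call through a blocking stub. The GreeterGrpc, HelloRequest, and HelloReply classes are placeholders for whatever the protoc plugin generates from your own .proto service definition, and the host/port are assumptions; builder options can vary slightly between gRPC-Java releases.

```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public class StubLayerSketch {
  public static void main(String[] args) {
    // Build a channel to an assumed local server; plaintext keeps the sketch simple,
    // real deployments would configure TLS instead.
    ManagedChannel channel = ManagedChannelBuilder
        .forAddress("localhost", 50051)
        .usePlaintext(true)
        .build();

    // GreeterGrpc, HelloRequest and HelloReply are hypothetical generated classes.
    GreeterGrpc.GreeterBlockingStub stub = GreeterGrpc.newBlockingStub(channel);
    HelloReply reply = stub.sayHello(
        HelloRequest.newBuilder().setName("gRPC").build());
    System.out.println(reply.getMessage());

    channel.shutdown();
  }
}
```

The generated code typically also offers asynchronous and future-based stubs alongside the blocking one, so the same channel can back whichever calling style the application prefers.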
The Channel layer is an abstraction over Transport handling that is suitable for interception/decoration and exposes more behavior to the application than the Stub layer. It is intended to make it easy for application frameworks to use this layer to address cross-cutting concerns such as logging, monitoring, auth, etc. Flow control is also exposed at this layer to allow more sophisticated applications to interact with it directly.
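To make that interception point concrete, here is a minimal sketch of a logging ClientInterceptor; the class name and log message are illustrative only, and helper classes such as ForwardingClientCall may differ slightly across versions.

```java
import io.grpc.CallOptions;
import io.grpc.Channel;
import io.grpc.ClientCall;
import io.grpc.ClientInterceptor;
import io.grpc.ForwardingClientCall.SimpleForwardingClientCall;
import io.grpc.Metadata;
import io.grpc.MethodDescriptor;

/** Illustrative interceptor that logs the full method name of every outgoing call. */
public class LoggingInterceptor implements ClientInterceptor {
  @Override
  public <ReqT, RespT> ClientCall<ReqT, RespT> interceptCall(
      MethodDescriptor<ReqT, RespT> method, CallOptions callOptions, Channel next) {
    // Delegate to the underlying channel, but hook call start to add the cross-cutting behavior.
    return new SimpleForwardingClientCall<ReqT, RespT>(next.newCall(method, callOptions)) {
      @Override
      public void start(Listener<RespT> responseListener, Metadata headers) {
        System.out.println("Outgoing call to " + method.getFullMethodName());
        super.start(responseListener, headers);
      }
    };
  }
}
```

Stubs built against a channel wrapped with ClientInterceptors.intercept(channel, new LoggingInterceptor()) will then route every call through the interceptor.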
The Transport layer does the heavy lifting of putting and taking bytes off the wire. Its interfaces are abstract just enough to allow plugging in different implementations. Transports are modeled as Stream factories. The variation in interface between a server Stream and a client Stream exists to codify their differing semantics for cancellation and error reporting.
Note that the transport layer API is considered internal to gRPC and has weaker API guarantees than the core API under the io.grpc package.
gRPC comes with three Transport implementations:

1. The Netty-based transport is the main transport implementation, based on Netty. It is for both the client and the server.
2. The OkHttp-based transport is a lightweight transport based on OkHttp. It is mainly for use on Android and is for the client only.
3. The in-process transport is for when a server is in the same process as the client. It is useful for testing.
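As a hedged illustration of how an application picks one of these transports, the sketch below builds a channel with each transport-specific builder. It assumes the grpc-netty and grpc-okhttp artifacts are on the classpath, and the host, port, and server name are made up; exact builder options vary across releases.

```java
import io.grpc.ManagedChannel;
import io.grpc.inprocess.InProcessChannelBuilder;
import io.grpc.netty.NettyChannelBuilder;
import io.grpc.okhttp.OkHttpChannelBuilder;

public class TransportChoices {
  public static void main(String[] args) {
    // Netty-based transport: the general-purpose choice for servers and non-Android clients.
    ManagedChannel netty = NettyChannelBuilder.forAddress("localhost", 50051)
        .usePlaintext(true)
        .build();

    // OkHttp-based transport: a lighter-weight client transport, mainly used on Android.
    ManagedChannel okhttp = OkHttpChannelBuilder.forAddress("localhost", 50051)
        .usePlaintext(true)
        .build();

    // In-process transport: talks to a server running in the same JVM, handy in tests.
    ManagedChannel inProcess = InProcessChannelBuilder.forName("example-server").build();

    netty.shutdown();
    okhttp.shutdown();
    inProcess.shutdown();
  }
}
```

Most applications never touch these builders beyond construction; everything else goes through the Channel and Stub layers described above.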
Tests showing how these layers are composed to execute calls using protobuf messages can be found here: https://github.com/google/grpc-java/tree/master/interop-testing/src/main/java/io/grpc/testing/integration