commit    8c18a0d35589f21678f614361a9ec1ba82794e13
author    Jakob Buchgraber <bucjac@gmail.com>  Fri Sep 09 23:15:18 2016 +0200
committer GitHub <noreply@github.com>          Fri Sep 09 23:15:18 2016 +0200
tree      956e6ce80ac09ec4d58f0c3b3167d2a496720fcb
parent    de9c320196176b7fb9ed63bd3c7b87ff57c62019
netty: use custom http2 headers for decoding.

The DefaultHttp2Headers class is a general-purpose Http2Headers implementation and provides much more functionality than we need in gRPC. In gRPC, when reading headers off the wire, we only inspect a handful of them before converting to Metadata. This commit introduces an Http2Headers implementation that aims for insertion efficiency, a low memory footprint, and fast conversion to Metadata.

- Header names and values are stored in plain byte[].
- Insertion is O(1), while lookup is now O(n).
- Binary header values are base64-decoded as they are inserted.
- The byte[][] returned by namesAndValues() can directly be used to construct a new Metadata object.
- For HTTP/2 request headers, the pseudo headers are no longer carried over to Metadata.

A microbenchmark aiming to replicate the usage of Http2Headers in NettyClientHandler and NettyServerHandler shows decent throughput gains when compared to DefaultHttp2Headers:

```
Benchmark                                             Mode  Cnt     Score    Error  Units
InboundHeadersBenchmark.defaultHeaders_clientHandler  avgt   10   283.830 ±  4.063  ns/op
InboundHeadersBenchmark.defaultHeaders_serverHandler  avgt   10  1179.975 ± 21.810  ns/op
InboundHeadersBenchmark.grpcHeaders_clientHandler     avgt   10   190.108 ±  3.510  ns/op
InboundHeadersBenchmark.grpcHeaders_serverHandler     avgt   10   561.426 ±  9.079  ns/op
```

Additionally, the memory footprint is reduced by more than 50%:

```
gRPC Request Headers:   864 bytes    Netty Request Headers:  1728 bytes
gRPC Response Headers:  216 bytes    Netty Response Headers:  528 bytes
```

Furthermore, this change does most of the gRPC groundwork necessary to be able to cache higher-order objects in HPACK's dynamic table, as discussed in [1].

[1] https://github.com/grpc/grpc-java/issues/2217
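The storage scheme described in the commit message can be sketched in plain Java. The class below is purely illustrative (its name and methods are hypothetical, not grpc-java's actual internal class): names and values live in a flat byte[][], insertion appends in O(1) amortized time, lookup is a linear O(n) scan, and `-bin` suffixed values are base64-decoded on insertion. Note that java.util.Base64 requires Java 8, whereas the library itself targets JDK 6.

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.Base64;

// Illustrative sketch only; not gRPC's actual Http2Headers implementation.
final class FlatHeaders {
    // Alternating name/value entries stored as plain byte[].
    private byte[][] namesAndValues = new byte[8][];
    private int size; // used slots; each header occupies two slots

    // O(1) amortized insertion: append at the end, no hashing or sorting.
    void add(String name, String value) {
        if (size + 2 > namesAndValues.length) {
            namesAndValues = Arrays.copyOf(namesAndValues, namesAndValues.length * 2);
        }
        byte[] v;
        if (name.endsWith("-bin")) {
            // Binary header values are base64-decoded as they are inserted.
            v = Base64.getDecoder().decode(value);
        } else {
            v = value.getBytes(StandardCharsets.US_ASCII);
        }
        namesAndValues[size++] = name.getBytes(StandardCharsets.US_ASCII);
        namesAndValues[size++] = v;
    }

    // O(n) lookup: linear scan over the stored names.
    byte[] get(String name) {
        byte[] key = name.getBytes(StandardCharsets.US_ASCII);
        for (int i = 0; i < size; i += 2) {
            if (Arrays.equals(namesAndValues[i], key)) {
                return namesAndValues[i + 1];
            }
        }
        return null;
    }

    // The flat byte[][] can be handed over directly, e.g. to build Metadata.
    byte[][] namesAndValues() {
        return Arrays.copyOf(namesAndValues, size);
    }
}
```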
gRPC-Java works with JDK 6. TLS usage typically requires Java 8, or the Play Services Dynamic Security Provider on Android. Please see the Security Readme.
Download the JARs. Or for Maven with non-Android, add to your pom.xml:

```xml
<dependency>
  <groupId>io.grpc</groupId>
  <artifactId>grpc-netty</artifactId>
  <version>1.0.0</version>
</dependency>
<dependency>
  <groupId>io.grpc</groupId>
  <artifactId>grpc-protobuf</artifactId>
  <version>1.0.0</version>
</dependency>
<dependency>
  <groupId>io.grpc</groupId>
  <artifactId>grpc-stub</artifactId>
  <version>1.0.0</version>
</dependency>
```
Or for Gradle with non-Android, add to your dependencies:

```gradle
compile 'io.grpc:grpc-netty:1.0.0'
compile 'io.grpc:grpc-protobuf:1.0.0'
compile 'io.grpc:grpc-stub:1.0.0'
```
For Android client, use grpc-okhttp instead of grpc-netty, and grpc-protobuf-lite or grpc-protobuf-nano instead of grpc-protobuf:

```gradle
compile 'io.grpc:grpc-okhttp:1.0.0'
compile 'io.grpc:grpc-protobuf-lite:1.0.0'
compile 'io.grpc:grpc-stub:1.0.0'
```
Development snapshots are available in Sonatype's snapshot repository.
For protobuf-based codegen, you can put your proto files in the src/main/proto and src/test/proto directories along with an appropriate plugin.
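As a sketch of what such a file might contain, a src/main/proto/helloworld.proto could define a service and its messages; the service and message names below are illustrative, loosely following the canonical hello-world example, not something mandated by the build setup:

```proto
syntax = "proto3";

option java_package = "io.grpc.examples.helloworld";

// A unary RPC: one request message in, one response message out.
service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply) {}
}

message HelloRequest {
  string name = 1;
}

message HelloReply {
  string message = 1;
}
```

The codegen plugins configured below then generate both the protobuf message classes and the gRPC Stub classes from this file.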
For protobuf-based codegen integrated with the Maven build system, you can use protobuf-maven-plugin:
```xml
<build>
  <extensions>
    <extension>
      <groupId>kr.motd.maven</groupId>
      <artifactId>os-maven-plugin</artifactId>
      <version>1.4.1.Final</version>
    </extension>
  </extensions>
  <plugins>
    <plugin>
      <groupId>org.xolstice.maven.plugins</groupId>
      <artifactId>protobuf-maven-plugin</artifactId>
      <version>0.5.0</version>
      <configuration>
        <!-- The version of protoc must match protobuf-java. If you don't depend
             on protobuf-java directly, you will be transitively depending on the
             protobuf-java version that grpc depends on. -->
        <protocArtifact>com.google.protobuf:protoc:3.0.0:exe:${os.detected.classifier}</protocArtifact>
        <pluginId>grpc-java</pluginId>
        <pluginArtifact>io.grpc:protoc-gen-grpc-java:1.0.0:exe:${os.detected.classifier}</pluginArtifact>
      </configuration>
      <executions>
        <execution>
          <goals>
            <goal>compile</goal>
            <goal>compile-custom</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```
For protobuf-based codegen integrated with the Gradle build system, you can use protobuf-gradle-plugin:
```gradle
apply plugin: 'java'
apply plugin: 'com.google.protobuf'

buildscript {
  repositories {
    mavenCentral()
  }
  dependencies {
    // ASSUMES GRADLE 2.12 OR HIGHER. Use plugin version 0.7.5 with earlier
    // gradle versions
    classpath 'com.google.protobuf:protobuf-gradle-plugin:0.8.0'
  }
}

protobuf {
  protoc {
    // The version of protoc must match protobuf-java. If you don't depend on
    // protobuf-java directly, you will be transitively depending on the
    // protobuf-java version that grpc depends on.
    artifact = "com.google.protobuf:protoc:3.0.0"
  }
  plugins {
    grpc {
      artifact = 'io.grpc:protoc-gen-grpc-java:1.0.0'
    }
  }
  generateProtoTasks {
    all()*.plugins {
      grpc {}
    }
  }
}
```
If you are making changes to gRPC-Java, see the compiling instructions.
Here's a quick readers' guide to the code to help folks get started. At a high level there are three distinct layers to the library: Stub, Channel & Transport.
The Stub layer is what is exposed to most developers and provides type-safe bindings to whatever datamodel/IDL/interface you are adapting. gRPC comes with a plugin to the protocol-buffers compiler that generates Stub interfaces out of .proto files, but bindings to other datamodels/IDLs should be trivial to add and are welcome.
The Channel layer is an abstraction over Transport handling that is suitable for interception/decoration and exposes more behavior to the application than the Stub layer. It is intended to be easy for application frameworks to use this layer to address cross-cutting concerns such as logging, monitoring, auth etc. Flow-control is also exposed at this layer to allow more sophisticated applications to interact with it directly.
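The interception/decoration idea can be illustrated with a small stand-alone sketch. The types below are simplified stand-ins, not gRPC's actual io.grpc.Channel/ClientInterceptor API (the real interfaces are generic over request and response message types); the point is only the decorator shape that lets a framework address a cross-cutting concern such as logging without touching application code:

```java
// Simplified stand-in for a channel; NOT the real io.grpc.Channel.
interface SketchChannel {
    String call(String method, String request);
}

// A decorating channel that records every outgoing call before delegating,
// illustrating how cross-cutting concerns attach at the Channel layer.
final class LoggingChannel implements SketchChannel {
    private final SketchChannel next;
    private final StringBuilder log = new StringBuilder();

    LoggingChannel(SketchChannel next) {
        this.next = next;
    }

    @Override
    public String call(String method, String request) {
        log.append("-> ").append(method).append('\n');
        return next.call(method, request);
    }

    String log() {
        return log.toString();
    }
}
```

Because the decorator implements the same interface it wraps, interceptors compose: a monitoring channel can wrap a logging channel, which wraps the real transport-backed channel.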
The Transport layer does the heavy lifting of putting and taking bytes off the wire. The interfaces to it are abstract just enough to allow plugging in of different implementations. Transports are modeled as Stream factories. The variation in interface between a server Stream and a client Stream exists to codify their differing semantics for cancellation and error reporting.
Note that the transport layer API is considered internal to gRPC and has weaker API guarantees than the core API under package io.grpc.
gRPC comes with three Transport implementations:

1. The Netty-based transport is the main transport implementation based on Netty. It is for both the client and the server.
2. The OkHttp-based transport is a lightweight transport based on OkHttp. It is mainly for use on Android, and is for client only.
3. The inProcess transport is for when a server is in the same process as the client. It is useful for testing.
Tests showing how these layers are composed to execute calls using protobuf messages can be found at https://github.com/google/grpc-java/tree/master/interop-testing/src/main/java/io/grpc/testing/integration.