/*
 *
 * Copyright 2016 gRPC authors.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 *
 */

/** Implementation of the gRPC LB policy.
 *
 * This policy takes as input a set of resolved addresses {a1..an} for which
 * the LB flag was set (it's the resolver's responsibility to ensure this).
 * That is to say, {a1..an} represent a collection of LB servers.
 *
 * An internal channel (\a glb_lb_policy.lb_channel) is created over {a1..an}.
 * This channel behaves just like a regular channel. In particular, the
 * constructed URI over the addresses a1..an will use the default pick first
 * policy to select from this list of LB server backends.
 *
 * The first time the policy gets a request for a pick, a ping, or to exit the
 * idle state, \a query_for_backends_locked() is called. This function sets up
 * and initiates the internal communication with the LB server. In particular,
 * it's responsible for instantiating the internal *streaming* call to the LB
 * server (whichever address from {a1..an} pick-first chose). This call is
 * serviced by two callbacks, \a lb_on_server_status_received and \a
 * lb_on_response_received. The former will be called when the call to the LB
 * server completes. This can happen if the LB server closes the connection or
 * if this policy itself cancels the call (for example because it's shutting
 * down). If the internal call times out, the usual behavior of pick-first
 * applies, continuing to pick from the list {a1..an}.
 *
 * Upon success, the incoming \a LoadBalancingResponse is processed by \a
 * lb_on_response_received. An invalid response results in the termination of
 * the streaming call. A new streaming call should be created if possible,
 * failing the original call otherwise. For a valid \a LoadBalancingResponse,
 * the server list of actual backends is extracted. A Round Robin policy will
 * be created from this list. There are two possible scenarios:
 *
 * 1. This is the first server list received. There was no previous instance of
 *    the Round Robin policy. \a rr_handover_locked() will instantiate the RR
 *    policy and perform all the pending operations over it.
 * 2. There's already a RR policy instance active. We need to introduce the new
 *    one built from the new serverlist, but taking care not to disrupt the
 *    operations in progress over the old RR instance. This is done by
 *    decreasing the reference count on the old policy. The moment no more
 *    references are held on the old RR policy, it'll be destroyed and \a
 *    on_rr_connectivity_changed notified with a \a GRPC_CHANNEL_SHUTDOWN
 *    state. At this point we can transition to a new RR instance safely, which
 *    is done once again via \a rr_handover_locked().
 *
 *
 * Once a RR policy instance is in place (and getting updated as described),
 * calls for a pick, a ping, or a cancellation will be serviced right away by
 * forwarding them to the RR instance. Any time there's no RR policy available
 * (i.e., right after the creation of the gRPCLB policy, if an empty serverlist
 * is received, etc.), pick/ping requests are added to a list of pending
 * picks/pings to be flushed and serviced as part of \a rr_handover_locked()
 * the moment the RR policy instance becomes available.
 *
 * \see https://github.com/grpc/grpc/blob/master/doc/load-balancing.md for the
 * high level design and details. */

/* TODO(dgq):
 * - Implement LB service forwarding (point 2c. in the doc's diagram).
 */

/* With the addition of a libuv endpoint, sockaddr.h now includes uv.h when
   using that endpoint. Because of various transitive includes in uv.h,
   including windows.h on Windows, uv.h must be included before other system
   headers. Therefore, sockaddr.h must always be included first. */
#include "src/core/lib/iomgr/sockaddr.h"

#include <inttypes.h>
#include <limits.h>
#include <string.h>

#include <grpc/byte_buffer_reader.h>
#include <grpc/grpc.h>
#include <grpc/support/alloc.h>
#include <grpc/support/string_util.h>
#include <grpc/support/time.h>

#include "src/core/ext/filters/client_channel/client_channel.h"
#include "src/core/ext/filters/client_channel/client_channel_factory.h"
#include "src/core/ext/filters/client_channel/lb_policy/grpclb/client_load_reporting_filter.h"
#include "src/core/ext/filters/client_channel/lb_policy/grpclb/grpclb.h"
#include "src/core/ext/filters/client_channel/lb_policy/grpclb/grpclb_channel.h"
#include "src/core/ext/filters/client_channel/lb_policy/grpclb/grpclb_client_stats.h"
#include "src/core/ext/filters/client_channel/lb_policy/grpclb/load_balancer_api.h"
#include "src/core/ext/filters/client_channel/lb_policy_factory.h"
#include "src/core/ext/filters/client_channel/lb_policy_registry.h"
#include "src/core/ext/filters/client_channel/parse_address.h"
#include "src/core/ext/filters/client_channel/resolver/fake/fake_resolver.h"
#include "src/core/ext/filters/client_channel/subchannel_index.h"
#include "src/core/lib/backoff/backoff.h"
#include "src/core/lib/channel/channel_args.h"
#include "src/core/lib/channel/channel_stack.h"
#include "src/core/lib/gpr/host_port.h"
#include "src/core/lib/gpr/string.h"
#include "src/core/lib/gprpp/manual_constructor.h"
#include "src/core/lib/gprpp/ref_counted_ptr.h"
#include "src/core/lib/iomgr/combiner.h"
#include "src/core/lib/iomgr/sockaddr.h"
#include "src/core/lib/iomgr/sockaddr_utils.h"
#include "src/core/lib/iomgr/timer.h"
#include "src/core/lib/slice/slice_hash_table.h"
#include "src/core/lib/slice/slice_internal.h"
#include "src/core/lib/slice/slice_string_helpers.h"
#include "src/core/lib/surface/call.h"
#include "src/core/lib/surface/channel.h"
#include "src/core/lib/surface/channel_init.h"
#include "src/core/lib/transport/static_metadata.h"

#define GRPC_GRPCLB_INITIAL_CONNECT_BACKOFF_SECONDS 1
#define GRPC_GRPCLB_RECONNECT_BACKOFF_MULTIPLIER 1.6
#define GRPC_GRPCLB_RECONNECT_MAX_BACKOFF_SECONDS 120
#define GRPC_GRPCLB_RECONNECT_JITTER 0.2
#define GRPC_GRPCLB_DEFAULT_FALLBACK_TIMEOUT_MS 10000

grpc_core::TraceFlag grpc_lb_glb_trace(false, "glb");

struct glb_lb_policy;

namespace {

/// Linked list of pending pick requests. It stores all information needed to
/// eventually call (Round Robin's) pick() on them. They mainly stay pending
/// waiting for the RR policy to be created.
///
/// Note that when a pick is sent to the RR policy, we inject our own
/// on_complete callback, so that we can intercept the result before
/// invoking the original on_complete callback. This allows us to set the
/// LB token metadata and add client_stats to the call context.
/// See \a pending_pick_complete() for details.
struct pending_pick {
  // Our on_complete closure and the original one.
  grpc_closure on_complete;
  grpc_closure* original_on_complete;
  // The original pick.
  grpc_lb_policy_pick_state* pick;
  // Stats for client-side load reporting. Note that this holds a
  // reference, which must be either passed on via context or unreffed.
  grpc_grpclb_client_stats* client_stats;
  // The LB token associated with the pick. This is set via user_data in
  // the pick.
  grpc_mdelem lb_token;
  // The grpclb instance that created the wrapping. This instance is not owned;
  // reference counts are untouched. It's used only for logging purposes.
  glb_lb_policy* glb_policy;
  // Next pending pick.
  struct pending_pick* next;
};

/// A linked list of pending pings waiting for the RR policy to be created.
struct pending_ping {
  grpc_closure* on_initiate;
  grpc_closure* on_ack;
  struct pending_ping* next;
};

}  // namespace

typedef struct glb_lb_call_data {
  struct glb_lb_policy* glb_policy;
  // TODO(juanlishen): c++ize this struct.
  gpr_refcount refs;

  /** The streaming call to the LB server. Always non-NULL. */
  grpc_call* lb_call;

  /** The initial metadata received from the LB server. */
  grpc_metadata_array lb_initial_metadata_recv;

  /** The message sent to the LB server. It's used to query for backends (the
   * value may vary if the LB server indicates a redirect) or send client load
   * report. */
  grpc_byte_buffer* send_message_payload;
  /** The callback after the initial request is sent. */
  grpc_closure lb_on_sent_initial_request;

  /** The response received from the LB server, if any. */
  grpc_byte_buffer* recv_message_payload;
  /** The callback to process the response received from the LB server. */
  grpc_closure lb_on_response_received;
  bool seen_initial_response;

  /** The callback to process the status received from the LB server, which
   * signals the end of the LB call. */
  grpc_closure lb_on_server_status_received;
  /** The trailing metadata from the LB server. */
  grpc_metadata_array lb_trailing_metadata_recv;
  /** The call status code and details. */
  grpc_status_code lb_call_status;
  grpc_slice lb_call_status_details;

  /** The stats for client-side load reporting associated with this LB call.
   * Created after the first serverlist is received. */
  grpc_grpclb_client_stats* client_stats;
  /** The interval and timer for the next client load report. */
  grpc_millis client_stats_report_interval;
  grpc_timer client_load_report_timer;
  bool client_load_report_timer_callback_pending;
  bool last_client_load_report_counters_were_zero;
  bool client_load_report_is_due;
  /** The closure used for either the load report timer or the callback for
   * completion of sending the load report. */
  grpc_closure client_load_report_closure;
} glb_lb_call_data;

typedef struct glb_lb_policy {
  /** Base policy: must be first. */
  grpc_lb_policy base;

  /** Who the client is trying to communicate with. */
  const char* server_name;

  /** Channel related data that will be propagated to the internal RR policy. */
  grpc_client_channel_factory* cc_factory;
  grpc_channel_args* args;

  /** Timeout in milliseconds before using fallback backend addresses.
   * 0 means not using fallback. */
  int lb_fallback_timeout_ms;

  /** The channel for communicating with the LB server. */
  grpc_channel* lb_channel;

  /** The data associated with the current LB call. It holds a ref to this LB
   * policy. It's initialized every time we query for backends. It's reset to
   * NULL whenever the current LB call is no longer needed (e.g., the LB policy
   * is shutting down, or the LB call has ended). A non-NULL lb_calld always
   * contains a non-NULL lb_call. */
  glb_lb_call_data* lb_calld;

  /** response generator to inject address updates into \a lb_channel */
  grpc_core::RefCountedPtr<grpc_core::FakeResolverResponseGenerator>
      response_generator;

  /** the RR policy to use for the backend servers returned by the LB server */
  grpc_lb_policy* rr_policy;

  /** the connectivity state of the embedded RR policy */
  grpc_connectivity_state rr_connectivity_state;

  bool started_picking;

  /** our connectivity state tracker */
  grpc_connectivity_state_tracker state_tracker;

  /** connectivity state of the LB channel */
  grpc_connectivity_state lb_channel_connectivity;

  /** stores the deserialized response from the LB. May be nullptr until one
   * such response has arrived. */
  grpc_grpclb_serverlist* serverlist;

  /** Index into serverlist for next pick.
   * If the server at this index is a drop, we return a drop.
   * Otherwise, we delegate to the RR policy. */
  size_t serverlist_index;

  /** stores the backend addresses from the resolver */
  grpc_lb_addresses* fallback_backend_addresses;

  /** list of picks that are waiting on RR's policy connectivity */
  pending_pick* pending_picks;

  /** list of pings that are waiting on RR's policy connectivity */
  pending_ping* pending_pings;

  bool shutting_down;

  /** are we already watching the LB channel's connectivity? */
  bool watching_lb_channel;

  /** is the callback associated with \a lb_call_retry_timer pending? */
  bool retry_timer_callback_pending;

  /** is the callback associated with \a lb_fallback_timer pending? */
  bool fallback_timer_callback_pending;

  /** called upon changes to the LB channel's connectivity. */
  grpc_closure lb_channel_on_connectivity_changed;

  /** called upon changes to the RR's connectivity. */
  grpc_closure rr_on_connectivity_changed;

  /** called upon reresolution request from the RR policy. */
  grpc_closure rr_on_reresolution_requested;

  /************************************************************/
  /* client data associated with the LB server communication  */
  /************************************************************/

  /** LB call retry backoff state */
  grpc_core::ManualConstructor<grpc_core::BackOff> lb_call_backoff;

  /** timeout in milliseconds for the LB call. 0 means no deadline. */
  int lb_call_timeout_ms;

  /** LB call retry timer */
  grpc_timer lb_call_retry_timer;
  /** LB call retry timer callback */
  grpc_closure lb_on_call_retry;

  /** LB fallback timer */
  grpc_timer lb_fallback_timer;
  /** LB fallback timer callback */
  grpc_closure lb_on_fallback;
} glb_lb_policy;

static void glb_lb_call_data_ref(glb_lb_call_data* lb_calld,
                                 const char* reason) {
  gpr_ref_non_zero(&lb_calld->refs);
  if (grpc_lb_glb_trace.enabled()) {
    const gpr_atm count = gpr_atm_acq_load(&lb_calld->refs.count);
    gpr_log(GPR_DEBUG, "[%s %p] lb_calld %p REF %lu->%lu (%s)",
            grpc_lb_glb_trace.name(), lb_calld->glb_policy, lb_calld,
            static_cast<unsigned long>(count - 1),
            static_cast<unsigned long>(count), reason);
  }
}

static void glb_lb_call_data_unref(glb_lb_call_data* lb_calld,
                                   const char* reason) {
  const bool done = gpr_unref(&lb_calld->refs);
  if (grpc_lb_glb_trace.enabled()) {
    const gpr_atm count = gpr_atm_acq_load(&lb_calld->refs.count);
    gpr_log(GPR_DEBUG, "[%s %p] lb_calld %p UNREF %lu->%lu (%s)",
            grpc_lb_glb_trace.name(), lb_calld->glb_policy, lb_calld,
            static_cast<unsigned long>(count + 1),
            static_cast<unsigned long>(count), reason);
  }
  if (done) {
    GPR_ASSERT(lb_calld->lb_call != nullptr);
    grpc_call_unref(lb_calld->lb_call);
    grpc_metadata_array_destroy(&lb_calld->lb_initial_metadata_recv);
    grpc_metadata_array_destroy(&lb_calld->lb_trailing_metadata_recv);
    grpc_byte_buffer_destroy(lb_calld->send_message_payload);
    grpc_byte_buffer_destroy(lb_calld->recv_message_payload);
    grpc_slice_unref_internal(lb_calld->lb_call_status_details);
    if (lb_calld->client_stats != nullptr) {
      grpc_grpclb_client_stats_unref(lb_calld->client_stats);
    }
    GRPC_LB_POLICY_UNREF(&lb_calld->glb_policy->base, "lb_calld");
    gpr_free(lb_calld);
  }
}

static void lb_call_data_shutdown(glb_lb_policy* glb_policy) {
  GPR_ASSERT(glb_policy->lb_calld != nullptr);
  GPR_ASSERT(glb_policy->lb_calld->lb_call != nullptr);
  // lb_on_server_status_received will complete the cancellation and clean up.
  grpc_call_cancel(glb_policy->lb_calld->lb_call, nullptr);
  if (glb_policy->lb_calld->client_load_report_timer_callback_pending) {
    grpc_timer_cancel(&glb_policy->lb_calld->client_load_report_timer);
  }
  glb_policy->lb_calld = nullptr;
}

/* add lb_token of selected subchannel (address) to the call's initial
 * metadata */
static grpc_error* initial_metadata_add_lb_token(
    grpc_metadata_batch* initial_metadata,
    grpc_linked_mdelem* lb_token_mdelem_storage, grpc_mdelem lb_token) {
  GPR_ASSERT(lb_token_mdelem_storage != nullptr);
  GPR_ASSERT(!GRPC_MDISNULL(lb_token));
  return grpc_metadata_batch_add_tail(initial_metadata, lb_token_mdelem_storage,
                                      lb_token);
}

static void destroy_client_stats(void* arg) {
  grpc_grpclb_client_stats_unref(static_cast<grpc_grpclb_client_stats*>(arg));
}

static void pending_pick_set_metadata_and_context(pending_pick* pp) {
  /* if connected_subchannel is nullptr, no pick has been made by the RR
   * policy (e.g., all addresses failed to connect). There won't be any
   * user_data/token available */
  if (pp->pick->connected_subchannel != nullptr) {
    if (!GRPC_MDISNULL(pp->lb_token)) {
      initial_metadata_add_lb_token(pp->pick->initial_metadata,
                                    &pp->pick->lb_token_mdelem_storage,
                                    GRPC_MDELEM_REF(pp->lb_token));
    } else {
      gpr_log(GPR_ERROR,
              "[grpclb %p] No LB token for connected subchannel pick %p",
              pp->glb_policy, pp->pick);
      abort();
    }
    // Pass on client stats via context. Passes ownership of the reference.
    if (pp->client_stats != nullptr) {
      pp->pick->subchannel_call_context[GRPC_GRPCLB_CLIENT_STATS].value =
          pp->client_stats;
      pp->pick->subchannel_call_context[GRPC_GRPCLB_CLIENT_STATS].destroy =
          destroy_client_stats;
    }
  } else {
    if (pp->client_stats != nullptr) {
      grpc_grpclb_client_stats_unref(pp->client_stats);
    }
  }
}

/* The \a on_complete closure passed as part of the pick requires keeping a
 * reference to its associated round robin instance. We wrap this closure in
 * order to unref the round robin instance upon its invocation */
static void pending_pick_complete(void* arg, grpc_error* error) {
  pending_pick* pp = static_cast<pending_pick*>(arg);
  pending_pick_set_metadata_and_context(pp);
  GRPC_CLOSURE_SCHED(pp->original_on_complete, GRPC_ERROR_REF(error));
  gpr_free(pp);
}

static pending_pick* pending_pick_create(glb_lb_policy* glb_policy,
                                         grpc_lb_policy_pick_state* pick) {
  pending_pick* pp = static_cast<pending_pick*>(gpr_zalloc(sizeof(*pp)));
  pp->pick = pick;
  pp->glb_policy = glb_policy;
  GRPC_CLOSURE_INIT(&pp->on_complete, pending_pick_complete, pp,
                    grpc_schedule_on_exec_ctx);
  pp->original_on_complete = pick->on_complete;
  pp->pick->on_complete = &pp->on_complete;
  return pp;
}

static void pending_pick_add(pending_pick** root, pending_pick* new_pp) {
  new_pp->next = *root;
  *root = new_pp;
}

static void pending_ping_add(pending_ping** root, grpc_closure* on_initiate,
                             grpc_closure* on_ack) {
  pending_ping* pping = static_cast<pending_ping*>(gpr_zalloc(sizeof(*pping)));
  pping->on_initiate = on_initiate;
  pping->on_ack = on_ack;
  pping->next = *root;
  *root = pping;
}

static bool is_server_valid(const grpc_grpclb_server* server, size_t idx,
                            bool log) {
  if (server->drop) return false;
  const grpc_grpclb_ip_address* ip = &server->ip_address;
  if (server->port >> 16 != 0) {
    if (log) {
      gpr_log(GPR_ERROR,
              "Invalid port '%d' at index %lu of serverlist. Ignoring.",
              server->port, static_cast<unsigned long>(idx));
    }
    return false;
  }
  if (ip->size != 4 && ip->size != 16) {
    if (log) {
      gpr_log(GPR_ERROR,
              "Expected IP to be 4 or 16 bytes, got %d at index %lu of "
              "serverlist. Ignoring",
              ip->size, static_cast<unsigned long>(idx));
    }
    return false;
  }
  return true;
}

/* vtable for LB tokens in grpc_lb_addresses. */
static void* lb_token_copy(void* token) {
  return token == nullptr
             ? nullptr
             : (void*)GRPC_MDELEM_REF(grpc_mdelem{(uintptr_t)token}).payload;
}
static void lb_token_destroy(void* token) {
  if (token != nullptr) {
    GRPC_MDELEM_UNREF(grpc_mdelem{(uintptr_t)token});
  }
}
static int lb_token_cmp(void* token1, void* token2) {
  if (token1 > token2) return 1;
  if (token1 < token2) return -1;
  return 0;
}
static const grpc_lb_user_data_vtable lb_token_vtable = {
    lb_token_copy, lb_token_destroy, lb_token_cmp};

static void parse_server(const grpc_grpclb_server* server,
                         grpc_resolved_address* addr) {
  memset(addr, 0, sizeof(*addr));
  if (server->drop) return;
  const uint16_t netorder_port = htons(static_cast<uint16_t>(server->port));
  /* the addresses are given in binary format (an in(6)_addr struct) in
   * server->ip_address.bytes. */
  const grpc_grpclb_ip_address* ip = &server->ip_address;
  if (ip->size == 4) {
    addr->len = sizeof(struct sockaddr_in);
    struct sockaddr_in* addr4 =
        reinterpret_cast<struct sockaddr_in*>(&addr->addr);
    addr4->sin_family = AF_INET;
    memcpy(&addr4->sin_addr, ip->bytes, ip->size);
    addr4->sin_port = netorder_port;
  } else if (ip->size == 16) {
    addr->len = sizeof(struct sockaddr_in6);
    struct sockaddr_in6* addr6 =
        reinterpret_cast<struct sockaddr_in6*>(&addr->addr);
    addr6->sin6_family = AF_INET6;
    memcpy(&addr6->sin6_addr, ip->bytes, ip->size);
    addr6->sin6_port = netorder_port;
  }
}

/* Returns addresses extracted from \a serverlist. */
static grpc_lb_addresses* process_serverlist_locked(
    const grpc_grpclb_serverlist* serverlist) {
  size_t num_valid = 0;
  /* first pass: count how many are valid in order to allocate the necessary
   * memory in a single block */
  for (size_t i = 0; i < serverlist->num_servers; ++i) {
    if (is_server_valid(serverlist->servers[i], i, true)) ++num_valid;
  }
  grpc_lb_addresses* lb_addresses =
      grpc_lb_addresses_create(num_valid, &lb_token_vtable);
  /* second pass: actually populate the addresses and LB tokens (aka user data
   * to the outside world) to be read by the RR policy during its creation.
   * Given that the validity tests are very cheap, they are performed again
   * instead of marking the valid ones during the first pass, as that would
   * incur an extra allocation due to the arbitrary number of servers */
  size_t addr_idx = 0;
  for (size_t sl_idx = 0; sl_idx < serverlist->num_servers; ++sl_idx) {
    const grpc_grpclb_server* server = serverlist->servers[sl_idx];
    if (!is_server_valid(serverlist->servers[sl_idx], sl_idx, false)) continue;
    GPR_ASSERT(addr_idx < num_valid);
    /* address processing */
    grpc_resolved_address addr;
    parse_server(server, &addr);
    /* lb token processing */
    void* user_data;
    if (server->has_load_balance_token) {
      const size_t lb_token_max_length =
          GPR_ARRAY_SIZE(server->load_balance_token);
      const size_t lb_token_length =
          strnlen(server->load_balance_token, lb_token_max_length);
      grpc_slice lb_token_mdstr = grpc_slice_from_copied_buffer(
          server->load_balance_token, lb_token_length);
      user_data =
          (void*)grpc_mdelem_from_slices(GRPC_MDSTR_LB_TOKEN, lb_token_mdstr)
              .payload;
    } else {
      char* uri = grpc_sockaddr_to_uri(&addr);
      gpr_log(GPR_INFO,
              "Missing LB token for backend address '%s'. The empty token will "
              "be used instead",
              uri);
      gpr_free(uri);
      user_data = (void*)GRPC_MDELEM_LB_TOKEN_EMPTY.payload;
    }
    grpc_lb_addresses_set_address(lb_addresses, addr_idx, &addr.addr, addr.len,
                                  false /* is_balancer */,
                                  nullptr /* balancer_name */, user_data);
    ++addr_idx;
  }
  GPR_ASSERT(addr_idx == num_valid);
  return lb_addresses;
}

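The two-pass shape above (count the valid entries, allocate exactly once, then re-test and populate) trades a second cheap validity scan for a single sized allocation. A minimal sketch of the pattern over plain ints; the `is_valid` predicate is a hypothetical stand-in for `is_server_valid`:

```cpp
#include <cassert>
#include <cstdlib>

// Hypothetical validity predicate standing in for is_server_valid().
static bool is_valid(int v) { return v > 0; }

// Copy the valid entries of 'in' into a single exactly-sized allocation.
// Caller owns (and must free()) the returned buffer.
int* copy_valid(const int* in, size_t n, size_t* out_len) {
  size_t num_valid = 0;  // first pass: count, so we allocate once
  for (size_t i = 0; i < n; ++i) {
    if (is_valid(in[i])) ++num_valid;
  }
  int* out = static_cast<int*>(malloc(num_valid * sizeof(int)));
  size_t idx = 0;  // second pass: re-test (cheap) and populate
  for (size_t i = 0; i < n; ++i) {
    if (!is_valid(in[i])) continue;
    out[idx++] = in[i];
  }
  assert(idx == num_valid);  // same invariant the function above asserts
  *out_len = num_valid;
  return out;
}
```

The closing assertion mirrors `GPR_ASSERT(addr_idx == num_valid)`: both passes must agree on what counts as valid.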
/* Returns the backend addresses extracted from the given addresses */
static grpc_lb_addresses* extract_backend_addresses_locked(
    const grpc_lb_addresses* addresses) {
  /* first pass: count the number of backend addresses */
  size_t num_backends = 0;
  for (size_t i = 0; i < addresses->num_addresses; ++i) {
    if (!addresses->addresses[i].is_balancer) {
      ++num_backends;
    }
  }
  /* second pass: actually populate the addresses and (empty) LB tokens */
  grpc_lb_addresses* backend_addresses =
      grpc_lb_addresses_create(num_backends, &lb_token_vtable);
  size_t num_copied = 0;
  for (size_t i = 0; i < addresses->num_addresses; ++i) {
    if (addresses->addresses[i].is_balancer) continue;
    const grpc_resolved_address* addr = &addresses->addresses[i].address;
    grpc_lb_addresses_set_address(backend_addresses, num_copied, &addr->addr,
                                  addr->len, false /* is_balancer */,
                                  nullptr /* balancer_name */,
                                  (void*)GRPC_MDELEM_LB_TOKEN_EMPTY.payload);
    ++num_copied;
  }
  return backend_addresses;
}

static void update_lb_connectivity_status_locked(glb_lb_policy* glb_policy,
                                                 grpc_error* rr_state_error) {
  const grpc_connectivity_state curr_glb_state =
      grpc_connectivity_state_check(&glb_policy->state_tracker);
  /* The new connectivity status is a function of the previous one and the new
   * input coming from the status of the RR policy.
   *
   *  current state (grpclb's)
   *  |
   *  v  || I  |  C  |  R  |  TF  |  SD  |  <- new state (RR's)
   *  ===++====+=====+=====+======+======+
   *   I || I  |  C  |  R  | [I]  | [I]  |
   *  ---++----+-----+-----+------+------+
   *   C || I  |  C  |  R  | [C]  | [C]  |
   *  ---++----+-----+-----+------+------+
   *   R || I  |  C  |  R  | [R]  | [R]  |
   *  ---++----+-----+-----+------+------+
   *  TF || I  |  C  |  R  | [TF] | [TF] |
   *  ---++----+-----+-----+------+------+
   *  SD || NA | NA  | NA  |  NA  |  NA  | (*)
   *  ---++----+-----+-----+------+------+
   *
   * A [STATE] indicates that the old RR policy is kept. In those cases, STATE
   * is the current state of grpclb, which is left untouched.
   *
   * In summary, if the new state is TRANSIENT_FAILURE or SHUTDOWN, stick to
   * the previous RR instance.
   *
   * Note that the status is never updated to SHUTDOWN as a result of calling
   * this function. Only glb_shutdown() has the power to set that state.
   *
   * (*) This function must not be called while shutting down. */
  GPR_ASSERT(curr_glb_state != GRPC_CHANNEL_SHUTDOWN);
  switch (glb_policy->rr_connectivity_state) {
    case GRPC_CHANNEL_TRANSIENT_FAILURE:
    case GRPC_CHANNEL_SHUTDOWN:
      GPR_ASSERT(rr_state_error != GRPC_ERROR_NONE);
      break;
    case GRPC_CHANNEL_IDLE:
    case GRPC_CHANNEL_CONNECTING:
    case GRPC_CHANNEL_READY:
      GPR_ASSERT(rr_state_error == GRPC_ERROR_NONE);
  }
  if (grpc_lb_glb_trace.enabled()) {
    gpr_log(
        GPR_INFO,
        "[grpclb %p] Setting grpclb's state to %s from new RR policy %p state.",
        glb_policy,
        grpc_connectivity_state_name(glb_policy->rr_connectivity_state),
        glb_policy->rr_policy);
  }
  grpc_connectivity_state_set(&glb_policy->state_tracker,
                              glb_policy->rr_connectivity_state, rr_state_error,
                              "update_lb_connectivity_status_locked");
}

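The table in the comment reduces to one rule: for a new RR state of IDLE, CONNECTING, or READY, grpclb adopts the RR state; for TRANSIENT_FAILURE or SHUTDOWN, the bracketed cells say the current grpclb state is kept. A sketch encoding the table's rule as a pure function — note this models the documented table, not necessarily every detail of the function above, and the `State` enum is a simplified stand-in for `grpc_connectivity_state`:

```cpp
#include <cassert>

// Simplified stand-in for grpc_connectivity_state.
enum State { IDLE, CONNECTING, READY, TRANSIENT_FAILURE, SHUTDOWN };

// grpclb's next published state, given its current state and the state
// just reported by the wrapped round_robin policy.
State next_glb_state(State current, State rr_state) {
  // The bracketed [STATE] cells: on RR failure or shutdown, keep
  // whatever grpclb was already reporting.
  if (rr_state == TRANSIENT_FAILURE || rr_state == SHUTDOWN) return current;
  return rr_state;  // otherwise grpclb simply mirrors RR
}
```

Encoding the rule this way makes the invariant obvious: a failing RR instance can never drag grpclb's advertised state backwards on its own.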
/* Perform a pick over \a glb_policy->rr_policy. Given that a pick can return
 * immediately (ignoring its completion callback), we need to perform the
 * cleanups this callback would otherwise be responsible for.
 * If \a force_async is true, then we will manually schedule the
 * completion callback even if the pick is available immediately. */
static bool pick_from_internal_rr_locked(glb_lb_policy* glb_policy,
                                         bool force_async, pending_pick* pp) {
  // Check for drops if we are not using fallback backend addresses.
  if (glb_policy->serverlist != nullptr) {
    // Look at the index into the serverlist to see if we should drop this call.
    grpc_grpclb_server* server =
        glb_policy->serverlist->servers[glb_policy->serverlist_index++];
    if (glb_policy->serverlist_index == glb_policy->serverlist->num_servers) {
      glb_policy->serverlist_index = 0;  // Wrap-around.
    }
    if (server->drop) {
      // Update client load reporting stats to indicate the number of
      // dropped calls. Note that we have to do this here instead of in
      // the client_load_reporting filter, because we do not create a
      // subchannel call (and therefore no client_load_reporting filter)
      // for dropped calls.
      if (glb_policy->lb_calld != nullptr &&
          glb_policy->lb_calld->client_stats != nullptr) {
        grpc_grpclb_client_stats_add_call_dropped_locked(
            server->load_balance_token, glb_policy->lb_calld->client_stats);
      }
      if (force_async) {
        GRPC_CLOSURE_SCHED(pp->original_on_complete, GRPC_ERROR_NONE);
        gpr_free(pp);
        return false;
      }
      gpr_free(pp);
      return true;
    }
  }
  // Set client_stats and user_data.
  if (glb_policy->lb_calld != nullptr &&
      glb_policy->lb_calld->client_stats != nullptr) {
    pp->client_stats =
        grpc_grpclb_client_stats_ref(glb_policy->lb_calld->client_stats);
  }
  GPR_ASSERT(pp->pick->user_data == nullptr);
  pp->pick->user_data = reinterpret_cast<void**>(&pp->lb_token);
  // Pick via the RR policy.
  bool pick_done = grpc_lb_policy_pick_locked(glb_policy->rr_policy, pp->pick);
  if (pick_done) {
    pending_pick_set_metadata_and_context(pp);
    if (force_async) {
      GRPC_CLOSURE_SCHED(pp->original_on_complete, GRPC_ERROR_NONE);
      pick_done = false;
    }
    gpr_free(pp);
  }
  /* else, the pending pick will be registered and taken care of by the
   * pending pick list inside the RR policy (glb_policy->rr_policy).
   * Eventually, wrapped_on_complete will be called, which will, among other
   * things, add the LB token to the call's initial metadata */
  return pick_done;
}

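The drop check above walks the serverlist with a post-incremented, wrap-around index, so drop entries are consumed in the same round-robin order as picks. The cursor logic in isolation, with a hypothetical `ServerCursor` wrapper (gRPC keeps the index as a bare field on the policy):

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical cursor over a fixed-size serverlist; mirrors the
// serverlist_index++ / wrap-to-zero logic above.
struct ServerCursor {
  size_t index = 0;
  size_t num_servers;
  explicit ServerCursor(size_t n) : num_servers(n) {}
  // Return the current slot and advance, wrapping after the last server.
  size_t next() {
    size_t i = index++;
    if (index == num_servers) index = 0;  // Wrap-around.
    return i;
  }
};
```

Because the wrap happens after the post-increment, every slot (including drop entries) is visited exactly once per revolution.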
static grpc_lb_policy_args* lb_policy_args_create(glb_lb_policy* glb_policy) {
  grpc_lb_addresses* addresses;
  if (glb_policy->serverlist != nullptr) {
    GPR_ASSERT(glb_policy->serverlist->num_servers > 0);
    addresses = process_serverlist_locked(glb_policy->serverlist);
  } else {
    // If rr_handover_locked() is invoked when we haven't received any
    // serverlist from the balancer, we use the fallback backends returned by
    // the resolver. Note that the fallback backend list may be empty, in which
    // case the new round_robin policy will keep the requested picks pending.
    GPR_ASSERT(glb_policy->fallback_backend_addresses != nullptr);
    addresses = grpc_lb_addresses_copy(glb_policy->fallback_backend_addresses);
  }
  GPR_ASSERT(addresses != nullptr);
  grpc_lb_policy_args* args =
      static_cast<grpc_lb_policy_args*>(gpr_zalloc(sizeof(*args)));
  args->client_channel_factory = glb_policy->cc_factory;
  args->combiner = glb_policy->base.combiner;
  // Replace the LB addresses in the channel args that we pass down to
  // the subchannel.
  static const char* keys_to_remove[] = {GRPC_ARG_LB_ADDRESSES};
  const grpc_arg arg = grpc_lb_addresses_create_channel_arg(addresses);
  args->args = grpc_channel_args_copy_and_add_and_remove(
      glb_policy->args, keys_to_remove, GPR_ARRAY_SIZE(keys_to_remove), &arg,
      1);
  grpc_lb_addresses_destroy(addresses);
  return args;
}

static void lb_policy_args_destroy(grpc_lb_policy_args* args) {
  grpc_channel_args_destroy(args->args);
  gpr_free(args);
}

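lb_policy_args_create() swaps the `GRPC_ARG_LB_ADDRESSES` entry in the inherited channel args via copy-and-add-and-remove, leaving the original args untouched. The same replace-by-key idea, sketched over a `std::map` for clarity (`grpc_channel_args` itself is an immutable flat array, not a map, so this is only the shape of the operation):

```cpp
#include <cassert>
#include <map>
#include <string>

// Return a copy of 'args' with one key removed and one entry added,
// mirroring grpc_channel_args_copy_and_add_and_remove(): the input is
// never mutated, because channel args are immutable once created.
std::map<std::string, std::string> copy_add_remove(
    const std::map<std::string, std::string>& args,
    const std::string& key_to_remove, const std::string& new_key,
    const std::string& new_value) {
  std::map<std::string, std::string> out = args;  // copy
  out.erase(key_to_remove);                       // remove
  out[new_key] = new_value;                       // add
  return out;
}
```

Immutability is what makes the copy step mandatory: several channels may share the original args concurrently.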
static void rr_on_reresolution_requested_locked(void* arg, grpc_error* error) {
  glb_lb_policy* glb_policy = (glb_lb_policy*)arg;
  if (glb_policy->shutting_down || error != GRPC_ERROR_NONE) {
    GRPC_LB_POLICY_UNREF(&glb_policy->base,
                         "rr_on_reresolution_requested_locked");
    return;
  }
  if (grpc_lb_glb_trace.enabled()) {
    gpr_log(
        GPR_DEBUG,
        "[grpclb %p] Re-resolution requested from the internal RR policy (%p).",
        glb_policy, glb_policy->rr_policy);
  }
  // If we are talking to a balancer, we expect to get updated addresses from
  // the balancer, so we can ignore the re-resolution request from the RR
  // policy. Otherwise, handle the re-resolution request using glb's original
  // re-resolution closure.
  if (glb_policy->lb_calld == nullptr ||
      !glb_policy->lb_calld->seen_initial_response) {
    grpc_lb_policy_try_reresolve(&glb_policy->base, &grpc_lb_glb_trace,
                                 GRPC_ERROR_NONE);
  }
  // Give back the wrapper closure to the RR policy.
  grpc_lb_policy_set_reresolve_closure_locked(
      glb_policy->rr_policy, &glb_policy->rr_on_reresolution_requested);
}

static void create_rr_locked(glb_lb_policy* glb_policy,
                             grpc_lb_policy_args* args) {
  GPR_ASSERT(glb_policy->rr_policy == nullptr);
  grpc_lb_policy* new_rr_policy = grpc_lb_policy_create("round_robin", args);
  if (new_rr_policy == nullptr) {
    gpr_log(GPR_ERROR,
            "[grpclb %p] Failure creating a RoundRobin policy for serverlist "
            "update with %" PRIuPTR
            " entries. The previous RR instance (%p), if any, will continue to "
            "be used. Future updates from the LB will attempt to create new "
            "instances.",
            glb_policy, glb_policy->serverlist->num_servers,
            glb_policy->rr_policy);
    return;
  }
  GRPC_LB_POLICY_REF(&glb_policy->base, "rr_on_reresolution_requested_locked");
  grpc_lb_policy_set_reresolve_closure_locked(
      new_rr_policy, &glb_policy->rr_on_reresolution_requested);
  glb_policy->rr_policy = new_rr_policy;
  grpc_error* rr_state_error = nullptr;
  glb_policy->rr_connectivity_state = grpc_lb_policy_check_connectivity_locked(
      glb_policy->rr_policy, &rr_state_error);
  /* Connectivity state is a function of the RR policy updated/created */
  update_lb_connectivity_status_locked(glb_policy, rr_state_error);
  /* Add the gRPC LB's interested_parties pollset_set to that of the newly
   * created RR policy. This will make the RR policy progress upon activity on
   * gRPC LB, which in turn is tied to the application's call */
  grpc_pollset_set_add_pollset_set(glb_policy->rr_policy->interested_parties,
                                   glb_policy->base.interested_parties);
  /* Subscribe to changes to the connectivity of the new RR */
  GRPC_LB_POLICY_REF(&glb_policy->base, "rr_on_connectivity_changed_locked");
  grpc_lb_policy_notify_on_state_change_locked(
      glb_policy->rr_policy, &glb_policy->rr_connectivity_state,
      &glb_policy->rr_on_connectivity_changed);
  grpc_lb_policy_exit_idle_locked(glb_policy->rr_policy);
  // Send pending picks to RR policy.
  pending_pick* pp;
  while ((pp = glb_policy->pending_picks)) {
    glb_policy->pending_picks = pp->next;
    if (grpc_lb_glb_trace.enabled()) {
      gpr_log(GPR_INFO,
              "[grpclb %p] Pending pick about to (async) PICK from RR %p",
              glb_policy, glb_policy->rr_policy);
    }
    pick_from_internal_rr_locked(glb_policy, true /* force_async */, pp);
  }
  // Send pending pings to RR policy.
  pending_ping* pping;
  while ((pping = glb_policy->pending_pings)) {
    glb_policy->pending_pings = pping->next;
    if (grpc_lb_glb_trace.enabled()) {
      gpr_log(GPR_INFO, "[grpclb %p] Pending ping about to PING from RR %p",
              glb_policy, glb_policy->rr_policy);
    }
    grpc_lb_policy_ping_one_locked(glb_policy->rr_policy, pping->on_initiate,
                                   pping->on_ack);
    gpr_free(pping);
  }
}

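Pending picks and pings accumulate in singly linked lists while no usable RR policy exists; the two `while` loops above drain them head-first, unlinking each node before handing it off. The drain pattern in isolation, with a hypothetical `pending` node type standing in for `pending_pick`/`pending_ping`:

```cpp
#include <cassert>

// Hypothetical pending-work node, shaped like pending_pick / pending_ping.
struct pending {
  int id;
  pending* next;
};

// Pop every node off the list head (as the while loops above do),
// returning how many were handed off.
int drain(pending** head) {
  int handled = 0;
  pending* p;
  while ((p = *head) != nullptr) {
    *head = p->next;  // unlink first, so re-entrant additions are safe
    ++handled;        // stand-in for pick_from_internal_rr_locked() etc.
    delete p;
  }
  return handled;
}
```

Unlinking before dispatch matters: the handed-off work may itself touch the policy, and the list head must never point at a node that is already being processed.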
/* glb_policy->rr_policy may be nullptr (initial handover) */
static void rr_handover_locked(glb_lb_policy* glb_policy) {
  if (glb_policy->shutting_down) return;
  grpc_lb_policy_args* args = lb_policy_args_create(glb_policy);
  GPR_ASSERT(args != nullptr);
  if (glb_policy->rr_policy != nullptr) {
    if (grpc_lb_glb_trace.enabled()) {
      gpr_log(GPR_DEBUG, "[grpclb %p] Updating RR policy %p", glb_policy,
              glb_policy->rr_policy);
    }
    grpc_lb_policy_update_locked(glb_policy->rr_policy, args);
  } else {
    create_rr_locked(glb_policy, args);
    if (grpc_lb_glb_trace.enabled()) {
      gpr_log(GPR_DEBUG, "[grpclb %p] Created new RR policy %p", glb_policy,
              glb_policy->rr_policy);
    }
  }
  lb_policy_args_destroy(args);
}

static void rr_on_connectivity_changed_locked(void* arg, grpc_error* error) {
  glb_lb_policy* glb_policy = (glb_lb_policy*)arg;
  if (glb_policy->shutting_down) {
    GRPC_LB_POLICY_UNREF(&glb_policy->base,
                         "rr_on_connectivity_changed_locked");
    return;
  }
  update_lb_connectivity_status_locked(glb_policy, GRPC_ERROR_REF(error));
  // Resubscribe. Reuse the "rr_on_connectivity_changed_locked" ref.
  grpc_lb_policy_notify_on_state_change_locked(
      glb_policy->rr_policy, &glb_policy->rr_connectivity_state,
      &glb_policy->rr_on_connectivity_changed);
}

static void destroy_balancer_name(void* balancer_name) {
  gpr_free(balancer_name);
}

static grpc_slice_hash_table_entry targets_info_entry_create(
    const char* address, const char* balancer_name) {
  grpc_slice_hash_table_entry entry;
  entry.key = grpc_slice_from_copied_string(address);
  entry.value = gpr_strdup(balancer_name);
  return entry;
}

static int balancer_name_cmp_fn(void* a, void* b) {
  const char* a_str = static_cast<const char*>(a);
  const char* b_str = static_cast<const char*>(b);
  return strcmp(a_str, b_str);
}

/* Returns the channel args for the LB channel, used to create a bidirectional
 * stream for the reception of load balancing updates.
 *
 * Inputs:
 *   - \a addresses: corresponding to the balancers.
 *   - \a response_generator: in order to propagate updates from the resolver
 *     above the grpclb policy.
 *   - \a args: other args inherited from the grpclb policy. */
static grpc_channel_args* build_lb_channel_args(
    const grpc_lb_addresses* addresses,
    grpc_core::FakeResolverResponseGenerator* response_generator,
    const grpc_channel_args* args) {
  size_t num_grpclb_addrs = 0;
  for (size_t i = 0; i < addresses->num_addresses; ++i) {
    if (addresses->addresses[i].is_balancer) ++num_grpclb_addrs;
  }
  /* All input addresses come from a resolver that claims they are LB services.
   * It's the resolver's responsibility to make sure this policy is only
   * instantiated and used in that case. Otherwise, something has gone wrong. */
  GPR_ASSERT(num_grpclb_addrs > 0);
  grpc_lb_addresses* lb_addresses =
      grpc_lb_addresses_create(num_grpclb_addrs, nullptr);
  grpc_slice_hash_table_entry* targets_info_entries =
      static_cast<grpc_slice_hash_table_entry*>(
          gpr_zalloc(sizeof(*targets_info_entries) * num_grpclb_addrs));

  size_t lb_addresses_idx = 0;
  for (size_t i = 0; i < addresses->num_addresses; ++i) {
    if (!addresses->addresses[i].is_balancer) continue;
    if (addresses->addresses[i].user_data != nullptr) {
      gpr_log(GPR_ERROR,
              "This LB policy doesn't support user data. It will be ignored");
    }
    char* addr_str;
    GPR_ASSERT(grpc_sockaddr_to_string(
                   &addr_str, &addresses->addresses[i].address, true) > 0);
    targets_info_entries[lb_addresses_idx] = targets_info_entry_create(
        addr_str, addresses->addresses[i].balancer_name);
    gpr_free(addr_str);

    grpc_lb_addresses_set_address(
        lb_addresses, lb_addresses_idx++, addresses->addresses[i].address.addr,
        addresses->addresses[i].address.len, false /* is balancer */,
        addresses->addresses[i].balancer_name, nullptr /* user data */);
  }
  GPR_ASSERT(num_grpclb_addrs == lb_addresses_idx);
  grpc_slice_hash_table* targets_info =
      grpc_slice_hash_table_create(num_grpclb_addrs, targets_info_entries,
                                   destroy_balancer_name, balancer_name_cmp_fn);
  gpr_free(targets_info_entries);

  grpc_channel_args* lb_channel_args =
      grpc_lb_policy_grpclb_build_lb_channel_args(targets_info,
                                                  response_generator, args);

  grpc_arg lb_channel_addresses_arg =
      grpc_lb_addresses_create_channel_arg(lb_addresses);

  grpc_channel_args* result = grpc_channel_args_copy_and_add(
      lb_channel_args, &lb_channel_addresses_arg, 1);
  grpc_slice_hash_table_unref(targets_info);
  grpc_channel_args_destroy(lb_channel_args);
  grpc_lb_addresses_destroy(lb_addresses);
  return result;
}

static void glb_destroy(grpc_lb_policy* pol) {
  glb_lb_policy* glb_policy = reinterpret_cast<glb_lb_policy*>(pol);
  GPR_ASSERT(glb_policy->pending_picks == nullptr);
  GPR_ASSERT(glb_policy->pending_pings == nullptr);
  gpr_free((void*)glb_policy->server_name);
  grpc_channel_args_destroy(glb_policy->args);
  grpc_connectivity_state_destroy(&glb_policy->state_tracker);
  if (glb_policy->serverlist != nullptr) {
    grpc_grpclb_destroy_serverlist(glb_policy->serverlist);
  }
  if (glb_policy->fallback_backend_addresses != nullptr) {
    grpc_lb_addresses_destroy(glb_policy->fallback_backend_addresses);
  }
  // TODO(roth): Remove this once the LB policy becomes a C++ object.
  glb_policy->response_generator.reset();
  grpc_subchannel_index_unref();
  gpr_free(glb_policy);
}

static void glb_shutdown_locked(grpc_lb_policy* pol,
                                grpc_lb_policy* new_policy) {
  glb_lb_policy* glb_policy = reinterpret_cast<glb_lb_policy*>(pol);
  grpc_error* error = GRPC_ERROR_CREATE_FROM_STATIC_STRING("Channel shutdown");
  glb_policy->shutting_down = true;
  if (glb_policy->lb_calld != nullptr) {
    lb_call_data_shutdown(glb_policy);
  }
  if (glb_policy->retry_timer_callback_pending) {
    grpc_timer_cancel(&glb_policy->lb_call_retry_timer);
  }
  if (glb_policy->fallback_timer_callback_pending) {
    grpc_timer_cancel(&glb_policy->lb_fallback_timer);
  }
  if (glb_policy->rr_policy != nullptr) {
    grpc_lb_policy_shutdown_locked(glb_policy->rr_policy, nullptr);
    GRPC_LB_POLICY_UNREF(glb_policy->rr_policy, "glb_shutdown");
  }
  // We destroy the LB channel here because
  // glb_lb_channel_on_connectivity_changed_cb needs a valid glb_policy
  // instance. Destroying the lb channel in glb_destroy would likely result in
  // a callback invocation without a valid glb_policy arg.
  if (glb_policy->lb_channel != nullptr) {
    grpc_channel_destroy(glb_policy->lb_channel);
    glb_policy->lb_channel = nullptr;
  }
  grpc_connectivity_state_set(&glb_policy->state_tracker, GRPC_CHANNEL_SHUTDOWN,
                              GRPC_ERROR_REF(error), "glb_shutdown");
  grpc_lb_policy_try_reresolve(pol, &grpc_lb_glb_trace, GRPC_ERROR_CANCELLED);
  // Clear pending picks.
  pending_pick* pp = glb_policy->pending_picks;
  glb_policy->pending_picks = nullptr;
  while (pp != nullptr) {
    pending_pick* next = pp->next;
    if (new_policy != nullptr) {
      // Hand pick over to new policy.
      if (pp->client_stats != nullptr) {
        grpc_grpclb_client_stats_unref(pp->client_stats);
      }
      pp->pick->on_complete = pp->original_on_complete;
      if (grpc_lb_policy_pick_locked(new_policy, pp->pick)) {
        // Synchronous return; schedule callback.
        GRPC_CLOSURE_SCHED(pp->pick->on_complete, GRPC_ERROR_NONE);
      }
      gpr_free(pp);
    } else {
      pp->pick->connected_subchannel.reset();
      GRPC_CLOSURE_SCHED(&pp->on_complete, GRPC_ERROR_REF(error));
    }
    pp = next;
  }
  // Clear pending pings.
  pending_ping* pping = glb_policy->pending_pings;
  glb_policy->pending_pings = nullptr;
  while (pping != nullptr) {
    pending_ping* next = pping->next;
    GRPC_CLOSURE_SCHED(pping->on_initiate, GRPC_ERROR_REF(error));
    GRPC_CLOSURE_SCHED(pping->on_ack, GRPC_ERROR_REF(error));
    gpr_free(pping);
    pping = next;
  }
  GRPC_ERROR_UNREF(error);
}

// Cancel a specific pending pick.
//
// A grpclb pick progresses as follows:
// - If there's a Round Robin policy (glb_policy->rr_policy) available, it'll be
//   handed over to the RR policy (in create_rr_locked()). From that point
//   onwards, it'll be RR's responsibility. For cancellations, that implies the
//   pick also needs to be cancelled by the RR instance.
// - Otherwise, without an RR instance, picks stay pending at this policy's
//   level (grpclb), inside the glb_policy->pending_picks list. To cancel these,
//   we invoke the completion closure and set *target to nullptr right here.
static void glb_cancel_pick_locked(grpc_lb_policy* pol,
                                   grpc_lb_policy_pick_state* pick,
                                   grpc_error* error) {
  glb_lb_policy* glb_policy = reinterpret_cast<glb_lb_policy*>(pol);
  pending_pick* pp = glb_policy->pending_picks;
  glb_policy->pending_picks = nullptr;
  while (pp != nullptr) {
    pending_pick* next = pp->next;
    if (pp->pick == pick) {
      pick->connected_subchannel.reset();
      GRPC_CLOSURE_SCHED(&pp->on_complete,
                         GRPC_ERROR_CREATE_REFERENCING_FROM_STATIC_STRING(
                             "Pick Cancelled", &error, 1));
    } else {
      pp->next = glb_policy->pending_picks;
      glb_policy->pending_picks = pp;
    }
    pp = next;
  }
  if (glb_policy->rr_policy != nullptr) {
    grpc_lb_policy_cancel_pick_locked(glb_policy->rr_policy, pick,
                                      GRPC_ERROR_REF(error));
  }
  GRPC_ERROR_UNREF(error);
}

// Cancel all pending picks.
//
// A grpclb pick progresses as follows:
// - If there's a Round Robin policy (glb_policy->rr_policy) available, it'll be
//   handed over to the RR policy (in create_rr_locked()). From that point
//   onwards, it'll be RR's responsibility. For cancellations, that implies the
//   pick also needs to be cancelled by the RR instance.
// - Otherwise, without an RR instance, picks stay pending at this policy's
//   level (grpclb), inside the glb_policy->pending_picks list. To cancel these,
//   we invoke the completion closure and set *target to nullptr right here.
static void glb_cancel_picks_locked(grpc_lb_policy* pol,
                                    uint32_t initial_metadata_flags_mask,
                                    uint32_t initial_metadata_flags_eq,
                                    grpc_error* error) {
  glb_lb_policy* glb_policy = reinterpret_cast<glb_lb_policy*>(pol);
  pending_pick* pp = glb_policy->pending_picks;
  glb_policy->pending_picks = nullptr;
  while (pp != nullptr) {
    pending_pick* next = pp->next;
    if ((pp->pick->initial_metadata_flags & initial_metadata_flags_mask) ==
        initial_metadata_flags_eq) {
      GRPC_CLOSURE_SCHED(&pp->on_complete,
                         GRPC_ERROR_CREATE_REFERENCING_FROM_STATIC_STRING(
                             "Pick Cancelled", &error, 1));
    } else {
      pp->next = glb_policy->pending_picks;
      glb_policy->pending_picks = pp;
    }
    pp = next;
  }
  if (glb_policy->rr_policy != nullptr) {
    grpc_lb_policy_cancel_picks_locked(
        glb_policy->rr_policy, initial_metadata_flags_mask,
        initial_metadata_flags_eq, GRPC_ERROR_REF(error));
  }
  GRPC_ERROR_UNREF(error);
}

static void lb_on_fallback_timer_locked(void* arg, grpc_error* error);
static void query_for_backends_locked(glb_lb_policy* glb_policy);
static void start_picking_locked(glb_lb_policy* glb_policy) {
  /* start a timer to fall back */
  if (glb_policy->lb_fallback_timeout_ms > 0 &&
      glb_policy->serverlist == nullptr &&
      !glb_policy->fallback_timer_callback_pending) {
    grpc_millis deadline =
        grpc_core::ExecCtx::Get()->Now() + glb_policy->lb_fallback_timeout_ms;
    GRPC_LB_POLICY_REF(&glb_policy->base, "grpclb_fallback_timer");
    GRPC_CLOSURE_INIT(&glb_policy->lb_on_fallback, lb_on_fallback_timer_locked,
                      glb_policy,
                      grpc_combiner_scheduler(glb_policy->base.combiner));
    glb_policy->fallback_timer_callback_pending = true;
    grpc_timer_init(&glb_policy->lb_fallback_timer, deadline,
                    &glb_policy->lb_on_fallback);
  }
  glb_policy->started_picking = true;
  glb_policy->lb_call_backoff->Reset();
  query_for_backends_locked(glb_policy);
}

static void glb_exit_idle_locked(grpc_lb_policy* pol) {
  glb_lb_policy* glb_policy = reinterpret_cast<glb_lb_policy*>(pol);
  if (!glb_policy->started_picking) {
    start_picking_locked(glb_policy);
  }
}

static int glb_pick_locked(grpc_lb_policy* pol,
                           grpc_lb_policy_pick_state* pick) {
  glb_lb_policy* glb_policy = reinterpret_cast<glb_lb_policy*>(pol);
  pending_pick* pp = pending_pick_create(glb_policy, pick);
  bool pick_done = false;
  if (glb_policy->rr_policy != nullptr) {
    const grpc_connectivity_state rr_connectivity_state =
        grpc_lb_policy_check_connectivity_locked(glb_policy->rr_policy,
                                                 nullptr);
    // The glb_policy->rr_policy may have transitioned to SHUTDOWN but the
    // callback registered to capture this event
    // (on_rr_connectivity_changed_locked) may not have been invoked yet. We
    // need to make sure we aren't trying to pick from a RR policy instance
    // that's in shutdown.
    if (rr_connectivity_state == GRPC_CHANNEL_SHUTDOWN) {
      if (grpc_lb_glb_trace.enabled()) {
        gpr_log(GPR_INFO,
                "[grpclb %p] NOT picking from RR %p: RR conn state=%s",
                glb_policy, glb_policy->rr_policy,
                grpc_connectivity_state_name(rr_connectivity_state));
      }
      pending_pick_add(&glb_policy->pending_picks, pp);
      pick_done = false;
    } else {  // RR not in shutdown
      if (grpc_lb_glb_trace.enabled()) {
        gpr_log(GPR_INFO, "[grpclb %p] about to PICK from RR %p", glb_policy,
                glb_policy->rr_policy);
      }
      pick_done =
          pick_from_internal_rr_locked(glb_policy, false /* force_async */, pp);
    }
  } else {  // glb_policy->rr_policy == NULL
    if (grpc_lb_glb_trace.enabled()) {
      gpr_log(GPR_DEBUG,
              "[grpclb %p] No RR policy. Adding to grpclb's pending picks",
              glb_policy);
    }
    pending_pick_add(&glb_policy->pending_picks, pp);
    if (!glb_policy->started_picking) {
      start_picking_locked(glb_policy);
    }
    pick_done = false;
  }
  return pick_done;
}

static grpc_connectivity_state glb_check_connectivity_locked(
    grpc_lb_policy* pol, grpc_error** connectivity_error) {
  glb_lb_policy* glb_policy = reinterpret_cast<glb_lb_policy*>(pol);
  return grpc_connectivity_state_get(&glb_policy->state_tracker,
                                     connectivity_error);
}

static void glb_ping_one_locked(grpc_lb_policy* pol, grpc_closure* on_initiate,
                                grpc_closure* on_ack) {
  glb_lb_policy* glb_policy = reinterpret_cast<glb_lb_policy*>(pol);
  if (glb_policy->rr_policy) {
    grpc_lb_policy_ping_one_locked(glb_policy->rr_policy, on_initiate, on_ack);
  } else {
    pending_ping_add(&glb_policy->pending_pings, on_initiate, on_ack);
    if (!glb_policy->started_picking) {
      start_picking_locked(glb_policy);
    }
  }
}

static void glb_notify_on_state_change_locked(grpc_lb_policy* pol,
                                              grpc_connectivity_state* current,
                                              grpc_closure* notify) {
  glb_lb_policy* glb_policy = reinterpret_cast<glb_lb_policy*>(pol);
  grpc_connectivity_state_notify_on_state_change(&glb_policy->state_tracker,
                                                 current, notify);
}

static void lb_call_on_retry_timer_locked(void* arg, grpc_error* error) {
  glb_lb_policy* glb_policy = static_cast<glb_lb_policy*>(arg);
  glb_policy->retry_timer_callback_pending = false;
  if (!glb_policy->shutting_down && error == GRPC_ERROR_NONE &&
      glb_policy->lb_calld == nullptr) {
    if (grpc_lb_glb_trace.enabled()) {
      gpr_log(GPR_INFO, "[grpclb %p] Restarting call to LB server", glb_policy);
    }
    query_for_backends_locked(glb_policy);
  }
  GRPC_LB_POLICY_UNREF(&glb_policy->base, "grpclb_retry_timer");
}

static void start_lb_call_retry_timer_locked(glb_lb_policy* glb_policy) {
  grpc_millis next_try = glb_policy->lb_call_backoff->NextAttemptTime();
  if (grpc_lb_glb_trace.enabled()) {
    gpr_log(GPR_DEBUG, "[grpclb %p] Connection to LB server lost...",
            glb_policy);
    grpc_millis timeout = next_try - grpc_core::ExecCtx::Get()->Now();
    if (timeout > 0) {
      gpr_log(GPR_DEBUG,
              "[grpclb %p] ... retry_timer_active in %" PRIuPTR "ms.",
              glb_policy, timeout);
    } else {
      gpr_log(GPR_DEBUG, "[grpclb %p] ... retry_timer_active immediately.",
              glb_policy);
    }
  }
  GRPC_LB_POLICY_REF(&glb_policy->base, "grpclb_retry_timer");
  GRPC_CLOSURE_INIT(&glb_policy->lb_on_call_retry,
                    lb_call_on_retry_timer_locked, glb_policy,
                    grpc_combiner_scheduler(glb_policy->base.combiner));
  glb_policy->retry_timer_callback_pending = true;
  grpc_timer_init(&glb_policy->lb_call_retry_timer, next_try,
                  &glb_policy->lb_on_call_retry);
}

static void maybe_send_client_load_report_locked(void* arg, grpc_error* error);

static void schedule_next_client_load_report(glb_lb_call_data* lb_calld) {
  const grpc_millis next_client_load_report_time =
      grpc_core::ExecCtx::Get()->Now() + lb_calld->client_stats_report_interval;
  GRPC_CLOSURE_INIT(
      &lb_calld->client_load_report_closure,
      maybe_send_client_load_report_locked, lb_calld,
      grpc_combiner_scheduler(lb_calld->glb_policy->base.combiner));
  grpc_timer_init(&lb_calld->client_load_report_timer,
                  next_client_load_report_time,
                  &lb_calld->client_load_report_closure);
  lb_calld->client_load_report_timer_callback_pending = true;
}

static void client_load_report_done_locked(void* arg, grpc_error* error) {
  glb_lb_call_data* lb_calld = static_cast<glb_lb_call_data*>(arg);
  glb_lb_policy* glb_policy = lb_calld->glb_policy;
  grpc_byte_buffer_destroy(lb_calld->send_message_payload);
  lb_calld->send_message_payload = nullptr;
  if (error != GRPC_ERROR_NONE || lb_calld != glb_policy->lb_calld) {
    glb_lb_call_data_unref(lb_calld, "client_load_report");
    return;
  }
  schedule_next_client_load_report(lb_calld);
}

static bool load_report_counters_are_zero(grpc_grpclb_request* request) {
  grpc_grpclb_dropped_call_counts* drop_entries =
      static_cast<grpc_grpclb_dropped_call_counts*>(
          request->client_stats.calls_finished_with_drop.arg);
  return request->client_stats.num_calls_started == 0 &&
         request->client_stats.num_calls_finished == 0 &&
         request->client_stats.num_calls_finished_with_client_failed_to_send ==
             0 &&
         request->client_stats.num_calls_finished_known_received == 0 &&
         (drop_entries == nullptr || drop_entries->num_entries == 0);
}

static void send_client_load_report_locked(glb_lb_call_data* lb_calld) {
  glb_lb_policy* glb_policy = lb_calld->glb_policy;
  // Construct message payload.
  GPR_ASSERT(lb_calld->send_message_payload == nullptr);
  grpc_grpclb_request* request =
      grpc_grpclb_load_report_request_create_locked(lb_calld->client_stats);
  // Skip client load report if the counters were all zero in the last
  // report and they are still zero in this one.
  if (load_report_counters_are_zero(request)) {
    if (lb_calld->last_client_load_report_counters_were_zero) {
      grpc_grpclb_request_destroy(request);
      schedule_next_client_load_report(lb_calld);
      return;
    }
    lb_calld->last_client_load_report_counters_were_zero = true;
  } else {
    lb_calld->last_client_load_report_counters_were_zero = false;
  }
  grpc_slice request_payload_slice = grpc_grpclb_request_encode(request);
  lb_calld->send_message_payload =
      grpc_raw_byte_buffer_create(&request_payload_slice, 1);
  grpc_slice_unref_internal(request_payload_slice);
  grpc_grpclb_request_destroy(request);
  // Send the report.
  grpc_op op;
  memset(&op, 0, sizeof(op));
  op.op = GRPC_OP_SEND_MESSAGE;
  op.data.send_message.send_message = lb_calld->send_message_payload;
  GRPC_CLOSURE_INIT(&lb_calld->client_load_report_closure,
                    client_load_report_done_locked, lb_calld,
                    grpc_combiner_scheduler(glb_policy->base.combiner));
  grpc_call_error call_error = grpc_call_start_batch_and_execute(
      lb_calld->lb_call, &op, 1, &lb_calld->client_load_report_closure);
  if (call_error != GRPC_CALL_OK) {
    gpr_log(GPR_ERROR, "[grpclb %p] call_error=%d", glb_policy, call_error);
    GPR_ASSERT(GRPC_CALL_OK == call_error);
  }
}

static void maybe_send_client_load_report_locked(void* arg, grpc_error* error) {
  glb_lb_call_data* lb_calld = static_cast<glb_lb_call_data*>(arg);
  glb_lb_policy* glb_policy = lb_calld->glb_policy;
  lb_calld->client_load_report_timer_callback_pending = false;
  if (error != GRPC_ERROR_NONE || lb_calld != glb_policy->lb_calld) {
    glb_lb_call_data_unref(lb_calld, "client_load_report");
    return;
  }
  // If we've already sent the initial request, then we can go ahead and send
  // the load report. Otherwise, we need to wait until the initial request has
  // been sent to send this (see lb_on_sent_initial_request_locked()).
  if (lb_calld->send_message_payload == nullptr) {
    send_client_load_report_locked(lb_calld);
  } else {
    lb_calld->client_load_report_is_due = true;
  }
}

static void lb_on_sent_initial_request_locked(void* arg, grpc_error* error);
static void lb_on_server_status_received_locked(void* arg, grpc_error* error);
static void lb_on_response_received_locked(void* arg, grpc_error* error);
static glb_lb_call_data* lb_call_data_create_locked(glb_lb_policy* glb_policy) {
  GPR_ASSERT(!glb_policy->shutting_down);
  // Init the LB call. Note that the LB call will progress every time there's
  // activity in glb_policy->base.interested_parties, which is comprised of the
  // polling entities from client_channel.
  GPR_ASSERT(glb_policy->server_name != nullptr);
  GPR_ASSERT(glb_policy->server_name[0] != '\0');
  grpc_slice host = grpc_slice_from_copied_string(glb_policy->server_name);
  grpc_millis deadline =
      glb_policy->lb_call_timeout_ms == 0
          ? GRPC_MILLIS_INF_FUTURE
          : grpc_core::ExecCtx::Get()->Now() + glb_policy->lb_call_timeout_ms;
  glb_lb_call_data* lb_calld =
      static_cast<glb_lb_call_data*>(gpr_zalloc(sizeof(*lb_calld)));
  lb_calld->lb_call = grpc_channel_create_pollset_set_call(
      glb_policy->lb_channel, nullptr, GRPC_PROPAGATE_DEFAULTS,
      glb_policy->base.interested_parties,
      GRPC_MDSTR_SLASH_GRPC_DOT_LB_DOT_V1_DOT_LOADBALANCER_SLASH_BALANCELOAD,
      &host, deadline, nullptr);
  grpc_slice_unref_internal(host);
  // Init the LB call request payload.
  grpc_grpclb_request* request =
      grpc_grpclb_request_create(glb_policy->server_name);
  grpc_slice request_payload_slice = grpc_grpclb_request_encode(request);
  lb_calld->send_message_payload =
      grpc_raw_byte_buffer_create(&request_payload_slice, 1);
  grpc_slice_unref_internal(request_payload_slice);
  grpc_grpclb_request_destroy(request);
  // Init other data associated with the LB call.
  lb_calld->glb_policy = glb_policy;
  gpr_ref_init(&lb_calld->refs, 1);
  grpc_metadata_array_init(&lb_calld->lb_initial_metadata_recv);
  grpc_metadata_array_init(&lb_calld->lb_trailing_metadata_recv);
  GRPC_CLOSURE_INIT(&lb_calld->lb_on_sent_initial_request,
                    lb_on_sent_initial_request_locked, lb_calld,
                    grpc_combiner_scheduler(glb_policy->base.combiner));
  GRPC_CLOSURE_INIT(&lb_calld->lb_on_response_received,
                    lb_on_response_received_locked, lb_calld,
                    grpc_combiner_scheduler(glb_policy->base.combiner));
  GRPC_CLOSURE_INIT(&lb_calld->lb_on_server_status_received,
                    lb_on_server_status_received_locked, lb_calld,
                    grpc_combiner_scheduler(glb_policy->base.combiner));
  // Hold a ref to the glb_policy.
  GRPC_LB_POLICY_REF(&glb_policy->base, "lb_calld");
  return lb_calld;
}

/*
 * Auxiliary functions and LB client callbacks.
 */

static void query_for_backends_locked(glb_lb_policy* glb_policy) {
  GPR_ASSERT(glb_policy->lb_channel != nullptr);
  if (glb_policy->shutting_down) return;
  // Init the LB call data.
  GPR_ASSERT(glb_policy->lb_calld == nullptr);
  glb_policy->lb_calld = lb_call_data_create_locked(glb_policy);
  if (grpc_lb_glb_trace.enabled()) {
    gpr_log(GPR_INFO,
            "[grpclb %p] Query for backends (lb_channel: %p, lb_calld: %p, "
            "lb_call: %p)",
            glb_policy, glb_policy->lb_channel, glb_policy->lb_calld,
            glb_policy->lb_calld->lb_call);
  }
  GPR_ASSERT(glb_policy->lb_calld->lb_call != nullptr);
  // Create the ops.
  grpc_call_error call_error;
  grpc_op ops[3];
  memset(ops, 0, sizeof(ops));
  // Op: send initial metadata.
  grpc_op* op = ops;
  op->op = GRPC_OP_SEND_INITIAL_METADATA;
  op->data.send_initial_metadata.count = 0;
  op->flags = 0;
  op->reserved = nullptr;
  op++;
  // Op: send request message.
  GPR_ASSERT(glb_policy->lb_calld->send_message_payload != nullptr);
  op->op = GRPC_OP_SEND_MESSAGE;
  op->data.send_message.send_message =
      glb_policy->lb_calld->send_message_payload;
  op->flags = 0;
  op->reserved = nullptr;
  op++;
  glb_lb_call_data_ref(glb_policy->lb_calld,
                       "lb_on_sent_initial_request_locked");
  call_error = grpc_call_start_batch_and_execute(
      glb_policy->lb_calld->lb_call, ops, static_cast<size_t>(op - ops),
      &glb_policy->lb_calld->lb_on_sent_initial_request);
  GPR_ASSERT(GRPC_CALL_OK == call_error);
  // Op: recv initial metadata.
  op = ops;
  op->op = GRPC_OP_RECV_INITIAL_METADATA;
  op->data.recv_initial_metadata.recv_initial_metadata =
      &glb_policy->lb_calld->lb_initial_metadata_recv;
  op->flags = 0;
  op->reserved = nullptr;
  op++;
  // Op: recv response.
  op->op = GRPC_OP_RECV_MESSAGE;
  op->data.recv_message.recv_message =
      &glb_policy->lb_calld->recv_message_payload;
  op->flags = 0;
  op->reserved = nullptr;
  op++;
  glb_lb_call_data_ref(glb_policy->lb_calld, "lb_on_response_received_locked");
  call_error = grpc_call_start_batch_and_execute(
      glb_policy->lb_calld->lb_call, ops, static_cast<size_t>(op - ops),
      &glb_policy->lb_calld->lb_on_response_received);
  GPR_ASSERT(GRPC_CALL_OK == call_error);
  // Op: recv server status.
  op = ops;
  op->op = GRPC_OP_RECV_STATUS_ON_CLIENT;
  op->data.recv_status_on_client.trailing_metadata =
      &glb_policy->lb_calld->lb_trailing_metadata_recv;
  op->data.recv_status_on_client.status = &glb_policy->lb_calld->lb_call_status;
  op->data.recv_status_on_client.status_details =
      &glb_policy->lb_calld->lb_call_status_details;
  op->flags = 0;
  op->reserved = nullptr;
  op++;
  // This callback signals the end of the LB call, so it relies on the initial
1473 // ref instead of a new ref. When it's invoked, it's the initial ref that is
1474 // unreffed.
David Garcia Quintas65318262016-07-29 13:43:38 -07001475 call_error = grpc_call_start_batch_and_execute(
Noah Eisenbe82e642018-02-09 09:16:55 -08001476 glb_policy->lb_calld->lb_call, ops, static_cast<size_t>(op - ops),
Juanli Shen8e4c9d32018-01-23 16:26:39 -08001477 &glb_policy->lb_calld->lb_on_server_status_received);
David Garcia Quintas280fd2a2016-06-20 22:04:48 -07001478 GPR_ASSERT(GRPC_CALL_OK == call_error);
David Garcia Quintas3fb8f732016-06-15 22:53:08 -07001479}
1480
static void lb_on_sent_initial_request_locked(void* arg, grpc_error* error) {
  glb_lb_call_data* lb_calld = static_cast<glb_lb_call_data*>(arg);
  grpc_byte_buffer_destroy(lb_calld->send_message_payload);
  lb_calld->send_message_payload = nullptr;
  // If we attempted to send a client load report before the initial request was
  // sent (and this lb_calld is still in use), send the load report now.
  if (lb_calld->client_load_report_is_due &&
      lb_calld == lb_calld->glb_policy->lb_calld) {
    send_client_load_report_locked(lb_calld);
    lb_calld->client_load_report_is_due = false;
  }
  glb_lb_call_data_unref(lb_calld, "lb_on_sent_initial_request_locked");
}

static void lb_on_response_received_locked(void* arg, grpc_error* error) {
  glb_lb_call_data* lb_calld = static_cast<glb_lb_call_data*>(arg);
  glb_lb_policy* glb_policy = lb_calld->glb_policy;
  // Empty payload means the LB call was cancelled.
  if (lb_calld != glb_policy->lb_calld ||
      lb_calld->recv_message_payload == nullptr) {
    glb_lb_call_data_unref(lb_calld, "lb_on_response_received_locked");
    return;
  }
  grpc_op ops[2];
  memset(ops, 0, sizeof(ops));
  grpc_op* op = ops;
  glb_policy->lb_call_backoff->Reset();
  grpc_byte_buffer_reader bbr;
  grpc_byte_buffer_reader_init(&bbr, lb_calld->recv_message_payload);
  grpc_slice response_slice = grpc_byte_buffer_reader_readall(&bbr);
  grpc_byte_buffer_reader_destroy(&bbr);
  grpc_byte_buffer_destroy(lb_calld->recv_message_payload);
  lb_calld->recv_message_payload = nullptr;
  grpc_grpclb_initial_response* initial_response;
  grpc_grpclb_serverlist* serverlist;
  if (!lb_calld->seen_initial_response &&
      (initial_response = grpc_grpclb_initial_response_parse(response_slice)) !=
          nullptr) {
    // Have NOT seen initial response, look for initial response.
    if (initial_response->has_client_stats_report_interval) {
      lb_calld->client_stats_report_interval = GPR_MAX(
          GPR_MS_PER_SEC, grpc_grpclb_duration_to_millis(
                              &initial_response->client_stats_report_interval));
      if (grpc_lb_glb_trace.enabled()) {
        gpr_log(GPR_INFO,
                "[grpclb %p] Received initial LB response message; "
                "client load reporting interval = %" PRIdPTR " milliseconds",
                glb_policy, lb_calld->client_stats_report_interval);
      }
    } else if (grpc_lb_glb_trace.enabled()) {
      gpr_log(GPR_INFO,
              "[grpclb %p] Received initial LB response message; client load "
              "reporting NOT enabled",
              glb_policy);
    }
    grpc_grpclb_initial_response_destroy(initial_response);
    lb_calld->seen_initial_response = true;
  } else if ((serverlist = grpc_grpclb_response_parse_serverlist(
                  response_slice)) != nullptr) {
    // Have seen initial response, look for serverlist.
    GPR_ASSERT(lb_calld->lb_call != nullptr);
    if (grpc_lb_glb_trace.enabled()) {
      gpr_log(GPR_INFO,
              "[grpclb %p] Serverlist with %" PRIuPTR " servers received",
              glb_policy, serverlist->num_servers);
      for (size_t i = 0; i < serverlist->num_servers; ++i) {
        grpc_resolved_address addr;
        parse_server(serverlist->servers[i], &addr);
        char* ipport;
        grpc_sockaddr_to_string(&ipport, &addr, false);
        gpr_log(GPR_INFO, "[grpclb %p] Serverlist[%" PRIuPTR "]: %s",
                glb_policy, i, ipport);
        gpr_free(ipport);
      }
    }
    /* update serverlist */
    if (serverlist->num_servers > 0) {
      // Start sending client load report only after we start using the
      // serverlist returned from the current LB call.
      if (lb_calld->client_stats_report_interval > 0 &&
          lb_calld->client_stats == nullptr) {
        lb_calld->client_stats = grpc_grpclb_client_stats_create();
        glb_lb_call_data_ref(lb_calld, "client_load_report");
        schedule_next_client_load_report(lb_calld);
      }
      if (grpc_grpclb_serverlist_equals(glb_policy->serverlist, serverlist)) {
        if (grpc_lb_glb_trace.enabled()) {
          gpr_log(GPR_INFO,
                  "[grpclb %p] Incoming server list identical to current, "
                  "ignoring.",
                  glb_policy);
        }
        grpc_grpclb_destroy_serverlist(serverlist);
      } else { /* new serverlist */
        if (glb_policy->serverlist != nullptr) {
          /* dispose of the old serverlist */
          grpc_grpclb_destroy_serverlist(glb_policy->serverlist);
        } else {
          /* or dispose of the fallback */
          grpc_lb_addresses_destroy(glb_policy->fallback_backend_addresses);
          glb_policy->fallback_backend_addresses = nullptr;
          if (glb_policy->fallback_timer_callback_pending) {
            grpc_timer_cancel(&glb_policy->lb_fallback_timer);
            glb_policy->fallback_timer_callback_pending = false;
          }
        }
        /* and update the copy in the glb_lb_policy instance. This
         * serverlist instance will be destroyed either upon the next
         * update or in glb_destroy() */
        glb_policy->serverlist = serverlist;
        glb_policy->serverlist_index = 0;
        rr_handover_locked(glb_policy);
      }
    } else {
      if (grpc_lb_glb_trace.enabled()) {
        gpr_log(GPR_INFO, "[grpclb %p] Received empty server list, ignoring.",
                glb_policy);
      }
      grpc_grpclb_destroy_serverlist(serverlist);
    }
  } else {
    // No valid initial response or serverlist found.
    gpr_log(GPR_ERROR,
            "[grpclb %p] Invalid LB response received: '%s'. Ignoring.",
            glb_policy,
            grpc_dump_slice(response_slice, GPR_DUMP_ASCII | GPR_DUMP_HEX));
  }
  grpc_slice_unref_internal(response_slice);
  if (!glb_policy->shutting_down) {
    // Keep listening for serverlist updates.
    op->op = GRPC_OP_RECV_MESSAGE;
    op->data.recv_message.recv_message = &lb_calld->recv_message_payload;
    op->flags = 0;
    op->reserved = nullptr;
    op++;
    // Reuse the "lb_on_response_received_locked" ref taken in
    // query_for_backends_locked().
    const grpc_call_error call_error = grpc_call_start_batch_and_execute(
        lb_calld->lb_call, ops, static_cast<size_t>(op - ops),
        &lb_calld->lb_on_response_received);
    GPR_ASSERT(GRPC_CALL_OK == call_error);
  } else {
    glb_lb_call_data_unref(lb_calld,
                           "lb_on_response_received_locked+glb_shutdown");
  }
}

static void lb_on_server_status_received_locked(void* arg, grpc_error* error) {
  glb_lb_call_data* lb_calld = static_cast<glb_lb_call_data*>(arg);
  glb_lb_policy* glb_policy = lb_calld->glb_policy;
  GPR_ASSERT(lb_calld->lb_call != nullptr);
  if (grpc_lb_glb_trace.enabled()) {
    char* status_details =
        grpc_slice_to_c_string(lb_calld->lb_call_status_details);
    gpr_log(GPR_INFO,
            "[grpclb %p] Status from LB server received. Status = %d, details "
            "= '%s', (lb_calld: %p, lb_call: %p), error '%s'",
            lb_calld->glb_policy, lb_calld->lb_call_status, status_details,
            lb_calld, lb_calld->lb_call, grpc_error_string(error));
    gpr_free(status_details);
  }
  grpc_lb_policy_try_reresolve(&glb_policy->base, &grpc_lb_glb_trace,
                               GRPC_ERROR_NONE);
  // If this lb_calld is still in use, this call ended because of a failure, so
  // we want to retry connecting. Otherwise, we have deliberately ended this
  // call and no further action is required.
  if (lb_calld == glb_policy->lb_calld) {
    glb_policy->lb_calld = nullptr;
    if (lb_calld->client_load_report_timer_callback_pending) {
      grpc_timer_cancel(&lb_calld->client_load_report_timer);
    }
    GPR_ASSERT(!glb_policy->shutting_down);
    if (lb_calld->seen_initial_response) {
      // If we lose connection to the LB server, reset the backoff and restart
      // the LB call immediately.
      glb_policy->lb_call_backoff->Reset();
      query_for_backends_locked(glb_policy);
    } else {
      // If this LB call fails to establish any connection to the LB server,
      // retry later.
      start_lb_call_retry_timer_locked(glb_policy);
    }
  }
  glb_lb_call_data_unref(lb_calld, "lb_call_ended");
}

static void lb_on_fallback_timer_locked(void* arg, grpc_error* error) {
  glb_lb_policy* glb_policy = static_cast<glb_lb_policy*>(arg);
  glb_policy->fallback_timer_callback_pending = false;
  /* If we receive a serverlist after the timer fires but before this callback
   * actually runs, don't fall back. */
  if (glb_policy->serverlist == nullptr && !glb_policy->shutting_down &&
      error == GRPC_ERROR_NONE) {
    if (grpc_lb_glb_trace.enabled()) {
      gpr_log(GPR_INFO,
              "[grpclb %p] Falling back to use backends from resolver",
              glb_policy);
    }
    GPR_ASSERT(glb_policy->fallback_backend_addresses != nullptr);
    rr_handover_locked(glb_policy);
  }
  GRPC_LB_POLICY_UNREF(&glb_policy->base, "grpclb_fallback_timer");
}

static void fallback_update_locked(glb_lb_policy* glb_policy,
                                   const grpc_lb_addresses* addresses) {
  GPR_ASSERT(glb_policy->fallback_backend_addresses != nullptr);
  grpc_lb_addresses_destroy(glb_policy->fallback_backend_addresses);
  glb_policy->fallback_backend_addresses =
      extract_backend_addresses_locked(addresses);
  if (glb_policy->lb_fallback_timeout_ms > 0 &&
      glb_policy->rr_policy != nullptr) {
    rr_handover_locked(glb_policy);
  }
}

static void glb_update_locked(grpc_lb_policy* policy,
                              const grpc_lb_policy_args* args) {
  glb_lb_policy* glb_policy = reinterpret_cast<glb_lb_policy*>(policy);
  const grpc_arg* arg =
      grpc_channel_args_find(args->args, GRPC_ARG_LB_ADDRESSES);
  if (arg == nullptr || arg->type != GRPC_ARG_POINTER) {
    if (glb_policy->lb_channel == nullptr) {
      // If we don't have a current channel to the LB, go into TRANSIENT
      // FAILURE.
      grpc_connectivity_state_set(
          &glb_policy->state_tracker, GRPC_CHANNEL_TRANSIENT_FAILURE,
          GRPC_ERROR_CREATE_FROM_STATIC_STRING("Missing update in args"),
          "glb_update_missing");
    } else {
      // Otherwise, keep using the current LB channel (ignore this update).
      gpr_log(
          GPR_ERROR,
          "[grpclb %p] No valid LB addresses channel arg in update, ignoring.",
          glb_policy);
    }
    return;
  }
  const grpc_lb_addresses* addresses =
      static_cast<const grpc_lb_addresses*>(arg->value.pointer.p);
  // If a non-empty serverlist hasn't been received from the balancer,
  // propagate the update to fallback_backend_addresses.
  if (glb_policy->serverlist == nullptr) {
    fallback_update_locked(glb_policy, addresses);
  }
  GPR_ASSERT(glb_policy->lb_channel != nullptr);
  // Propagate updates to the LB channel (pick_first) through the fake
  // resolver.
  grpc_channel_args* lb_channel_args = build_lb_channel_args(
      addresses, glb_policy->response_generator.get(), args->args);
  glb_policy->response_generator->SetResponse(lb_channel_args);
  grpc_channel_args_destroy(lb_channel_args);
  // Start watching the LB channel connectivity for connection, if not
  // already doing so.
  if (!glb_policy->watching_lb_channel) {
    glb_policy->lb_channel_connectivity = grpc_channel_check_connectivity_state(
        glb_policy->lb_channel, true /* try to connect */);
    grpc_channel_element* client_channel_elem = grpc_channel_stack_last_element(
        grpc_channel_get_channel_stack(glb_policy->lb_channel));
    GPR_ASSERT(client_channel_elem->filter == &grpc_client_channel_filter);
    glb_policy->watching_lb_channel = true;
    GRPC_LB_POLICY_REF(&glb_policy->base, "watch_lb_channel_connectivity");
    grpc_client_channel_watch_connectivity_state(
        client_channel_elem,
        grpc_polling_entity_create_from_pollset_set(
            glb_policy->base.interested_parties),
        &glb_policy->lb_channel_connectivity,
        &glb_policy->lb_channel_on_connectivity_changed, nullptr);
  }
}

// Invoked as part of the update process. It continues watching the LB channel
// until it shuts down or becomes READY. It's invoked even if the LB channel
// stayed READY throughout the update (for example if the update is identical).
static void glb_lb_channel_on_connectivity_changed_cb(void* arg,
                                                      grpc_error* error) {
  glb_lb_policy* glb_policy = static_cast<glb_lb_policy*>(arg);
  if (glb_policy->shutting_down) goto done;
  // Re-initialize the lb_call. This should also take care of updating the
  // embedded RR policy. Note that the current RR policy, if any, will stay in
  // effect until an update from the new lb_call is received.
  switch (glb_policy->lb_channel_connectivity) {
    case GRPC_CHANNEL_CONNECTING:
    case GRPC_CHANNEL_TRANSIENT_FAILURE: {
      // Keep watching the LB channel.
      grpc_channel_element* client_channel_elem =
          grpc_channel_stack_last_element(
              grpc_channel_get_channel_stack(glb_policy->lb_channel));
      GPR_ASSERT(client_channel_elem->filter == &grpc_client_channel_filter);
      grpc_client_channel_watch_connectivity_state(
          client_channel_elem,
          grpc_polling_entity_create_from_pollset_set(
              glb_policy->base.interested_parties),
          &glb_policy->lb_channel_connectivity,
          &glb_policy->lb_channel_on_connectivity_changed, nullptr);
      break;
    }
    // The LB channel may be IDLE because it's shut down before the update.
    // Restart the LB call to kick the LB channel into gear.
    case GRPC_CHANNEL_IDLE:
    case GRPC_CHANNEL_READY:
      if (glb_policy->lb_calld != nullptr) {
        lb_call_data_shutdown(glb_policy);
      }
      if (glb_policy->started_picking) {
        if (glb_policy->retry_timer_callback_pending) {
          grpc_timer_cancel(&glb_policy->lb_call_retry_timer);
        }
        glb_policy->lb_call_backoff->Reset();
        query_for_backends_locked(glb_policy);
      }
      // Fall through.
    case GRPC_CHANNEL_SHUTDOWN:
    done:
      glb_policy->watching_lb_channel = false;
      GRPC_LB_POLICY_UNREF(&glb_policy->base,
                           "watch_lb_channel_connectivity_cb_shutdown");
  }
}

/* Code wiring the policy with the rest of the core */
static const grpc_lb_policy_vtable glb_lb_policy_vtable = {
    glb_destroy,
    glb_shutdown_locked,
    glb_pick_locked,
    glb_cancel_pick_locked,
    glb_cancel_picks_locked,
    glb_ping_one_locked,
    glb_exit_idle_locked,
    glb_check_connectivity_locked,
    glb_notify_on_state_change_locked,
    glb_update_locked};

static grpc_lb_policy* glb_create(grpc_lb_policy_factory* factory,
                                  grpc_lb_policy_args* args) {
  /* Count the number of gRPC-LB addresses. There must be at least one. */
  const grpc_arg* arg =
      grpc_channel_args_find(args->args, GRPC_ARG_LB_ADDRESSES);
  if (arg == nullptr || arg->type != GRPC_ARG_POINTER) {
    return nullptr;
  }
  grpc_lb_addresses* addresses =
      static_cast<grpc_lb_addresses*>(arg->value.pointer.p);
  size_t num_grpclb_addrs = 0;
  for (size_t i = 0; i < addresses->num_addresses; ++i) {
    if (addresses->addresses[i].is_balancer) ++num_grpclb_addrs;
  }
  if (num_grpclb_addrs == 0) return nullptr;

  glb_lb_policy* glb_policy =
      static_cast<glb_lb_policy*>(gpr_zalloc(sizeof(*glb_policy)));

  /* Get server name. */
  arg = grpc_channel_args_find(args->args, GRPC_ARG_SERVER_URI);
  const char* server_uri = grpc_channel_arg_get_string(arg);
  GPR_ASSERT(server_uri != nullptr);
  grpc_uri* uri = grpc_uri_parse(server_uri, true);
  GPR_ASSERT(uri->path[0] != '\0');
  glb_policy->server_name =
      gpr_strdup(uri->path[0] == '/' ? uri->path + 1 : uri->path);
  if (grpc_lb_glb_trace.enabled()) {
    gpr_log(GPR_INFO,
            "[grpclb %p] Will use '%s' as the server name for LB request.",
            glb_policy, glb_policy->server_name);
  }
  grpc_uri_destroy(uri);

  glb_policy->cc_factory = args->client_channel_factory;
  GPR_ASSERT(glb_policy->cc_factory != nullptr);

  arg = grpc_channel_args_find(args->args, GRPC_ARG_GRPCLB_CALL_TIMEOUT_MS);
  glb_policy->lb_call_timeout_ms =
      grpc_channel_arg_get_integer(arg, {0, 0, INT_MAX});

  arg = grpc_channel_args_find(args->args, GRPC_ARG_GRPCLB_FALLBACK_TIMEOUT_MS);
  glb_policy->lb_fallback_timeout_ms = grpc_channel_arg_get_integer(
      arg, {GRPC_GRPCLB_DEFAULT_FALLBACK_TIMEOUT_MS, 0, INT_MAX});

  // Make sure that GRPC_ARG_LB_POLICY_NAME is set in channel args,
  // since we use this to trigger the client_load_reporting filter.
  grpc_arg new_arg = grpc_channel_arg_string_create(
      (char*)GRPC_ARG_LB_POLICY_NAME, (char*)"grpclb");
  static const char* args_to_remove[] = {GRPC_ARG_LB_POLICY_NAME};
  glb_policy->args = grpc_channel_args_copy_and_add_and_remove(
      args->args, args_to_remove, GPR_ARRAY_SIZE(args_to_remove), &new_arg, 1);

  /* Extract the backend addresses (may be empty) from the resolver for
   * fallback. */
  glb_policy->fallback_backend_addresses =
      extract_backend_addresses_locked(addresses);

  /* Create a client channel over them to communicate with a LB service */
  glb_policy->response_generator =
      grpc_core::MakeRefCounted<grpc_core::FakeResolverResponseGenerator>();
  grpc_channel_args* lb_channel_args = build_lb_channel_args(
      addresses, glb_policy->response_generator.get(), args->args);
  char* uri_str;
  gpr_asprintf(&uri_str, "fake:///%s", glb_policy->server_name);
  glb_policy->lb_channel = grpc_lb_policy_grpclb_create_lb_channel(
      uri_str, args->client_channel_factory, lb_channel_args);

  /* Propagate initial resolution */
  glb_policy->response_generator->SetResponse(lb_channel_args);
  grpc_channel_args_destroy(lb_channel_args);
  gpr_free(uri_str);
  if (glb_policy->lb_channel == nullptr) {
    gpr_free((void*)glb_policy->server_name);
    grpc_channel_args_destroy(glb_policy->args);
    gpr_free(glb_policy);
    return nullptr;
  }
  grpc_subchannel_index_ref();
  GRPC_CLOSURE_INIT(&glb_policy->rr_on_connectivity_changed,
                    rr_on_connectivity_changed_locked, glb_policy,
                    grpc_combiner_scheduler(args->combiner));
  GRPC_CLOSURE_INIT(&glb_policy->rr_on_reresolution_requested,
                    rr_on_reresolution_requested_locked, glb_policy,
                    grpc_combiner_scheduler(args->combiner));
  GRPC_CLOSURE_INIT(&glb_policy->lb_channel_on_connectivity_changed,
                    glb_lb_channel_on_connectivity_changed_cb, glb_policy,
                    grpc_combiner_scheduler(args->combiner));
  grpc_lb_policy_init(&glb_policy->base, &glb_lb_policy_vtable, args->combiner);
  grpc_connectivity_state_init(&glb_policy->state_tracker, GRPC_CHANNEL_IDLE,
                               "grpclb");
  // Init LB call backoff option.
  grpc_core::BackOff::Options backoff_options;
  backoff_options
      .set_initial_backoff(GRPC_GRPCLB_INITIAL_CONNECT_BACKOFF_SECONDS * 1000)
      .set_multiplier(GRPC_GRPCLB_RECONNECT_BACKOFF_MULTIPLIER)
      .set_jitter(GRPC_GRPCLB_RECONNECT_JITTER)
      .set_max_backoff(GRPC_GRPCLB_RECONNECT_MAX_BACKOFF_SECONDS * 1000);
  glb_policy->lb_call_backoff.Init(backoff_options);
  return &glb_policy->base;
}

static void glb_factory_ref(grpc_lb_policy_factory* factory) {}

static void glb_factory_unref(grpc_lb_policy_factory* factory) {}

static const grpc_lb_policy_factory_vtable glb_factory_vtable = {
    glb_factory_ref, glb_factory_unref, glb_create, "grpclb"};

static grpc_lb_policy_factory glb_lb_policy_factory = {&glb_factory_vtable};

grpc_lb_policy_factory* grpc_glb_lb_factory_create() {
  return &glb_lb_policy_factory;
}

/* Plugin registration */

// Only add client_load_reporting filter if the grpclb LB policy is used.
static bool maybe_add_client_load_reporting_filter(
    grpc_channel_stack_builder* builder, void* arg) {
  const grpc_channel_args* args =
      grpc_channel_stack_builder_get_channel_arguments(builder);
  const grpc_arg* channel_arg =
      grpc_channel_args_find(args, GRPC_ARG_LB_POLICY_NAME);
  if (channel_arg != nullptr && channel_arg->type == GRPC_ARG_STRING &&
      strcmp(channel_arg->value.string, "grpclb") == 0) {
    return grpc_channel_stack_builder_append_filter(
        builder, static_cast<const grpc_channel_filter*>(arg), nullptr,
        nullptr);
  }
  return true;
}

void grpc_lb_policy_grpclb_init() {
  grpc_register_lb_policy(grpc_glb_lb_factory_create());
  grpc_channel_init_register_stage(GRPC_CLIENT_SUBCHANNEL,
                                   GRPC_CHANNEL_INIT_BUILTIN_PRIORITY,
                                   maybe_add_client_load_reporting_filter,
                                   (void*)&grpc_client_load_reporting_filter);
}

void grpc_lb_policy_grpclb_shutdown() {}