...

  1. Wrap http.Transport (implementing the RoundTripper interface) and observe HTTP requests/responses at the application layer. Note that some code in EVE and in 3rd party libs used by zedUpload assumes that RoundTripper’s concrete type is http.Transport. Inside EVE we simply changed such code to remove this assumption, and in the 3rd party libraries this code path is not reachable from EVE, because we pass a ready-to-use http.Client so that a 3rd party library does not need to do any further customization. In any case, there is an option to disable this hook (at the cost of losing tracing info on the application and the session (TLS) layers - see point 2). On the other hand, it will not cause any issues if the caller adds some additional Transport wrapping on top of it (like we do here).
  2. With Transport wrapped, we can add ClientTrace into the request context to have our callbacks inform us about: TCP connection selected, TLS handshake(s) done/failed (incl. server certs and negotiated parameters), and some more (less useful) info. Note that it will work even if the caller also adds its own ClientTrace (like EVE does here), because httptrace.WithClientTrace is able to merge multiple traces.
  3. Put a custom DialContext (into the underlying http.Transport) as well as a custom Dial for Resolver, both of which still call the original methods, but also wrap the returned net.Conn to observe socket read/write operations.
  4. In our custom Dial methods, use Dialer.Control to get access to information about every TCP connection attempt (including failed ones).
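The sketch below illustrates points 1-4 using only the Go standard library. It is a minimal illustration under assumed names (tracingTransport, tracedDialContext), not the actual EVE implementation:

```go
package main

import (
	"context"
	"crypto/tls"
	"log"
	"net"
	"net/http"
	"net/http/httptrace"
	"syscall"
	"time"
)

// tracingTransport wraps another RoundTripper (typically *http.Transport)
// and injects an httptrace.ClientTrace into every request context
// (points 1 and 2).
type tracingTransport struct {
	inner http.RoundTripper
}

func (t *tracingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	trace := &httptrace.ClientTrace{
		GotConn: func(info httptrace.GotConnInfo) {
			log.Printf("connection selected: %v (reused=%v)",
				info.Conn.RemoteAddr(), info.Reused)
		},
		TLSHandshakeDone: func(state tls.ConnectionState, err error) {
			log.Printf("TLS handshake done: cipher=0x%x err=%v",
				state.CipherSuite, err)
		},
	}
	// WithClientTrace merges this trace with any trace already present
	// in the request context (e.g. one added by the caller).
	req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))
	return t.inner.RoundTrip(req)
}

// tracedDialContext stands in for the custom DialContext (points 3 and 4).
// Dialer.Control is invoked for every TCP connection attempt, including
// failed ones; a real implementation would also wrap the returned net.Conn
// to observe socket read/write operations.
func tracedDialContext(ctx context.Context, network, addr string) (net.Conn, error) {
	dialer := &net.Dialer{
		Timeout: 30 * time.Second,
		Control: func(network, address string, c syscall.RawConn) error {
			log.Printf("TCP connection attempt: %s %s", network, address)
			return nil
		},
	}
	return dialer.DialContext(ctx, network, addr)
}

func main() {
	transport := &http.Transport{DialContext: tracedDialContext}
	client := &http.Client{Transport: &tracingTransport{inner: transport}}
	resp, err := client.Get("https://example.com")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println("status:", resp.Status)
}
```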

...

  • All Dial attempts. Each record contains:
    • reference to trace record of established TCP connection (undefined if failed)
    • dial begin + end time, context close time
    • destination address
    • proxy config
    • static source IP (if set, otherwise undefined)
    • dial error (nil if OK)
  • All DNS queries. Each record contains:
    • reference to trace record of Dial where this originated from
    • reference to trace record of the underlying UDP or TCP connection (used as a fallback from truncated UDP DNS response)
    • (optional) sent DNS questions and received DNS message header + answers (we are able to parse DNS messages from sent/received data)
  • All TCP connections (attempts + established). Each record contains:
    • reference to trace record of Dial where this originated from
    • handshake start + done time, conn close time
    • 4-tuple (src IP, src port, dst IP, dst port)
    • was it reused?
    • total sent + received bytes (L4 payload)
    • (optional) conntrack (captured-at time, 5-tuple after NAT, mark, flags, packet/byte counters)
    • (optional) socket trace - array of:
      • operation type (read or write), op begin+end time, transferred data length, error (nil if OK)
  • All UDP "connections" (or rather exchanges of messages). Each record contains:
    • reference to trace record of Dial where this originated from
    • time when the socket was created and when it was closed
    • 4-tuple (src IP, src port, dst IP, dst port)
    • total sent + received bytes (L4 payload)
    • (optional) conntrack (captured-at time, 5-tuple after NAT, mark, flags, packet/byte counters)
    • (optional) socket trace - array of:
      • operation type (read or write), op begin+end time, transferred data length, error (nil if OK)
  • All TLS tunnels (attempted + established). Each record contains:
    • reference to trace record of the underlying TCP connection
    • was it resumed from a previous session?
    • handshake start + done time, error (nil if OK)
    • negotiated cipher and application proto
    • SNI value
    • for every peer cert in the chain:
      • subject, issuer, validity time range (NotBefore, NotAfter)
  • All HTTP requests made. Each record contains info for both the request and the response:
    • reference to trace record of the underlying TCP connection
    • reference to trace record(s) of the underlying TLS tunnel(s) (2 tunnels are made when the proxy itself listens on HTTPS)
    • time when the request was sent
    • method, URL, HTTP version
    • (optional) request headers
    • request message content length (not the transport length, which can differ)
    • time when response was received, error (nil if OK)
    • response status code, HTTP version
    • (optional) response headers
    • response message content length (not the transport length, which can differ)
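To make the record structure above more concrete, the following Go sketch shows how two of the record types (Dial attempts and TCP connections) and their cross-references could be represented. All type and field names here are illustrative assumptions, not the actual EVE data model:

```go
package nettracetypes

import "time"

// TraceID cross-references records, e.g. a TCP connection record points
// back to the Dial attempt it originated from. (Illustrative only.)
type TraceID string

// DialTrace describes one Dial attempt.
type DialTrace struct {
	TraceID     TraceID
	DialBeginAt time.Time
	DialEndAt   time.Time
	CtxCloseAt  time.Time
	DstAddress  string
	ProxyConfig string
	SourceIP    string  // undefined unless a static source IP was set
	DialErr     string  // empty if OK
	TCPConn     TraceID // established TCP connection; undefined if the dial failed
}

// TCPConnTrace describes one TCP connection (attempted or established).
type TCPConnTrace struct {
	TraceID          TraceID
	FromDial         TraceID // Dial attempt this connection originated from
	HandshakeBeginAt time.Time
	HandshakeDoneAt  time.Time
	ConnCloseAt      time.Time
	AddrTuple        AddrTuple  // 4-tuple before NAT
	Reused           bool
	TotalSentBytes   uint64     // L4 payload
	TotalRecvBytes   uint64     // L4 payload
	SocketTrace      []SocketOp // optional per-operation read/write trace
}

// AddrTuple is the (src IP, src port, dst IP, dst port) 4-tuple.
type AddrTuple struct {
	SrcIP, DstIP     string
	SrcPort, DstPort uint16
}

// SocketOp records a single socket read or write.
type SocketOp struct {
	Type    string // "read" or "write"
	BeginAt time.Time
	EndAt   time.Time
	DataLen uint32
	Err     string // empty if OK
}
```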

...

Note that nim evaluates connectivity status by sending a /ping request to the controller. If the request fails, the error returned from http.Client is reported inside DevicePortStatus. With tracedhttp.Client, we are able to obtain and publish more information when connectivity checks fail. Additionally, during a traced connectivity check, a failed /ping request is followed by GET requests for http://www.google.com and https://www.google.com (just as diag does). From the trace output we are therefore able to tell whether the device had any Internet connectivity at all at the time of the test.
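The probing sequence can be sketched as follows; the controller path and helper names are simplified assumptions, and a plain *http.Client stands in for the traced tracedhttp.Client:

```go
package connectivity

import (
	"context"
	"fmt"
	"io"
	"net/http"
)

// checkConnectivity mirrors the probing sequence described above: a /ping
// request towards the controller and, on failure, GET requests for
// http://www.google.com and https://www.google.com so that the trace output
// also shows whether the device has any Internet connectivity at all.
func checkConnectivity(ctx context.Context, client *http.Client, controllerURL string) error {
	pingErr := doGet(ctx, client, controllerURL+"/ping")
	if pingErr == nil {
		return nil // controller reachable, no further probing needed
	}
	httpErr := doGet(ctx, client, "http://www.google.com")
	httpsErr := doGet(ctx, client, "https://www.google.com")
	return fmt.Errorf("ping failed: %w (http probe: %v, https probe: %v)",
		pingErr, httpErr, httpsErr)
}

// doGet issues a GET request and fully drains the response body so that
// connection-level tracing captures the complete exchange.
func doGet(ctx context.Context, client *http.Client, url string) error {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return err
	}
	resp, err := client.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	_, err = io.Copy(io.Discard, resp.Body)
	return err
}
```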

As opposed to the downloader, in nim it makes sense to include all tracing information, including packet capture, so that we can narrow down the root cause of a failed check as much as possible. However, we should then perform tracing much less frequently - not with each connectivity check performed by nim, which runs at least once every 5 minutes. Multiple traces obtained during the same network issue would likely not add any information. We decided to run full HTTP tracing at most once per hour before onboarding and at most once per day after onboarding (the second interval is configurable, learn more here), and only when the latest DPC is being tested - it does not make sense to troubleshoot obsolete network configurations.
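A minimal sketch of this rate limiting, under assumed names (the actual nim logic and configuration option are not shown here):

```go
package tracepolicy

import "time"

// shouldTraceCheck decides whether the next connectivity check should run
// with full HTTP tracing: only when the latest DPC is being tested and the
// previous trace is older than the allowed interval (1 hour before
// onboarding, 1 day after onboarding by default; the latter is configurable).
func shouldTraceCheck(testingLatestDPC, onboarded bool, lastTrace, now time.Time) bool {
	if !testingLatestDPC {
		return false // do not troubleshoot obsolete network configurations
	}
	minInterval := time.Hour
	if onboarded {
		minInterval = 24 * time.Hour
	}
	return now.Sub(lastTrace) >= minInterval
}
```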

...

With nim sporadically tracing /ping and google.com requests, it still makes sense to utilize network tracing in zedagent as well. This microservice runs the most important requests: /config to get the latest device configuration, aka the intended state, and /info to publish the actual device state. In fact, both of these must succeed for the device to be considered Online rather than Suspect by zedcloud. As pointed out above, a failing latest DPC is applied only temporarily - until nim performs one connectivity check and falls back to a previous DPC. This means that as long as the latest DPC is marked as not working, it does not make sense for zedagent to trace its requests, because they would likely be using an obsolete DPC anyway. However, if nim evaluates the latest DPC as working yet zedagent is failing to get config or publish information (specifically ZInfoDevice), then zedagent is eligible to run tracing and publish the output. Note that the same tracing interval (at most once per hour/day before/after onboarding by default) applies here as well.
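The eligibility rule for zedagent could be summarized with a sketch like the one below; all names are illustrative, not the actual zedagent code:

```go
package tracepolicy

import "time"

// zedagentShouldTrace returns true only when nim reports the latest DPC as
// working, the /config or /info request itself has failed, and the shared
// tracing interval (hour/day before/after onboarding by default) has elapsed
// since the last published trace.
func zedagentShouldTrace(latestDPCWorking, requestFailed bool,
	lastTrace, now time.Time, minInterval time.Duration) bool {
	if !latestDPCWorking || !requestFailed {
		return false
	}
	return now.Sub(lastTrace) >= minInterval
}
```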

Both successful and failed config/info network traces are published.