Reactive Interaction Gateway

Forwarding Requests

RIG includes a configurable, distributed HTTP reverse proxy. Depending on the configuration, RIG forwards incoming HTTP requests to backend services, to a Kafka topic or to a Kinesis stream, then waits for the reply and forwards that reply to the original caller.

API Endpoint Configuration

The configuration should be passed at startup. Additionally, RIG provides an API to add, change or remove routes at runtime. These changes are replicated throughout the cluster, but they are not persisted; that is, if all RIG nodes are shut down, any changes to the proxy configuration are lost. Check out the API Gateway Synchronization guide to learn more.
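
The runtime API is covered in the API Gateway Management guide; purely as a hypothetical sketch, assuming the internal API listens on port 4010 and exposes the proxy configuration under /v3/apis (both the path and the payload shape may differ in your RIG version), registering an API definition could look roughly like this:

$ # Hypothetical: add a single API definition at runtime via RIG's internal API.
$ # Endpoint path and payload are assumptions; consult the API Gateway Management guide.
$ curl -X POST \
    -H 'content-type: application/json' \
    -d @my-api-definition.json \
    localhost:4010/v3/apis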

To pass the configuration at startup, RIG uses an environment variable called PROXY_CONFIG_FILE. This variable can be used to either pass the path to an existing JSON file, or to directly pass the configuration as a JSON string. Let's configure a simple endpoint to show how this works.

The configuration JSON (file) holds a list of API definitions. Refer to the API Gateway Management guide for details. You can also try out the small playground in the examples.

We define an endpoint configuration like this:

[
  {
    "id": "my-service",
    "version_data": {
      "default": {
        "endpoints": [
          {
            "id": "my-endpoint",
            "method": "GET",
            "path_regex": "/"
          }
        ]
      }
    },
    "proxy": {
      "use_env": true,
      "target_url": "API_HOST",
      "port": 3000
    }
  }
]

This defines a single service called "my-service". Because use_env is set to true, target_url is interpreted as the name of an environment variable, so the host is read from API_HOST. Because we want to run RIG inside a Docker container, we cannot use localhost. Instead, we can use host.docker.internal within the container to refer to the Docker host. This way, the service URL resolves to http://host.docker.internal:3000. The service has one endpoint called "my-endpoint" at path /, which forwards GET requests to the same path (http://host.docker.internal:3000/).

As a demo service, we use a small Node.js script:

// A minimal demo service that answers every request with a greeting.
const http = require("http");
const port = 3000;
const handler = (_req, res) => res.end("Hi, I'm a demo service!\n");
const server = http.createServer(handler);
// The listen callback takes no arguments; errors are emitted via the "error" event.
server.on("error", (err) => console.error(err));
server.listen(port, () => console.log(`server is listening on ${port}`));

Using Docker, our configuration can be put into a file and mounted into the container. We also set API_HOST to the Docker host URL as mentioned above. On Linux or Mac, it looks like this:

$ cat <<EOF >config.json
<paste the configuration from above>
EOF
$ docker run -d \
  -e API_HOST=http://host.docker.internal \
  -v "$(pwd)"/config.json:/config.json \
  -e PROXY_CONFIG_FILE=/config.json \
  -p 4000:4000 -p 4010:4010 \
  accenture/reactive-interaction-gateway

After that we should be able to reach our small demo service through RIG:

$ curl \
    -H 'traceparent: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01' \
    -H 'tracestate: hello=tracing' \
    localhost:4000
Hi, I'm a demo service!

Alternatively, instead of using a file we can also pass the configuration directly:

$ config="$(cat config.json)"
$ docker run \
  -e API_HOST=http://host.docker.internal \
  -e PROXY_CONFIG_FILE="$config" \
  -p 4000:4000 -p 4010:4010 \
  accenture/reactive-interaction-gateway

Note that this way we don't need a Docker volume, which might work better in your environment. Again, we should be able to reach the demo service:

$ curl \
    -H 'traceparent: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01' \
    -H 'tracestate: hello=tracing' \
    localhost:4000
Hi, I'm a demo service!

Dynamic URL parameters

It's a common case that you want to fetch details for some entity, e.g. /books/123. To make sure the dynamic value 123 is correctly matched and forwarded, the API endpoint can be configured with a regular expression:

[{
  "id": "my-service",
  "version_data": {
    "default": {
      "endpoints": [{
        "id": "my-detail-endpoint",
        "method": "GET",
        "path_regex": "/books/(.+)"
      }]
    }
  },
  ...
}]
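
Assuming the same proxy target as above and a backend that serves /books/123, a request through RIG then looks like this:

$ # The path matches /books/(.+) and is forwarded unchanged to the backend.
$ curl localhost:4000/books/123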

Publishing to event streams

Instead of forwarding an HTTP request to an internal HTTP endpoint, RIG can also produce an event to a Kafka topic (or Kinesis stream). What looks like a standard HTTP call to the frontend actually produces an event for backend services to consume.

Depending on the use case, the request may either return immediately or after a response has been produced to another Kafka topic (or Kinesis stream), as described below.

For fire-and-forget style requests, the endpoint configuration looks like this:

[{
  ...
  "version_data": {
    "default": {
      "endpoints": [{
        "id": "my-endpoint",
        "method": "POST",
        "path_regex": "/",
        "target": "kafka",
        "topic": "my-topic",
        "schema": "my-avro-schema"
      }]
    }
  },
  ...
}]

Note that the target field is set to kafka (for Kinesis, use kinesis). The topic field is mandatory, while the schema field is optional. Note that topic and schema only concern publishing to the event stream; they have nothing to do with consuming events.

The endpoint expects the following request format:

{
  "id": "069711bf-3946-4661-984f-c667657b8d85",
  "type": "com.example",
  "time": "2018-04-05T17:31:00Z",
  "specversion": "0.2",
  "source": "/cli",
  "contenttype": "application/json",
  "rig": {
    "target_partition": "the-partition-key"
  },
  "data": {
    "foo": "bar"
  }
}

target_partition is optional; if it is not set, RIG produces the event to a random Kafka/Kinesis partition.
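
With the fire-and-forget configuration above, publishing boils down to POSTing such a CloudEvent to the endpoint. A minimal sketch, assuming RIG accepts the JSON event body with a content-type of application/json:

$ # Publishes the CloudEvent to my-topic; the request returns immediately (fire-and-forget).
$ curl -X POST \
    -H 'content-type: application/json' \
    -d '{
      "id": "069711bf-3946-4661-984f-c667657b8d85",
      "type": "com.example",
      "time": "2018-04-05T17:31:00Z",
      "specversion": "0.2",
      "source": "/cli",
      "contenttype": "application/json",
      "data": {"foo": "bar"}
    }' \
    localhost:4000/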

Wait for response

Sometimes it makes sense to provide a simple request-response API for something that runs asynchronously on the backend. For example, let's say there's a ticket reservation process that takes 10 seconds in total and involves three different services that communicate via message passing. For an external client, it may be simpler to wait 10 seconds for the response instead of polling for it every other second.

Behavior like this can be configured using an endpoint's response_from property. When set to kafka, the response to the request is not necessarily taken from the target. For target = http, this means the backend's HTTP response might be ignored; the backend service decides where the response is read from: if it returns HTTP 202 Accepted, the response is read from a Kafka topic; if it returns any other HTTP code (200, 400, or whatever makes sense), the response is read synchronously from the http target directly, which allows the backend to return cached responses.

To enable RIG to correlate the response from the Kafka topic with the original request, RIG adds a correlation ID to the request (as a query parameter in case of target = http, or baked into the produced CloudEvent otherwise). Backend services that work with the request need to include that correlation ID in their response; otherwise, RIG won't be able to forward it to the client and the request times out.

The configuration of such an API endpoint might look like this:

[{
  ...
  "version_data": {
    "default": {
      "endpoints": [{
        "id": "my-endpoint",
        "method": "POST",
        "path_regex": "/",
        "target": "kafka",
        "topic": "my-topic",
        "response_from": "kafka"
      }]
    }
  },
  ...
}]

Note the presence of the response_from field. It tells RIG to wait for a different event that carries the same correlation ID.

Supported combinations (target -> response_from)

  • HTTP -> kafka/http_async/kinesis
  • Kafka -> kafka
  • Kinesis -> not supported
  • Nats -> nats

http_async means that the correlated response has to be sent to the internal :4010/v3/responses endpoint via HTTP POST.

Supported formats

All response_from options support only binary mode.

Message headers:

rig-correlation: "correlation_id_sent_by_rig"
rig-response-code: "201"
content-type: "application/json"

All headers are required.

Message body:

{
  "foo": "bar"
}
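
For the http_async case, the correlated response would be delivered by POSTing to the internal endpoint mentioned above. A minimal sketch, assuming the message headers described above are passed as plain HTTP headers and that <correlation-id> stands for the value RIG attached to the original request (for Kafka/NATS, the same fields travel as message headers instead):

$ # Hypothetical http_async response delivery; replace <correlation-id> with the real value.
$ curl -X POST \
    -H 'rig-correlation: <correlation-id>' \
    -H 'rig-response-code: 201' \
    -H 'content-type: application/json' \
    -d '{"foo": "bar"}' \
    localhost:4010/v3/responses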

Auth

RIG can perform a simple auth check for endpoints. Currently, JWT is supported.

The API configuration looks like this:

[{
  "id": "my-service",
  "auth_type": "jwt",
  "auth": {
    "use_header": true,
    "header_name": "Authorization",
    "use_query": false,
    "query_name": ""
  },
  "version_data": {
    "default": {
      "endpoints": [{
        "id": "my-unsecured-endpoint",
        "method": "GET",
        "path_regex": "/unsecured"
      },{
        "id": "my-secured-endpoint",
        "method": "GET",
        "path_regex": "/secured",
        "secured": true
      }]
    }
  },
  ...
}]

The important blocks are auth_type and auth. auth_type selects the auth mechanism to use -- currently jwt or none. auth defines where to look for the token. The token can be sent in two places: an HTTP header (use_header) or a URL query parameter (use_query). header_name and query_name define the lookup key in the headers/query. You can use headers and query parameters at the same time.

Once auth is configured, simply define which API endpoints should be secured via the secured property. The auth check is disabled by default, i.e., the secured field defaults to false.

Make sure to use the Bearer ... form as the value of the auth header.
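
With the configuration above, a call to the secured endpoint might look like this (the token value is a placeholder for a real JWT that RIG is configured to verify):

$ # /unsecured is reachable without a token; /secured requires a valid JWT.
$ curl -H 'Authorization: Bearer <your-jwt>' localhost:4000/secured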

Header transformations

Header transformations are supported in a very simple way. Assume the following API configuration:

[{
  "id": "my-service",
  "version_data": {
    "default": {
      "transform_request_headers": {
        "add_headers": {
          "host": "my-very-different-host.com",
          "custom-header": "custom-value"
        }
      },
      "endpoints": [{
        "id": "my-endpoint",
        "method": "GET",
        "path_regex": "/"
      },{
        "id": "my-transformed-endpoint",
        "method": "GET",
        "path_regex": "/transformed",
        "transform_request_headers": true
      }]
    }
  },
  ...
}]

Via transform_request_headers you can set which headers should be overridden or added. In this case, RIG would override the host header and add a completely new custom-header header. As with auth, you can decide per endpoint whether headers should be transformed, using the endpoint-level transform_request_headers property. Header transformation is disabled by default, i.e., the transform_request_headers field defaults to false.
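
From the client's point of view nothing changes; the transformation only affects what RIG forwards to the backend. For example, with the service setup from the earlier sections:

$ # RIG forwards this request with host set to my-very-different-host.com
$ # and with custom-header: custom-value added.
$ curl localhost:4000/transformed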

URL rewriting

With URL rewriting you can control how the incoming request URL is mapped to the URL of the forwarded request.

[{
  "id": "my-service",
  "version_data": {
    "default": {
      "endpoints": [{
        "id": "my-transformed-endpoint",
        "method": "GET",
        "path_regex": "/foo/([^/]+)/bar/([^/]+)",
        "path_replacement": "/bar/\\1/foo/\\2"
      }]
    }
  },
  ...
}]

When you send a GET request to /foo/1/bar/2, RIG forwards it as GET /bar/1/foo/2, substituting the capture groups from path_regex into path_replacement.
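
Trying this against the setup from the earlier sections:

$ # Matched by /foo/([^/]+)/bar/([^/]+); the backend receives GET /bar/1/foo/2.
$ curl localhost:4000/foo/1/bar/2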

CORS

Quite often you need to deal with cross-origin requests. CORS itself is configured via the CORS environment variable, which defaults to *. In addition, RIG requires you to configure OPTIONS preflight endpoints:

[{
  "id": "my-service",
  "version_data": {
    "default": {
      "endpoints": [{
        "id": "my-endpoint",
        "method": "GET",
        "path_regex": "/"
      },{
        "id": "my-endpoint-preflight",
        "method": "OPTIONS",
        "path_regex": "/"
      }]
    }
  },
  ...
}]
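
Browsers issue the preflight automatically; to check the setup by hand you can simulate one (the origin and requested method are just example values):

$ # Simulated CORS preflight against the OPTIONS endpoint configured above.
$ curl -i -X OPTIONS \
    -H 'Origin: http://example.com' \
    -H 'Access-Control-Request-Method: GET' \
    localhost:4000/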

Request logger

Every request going through the reverse proxy can be tracked by loggers -- console and/or kafka. To enable such logging, set REQUEST_LOG to one or both of them (comma-separated).

In the case of Kafka, you can also set which Avro schema to use via KAFKA_LOG_SCHEMA.
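
Building on the earlier docker run commands, enabling both loggers might look like this (the Kafka logger additionally needs a reachable broker configuration, which is covered in the Operator's Guide and omitted here):

$ docker run -d \
  -e API_HOST=http://host.docker.internal \
  -e PROXY_CONFIG_FILE=/config.json \
  -v "$(pwd)"/config.json:/config.json \
  -e REQUEST_LOG=console,kafka \
  -p 4000:4000 -p 4010:4010 \
  accenture/reactive-interaction-gateway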
