VERIFIED SOLUTION

Vault e2routerd planning FAQ

UPDATED: September 7, 2017



 

Vault Router

The Vault Router (e2routerd) is an application that manages connections to many Rendering Engines and spreads the request load intelligently across the network of Rendering Engines it knows about.

In a simple Vault configuration, a front-end application sends Vault-specific search or rendering commands to a rendering engine for execution. In environments with large amounts of request traffic, you may want to deploy multiple rendering engines to handle the load. You may also want multiple rendering engines to make the system more fault tolerant.

The Vault Router is a service that can be used to distribute load from the front-end application across multiple rendering engines. It transparently distributes incoming requests across a set of connected rendering engines based on current activity. If a particular connection fails, traffic is rerouted to other connections automatically.

The Vault Router distributes requests using an algorithm based on four weights, evaluated in order of priority:

1. failed requests since last “heartbeat” (~30 sec).

2. hold ticks (holds come from previous “heartbeats” with failures).

3. running commands.

4. completed commands.

 

Priorities 1 and 2 manage connection failures. Connections that fail during the current heartbeat are avoided. Connections with failures during the last heartbeat are given increased hold ticks, which act as a delay before the connection is retried. Connections that do not fail outright can be deemed failed if a transaction on the connection exceeds the waittimeout and retrycount thresholds for that connection.

 

Priority 3 is the main rule, causing commands to be distributed to the rendering engines running the fewest commands. The router does not know the cost of each command, only the number of commands that have been sent and have not yet completed.

 

Priority 4 distributes requests across connections when traffic is light. This helps keep the “liveness” information about connections fresh, and it produces a roughly round-robin effect, but only under light load.

 

Failure modes aside, the Vault Router distributes requests based on the number of requests each rendering engine is currently working on.

 

Note that the router does not directly know about load from other sources that might be sent directly to the attached rendering engines. Such load will still influence how long requests take, and therefore how many requests are in flight at any given time.
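To make the weighting concrete, the selection step can be pictured roughly as in the sketch below. This is an illustrative Python sketch only, not e2routerd source code; the Connection fields and the pick_connection function are hypothetical stand-ins for the four weights described above.

# Illustrative sketch only -- not e2routerd source code.
# The next request goes to the connection with the lowest weights, compared
# in priority order: failures in the current heartbeat, hold ticks carried
# over from earlier heartbeats with failures, commands still running, and
# finally commands already completed (which yields a roughly round-robin
# spread under light load).
from dataclasses import dataclass
from typing import List

@dataclass
class Connection:
    name: str
    failed_this_heartbeat: int = 0   # priority 1
    hold_ticks: int = 0              # priority 2
    running: int = 0                 # priority 3: sent but not yet completed
    completed: int = 0               # priority 4

def pick_connection(connections: List[Connection]) -> Connection:
    """Choose the connection that should receive the next request."""
    return min(connections, key=lambda c: (c.failed_this_heartbeat,
                                           c.hold_ticks,
                                           c.running,
                                           c.completed))

# Example: r2 has no failures or holds and the fewest running commands,
# so it receives the next request.
conns = [Connection("r1", running=3),
         Connection("r2", running=1),
         Connection("r3", hold_ticks=2)]
print(pick_connection(conns).name)   # -> r2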

 

Planning a Vault Router

 

At its simplest, the Vault Router is a listener and a series of connection endpoints (renderers) to which traffic is distributed. Each connection must have an endpoint, and there must be enough threads configured to keep the message traffic flowing. A typical router configuration consists of a [router1] component and a series of [connectionN] components. A [pool1] component can be added to improve thread count and throughput behavior when there are many connections or when the traffic load is expected to be high.

 

Configuration settings by component

 

[router1]

 

count=n The number of connections to be defined (mandatory)

 

debug=0 Log debug level. Default is 0 (off). Debug=1 can provide useful information about the balance of the request load.

 

 

[pool1]

minthreads= Minimum number of threads allowed. The default value is 4.

maxthreads= Maximum number of threads allowed. The default value is 16.

startthreads= Number of threads created when the router starts. Set this based on the traffic you expect to see when the router starts up. The actual thread count will be adjusted between the minthreads and maxthreads values based on load over time. The default value is 8.

Note: Set minthreads and maxthreads based on the number of connections and the anticipated traffic. There should be one thread for each connection, plus one thread for the listening connection, plus a number of worker threads to handle the messages. The worker thread count should be based on the maximum number of concurrent messages you expect to be active, including searches and rendering requests. Note that thread count can be limited by the amount of system and process memory available.
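As a rough sizing aid, the note above can be turned into simple arithmetic. The sketch below is illustrative Python only (not a tool shipped with Vault), and the example figures are assumptions, not recommendations.

# Illustrative thread-sizing arithmetic; the figures are example assumptions.
def pool_threads(connections: int, concurrent_messages: int) -> int:
    """One thread per renderer connection, one for the listening connection,
    plus one worker thread per concurrent message (search or render)."""
    return connections + 1 + concurrent_messages

# Example: 4 renderer connections, 20 concurrent messages on average,
# 50 concurrent messages at peak.
minthreads = pool_threads(4, 20)   # 25
maxthreads = pool_threads(4, 50)   # 55
print(minthreads, maxthreads)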

 

[connection1]

service=IP address or hostname:port The connection point for the first renderer

 

maximumcapacity=size in bytes The maximum size of the received message from the render connection. This should be large enough to accommodate the largest rendered document you expect to receive.

 

waittimeout=time in seconds The maximum time (in seconds) to wait for a request reply before a connection is deemed to have timed out. This applies to an individual request, not to the router-to-renderer connection as a whole. The default value is 600.

 

idletimeout=time in seconds The number of seconds without activity after which the router will close a connection to a renderer. The default value is 300.

 

retrycount= The number of times to retry a failed/timed out request on this connection before declaring the connection has failed. The default value is 3.
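As an illustrative reading of these two settings together: with the defaults (waittimeout=600 and retrycount=3), a request that never receives a reply would be retried up to three times, each attempt waiting up to 600 seconds, before the connection itself is declared failed.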

 

 

[connection2]

service=IP address/hostname:port The connection point for the second renderer

 

 

 

Sample Configurations:

Scenario 1:

A Vault Router connected to two renderer instances on the same Vault platform. One renderer is listening on port 6003, the other on port 7003.

[router1]

count=2

debug=0

 

[connection1]

service=vault.render.pvt:6003

 

[connection2]

service=vault.render.pvt:7003

 

Scenario 2:

A Vault Router connected to two renderer instances on the same Vault platform. One renderer is listening on port 6003, the other on port 7003. The expected load is 40 concurrent searches per second with an average response time of 60 msec. The peak load is 80 concurrent searches per second. There can be load as soon as the router is started. The longest render time expected is 1 minute.
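Working through the thread-sizing note above: minthreads comes out to 40 expected concurrent searches + 2 connections + 1 listening port = 43, startthreads of 60 leaves room for about 57 concurrent searches in the first time window (60 - 2 - 1), and waittimeout is set to twice the longest expected render time (2 x 60 s = 120 s). These values appear in the sample configuration below.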

[common1]

# allow for longest render doubled

waittimeout=120

 

[pool1]

# min concurrent searches + connection count + input port

minthreads=43

# max concurrent searches + connection count + input port

maxthreads=86

# allow for 57 concurrent searches in first time window

startthreads=60

 

[router1]

count=2

debug=0

 

[connection1]

service=vault.render.pvt:6003

inherit=common1

 

[connection2]

service=vault.render.pvt:7003

inherit=common1

 

Scenario 3:

A larger Vault environment with:

• 2 e2serverd instances (S1, S2) and 1 e2loaderd instance (L1)

o all connected to the SAN containing the data store

• 6 e2renderd instances

o R1,R2,R3 connect to S1

o R4,R5,R6 connect to S2

• 3 front-end application servers (e.g. Tomcat or JBoss): A1, A2, A3

o some sort of external load balancing in front of the app servers

• 3 e2routerd instances, X1,X2,X3

o one local to each app server (A1 uses X1, A2 uses X2,…)

o each connecting to renderers R1 to R6

Each e2routerd.ini would list all 6 rendering engine hosts:

 

[router1]

count=6

debug=1

 

[connection1]

service=r1.company.pvt:6003

 

[connection2]

service=r2.company.pvt:6003

 

 

[connection6]

service=r6.company.pvt:6003

 

[pool1]

minthreads=30

maxthreads=90

startthreads=30

debug=1

