Feed Handler Design (TorQ + EOD Websocket)
This document describes the feed handler design, both as currently implemented and as planned, for EOD websocket ingestion into TorQ.
Goals
- Ingest live trade and quote ticks from EOD websocket endpoints.
- Publish normalized updates into the TorQ `trade` and `quote` tables.
- Preserve the provider event timestamp (`t`) in a side-channel table.
- Support scaling from small symbol sets to ~600 symbols.
Current Implementation
Primary script: `backend-q/TorQ-Finance-Starter-Pack/code/tick/feed_ws.q`
Current endpoints:
- Trades: `wss://ws.eodhistoricaldata.com/ws/us?api_token=...`
- Quotes: `wss://ws.eodhistoricaldata.com/ws/us-quote?api_token=...`
Connection model
- A single q process can open both websocket client connections.
- Separate handles are tracked:
- trade handle
- quote handle
- Incoming messages are routed by the `.z.w` handle (see the sketch below).
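A minimal sketch of this model, assuming a TLS-enabled q process and hypothetical callback names `onTrade` and `onQuote` (the real logic lives in `feed_ws.q`). In q, `.z.ws` is the websocket message handler and `.z.w` identifies the handle a message arrived on:

```q
/ open both client sockets; the websocket-open call returns (handle; HTTP response)
/ api_token elided here, as in the endpoint URLs above
tradeh:first(`$":wss://ws.eodhistoricaldata.com:443")"GET /ws/us?api_token=... HTTP/1.1\r\nHost: ws.eodhistoricaldata.com\r\n\r\n";
quoteh:first(`$":wss://ws.eodhistoricaldata.com:443")"GET /ws/us-quote?api_token=... HTTP/1.1\r\nHost: ws.eodhistoricaldata.com\r\n\r\n";

/ route each incoming message by comparing .z.w to the tracked handles
.z.ws:{[msg]
  $[.z.w=tradeh;onTrade msg;       / hypothetical trade callback
    .z.w=quoteh;onQuote msg;       / hypothetical quote callback
    .lg.o[`feedws;"message on unknown handle"]]};
```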
Authorization and subscription flow
For each socket independently:
- Connect and complete HTTP->WebSocket upgrade.
- Wait for `{"status_code":200,"message":"Authorized"}`.
- Send the subscribe payload: `{"action":"subscribe","symbols":"AMZN,TSLA"}`.
Publish model
- Trade messages are published via `.u.upd` to `trade`.
- Quote messages are published via `.u.upd` to `quote`.
- Side-channel rows are published via `.u.upd` to `eodwsraw`.
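A minimal sketch of the publish calls, assuming `tph` is an already-open tickerplant handle (in TorQ this would typically come via `.servers`) and that each row list matches the column order of the target schema:

```q
/ x is the row data (a list of columns) matching each table's schema
pubTrade:{neg[tph](".u.upd";`trade;x)};
pubQuote:{neg[tph](".u.upd";`quote;x)};
pubRaw:{neg[tph](".u.upd";`eodwsraw;x)};
```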
Data Mapping
Trade mapping
Input (example):
{"s":"TSLA","p":387.66,"v":53,"dp":false,"ms":"extended-hours","t":1778016703232}
Published fields:
- `sym` <- `s`
- `price` <- `p`
- `size` <- `v`
- `stop` <- `dp`
- `ex` <- first char of `ms`
- `cond` <- `" "`
- `side` <- `` `unknown ``
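A sketch of this mapping, assuming the payload was parsed with `.j.k` (numbers arrive as floats, `dp` as a boolean) and that the column order below matches the `trade` schema; the provider `t` goes to the side channel rather than this row:

```q
mapTrade:{[d]                / d: dictionary from .j.k
  (`$d`s;                    / sym   <- s
   "f"$d`p;                  / price <- p
   `long$d`v;                / size  <- v
   d`dp;                     / stop  <- dp (boolean)
   first d`ms;               / ex    <- first char of ms
   " ";                      / cond  <- blank
   `unknown)};               / side  <- `unknown
```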
Quote mapping
Confirmed /ws/us-quote payload:
```json
{
  "s":"AAPL",
  "ap":227.33,
  "as":200,
  "bp":227.30,
  "bs":100,
  "t":1725198451165
}
```
Published fields:
- `sym` <- `s`
- `bid` <- `bp` (fallback aliases supported)
- `ask` <- `ap` (fallback aliases supported)
- `bsize` <- `bs`
- `asize` <- `as`
- `mode` <- fallback/derived
- `ex` <- fallback/derived
- `src` <- `` `eod ``
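A sketch of this mapping; `alias` is a hypothetical helper standing in for the fallback-alias support, and the blank `mode`/`ex` placeholders are illustrative:

```q
/ hypothetical helper: value of the first alias present in the payload, else null
alias:{[d;ks]$[count k:ks where ks in key d;d first k;0n]};

mapQuote:{[d]
  (`$d`s;                    / sym   <- s
   alias[d;`bp`bid];         / bid   <- bp (fallback aliases)
   alias[d;`ap`ask];         / ask   <- ap (fallback aliases)
   `long$d`bs;               / bsize <- bs
   `long$d`as;               / asize <- as
   " ";                      / mode  <- fallback/derived (blank here)
   " ";                      / ex    <- fallback/derived (blank here)
   `eod)};                   / src   <- `eod
```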
Side-channel mapping (eodwsraw)
Table added in `backend-q/TorQ-Finance-Starter-Pack/database.q`.
Purpose:
- Preserve provider timestamp exactly while still supporting standard TorQ table updates.
Core fields:
- `time` (ingest timestamp)
- `providerts` (from provider `t`)
- `sym`, `kind`
- normalized price/size fields
- `raw` (payload text)
- `src`
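A sketch of what the schema could look like (the authoritative definition is in `database.q`; the exact columns here are illustrative), plus the epoch-ms conversion for `providerts`:

```q
eodwsraw:([]
  time:`timestamp$();          / ingest timestamp
  sym:`symbol$();
  kind:`symbol$();             / `trade or `quote
  providerts:`timestamp$();    / provider t, converted from epoch ms
  price:`float$();             / normalized price
  size:`long$();               / normalized size
  raw:();                      / payload text
  src:`symbol$());

/ epoch milliseconds -> kdb+ timestamp (timestamps count nanoseconds)
msToTs:{1970.01.01D00:00:00.000000000+1000000*`long$x};
```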
Error Handling Strategy
- Parse and callback processing is wrapped in traps.
- Log messages are flattened to char vectors before `.lg.*` calls.
- Connection retries are currently manual via process restart (future: automatic reconnect loops with backoff).
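A sketch of the trap pattern, using q's trap-at `.[f;args;errhandler]` and TorQ's `.lg.e`; `handleData` is the hypothetical downstream callback:

```q
safeParse:{[msg]
  .[{handleData .j.k x};       / parse, then hand off (handleData is hypothetical)
    enlist msg;
    / error handler: flatten to a plain char vector before logging
    {.lg.e[`feedws;raze("ws callback failed: ";x)]}]};
```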
SSL/TLS Notes
- The q websocket client must run with a valid CA bundle.
- The local setup uses prepared files under `docker_volume/backend-q/ssl`, mounted to `/usr/lib/ssl`.
- A deploy-aware setenv avoids stale CA download behavior.
Related scripts:
- `backend-q/docker/prepare_q_tls.sh`
- `backend-q/docker/setenv-deploy.sh`
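One way to verify the TLS setup from inside the q process, using the standard kdb+ CA environment variables and the built-in `(-26!)[]` SSL-info call (these names are standard kdb+, not project-specific):

```q
show getenv`SSL_CA_CERT_FILE   / should point at the prepared CA bundle
show getenv`SSL_CA_CERT_PATH   / e.g. the mounted /usr/lib/ssl directory
show (-26!)[]                  / TLS settings the process actually loaded
```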
Scaling Design for ~600 Tickers
Provider limit:
- 50 tickers per connection.
Recommended topology
- Minimum sockets: `ceil(600/50) = 12` (see the one-liner after this list).
- Recommended: 14-16 sockets for headroom.
- Use separate socket pools per stream type:
- trades pool
- quotes pool
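The socket arithmetic from the list above, as a q one-liner (per stream type: each pool needs its own minimum):

```q
ceiling 600%50   / minimum sockets per pool at 50 tickers each -> 12
```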
Process model
Prefer multiple feed processes over one:
- Example:
- process A: trade shards 1-4, quote shards 1-4
- process B: trade shards 5-8, quote shards 5-8
- process C: trade shards 9-12 (+headroom), quote shards 9-12 (+headroom)
Benefits:
- smaller blast radius
- easier restarts
- better CPU utilization and observability
Sharding
- Deterministic symbol-to-shard mapping (hash mod shard count; see the sketch after this list).
- Stable mapping across reconnects to reduce churn.
- Keep shard registry in config for deterministic operations.
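A sketch of one deterministic mapping, using the first 8 bytes of `md5` reduced mod the shard count so assignments stay stable across restarts and processes; `shardcount` would come from the shard registry in config (12 here, per the minimum above):

```q
shardcount:12;   / from config; per-pool minimum derived above

/ symbol -> shard id in 0..shardcount-1; md5 keeps the mapping stable across processes
shardof:{(0x0 sv 8#md5 string x)mod shardcount};

/ group a symbol universe into per-shard subscribe lists
shards:{x group shardof each x};
```

`shards` then yields one symbol list per shard id, ready to be joined (`","sv string ...`) into the `symbols` field of each subscribe payload.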
Reliability controls
- Per-socket reconnect with exponential backoff + jitter (see the sketch after this list).
- Socket health watchdog (last-message age).
- Auto-resubscribe after authorization.
- Metrics per shard:
- connected state
- reconnect count
- msg/sec
- provider->ingest latency (`time - providerts`)
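A sketch of the backoff and watchdog pieces, with hypothetical state dictionaries (`lastmsg`, `attempts`) keyed by handle; the actual reconnect call is elided:

```q
lastmsg:(`int$())!`timestamp$();   / handle -> time of last message
attempts:(`int$())!`long$();       / handle -> consecutive failures

/ exponential backoff in seconds: capped at 60, plus up to 1s of jitter
backoff:{(60&`long$2 xexp x)+first 1?1f};

/ watchdog: flag any socket silent for more than 60s (reconnect call elided)
.z.ts:{
  stale:where .z.p>lastmsg+0D00:01:00;
  if[count stale;.lg.o[`feedws;"stale shards: "," "sv string stale]]};
\t 5000   / run the watchdog every 5 seconds
```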
Next Iteration Suggestions
- Add automatic reconnect loop per socket shard.
- Add symbol-sharding utility in q config.
- Add monitoring alerts on stale shard streams.
- Add an optional dedupe key (`sym` + `providerts` + `kind`) in the side channel for replay checks.
- Add a replay utility from `eodwsraw` into `trade`/`quote` for backfill experiments (a sketch follows).
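A sketch of what that replay could look like, assuming the illustrative `eodwsraw` schema above and a tickerplant handle `tph`; none of these names are the shipped utility:

```q
replayTrades:{[]
  / dedupe on the suggested key: `by` keeps one row per (sym;providerts;kind)
  r:0!select by sym,providerts,kind from eodwsraw where kind=`trade;
  r:`providerts xasc r;
  / republish; the column list is abbreviated and would need the full trade schema
  neg[tph](".u.upd";`trade;(r`sym;r`price;r`size))};
```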
