k4unit in TorQ
This page explains how TorQ k4unit tests work in this repo, what the CSV test headers mean, and how to mock dependencies cleanly.
How TorQ k4unit works
TorQ uses a CSV-driven test runner (k4unit) instead of writing tests as q functions.
Core files in this repo:
- `backend-q/TorQ/tests/k4unit.q`: framework implementation
- `backend-q/TorQ/tests/runtests.q`: loads test directories and executes the suite
- `backend-q/TorQ/tests/order.txt`: test load order
At runtime, k4unit loads rows from CSV files into KUT (test definitions), executes row-by-row, and writes results into KUTR.
Execution flow:
- Load test CSV files.
- Normalize defaults (`lang`, `repeat`, `ms`, `bytes`, etc.).
- Run lifecycle actions (`beforeany`, `beforeeach`, `before`, `run`/`true`/`fail`, `after`, `aftereach`, `afterall`).
- Store outcomes in `KUTR` and failed rows in `KUerr`.
Core tables: KUT and KUTR
The two main in-memory tables used by k4unit are:
- `KUT`: the loaded test definitions
- `KUTR`: the executed test results
KUT
KUT is populated from one or more CSV files before the suite runs.
It answers:
- What tests were loaded?
- Which file did each test come from?
- What code will be executed?
- What runtime or memory limits apply?
Columns in KUT:
- `action`: lifecycle/test action such as `before`, `run`, `true`, `fail`, `after`
- `ms`: runtime budget in milliseconds
- `bytes`: memory budget in bytes
- `lang`: execution language, usually `q`
- `code`: q or k expression to execute
- `repeat`: repetition count
- `minver`: minimum supported `.z.K`
- `file`: source CSV file path
- `comment`: optional human-readable note
Typical inspection queries:
show KUT
select from KUT where file like "*feed_ws*"
select count i by action from KUT
KUTR
KUTR is the result table populated after execution.
It answers:
- Which tests passed or failed?
- Did they stay within runtime and memory budgets?
- Was the test itself valid q/k code?
- Which CSV row produced the result?
Columns in KUTR:
- `action`: copied from `KUT`
- `ms`: configured runtime budget
- `bytes`: configured memory budget
- `lang`: execution language
- `code`: executed expression
- `repeat`: repetition count used for execution
- `file`: source CSV file path
- `msx`: measured execution time
- `bytesx`: measured memory use
- `ok`: overall pass/fail for the test row
- `okms`: whether runtime stayed within `ms`
- `okbytes`: whether memory stayed within `bytes`
- `valid`: whether execution was valid for the declared action
- `timestamp`: execution timestamp
- `csvline`: source line number in the CSV file
Typical inspection queries:
show KUTR
select from KUTR where not ok
select from KUTR where not okms
select from KUTR where not okbytes
select from KUTR where not valid
select count i by ok,action from KUTR
select count i by file,ok from KUTR
CSV header columns
Your CSV test files must include headers expected by k4unit.q.
Required practical columns:
- `action`: test phase (`beforeany`, `beforeeach`, `before`, `run`, `true`, `fail`, `after`, `aftereach`, `afterall`)
- `code`: q expression to execute
Commonly used columns:
- `lang`: `q` or `k` (defaults to `q`)
- `ms`: max runtime budget in milliseconds for `run`
- `bytes`: max memory budget for `run`
- `repeat`: number of repetitions for the code
- `minver`: minimum kdb+ version (`.z.K`) to run the test
- `comment`: free-form description
Example header row:
action,ms,bytes,lang,code,repeat,minver,comment
Minimal valid row set:
action,lang,code,comment
before,q,"x:41",setup
true,q,"42~x+1",simple assertion
after,q,"delete x from `.",cleanup
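The budget columns matter mainly for `run` rows. Here is a sketch using the full header shown above; the fixture, budget values, and `minver` are illustrative, not prescriptive:

```csv
action,ms,bytes,lang,code,repeat,minver,comment
before,0,0,q,"v::til 1000000",1,2.6,build fixture vector
run,100,50000000,q,"r::sum v",3,2.6,must finish within 100ms and 50MB
true,0,0,q,"499999500000=r",1,2.6,check computed value
after,0,0,q,"delete v,r from `.",1,2.6,cleanup
```

The `run` row enforces the performance budgets, while the following `true` row checks the value it produced, keeping correctness and performance assertions separate.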
Action semantics
- `run`: execute code and check performance budgets (`ms`, `bytes`) if provided
- `true`: expression must evaluate to `1b`
- `fail`: expression must throw
- `before*` and `after*`: setup and teardown hooks
Use `before`/`after` (file-scoped) for local fixtures and `beforeeach`/`aftereach` for shared environment reset.
Running tests
From TorQ tests folder:
cd backend-q/TorQ/tests
q runtests.q -test ./stp -runtime 0 -testresults ./logs
Run a single CSV directly in an interactive q session:
\l k4unit.q
KUltf `:my_test.csv
KUrt[]
show KUerr
show KUTR
Interactive debug mode for TorQApp tests
The existing feed_ws runner already supports an interactive debug mode. This is the easiest way to open a q console with the same TorQ test wiring that the script uses during normal execution.
Open an interactive q session with the test setup loaded:
cd /opt/backend-q/deploy/TorQApp/latest/tests/feed_ws
bash run.sh -q -d
What this does:
- Loads TorQ in test mode with the local test folder passed as `-test`
- Loads shared helpers and local test settings: `${KDBTESTS}/helperfunctions.q` and `settings.q`
- Keeps the process interactive because `-d` maps to `-debug` in `run.sh`
- Disables `feed_ws.q` autostart side effects through `FEEDWS_NOSTART=1` and `FEEDWS_NOCONNECT=1`
If you only want one-shot execution with no interactive console:
cd /opt/backend-q/deploy/TorQApp/latest/tests/feed_ws
bash run.sh -q
The local runner script is:
backend-q/TorQ-Finance-Starter-Pack/tests/feed_ws/run.sh
It builds a TorQ command equivalent to:
q "$TORQHOME/torq.q" \
-proctype test \
-procname test_feed_ws \
-test "$SCRIPT_DIR" \
-load "$KDBTESTS/helperfunctions.q" "$SCRIPT_DIR/settings.q"
When one-shot runs fail
If `bash run.sh -q` exits with code 1, check the redirected error log mentioned in the command output. In practice, look under the local test log directory, for example `logs/`.
If needed, the runner can be extended later to print the newest error log automatically after a failed run.
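One way to sketch that extension is a small wrapper around the one-shot run; the `run.sh` invocation and the local `logs` directory are assumptions taken from the layout above:

```shell
#!/bin/sh
# Sketch: run the one-shot suite and surface the newest log on failure.
# Assumes run.sh and a local "logs" directory as described above.

# newest_log DIR -> name of most recently modified file in DIR (empty if none)
newest_log() {
  ls -1t "$1" 2>/dev/null | head -n 1
}

run_and_report() {
  if bash run.sh -q; then
    echo "tests passed"
  else
    echo "tests failed" >&2
    latest="$(newest_log logs)"
    # print the newest error log, if any exists
    [ -n "$latest" ] && cat "logs/$latest"
    return 1
  fi
}
```

Calling `run_and_report` from the test folder then prints the freshest log only when the run fails.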
Rerun tests from inside the same q debug session
If you already started debug mode (for example with bash run.sh -q -d), you can rerun tests without restarting the process.
1) Reload changed files
\l finsym.q
\l settings.q
2) Clear previous test state
KUT:0#KUT
KUTR:0#KUTR
3) Reload tests from active -test directories and execute
KUltd each hsym each .proc.params`test
KUrt[]
4) Show only failures
select from KUTR where not ok
If you only want a single test file instead of reloading all test directories, use:
KUltf hsym `$"/opt/backend-q/deploy/TorQApp/latest/tests/finsym/test.csv"
KUrt[]
Mocking in k4unit
Because k4unit executes raw q expressions, mocking is done by rebinding symbols/functions during setup and restoring them in teardown.
Pattern 1: Mock a function and restore it
action,lang,code,comment
before,q,"orig_gettph::.feedws.gettph",save original
before,q,".feedws.gettph:{[] 123i}",mock tph handle
true,q,"123i~.feedws.gettph[]",assert mock
after,q,".feedws.gettph::orig_gettph",restore
after,q,"delete orig_gettph from `.",cleanup temp symbol
Pattern 2: Mock external service lookup
For TorQ code that calls .servers.gethandlebytype, replace it temporarily:
action,lang,code,comment
before,q,"orig_gethandle::.servers.gethandlebytype",save
before,q,".servers.gethandlebytype:{[t;n] 6000i}",mock handle lookup
true,q,"6000i~.servers.gethandlebytype[`segmentedtickerplant;`any]",assert
after,q,".servers.gethandlebytype::orig_gethandle",restore
Pattern 3: Mock filesystem input (for CSV symbol tests)
Avoid touching production files. Create a temporary file and test against it:
action,lang,code,comment
before,q,"tmpf::`:./tmp_feed_symbols.csv",tmp file path
before,q,"tmpf 0: (""ticker,nasdaq100,sp500"";""AAPL,1,1"";""MSFT,1,0"")",write fixture
true,q,"`AAPL`MSFT~.feedws.readcsvsymbols tmpf",validate parser
after,q,"system ""rm -f ./tmp_feed_symbols.csv""",remove fixture
after,q,"delete tmpf from `.",cleanup
Pattern 4: Reset mutable globals between tests
If code under test mutates globals (lasttrade, lastquote, caches), reset them explicitly in beforeeach:
action,lang,code,comment
beforeeach,q,".feedws.lasttrade::(`$())!`timestamp$()",reset trade throttle state
beforeeach,q,".feedws.lastquote::(`$())!`timestamp$()",reset quote throttle state
beforeeach,q,".feedws.lastbid::(`$())!`float$()",reset bid cache
beforeeach,q,".feedws.lastask::(`$())!`float$()",reset ask cache
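A matching `true` row can confirm the reset actually ran before the code under test mutates state again; this sketch reuses the globals from the rows above:

```csv
action,lang,code,comment
true,q,"0=count .feedws.lasttrade",throttle state empty after reset
true,q,"99h=type .feedws.lastbid",bid cache is still a dictionary
```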
Practical mocking advice
- Keep mocks local to one CSV file unless truly shared.
- Always restore overridden functions in `after`/`aftereach`.
- Prefer deterministic fixtures (explicit temp files and exact values).
- Avoid network and real process dependencies in unit tests.
- Use `true` for correctness checks and `run` only when validating performance constraints.
Debugging failing tests
Useful result tables after KUrt[]:
- `KUerr`: failed checks
- `KUinvalid`: execution/parse errors
- `KUTR`: full run history including `msx` and `bytesx`
Typical workflow:
show KUerr
select from KUTR where not ok
select from KUTR where not valid
How to verify a test run
The fastest way to verify that a k4unit run succeeded is to inspect KUTR and the derived error views.
In an interactive q session
After KUrt[], check:
/ high-level summary
select count i by ok,action from KUTR
/ any failing checks
show KUerr
/ invalid code or setup failures
show KUinvalid
/ performance failures
show KUslow
show KUbig
Good run expectations:
- `KUerr` is empty
- `KUinvalid` is empty
- `select count i by ok from KUTR` shows only `ok=1b`
- `KUslow` and `KUbig` are empty if you are enforcing budgets
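Since `ok` is a boolean column, these expectations collapse into a single flag; note this sketch returns `1b` for an empty `KUTR` too, so check that rows were actually executed:

```q
/ 1b exactly when every executed row passed
all KUTR`ok
```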
In one-shot mode
When using bash run.sh -q, verify success by checking:
- process exit code is `0`
- no rows appear in `KUerr`/`KUinvalid` if you rerun interactively
- any generated logs or result files do not show failed assertions
Useful summary queries when debugging a failed one-shot run:
select count i by file,action,ok from KUTR
select file,csvline,action,code from KUTR where not ok
The second query is useful when you need to locate the exact failing CSV row quickly.
Suggested layout for new TorQApp tests
For app-specific tests (for example feed_ws.q), keep CSV tests under a dedicated folder and run through existing TorQ runner conventions.
Example:
backend-q/TorQ/tests/feedws/feedws.csv
This keeps app tests isolated while still using the same k4unit machinery as the rest of TorQ.
