Native Ports
py-scm is the reference linear-Gaussian implementation, but it is not the
only runtime we maintain. The same continuous reasoning surface is also
available in C#, Java, C++, TypeScript/JavaScript, R, Julia, Go, Rust,
Octave, Swift, Ruby, and Lua ports.
All ports are tested against the same shared fixtures and benchmark harness as the Python reference. Continuous parity is checked on both the committed oracle models and on generated Gaussian BBN sweeps, with deterministic size/topology runs across every implementation.
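As a sketch of what such a parity check looks like (the function name, tolerance, and sweep shape below are illustrative, not the harness's actual API), each port's answer is compared to the oracle's over a deterministic seeded sweep:

```python
import random

def parity_sweep(oracle, port, seeds=range(10), tol=1e-9):
    """Hypothetical sketch: run the same seeded inputs through the oracle
    implementation and a port, and require their answers to agree within
    a numeric tolerance."""
    for seed in seeds:
        rng = random.Random(seed)   # deterministic inputs per seed
        x = rng.uniform(-5.0, 5.0)  # evidence value for this run
        expected, actual = oracle(x), port(x)
        if abs(expected - actual) > tol:
            raise AssertionError(f"seed {seed}: {expected} != {actual}")

# Toy usage: two formulations of the same linear query agree exactly.
parity_sweep(lambda x: 1.5 * x + 2.0, lambda x: 2.0 + x * 1.5)
```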
The maintained continuous surface targets parity for:
- `pquery`
- `iquery`
- `equery`
- `cquery`
- `samples`
- continuous serde
- seeded singly- and multi-connected Gaussian BBN generation
Current port names and namespaces:
- C#: `RocketVector.DarkStar.Continuous`
- Java: `io.rocketvector.darkstar.continuous`
- C++: `rocketvector::darkstar::continuous`
- TypeScript / JavaScript: package `@rocketvector/darkstar` under `darkstar.continuous`
- R: package `darkstar` with S3 model classes and shared query verbs
- Julia: package `Darkstar` with multiple-dispatch model and query functions
- Go: module `darkstar` with package `darkstar`
- Rust: crate `darkstar` with `ScmModel` and typed query payloads
- Octave: package `darkstar` with `darkstar_`-prefixed script functions and C ABI-backed model handles
- Swift: package `Darkstar` with `SCMModel` and Swift error handling
- Ruby: gem `darkstar` with `Darkstar::SCM` and C extension-backed model handles
- Lua: module `darkstar` with C ABI-backed model handles
Code Examples
Each example below builds the same small linear-Gaussian model, then runs associational, interventional, average-causal-effect, and counterfactual queries against it.
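For orientation, the four query verbs can be reproduced for this particular model from first principles with NumPy: `pquery` is ordinary Gaussian conditioning; for the DAG C → X → Y with C → Y, `iquery` and `equery` reduce to backdoor adjustment on C; and with fully observed factual evidence, the counterfactual follows from exact abduction of Y's noise term. This is a math sketch for intuition under those structural assumptions, not the py-scm API, and the variable names are illustrative.

```python
import numpy as np

# The shared example model, in (C, X, Y) order.
mu = np.array([1.0017, 4.9960, 10.5033])
sigma = np.array([
    [0.9907, 2.9799, 6.9569],
    [2.9799, 9.9734, 22.4469],
    [6.9569, 22.4469, 52.1280],
])
C, X, Y = 0, 1, 2

# pquery: condition the joint Gaussian on X = 4 (standard conditioning).
posterior_y = mu[Y] + sigma[Y, X] / sigma[X, X] * (4.0 - mu[X])

# iquery: for C -> X -> Y with C -> Y, do(X = x) is backdoor adjustment
# on C, i.e. the X coefficient from regressing Y on (C, X).
beta = np.linalg.solve(sigma[np.ix_([C, X], [C, X])], sigma[[C, X], Y])
beta_x = beta[1]
interventional_y = mu[Y] + beta_x * (4.0 - mu[X])

# equery: in a linear model, the average causal effect between
# do(X = 5) and do(X = 3) is just beta_x * (5 - 3).
ace = beta_x * (5.0 - 3.0)

# cquery: with C, X, Y all observed, abduction pins down Y's noise term
# exactly, so the counterfactual shifts Y by beta_x times the change in X.
factual_y, factual_x = 10.0, 4.0
counterfactual_y = [factual_y + beta_x * (x - factual_x) for x in (2.0, 6.0)]

print(round(posterior_y, 3))       # ~8.262 (conditioning also picks up C)
print(round(interventional_y, 3))  # ~9.003 (adjustment removes confounding)
print(round(ace, 3))               # ~3.012
print([round(v, 3) for v in counterfactual_y])  # ~[6.988, 13.012]
```

The gap between the associational answer (~8.26) and the interventional one (~9.00) is exactly the confounded contribution of C, which is why the ports expose `pquery` and `iquery` as distinct verbs.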
C#
using RocketVector.DarkStar.Continuous;
var graph = GraphData.Create(
new[] { "C", "X", "Y" },
new[]
{
new[] { "C", "X" },
new[] { "C", "Y" },
new[] { "X", "Y" },
});
var parameters = Parameters.FromPayload(
new[] { "C", "X", "Y" },
new[] { 1.0017, 4.9960, 10.5033 },
new[]
{
new[] { 0.9907, 2.9799, 6.9569 },
new[] { 2.9799, 9.9734, 22.4469 },
new[] { 6.9569, 22.4469, 52.1280 },
});
var model = Reasoning.CreateReasoningModel(graph, parameters);
var posterior = model.Pquery(new Dictionary<string, double> { ["X"] = 4.0 });
var interventional = model.Iquery("Y", new Dictionary<string, double> { ["X"] = 4.0 });
var ace =
model.Equery(
"Y",
new Dictionary<string, double> { ["X"] = 5.0 },
new Dictionary<string, double> { ["X"] = 3.0 });
var counterfactual =
model.Cquery(
"Y",
new Dictionary<string, double> { ["C"] = 1.0, ["X"] = 4.0, ["Y"] = 10.0 },
new[]
{
(IReadOnlyDictionary<string, double>)new Dictionary<string, double> { ["X"] = 2.0 },
new Dictionary<string, double> { ["X"] = 6.0 },
});
Console.WriteLine(posterior.Mean.Get("Y"));
Console.WriteLine(interventional.Mean);
Console.WriteLine(ace.Mean);
Console.WriteLine(counterfactual.Row(0)[0]);
Java
import io.rocketvector.darkstar.continuous.GraphData;
import io.rocketvector.darkstar.continuous.Parameters;
import io.rocketvector.darkstar.continuous.Reasoning;
import java.util.List;
import java.util.Map;
var graph =
new GraphData(
List.of("C", "X", "Y"),
List.of(List.of("C", "X"), List.of("C", "Y"), List.of("X", "Y")));
var parameters =
Parameters.fromPayload(
List.of("C", "X", "Y"),
new double[] {1.0017, 4.9960, 10.5033},
new double[][] {
{0.9907, 2.9799, 6.9569},
{2.9799, 9.9734, 22.4469},
{6.9569, 22.4469, 52.1280}
});
var model = Reasoning.createReasoningModel(graph, parameters);
var posterior = model.pquery(Map.of("X", 4.0));
var interventional = model.iquery("Y", Map.of("X", 4.0));
var ace = model.equery("Y", Map.of("X", 5.0), Map.of("X", 3.0));
var counterfactual =
model.cquery(
"Y",
Map.of("C", 1.0, "X", 4.0, "Y", 10.0),
List.of(Map.of("X", 2.0), Map.of("X", 6.0)));
System.out.println(posterior.mean().get("Y"));
System.out.println(interventional.mean());
System.out.println(ace.mean());
System.out.println(counterfactual.row(0)[0]);
C++
#include <iostream>
#include "continuous.h"
using rocketvector::darkstar::continuous::GraphData;
using rocketvector::darkstar::continuous::Parameters;
using rocketvector::darkstar::continuous::createReasoningModel;
int main() {
  GraphData graph{
      {"C", "X", "Y"},
      {{"C", "X"}, {"C", "Y"}, {"X", "Y"}},
  };
  Parameters parameters{
      {{"C", "X", "Y"}, {1.0017, 4.9960, 10.5033}},
      {{"C", "X", "Y"},
       {
           {0.9907, 2.9799, 6.9569},
           {2.9799, 9.9734, 22.4469},
           {6.9569, 22.4469, 52.1280},
       }},
  };
  auto model = createReasoningModel(graph, parameters);
  auto posterior = model.pquery({{"X", 4.0}});
  auto interventional = model.iquery("Y", {{"X", 4.0}});
  auto ace = model.equery("Y", {{"X", 5.0}}, {{"X", 3.0}});
  auto counterfactual =
      model.cquery("Y", {{"C", 1.0}, {"X", 4.0}, {"Y", 10.0}},
                   {{{"X", 2.0}}, {{"X", 6.0}}});
  std::cout << posterior.mean.get("Y") << '\n';
  std::cout << interventional.mean << '\n';
  std::cout << ace.mean << '\n';
  std::cout << counterfactual.row(0).at(0) << '\n';
  return 0;
}
TypeScript / JavaScript
The TypeScript port compiles to JavaScript for Node, so the same public API is available from either language. The example below is plain modern JavaScript.
import { darkstar } from '@rocketvector/darkstar';
const graph = {
nodes: ['C', 'X', 'Y'],
edges: [['C', 'X'], ['C', 'Y'], ['X', 'Y']],
};
const parameters = {
v: ['C', 'X', 'Y'],
m: [1.0017, 4.9960, 10.5033],
S: [
[0.9907, 2.9799, 6.9569],
[2.9799, 9.9734, 22.4469],
[6.9569, 22.4469, 52.1280],
],
};
const model = darkstar.continuous.createReasoningModel(graph, parameters);
const posterior = model.pquery({ X: 4.0 });
const interventional = model.iquery('Y', { X: 4.0 });
const ace = model.equery('Y', { X: 5.0 }, { X: 3.0 });
const counterfactual = model.cquery(
'Y',
{ C: 1.0, X: 4.0, Y: 10.0 },
[{ X: 2.0 }, { X: 6.0 }]
);
console.log(posterior.mean.get('Y'));
console.log(interventional.mean);
console.log(ace.mean);
console.log(counterfactual.row(0)[0]);
R
library(darkstar)
graph <- scm_graph(
nodes = c("C", "X", "Y"),
edges = data.frame(
parent = c("C", "C", "X"),
child = c("X", "Y", "Y")
)
)
params <- gaussian_params(
mean = c(C = 1.0017, X = 4.9960, Y = 10.5033),
covariance = matrix(
c(
0.9907, 2.9799, 6.9569,
2.9799, 9.9734, 22.4469,
6.9569, 22.4469, 52.1280
),
nrow = 3,
byrow = TRUE,
dimnames = list(c("C", "X", "Y"), c("C", "X", "Y"))
)
)
model <- scm_model(graph, params)
posterior <- pquery(model, observations = c(X = 4.0))
interventional <- iquery(model, target = "Y", intervention = c(X = 4.0))
ace <- equery(model, target = "Y", x1 = c(X = 5.0), x2 = c(X = 3.0))
counterfactual <- cquery(
model,
target = "Y",
factual = c(C = 1.0, X = 4.0, Y = 10.0),
counterfactual = list(c(X = 2.0), c(X = 6.0))
)
posterior$mean[["Y"]]
interventional$mean
ace$mean
counterfactual
Julia
using Darkstar
graph = scm_graph(
["C", "X", "Y"];
edges = [["C", "X"], ["C", "Y"], ["X", "Y"]],
)
params = gaussian_params(
Dict("C" => 1.0017, "X" => 4.9960, "Y" => 10.5033),
[
0.9907 2.9799 6.9569
2.9799 9.9734 22.4469
6.9569 22.4469 52.1280
],
)
model = scm_model(graph, params)
posterior = pquery(model; observations = Dict("X" => 4.0))
interventional = iquery(model; target = "Y", intervention = Dict("X" => 4.0))
ace = equery(model; target = "Y", x1 = Dict("X" => 5.0), x2 = Dict("X" => 3.0))
counterfactual = cquery(
model;
target = "Y",
factual = Dict("C" => 1.0, "X" => 4.0, "Y" => 10.0),
counterfactual = [Dict("X" => 2.0), Dict("X" => 6.0)],
)
println(posterior.mean["Y"])
println(interventional.mean)
println(ace.mean)
println(counterfactual["rows"][1])
Go
package main
import (
"context"
"fmt"
"darkstar"
)
func main() {
ctx := context.Background()
model, err := darkstar.NewSCM(
darkstar.NewSCMGraph(
[]string{"C", "X", "Y"},
[][2]string{{"C", "X"}, {"C", "Y"}, {"X", "Y"}},
),
darkstar.NewGaussianParameters(
[]string{"C", "X", "Y"},
[]float64{1.0017, 4.9960, 10.5033},
[][]float64{
{0.9907, 2.9799, 6.9569},
{2.9799, 9.9734, 22.4469},
{6.9569, 22.4469, 52.1280},
},
),
)
if err != nil {
panic(err)
}
defer model.Close()
posterior, err := model.PQuery(ctx, darkstar.ContinuousQuery{
Observations: map[string]float64{"X": 4.0},
})
if err != nil {
panic(err)
}
interventional, err := model.IQuery(ctx, darkstar.ContinuousIntervention{
Target: "Y",
Intervention: map[string]float64{"X": 4.0},
})
if err != nil {
panic(err)
}
ace, err := model.EQuery(ctx, darkstar.ContinuousEffect{
Target: "Y",
X1: map[string]float64{"X": 5.0},
X2: map[string]float64{"X": 3.0},
})
if err != nil {
panic(err)
}
counterfactual, err := model.CQuery(ctx, darkstar.ContinuousCounterfactual{
Target: "Y",
Factual: map[string]float64{"C": 1.0, "X": 4.0, "Y": 10.0},
Counterfactuals: []map[string]float64{
{"X": 2.0},
{"X": 6.0},
},
})
if err != nil {
panic(err)
}
fmt.Println(posterior.Mean["Y"])
fmt.Println(interventional.Mean)
fmt.Println(ace.Mean)
fmt.Println(counterfactual.Rows[0])
}
Rust
use std::collections::HashMap;
use darkstar::{
gaussian_parameters, graph, ContinuousCounterfactual, ContinuousEffect,
ContinuousIntervention, ContinuousQuery, ScmModel,
};
fn main() -> Result<(), darkstar::DarkstarError> {
let model = ScmModel::new(
graph(
["C", "X", "Y"],
vec![
["C".to_string(), "X".to_string()],
["C".to_string(), "Y".to_string()],
["X".to_string(), "Y".to_string()],
],
),
gaussian_parameters(
vec!["C".to_string(), "X".to_string(), "Y".to_string()],
vec![1.0017, 4.9960, 10.5033],
vec![
vec![0.9907, 2.9799, 6.9569],
vec![2.9799, 9.9734, 22.4469],
vec![6.9569, 22.4469, 52.1280],
],
),
)?;
let posterior = model.pquery(ContinuousQuery {
observations: HashMap::from([("X".to_string(), 4.0)]),
})?;
let interventional = model.iquery(ContinuousIntervention {
target: "Y".to_string(),
intervention: HashMap::from([("X".to_string(), 4.0)]),
})?;
let ace = model.equery(ContinuousEffect {
target: "Y".to_string(),
x1: HashMap::from([("X".to_string(), 5.0)]),
x2: HashMap::from([("X".to_string(), 3.0)]),
})?;
let counterfactual = model.cquery(ContinuousCounterfactual {
target: "Y".to_string(),
factual: HashMap::from([
("C".to_string(), 1.0),
("X".to_string(), 4.0),
("Y".to_string(), 10.0),
]),
counterfactuals: vec![
HashMap::from([("X".to_string(), 2.0)]),
HashMap::from([("X".to_string(), 6.0)]),
],
})?;
println!("{}", posterior.mean["Y"]);
println!("{}", interventional.mean);
println!("{}", ace.mean);
println!("{}", counterfactual.rows[0]);
Ok(())
}
Octave
addpath("inst");
model = darkstar_scm_from_json([
'{"d":{"nodes":["C","X","Y"],"edges":[["C","X"],["C","Y"],["X","Y"]]},' ...
'"p":{"h":["C","X","Y"],"m":[1.0017,4.9960,10.5033],' ...
'"c":[[0.9907,2.9799,6.9569],[2.9799,9.9734,22.4469],[6.9569,22.4469,52.1280]]}}'
]);
posterior = darkstar_pquery(model, struct("observations", struct("X", 4.0)));
interventional = darkstar_iquery(model, struct("target", "Y", "intervention", struct("X", 4.0)));
ace = darkstar_equery(
model,
struct("target", "Y", "x1", struct("X", 5.0), "x2", struct("X", 3.0))
);
counterfactual = darkstar_cquery(
model,
struct(
"target", "Y",
"factual", struct("C", 1.0, "X", 4.0, "Y", 10.0),
"counterfactuals", {{struct("X", 2.0), struct("X", 6.0)}}
)
);
posterior.mean.Y
interventional.mean
ace.mean
counterfactual.rows
darkstar_close(model);
Swift
import Darkstar
let model = try newSCM(
graph: graph(
nodes: ["C", "X", "Y"],
edges: [["C", "X"], ["C", "Y"], ["X", "Y"]]),
parameters: gaussianParameters(
variables: ["C", "X", "Y"],
mean: [1.0017, 4.9960, 10.5033],
covariance: [
[0.9907, 2.9799, 6.9569],
[2.9799, 9.9734, 22.4469],
[6.9569, 22.4469, 52.1280],
]))
defer { model.close() }
let posterior = try model.pquery(["observations": ["X": 4.0]])
let interventional = try model.iquery([
"target": "Y",
"intervention": ["X": 4.0],
])
let ace = try model.equery([
"target": "Y",
"x1": ["X": 5.0],
"x2": ["X": 3.0],
])
let counterfactual = try model.cquery([
"target": "Y",
"factual": ["C": 1.0, "X": 4.0, "Y": 10.0],
"counterfactuals": [["X": 2.0], ["X": 6.0]],
])
print(posterior.mean["Y"] ?? 0)
print(interventional.mean ?? 0)
print(ace.mean ?? 0)
print(counterfactual.rows)
Ruby
require 'darkstar'
model = Darkstar.new_scm(
graph: Darkstar.scm_graph(
nodes: ['C', 'X', 'Y'],
edges: [%w[C X], %w[C Y], %w[X Y]]
),
parameters: Darkstar.gaussian_parameters(
variables: ['C', 'X', 'Y'],
mean: [1.0017, 4.9960, 10.5033],
covariance: [
[0.9907, 2.9799, 6.9569],
[2.9799, 9.9734, 22.4469],
[6.9569, 22.4469, 52.1280]
]
)
)
posterior = model.pquery(observations: { 'X' => 4.0 })
interventional = model.iquery(target: 'Y', intervention: { 'X' => 4.0 })
ace = model.equery(target: 'Y', x1: { 'X' => 5.0 }, x2: { 'X' => 3.0 })
counterfactual = model.cquery(
target: 'Y',
factual: { 'C' => 1.0, 'X' => 4.0, 'Y' => 10.0 },
counterfactual: [{ 'X' => 2.0 }, { 'X' => 6.0 }]
)
puts posterior.mean.fetch('Y')
puts interventional.mean
puts ace.mean
puts counterfactual.rows.first
model.close
Lua
local darkstar = require("darkstar")
local model = darkstar.new_scm({
graph = darkstar.scm_graph({
nodes = { "C", "X", "Y" },
edges = { { "C", "X" }, { "C", "Y" }, { "X", "Y" } },
}),
parameters = darkstar.gaussian_parameters({
variables = { "C", "X", "Y" },
mean = { 1.0017, 4.9960, 10.5033 },
covariance = {
{ 0.9907, 2.9799, 6.9569 },
{ 2.9799, 9.9734, 22.4469 },
{ 6.9569, 22.4469, 52.1280 },
},
}),
})
local posterior = model:pquery({ observations = { X = 4.0 } })
local interventional = model:iquery({ target = "Y", intervention = { X = 4.0 } })
local ace = model:equery({ target = "Y", x1 = { X = 5.0 }, x2 = { X = 3.0 } })
local counterfactual = model:cquery({
target = "Y",
factual = { C = 1.0, X = 4.0, Y = 10.0 },
counterfactuals = { { X = 2.0 }, { X = 6.0 } },
})
print(posterior.mean.Y)
print(interventional.mean)
print(ace.mean)
print(counterfactual.rows[1])
model:close()
Runtime Comparison
The shared continuous benchmark harness runs deterministic Gaussian models
across the maintained implementations and checks every port against the
continuous oracle. The table below uses the shared oracle query suite in both
cold and warm modes.
| Language | Cold ms | Warm ms | vs Python cold |
|---|---|---|---|
| C++ | 0.0036 | 0.0010 | 7.53x |
| Rust | 0.0102 | 0.0151 | 2.67x |
| Ruby | 0.0133 | 0.0281 | 2.04x |
| Go | 0.0138 | 0.0189 | 1.97x |
| TypeScript / JavaScript | 0.0192 | 0.0017 | 1.42x |
| Python | 0.0272 | 0.0012 | 1.00x |
| Swift | 0.0377 | 0.0293 | 0.72x |
| Lua | 0.0446 | 0.0519 | 0.61x |
| C# | 0.0509 | 0.0029 | 0.53x |
| Java | 0.0726 | 0.0172 | 0.37x |
| Octave | 0.0829 | 0.0988 | 0.33x |
| R | 0.2123 | 0.2009 | 0.13x |
| Julia | 4.3262 | 0.0200 | 0.01x |
The oracle suite is intentionally small, so sub-0.01 ms gaps are best treated as local microbenchmark noise. Cold timings are the cleaner portability signal because they include model setup.
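For reference, the "vs Python cold" column is the Python cold time divided by each port's cold time. Recomputing it from the rounded timings in the table reproduces the column to within rounding (the published figures were presumably derived from unrounded timings, so e.g. C++ recomputes as 7.56x rather than 7.53x):

```python
# Recompute the "vs Python cold" ratios from the rounded cold timings above.
cold_ms = {"C++": 0.0036, "Rust": 0.0102, "Python": 0.0272, "Julia": 4.3262}
ratios = {lang: cold_ms["Python"] / t for lang, t in cold_ms.items()}
print(round(ratios["Rust"], 2))   # 2.67, matching the table
print(round(ratios["Julia"], 2))  # 0.01
```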