02-webppl (3 atoms)
dippl-02-webppl/atom-1
prompt
system base instructions
You are a WebPPL code generator. Given an exercise, produce a single WebPPL program that binds the answer to a top-level variable named `ANSWER`.
Answer format (strict): emit exactly one fenced code block.
```js
<your WebPPL program ending with: var ANSWER = <expression>;>
```
The last statement of your program MUST be `var ANSWER = <expression>;` where `<expression>` is the answer the prompt asks for - typically an `Infer({...}, model)` for a distribution, a numeric/array value, or an object literal `{key: value, ...}` for a record of multiple sub-answers.
Do not write prose, explanations, or multiple code blocks. Do not use `return` at the top level - WebPPL doesn't allow it (`return` is only valid inside function bodies).
system WebPPL primer
WebPPL is a probabilistic programming language with JavaScript-like syntax. A few quirks to remember:
Control flow: WebPPL is a functional subset of JavaScript. There are no `for` or `while` loops. Iterate with `map(fn, list)`, `mapData({data: list}, fn)`, `repeat(n, fn)`, or recursion. `_.range(start, end)` produces integer ranges.
Functions: define with `var f = function(args) { ...; return value; }`. The last expression of a function is auto-returned only if it is a bare expression statement; otherwise use explicit `return`. Memoize with `mem(fn)` so repeated calls with the same arguments return the same value within an inference run.
Random primitives - lowercase samples directly, uppercase constructs a Distribution object. They are NOT interchangeable:
flip(p) -> boolean (sample, no `sample()` needed)
uniform(a, b) -> number (sample)
gaussian(mu, sg) -> number (sample)
beta(a, b) -> number (sample)
dirichlet(alpha) -> tensor (sample; alpha must be a Vector)
randomInteger(n) -> int 0..n-1 (sample)
uniformDrift({a, b, width}) -> sample (drift kernel; do NOT wrap in sample())
dirichletDrift({alpha, concentration}) -> sample (drift kernel; do NOT wrap in sample())
Distribution *constructors* (used with `sample(D)`, `observe(D, val)`, or as return value of inference):
Bernoulli({p}) Beta({a, b}) Gaussian({mu, sigma})
Uniform({a, b}) Categorical({vs, ps}) Binomial({p, n})
Dirichlet({alpha}) Multinomial({ps, n}) Poisson({mu})
Common gotchas:
- `Dirichlet({alpha: ...})` requires a Vector, not a JS array. Use `ones([n, 1])` or `Vector([1, 1, ...])`.
- WebPPL only supports `var`, not `let` / `const`.
- WebPPL is *single-assignment*: `var X = ...;` only. You can't declare `var X;` and assign later, and you can't reassign `X = ...` after the declaration. Use ternaries or recursion to express conditional bindings.
- Array methods like `.fill`, `.indexOf`, `.map`, `.forEach`, `.concat` may fail. Prefer `repeat(n, fn)`, `_.indexOf(arr, x)`, `map(fn, arr)`, `mapData({data: arr}, fn)`, and `arr1.concat(arr2)` only at the top of a returned expression.
- Always end top-level statements with `;`. WebPPL inherits JS ASI rules, so `var x = f()
[a, b]` parses as `var x = f()[a, b]` (subscript), not two statements.
Inference: `Infer({method: ..., samples: N, ...}, modelFn)` runs `modelFn` under the chosen method and returns a Distribution over its return value. Methods: `'enumerate'`, `'rejection'`, `'forward'`, `'MCMC'`, `'SMC'`. For MCMC, optional kernels: `kernel: {HMC: {steps, stepSize}}`; the drift kernels above can replace `uniform`/`dirichlet` calls inside the model.
Conditioning: `condition(bool)` zeros out worlds where bool is false. `observe(dist, value)` factors in `dist.score(value)`. `factor(score)` adds `score` to the log-probability directly.
Tensors / utilities: `Vector([a, b, ...])`, `T.get(vec, i)`, `ones([rows, cols])`, `_.range`, `_.flatten`, `_.zipObject`, `_.fromPairs`, `_.includes`, `_.parseInt`, `_.uniq`, `_.merge`. Arrays use `Array.isArray`, `Object.keys` works on plain objects.
Display: there's no `viz`, `print`, `display`, etc. that affects the answer - those are browser-only. The answer is the value of your program's last expression.
user message
In WebPPL, define a function geometric(p) that returns a sample from the geometric distribution using only flip. The geometric distribution here is defined so that geometric(p) returns 1 plus a geometrically-distributed count: it returns 1 with probability p, and otherwise returns 1 plus a recursive call. Call geometric(0.5) as the final expression. End your program with var ANSWER = geometric(0.5);.
groundtruth code
var geometric = function(p) {
  return flip(p) ? 1 + geometric(p) : 1
}
var ANSWER = (geometric(0.5));
no run
groundtruth output
2
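The groundtruth output above is a single random draw, but the recursion itself pins down the distribution: geometric(p) returns 1 with probability p and otherwise 1 plus an independent recursive draw, so P(k) = (1 - p)^(k - 1) * p for k >= 1. A quick sanity check in plain JavaScript (ordinary JS, not WebPPL, since flip is a WebPPL primitive):

```js
// Pmf implied by the recursive sampler: the first k - 1 flips come up
// false (prob 1 - p each) and the k-th comes up true (prob p).
function geometricPmf(p, k) {
  return Math.pow(1 - p, k - 1) * p;
}

geometricPmf(0.5, 1); // 0.5  -- returns 1 immediately half the time
geometricPmf(0.5, 3); // 0.125
```

For p = 0.5 the expected return value is 1/p = 2, consistent with the sampled output above.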
dippl-02-webppl/atom-2
prompt
(system base instructions and WebPPL primer: identical to dippl-02-webppl/atom-1 above)
user message
In WebPPL, model the number of heads in three independent fair coin flips. Define a function binomial (no arguments) that samples three Bernoulli(0.5) values a, b, c and returns a + b + c. Then compute the marginal distribution using Infer({ model: binomial }) and assign it to binomialDist. End your program with var ANSWER = binomialDist;.
groundtruth code
var binomial = function() {
  var a = sample(Bernoulli({ p: 0.5 }))
  var b = sample(Bernoulli({ p: 0.5 }))
  var c = sample(Bernoulli({ p: 0.5 }))
  return a + b + c
}
var binomialDist = Infer({ model: binomial })
viz(binomialDist)
var ANSWER = (binomialDist);
no run
groundtruth output
1  0.3750
2  0.3750
0  0.1250
3  0.1250
raw JSON
{
"__kind": "distribution",
"probs": [
0.12500000000000003,
0.3750000000000001,
0.3750000000000001,
0.12500000000000003
],
"support": [
0,
1,
2,
3
]
}
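The probabilities in the raw JSON can be reproduced by brute-force enumeration of the eight equally likely (a, b, c) worlds; this plain-JavaScript sketch (not WebPPL) mirrors what Infer's enumeration does for this model:

```js
// Each of the 2^3 outcomes has probability 1/8; tally them by sum.
function binomialExact() {
  var probs = [0, 0, 0, 0];
  for (var a = 0; a <= 1; a++) {
    for (var b = 0; b <= 1; b++) {
      for (var c = 0; c <= 1; c++) {
        probs[a + b + c] += 1 / 8;
      }
    }
  }
  return probs; // [0.125, 0.375, 0.375, 0.125]
}
```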
dippl-02-webppl/atom-3
prompt
(system base instructions and WebPPL primer: identical to dippl-02-webppl/atom-1 above)
user message
In WebPPL, define a function funnyBinomial (no arguments) that samples three independent Bernoulli(0.5) variables a, b, c, applies factor((a || b) ? 0 : -2) to downweight executions where neither a nor b is true, and returns a + b + c. Compute its marginal distribution with Infer({ model: funnyBinomial }) and assign it to funnyBinomialDist. End your program with var ANSWER = funnyBinomialDist;.
groundtruth code
var funnyBinomial = function(){
  var a = sample(Bernoulli({ p: 0.5 }))
  var b = sample(Bernoulli({ p: 0.5 }))
  var c = sample(Bernoulli({ p: 0.5 }))
  factor((a || b) ? 0 : -2)
  return a + b + c
}
var funnyBinomialDist = Infer({ model: funnyBinomial })
viz(funnyBinomialDist)
var ANSWER = (funnyBinomialDist);
no run
groundtruth output
2  0.4784
1  0.3405
3  0.1595
0  0.0216
raw JSON
{
"__kind": "distribution",
"probs": [
0.021582266489998118,
0.34052742216333254,
0.47841773351000183,
0.15947257783666724
],
"support": [
0,
1,
2,
3
]
}
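The exact probabilities in the raw JSON follow from how factor works: factor(s) multiplies a world's unnormalized probability by exp(s), so the two worlds with a = b = false keep only an exp(-2) share of their prior 1/8 mass. A plain-JavaScript check (not WebPPL):

```js
// Weighted enumeration: prior 1/8 per world, times exp(-2) when !(a || b),
// then normalize by the total weight.
function funnyBinomialExact() {
  var weights = [0, 0, 0, 0];
  var total = 0;
  for (var a = 0; a <= 1; a++) {
    for (var b = 0; b <= 1; b++) {
      for (var c = 0; c <= 1; c++) {
        var w = (1 / 8) * ((a || b) ? 1 : Math.exp(-2));
        weights[a + b + c] += w;
        total += w;
      }
    }
  }
  return weights.map(function(w) { return w / total; });
}
```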
03-enumeration (3 atoms)
dippl-03-enumeration/atom-1
prompt
(system base instructions and WebPPL primer: identical to dippl-02-webppl/atom-1 above)
user message
Rewrite the following recursive factorial function in continuation-passing style (CPS). In CPS, functions never return; instead they call a continuation k with the value they would have returned.
Here is the original factorial:
var factorial = function(n) {
  if (n == 0) {
    return 1;
  } else {
    return factorial(n-1) * n;
  }
}
Write a WebPPL function cpsFactorial(k, n) in CPS such that it calls k with n! rather than returning it. Then call cpsFactorial(print, 5) as the final expression. End your program with var ANSWER = cpsFactorial(print, 5);.
groundtruth code
var cpsFactorial = function(k, n) {
  if (n == 0) {
    k(1);
  } else {
    cpsFactorial(
      function(x){ k(x * n) },
      n - 1);
  }
}
cpsFactorial(print, 5)
var ANSWER = (cpsFactorial(function(x){return x;}, 5));
no run
groundtruth output
120
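The same CPS transform works in ordinary JavaScript, which makes it easy to run outside WebPPL; here the continuation receives the value a direct-style factorial would have returned (plain-JS sketch, with console.log standing in for WebPPL's print):

```js
// CPS: no call returns a useful value; results flow through k.
function cpsFactorial(k, n) {
  if (n === 0) {
    k(1);
  } else {
    // Recurse with a continuation that finishes this frame's work (* n).
    cpsFactorial(function(x) { k(x * n); }, n - 1);
  }
}

cpsFactorial(function(x) { console.log(x); }, 5); // prints 120
```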
dippl-03-enumeration/atom-2
prompt
(system base instructions and WebPPL primer: identical to dippl-02-webppl/atom-1 above)
user message
Extend a CPS factorial function to handle negative inputs using an error continuation. Write a WebPPL function totalCpsFactorial(k, err, n) in continuation-passing style where:
- If n < 0, call err with an error message string
- If n == 0, call k with 1
- Otherwise, recurse: call totalCpsFactorial with a continuation that multiplies the result by n, passing along err and n - 1
Also define var printError = function(x){ print("Error: " + x); }. Then call totalCpsFactorial(print, printError, 5) and totalCpsFactorial(print, printError, -1) as the final two statements. End your program with var ANSWER = totalCpsFactorial(print, printError, -1); as the last binding (the error-path call).
groundtruth code
var totalCpsFactorial = function(k, err, n) {
  if (n < 0) {
    err("cpsFactorial: n < 0!")
  } else if (n == 0) {
    k(1);
  } else {
    totalCpsFactorial(
      function(x){ k(x * n) },
      err,
      n - 1);
  }
}
var printError = function(x){
  print("Error: " + x);
}
totalCpsFactorial(print, printError, 5)
totalCpsFactorial(print, printError, -1)
var ANSWER = (totalCpsFactorial(function(x){return x;}, function(e){return 'Error: ' + e;}, -1));
no run
groundtruth output
"Error: cpsFactorial: n < 0!"
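The error continuation threads through every recursive call unchanged until the n < 0 guard fires. A plain-JavaScript version (not WebPPL) exercises both paths:

```js
// CPS with two continuations: k for success, err for failure.
function totalCpsFactorial(k, err, n) {
  if (n < 0) {
    err("cpsFactorial: n < 0!");
  } else if (n === 0) {
    k(1);
  } else {
    totalCpsFactorial(function(x) { k(x * n); }, err, n - 1);
  }
}

totalCpsFactorial(console.log, console.log, 5);  // prints 120
totalCpsFactorial(console.log,
                  function(e) { console.log("Error: " + e); },
                  -1);  // prints Error: cpsFactorial: n < 0!
```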
dippl-03-enumeration/atom-3
prompt
(system base instructions and WebPPL primer: identical to dippl-02-webppl/atom-1 above)
user message
In WebPPL, define a function binomial (no arguments) that samples three independent Bernoulli random variables with probabilities 0.1, 0.9, and 0.1 respectively (variables a, b, c), and returns a + b + c.
Then compute and return a record with three keys, each the marginal distribution of binomial under a different enumeration strategy (all with maxExecutions: 10):
- depthFirst: Infer({ model: binomial, method: "enumerate", maxExecutions: 10, strategy: "depthFirst" })
- breadthFirst: Infer({ model: binomial, method: "enumerate", maxExecutions: 10, strategy: "breadthFirst" })
- likelyFirst: Infer({ model: binomial, method: "enumerate", maxExecutions: 10, strategy: "likelyFirst" })
End with var ANSWER = { depthFirst: depthFirstDist, breadthFirst: breadthFirstDist, likelyFirst: likelyFirstDist };.
groundtruth code
var binomial = function(){
  var a = sample(Bernoulli({ p: 0.1 }))
  var b = sample(Bernoulli({ p: 0.9 }))
  var c = sample(Bernoulli({ p: 0.1 }))
  return a + b + c
}
var maxExec = 10
viz(Infer({
  model: binomial,
  method: 'enumerate',
  maxExecutions: maxExec,
  strategy: 'depthFirst'
}));
viz(Infer({
  model: binomial,
  method: 'enumerate',
  maxExecutions: maxExec,
  strategy: 'breadthFirst'
}));
viz(Infer({
  model: binomial,
  method: 'enumerate',
  maxExecutions: maxExec,
  strategy: 'likelyFirst',
}));
var ANSWER = ({depthFirst: Infer({model: binomial, method: 'enumerate', maxExecutions: 10, strategy: 'depthFirst'}), breadthFirst: Infer({model: binomial, method: 'enumerate', maxExecutions: 10, strategy: 'breadthFirst'}), likelyFirst: Infer({model: binomial, method: 'enumerate', maxExecutions: 10, strategy: 'likelyFirst'})});
no run
groundtruth output
{
"depthFirst": {
"__kind": "distribution",
"probs": [
0.08099999999999997,
0.747,
0.163,
0.009000000000000008
],
"support": [
0,
1,
2,
3
]
},
"breadthFirst": {
"__kind": "distribution",
"probs": [
0.08099999999999997,
0.747,
0.163,
0.009000000000000008
],
"support": [
0,
1,
2,
3
]
},
"likelyFirst": {
"__kind": "distribution",
"probs": [
0.08099999999999997,
0.747,
0.16300000000000003,
0.009000000000000008
],
"support": [
0,
1,
2,
3
]
}
}
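This model has only 2^3 = 8 executions, so maxExecutions: 10 lets every strategy exhaust the space and all three distributions in the groundtruth output agree up to floating-point noise. The exact marginal can be checked in plain JavaScript (not WebPPL):

```js
// Exact P(a + b + c) for independent Bernoulli draws with the given ps.
function sumOfBernoullis(ps) {
  var probs = [0, 0, 0, 0];
  for (var a = 0; a <= 1; a++) {
    for (var b = 0; b <= 1; b++) {
      for (var c = 0; c <= 1; c++) {
        var p = (a ? ps[0] : 1 - ps[0]) *
                (b ? ps[1] : 1 - ps[1]) *
                (c ? ps[2] : 1 - ps[2]);
        probs[a + b + c] += p;
      }
    }
  }
  return probs;
}

sumOfBernoullis([0.1, 0.9, 0.1]); // ~[0.081, 0.747, 0.163, 0.009]
```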
04-factorseq (5 atoms)
dippl-04-factorseq/atom-1
prompt
(system base instructions and WebPPL primer: identical to dippl-02-webppl/atom-1 above)
Tensors / utilities: `Vector([a, b, ...])`, `T.get(vec, i)`, `ones([rows, cols])`, `_.range`, `_.flatten`, `_.zipObject`, `_.fromPairs`, `_.includes`, `_.parseInt`, `_.uniq`, `_.merge`. Arrays use `Array.isArray`, `Object.keys` works on plain objects.
Display: there's no `viz`, `print`, `display`, etc. that affects the answer - those are browser-only. The answer is the value of your program's last expression.
user message
In WebPPL, implement a Hidden Markov Model with binary states and observations.
Define:
- transition(s): returns flip(0.7) if s is true, flip(0.3) otherwise
- observeState(s): returns flip(0.9) if s is true, flip(0.1) otherwise
- hmm(n): recursive function; base case (n==1) starts with state true and empty observations list; each step samples a new state via transition from the last state, samples a new observation via observeState, and returns {states: ..., observations: ...} with both arrays extended
Define var trueObs = [false, false, false] and a model function that runs hmm(3), applies factor(_.isEqual(r.observations, trueObs) ? 0 : -Infinity), and returns r.states.
Compute the posterior with Infer({ model }) and assign to ANSWER. End with var ANSWER = Infer({ model });.
groundtruth code
var transition = function(s) {
return s ? flip(0.7) : flip(0.3)
}
var observeState = function(s) {
return s ? flip(0.9) : flip(0.1)
}
observeState(transition(true))
///fold:
var transition = function(s) {
return s ? flip(0.7) : flip(0.3)
}
var observeState = function(s) {
return s ? flip(0.9) : flip(0.1)
}
///
var hmm = function(n) {
var prev = (n==1) ? {states: [true], observations:[]} : hmm(n-1)
var newState = transition(prev.states[prev.states.length-1])
var newObs = observeState(newState)
return {
states: prev.states.concat([newState]),
observations: prev.observations.concat([newObs])
}
}
hmm(4)
///fold:
var transition = function(s) {
return s ? flip(0.7) : flip(0.3)
}
var observeState = function(s) {
return s ? flip(0.9) : flip(0.1)
}
var hmm = function(n) {
var prev = (n==1) ? {states: [true], observations:[]} : hmm(n-1)
var newState = transition(prev.states[prev.states.length-1])
var newObs = observeState(newState)
return {
states: prev.states.concat([newState]),
observations: prev.observations.concat([newObs])
}
}
///
//some true observations (the data we observe):
var trueObs = [false, false, false]
var model = function(){
var r = hmm(3)
factor(_.isEqual(r.observations, trueObs) ? 0 : -Infinity)
return r.states
};
viz.table(Infer({ model }))
var ANSWER = (Infer({ model }));
no run
groundtruth output
[true,false,false,false]  0.8297
[true,true,false,false]   0.0922
[true,false,false,true]   0.0395
[true,false,true,false]   0.0169
[true,true,true,false]    0.0102
[true,false,true,true]    0.0044
[true,true,false,true]    0.0044
[true,true,true,true]     0.0027
raw JSON
{
"__kind": "distribution",
"probs": [
0.8296918550634864,
0.03950913595540412,
0.01693248683803033,
0.004389903995044901,
0.092187983895943,
0.004389903995044901,
0.010243109321771443,
0.0026556209352740735
],
"support": [
[
true,
false,
false,
false
],
[
true,
false,
false,
true
],
[
true,
false,
true,
false
],
[
true,
false,
true,
true
],
[
true,
true,
false,
false
],
[
true,
true,
false,
true
],
[
true,
true,
true,
false
],
[
true,
true,
true,
true
]
]
}
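Editorial sanity check: with the initial state fixed to true and three binary steps, the posterior is small enough to enumerate exactly. A plain-JavaScript computation (not WebPPL) of the distribution over state sequences given observations [false, false, false]:

```javascript
// Exact posterior over state sequences [true, s1, s2, s3] given that
// all three observations came out false; 8 sequences total.
const trans = (prev, s) => (prev ? (s ? 0.7 : 0.3) : (s ? 0.3 : 0.7));
const obsLik = (s) => (s ? 0.1 : 0.9); // P(observation = false | state s)
const weights = {};
let Z = 0;
for (let bits = 0; bits < 8; bits++) {
  const states = [true];
  let w = 1;
  for (let i = 0; i < 3; i++) {
    const s = ((bits >> i) & 1) === 1;
    w *= trans(states[states.length - 1], s) * obsLik(s);
    states.push(s);
  }
  weights[JSON.stringify(states)] = w;
  Z += w;
}
const posterior = {};
for (const k in weights) posterior[k] = weights[k] / Z;
console.log(posterior["[true,false,false,false]"]); // ~0.8297
```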
dippl-04-factorseq/atom-2
prompt
system base instructions
You are a WebPPL code generator. Given an exercise, produce a single WebPPL program that binds the answer to a top-level variable named `ANSWER`.
Answer format (strict): emit exactly one fenced code block.
```js
<your WebPPL program ending with: var ANSWER = <expression>;>
```
The last statement of your program MUST be `var ANSWER = <expression>;` where `<expression>` is the answer the prompt asks for - typically an `Infer({...}, model)` for a distribution, a numeric/array value, or an object literal `{key: value, ...}` for a record of multiple sub-answers.
Do not write prose, explanations, or multiple code blocks. Do not use `return` at the top level - WebPPL doesn't allow it (return is only for function bodies).
system WebPPL primer
WebPPL is a probabilistic programming language with JavaScript-like syntax. A few quirks to remember:
Control flow: WebPPL is a functional subset of JavaScript. There are no `for` or `while` loops. Iterate with `map(fn, list)`, `mapData({data: list}, fn)`, `repeat(n, fn)`, or recursion. `_.range(start, end)` produces integer ranges.
Functions: define with `var f = function(args) { ...; return value; }`. The last expression of a function is auto-returned only if it is a bare expression statement; otherwise use explicit `return`. Memoize with `mem(fn)` so repeated calls with the same arguments return the same value within an inference run.
Random primitives - lowercase samples directly, uppercase constructs a Distribution object. They are NOT interchangeable:
flip(p) -> boolean (sample, no `sample()` needed)
uniform(a, b) -> number (sample)
gaussian(mu, sg) -> number (sample)
beta(a, b) -> number (sample)
dirichlet(alpha) -> tensor (sample; alpha must be a Vector)
randomInteger(n) -> int 0..n-1 (sample)
uniformDrift({a, b, width}) -> sample (drift kernel; do NOT wrap in sample())
dirichletDrift({alpha, conc.}) -> sample (drift kernel; do NOT wrap in sample())
Distribution *constructors* (used with `sample(D)`, `observe(D, val)`, or as return value of inference):
Bernoulli({p}) Beta({a, b}) Gaussian({mu, sigma})
Uniform({a, b}) Categorical({vs, ps}) Binomial({p, n})
Dirichlet({alpha}) Multinomial({ps, n}) Poisson({mu})
Common gotchas:
- `Dirichlet({alpha: ...})` requires a Vector, not a JS array. Use `ones([n, 1])` or `Vector([1, 1, ...])`.
- WebPPL only supports `var`, not `let` / `const`.
- WebPPL is *single-assignment*: `var X = ...;` only. You can't declare `var X;` and assign later, and you can't reassign `X = ...` after the declaration. Use ternaries or recursion to express conditional bindings.
- Array methods like `.fill`, `.indexOf`, `.map`, `.forEach`, `.concat` may fail. Prefer `repeat(n, fn)`, `_.indexOf(arr, x)`, `map(fn, arr)`, `mapData({data: arr}, fn)`, and `arr1.concat(arr2)` only at the top of a returned expression.
- Always end top-level statements with `;`. WebPPL inherits JS ASI rules, so `var x = f()
[a, b]` parses as `var x = f()[a, b]` (subscript), not two statements.
Inference: `Infer({method: ..., samples: N, ...}, modelFn)` runs `modelFn` under the chosen method and returns a Distribution over its return value. Methods: `'enumerate'`, `'rejection'`, `'forward'`, `'MCMC'`, `'SMC'`. For MCMC, optional kernels: `kernel: {HMC: {steps, stepSize}}`; the drift kernels above can replace `uniform`/`dirichlet` calls inside the model.
Conditioning: `condition(bool)` zeros out worlds where bool is false. `observe(dist, value)` factors in `dist.score(value)`. `factor(score)` adds `score` to the log-probability directly.
Tensors / utilities: `Vector([a, b, ...])`, `T.get(vec, i)`, `ones([rows, cols])`, `_.range`, `_.flatten`, `_.zipObject`, `_.fromPairs`, `_.includes`, `_.parseInt`, `_.uniq`, `_.merge`. Arrays use `Array.isArray`, `Object.keys` works on plain objects.
Display: there's no `viz`, `print`, `display`, etc. that affects the answer - those are browser-only. The answer is the value of your program's last expression.
user message
In WebPPL, implement a probabilistic context-free grammar (PCFG) and compute the distribution over the next word after "tall John".
Define pcfgTransition(symbol) using rule tables:
- "start" expands to ["NP","V","NP"] (prob 0.4) or ["NP","V"] (prob 0.6)
- "NP" expands to ["A","NP"] (prob 0.4) or ["N"] (prob 0.6)
Use discrete(rules[symbol].probs) to sample the rule index.
Define preTerminal(symbol) returning true for "N", "V", or "A".
Define terminal(symbol) using word tables:
- "N": "John" (0.6), "soup" (0.4)
- "V": "loves" (0.3), "hates" (0.3), "runs" (0.4)
- "A": "tall" (0.6), "salty" (0.4)
Use discrete(rules[symbol].probs) to sample.
Define mutually recursive pcfg(symbol) and expand(symbols) to generate a terminal yield as an array.
Define a model function that samples y = pcfg("start"), applies factor(_.isEqual(y.slice(0,2), ["tall","John"]) ? 0 : -Infinity), and returns y[2] ? y[2] : "".
End with var ANSWER = Infer({ model, method: "enumerate", maxExecutions: 20 });.
groundtruth code
var pcfgTransition = function(symbol) {
var rules = {'start': {rhs: [['NP', 'V', 'NP'], ['NP', 'V']], probs: [0.4, 0.6]},
'NP': {rhs: [['A', 'NP'], ['N']], probs: [0.4, 0.6]} }
return rules[symbol].rhs[ discrete(rules[symbol].probs) ]
}
var preTerminal = function(symbol) {
return symbol=='N' | symbol=='V' | symbol=='A'
}
var terminal = function(symbol) {
var rules = {'N': {words: ['John', 'soup'], probs: [0.6, 0.4]},
'V': {words: ['loves', 'hates', 'runs'], probs: [0.3, 0.3, 0.4]},
'A': {words: ['tall', 'salty'], probs: [0.6, 0.4]} }
return rules[symbol].words[ discrete(rules[symbol].probs) ]
}
var pcfg = function(symbol) {
preTerminal(symbol) ? [terminal(symbol)] : expand(pcfgTransition(symbol))
}
var expand = function(symbols) {
if(symbols.length==0) {
return []
} else {
var f = pcfg(symbols[0])
return f.concat(expand(symbols.slice(1)))
}
}
var model = function(){
var y = pcfg("start")
factor(_.isEqual(y.slice(0,2), ["tall", "John"]) ? 0 : -Infinity) // yield starts with "tall John"
return y[2] ? y[2] : "" // distribution on next word?
}
viz.table(Infer({ model, method: 'enumerate', maxExecutions: 20}))
var ANSWER = (Infer({ model, method: 'enumerate', maxExecutions: 20 }));
no run
groundtruth output
runs   0.4000
hates  0.3000
loves  0.3000
raw JSON
{
"__kind": "distribution",
"probs": [
0.39999999999999997,
0.30000000000000004,
0.30000000000000004
],
"support": [
"runs",
"hates",
"loves"
]
}
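Editorial sanity check: the groundtruth distribution can also be derived by hand. A yield beginning "tall John" forces NP → A NP with A → tall and the inner NP → N → John, so the third word is always the V word and the posterior is just the V word table. A depth-pruned plain-JavaScript enumerator (not WebPPL) confirms this:

```javascript
// Enumerate PCFG yields, pruning any partial yield that contradicts the
// required prefix ["tall", "John"]; accumulate mass on the third word.
// Once a yield reaches length 3 the remaining symbols expand with total
// probability 1, so the full path probability can be credited there.
const rules = {
  start: [{ rhs: ["NP", "V", "NP"], p: 0.4 }, { rhs: ["NP", "V"], p: 0.6 }],
  NP: [{ rhs: ["A", "NP"], p: 0.4 }, { rhs: ["N"], p: 0.6 }],
};
const words = {
  N: [["John", 0.6], ["soup", 0.4]],
  V: [["loves", 0.3], ["hates", 0.3], ["runs", 0.4]],
  A: [["tall", 0.6], ["salty", 0.4]],
};
const prefix = ["tall", "John"];
const acc = {};
function expand(syms, yld, p) {
  for (let i = 0; i < Math.min(yld.length, prefix.length); i++)
    if (yld[i] !== prefix[i]) return; // contradicts the evidence
  if (yld.length >= 3) { acc[yld[2]] = (acc[yld[2]] || 0) + p; return; }
  if (syms.length === 0) { acc[""] = (acc[""] || 0) + p; return; }
  const rest = syms.slice(1);
  if (words[syms[0]]) {
    for (const [w, wp] of words[syms[0]]) expand(rest, yld.concat([w]), p * wp);
  } else {
    for (const { rhs, p: rp } of rules[syms[0]]) expand(rhs.concat(rest), yld, p * rp);
  }
}
expand(["start"], [], 1);
const Z = Object.values(acc).reduce((a, b) => a + b, 0);
const next = {};
for (const k in acc) next[k] = acc[k] / Z;
console.log(next); // ~{ loves: 0.3, hates: 0.3, runs: 0.4 }
```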
dippl-04-factorseq/atom-3
prompt
system base instructions
You are a WebPPL code generator. Given an exercise, produce a single WebPPL program that binds the answer to a top-level variable named `ANSWER`.
Answer format (strict): emit exactly one fenced code block.
```js
<your WebPPL program ending with: var ANSWER = <expression>;>
```
The last statement of your program MUST be `var ANSWER = <expression>;` where `<expression>` is the answer the prompt asks for - typically an `Infer({...}, model)` for a distribution, a numeric/array value, or an object literal `{key: value, ...}` for a record of multiple sub-answers.
Do not write prose, explanations, or multiple code blocks. Do not use `return` at the top level - WebPPL doesn't allow it (return is only for function bodies).
system WebPPL primer
WebPPL is a probabilistic programming language with JavaScript-like syntax. A few quirks to remember:
Control flow: WebPPL is a functional subset of JavaScript. There are no `for` or `while` loops. Iterate with `map(fn, list)`, `mapData({data: list}, fn)`, `repeat(n, fn)`, or recursion. `_.range(start, end)` produces integer ranges.
Functions: define with `var f = function(args) { ...; return value; }`. The last expression of a function is auto-returned only if it is a bare expression statement; otherwise use explicit `return`. Memoize with `mem(fn)` so repeated calls with the same arguments return the same value within an inference run.
Random primitives - lowercase samples directly, uppercase constructs a Distribution object. They are NOT interchangeable:
flip(p) -> boolean (sample, no `sample()` needed)
uniform(a, b) -> number (sample)
gaussian(mu, sg) -> number (sample)
beta(a, b) -> number (sample)
dirichlet(alpha) -> tensor (sample; alpha must be a Vector)
randomInteger(n) -> int 0..n-1 (sample)
uniformDrift({a, b, width}) -> sample (drift kernel; do NOT wrap in sample())
dirichletDrift({alpha, conc.}) -> sample (drift kernel; do NOT wrap in sample())
Distribution *constructors* (used with `sample(D)`, `observe(D, val)`, or as return value of inference):
Bernoulli({p}) Beta({a, b}) Gaussian({mu, sigma})
Uniform({a, b}) Categorical({vs, ps}) Binomial({p, n})
Dirichlet({alpha}) Multinomial({ps, n}) Poisson({mu})
Common gotchas:
- `Dirichlet({alpha: ...})` requires a Vector, not a JS array. Use `ones([n, 1])` or `Vector([1, 1, ...])`.
- WebPPL only supports `var`, not `let` / `const`.
- WebPPL is *single-assignment*: `var X = ...;` only. You can't declare `var X;` and assign later, and you can't reassign `X = ...` after the declaration. Use ternaries or recursion to express conditional bindings.
- Array methods like `.fill`, `.indexOf`, `.map`, `.forEach`, `.concat` may fail. Prefer `repeat(n, fn)`, `_.indexOf(arr, x)`, `map(fn, arr)`, `mapData({data: arr}, fn)`, and `arr1.concat(arr2)` only at the top of a returned expression.
- Always end top-level statements with `;`. WebPPL inherits JS ASI rules, so `var x = f()
[a, b]` parses as `var x = f()[a, b]` (subscript), not two statements.
Inference: `Infer({method: ..., samples: N, ...}, modelFn)` runs `modelFn` under the chosen method and returns a Distribution over its return value. Methods: `'enumerate'`, `'rejection'`, `'forward'`, `'MCMC'`, `'SMC'`. For MCMC, optional kernels: `kernel: {HMC: {steps, stepSize}}`; the drift kernels above can replace `uniform`/`dirichlet` calls inside the model.
Conditioning: `condition(bool)` zeros out worlds where bool is false. `observe(dist, value)` factors in `dist.score(value)`. `factor(score)` adds `score` to the log-probability directly.
Tensors / utilities: `Vector([a, b, ...])`, `T.get(vec, i)`, `ones([rows, cols])`, `_.range`, `_.flatten`, `_.zipObject`, `_.fromPairs`, `_.includes`, `_.parseInt`, `_.uniq`, `_.merge`. Arrays use `Array.isArray`, `Object.keys` works on plain objects.
Display: there's no `viz`, `print`, `display`, etc. that affects the answer - those are browser-only. The answer is the value of your program's last expression.
user message
In WebPPL, implement an incrementalized HMM that incorporates observations stepwise during recursion rather than at the end.
Define:
- transition(s): returns flip(0.7) if s, else flip(0.3)
- observeState(s): returns flip(0.9) if s, else flip(0.1)
- trueObs = [false, false, false]
- hmmRecur(n, states, observations): samples newState via transition from states[states.length-1], samples newObs via observeState, immediately applies factor(newObs == trueObs[observations.length] ? 0 : -Infinity), then recurses (n > 1) or returns {states: newStates, observations: newObservations}
- hmm(n): calls hmmRecur(n, [true], [])
Define a model function that runs var r = hmm(3) and returns r.states. Compute Infer({ model }) and assign to ANSWER. End with var ANSWER = Infer({ model });.
groundtruth code
///fold:
var transition = function(s) {
return s ? flip(0.7) : flip(0.3)
}
var observeState = function(s) {
return s ? flip(0.9) : flip(0.1)
}
var trueObs = [false, false, false]
///
var hmmRecur = function(n, states, observations){
var newState = transition(states[states.length-1])
var newObs = observeState(newState)
factor(newObs==trueObs[observations.length] ? 0 : -Infinity)
var newStates = states.concat([newState])
var newObservations = observations.concat([newObs])
return (n==1) ? { states: newStates, observations: newObservations } :
hmmRecur(n-1, newStates, newObservations)
}
var hmm = function(n) {
return hmmRecur(n, [true], [])
}
var model = function(){
var r = hmm(3)
return r.states
}
viz.table(Infer({ model }))
var ANSWER = (Infer({ model }));
no run
groundtruth output
[true,false,false,false]  0.8297
[true,true,false,false]   0.0922
[true,false,false,true]   0.0395
[true,false,true,false]   0.0169
[true,true,true,false]    0.0102
[true,false,true,true]    0.0044
[true,true,false,true]    0.0044
[true,true,true,true]     0.0027
raw JSON
{
"__kind": "distribution",
"probs": [
0.8296918550634864,
0.03950913595540412,
0.01693248683803033,
0.004389903995044901,
0.092187983895943,
0.004389903995044901,
0.010243109321771443,
0.0026556209352740735
],
"support": [
[
true,
false,
false,
false
],
[
true,
false,
false,
true
],
[
true,
false,
true,
false
],
[
true,
false,
true,
true
],
[
true,
true,
false,
false
],
[
true,
true,
false,
true
],
[
true,
true,
true,
false
],
[
true,
true,
true,
true
]
]
}
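Editorial sanity check: the incremental factors leave the posterior unchanged because a product of per-step indicator weights equals the single end-of-run indicator; only the point at which weight is applied differs, which is what lets enumeration (or SMC) prune early. A plain-JavaScript comparison (not WebPPL) of batch vs stepwise weighting:

```javascript
// Compare (1) conditioning applied once at the end vs (2) per step.
// Hard 0 / -Infinity factors become multiplicative weights 1 / 0.
const trueObs = [false, false, false];
const trans = (prev, s) => (prev ? (s ? 0.7 : 0.3) : (s ? 0.3 : 0.7));
const obsP = (s, o) => (s ? (o ? 0.9 : 0.1) : (o ? 0.1 : 0.9));
function posterior(stepwise) {
  const w = {};
  for (let bits = 0; bits < 64; bits++) { // 3 state bits x 3 observation bits
    const states = [true];
    let p = 1, ok = true;
    for (let i = 0; i < 3; i++) {
      const s = ((bits >> i) & 1) === 1;
      const o = ((bits >> (i + 3)) & 1) === 1;
      p *= trans(states[states.length - 1], s) * obsP(s, o);
      states.push(s);
      if (stepwise && o !== trueObs[i]) { ok = false; break; } // prune now
      if (!stepwise && o !== trueObs[i]) ok = false;           // prune at end
    }
    if (!ok) continue;
    const k = JSON.stringify(states);
    w[k] = (w[k] || 0) + p;
  }
  const Z = Object.values(w).reduce((a, b) => a + b, 0);
  for (const k in w) w[k] /= Z;
  return w;
}
const batch = posterior(false), incr = posterior(true);
console.log(incr["[true,false,false,false]"]); // ~0.8297, same as batch
```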
dippl-04-factorseq/atom-4
prompt
system base instructions
You are a WebPPL code generator. Given an exercise, produce a single WebPPL program that binds the answer to a top-level variable named `ANSWER`.
Answer format (strict): emit exactly one fenced code block.
```js
<your WebPPL program ending with: var ANSWER = <expression>;>
```
The last statement of your program MUST be `var ANSWER = <expression>;` where `<expression>` is the answer the prompt asks for - typically an `Infer({...}, model)` for a distribution, a numeric/array value, or an object literal `{key: value, ...}` for a record of multiple sub-answers.
Do not write prose, explanations, or multiple code blocks. Do not use `return` at the top level - WebPPL doesn't allow it (return is only for function bodies).
system WebPPL primer
WebPPL is a probabilistic programming language with JavaScript-like syntax. A few quirks to remember:
Control flow: WebPPL is a functional subset of JavaScript. There are no `for` or `while` loops. Iterate with `map(fn, list)`, `mapData({data: list}, fn)`, `repeat(n, fn)`, or recursion. `_.range(start, end)` produces integer ranges.
Functions: define with `var f = function(args) { ...; return value; }`. The last expression of a function is auto-returned only if it is a bare expression statement; otherwise use explicit `return`. Memoize with `mem(fn)` so repeated calls with the same arguments return the same value within an inference run.
Random primitives - lowercase samples directly, uppercase constructs a Distribution object. They are NOT interchangeable:
flip(p) -> boolean (sample, no `sample()` needed)
uniform(a, b) -> number (sample)
gaussian(mu, sg) -> number (sample)
beta(a, b) -> number (sample)
dirichlet(alpha) -> tensor (sample; alpha must be a Vector)
randomInteger(n) -> int 0..n-1 (sample)
uniformDrift({a, b, width}) -> sample (drift kernel; do NOT wrap in sample())
dirichletDrift({alpha, conc.}) -> sample (drift kernel; do NOT wrap in sample())
Distribution *constructors* (used with `sample(D)`, `observe(D, val)`, or as return value of inference):
Bernoulli({p}) Beta({a, b}) Gaussian({mu, sigma})
Uniform({a, b}) Categorical({vs, ps}) Binomial({p, n})
Dirichlet({alpha}) Multinomial({ps, n}) Poisson({mu})
Common gotchas:
- `Dirichlet({alpha: ...})` requires a Vector, not a JS array. Use `ones([n, 1])` or `Vector([1, 1, ...])`.
- WebPPL only supports `var`, not `let` / `const`.
- WebPPL is *single-assignment*: `var X = ...;` only. You can't declare `var X;` and assign later, and you can't reassign `X = ...` after the declaration. Use ternaries or recursion to express conditional bindings.
- Array methods like `.fill`, `.indexOf`, `.map`, `.forEach`, `.concat` may fail. Prefer `repeat(n, fn)`, `_.indexOf(arr, x)`, `map(fn, arr)`, `mapData({data: arr}, fn)`, and `arr1.concat(arr2)` only at the top of a returned expression.
- Always end top-level statements with `;`. WebPPL inherits JS ASI rules, so `var x = f()
[a, b]` parses as `var x = f()[a, b]` (subscript), not two statements.
Inference: `Infer({method: ..., samples: N, ...}, modelFn)` runs `modelFn` under the chosen method and returns a Distribution over its return value. Methods: `'enumerate'`, `'rejection'`, `'forward'`, `'MCMC'`, `'SMC'`. For MCMC, optional kernels: `kernel: {HMC: {steps, stepSize}}`; the drift kernels above can replace `uniform`/`dirichlet` calls inside the model.
Conditioning: `condition(bool)` zeros out worlds where bool is false. `observe(dist, value)` factors in `dist.score(value)`. `factor(score)` adds `score` to the log-probability directly.
Tensors / utilities: `Vector([a, b, ...])`, `T.get(vec, i)`, `ones([rows, cols])`, `_.range`, `_.flatten`, `_.zipObject`, `_.fromPairs`, `_.includes`, `_.parseInt`, `_.uniq`, `_.merge`. Arrays use `Array.isArray`, `Object.keys` works on plain objects.
Display: there's no `viz`, `print`, `display`, etc. that affects the answer - those are browser-only. The answer is the value of your program's last expression.
user message
In WebPPL, implement an HMM using sampleWithFactor to simultaneously sample observations and incorporate evidence at each step.
Define:
- transition(s): returns flip(0.7) if s, else flip(0.3)
- observeState: a cached function — cache(function(s) { return Bernoulli({p: s ? .9 : .1}) }) — that returns a distribution object
- trueObs = [false, false, false]
- hmmRecur(n, states, observations): samples newState = transition(last(states)), then samples newObs using sampleWithFactor(observeState(newState), function(v){ return v == trueObs[observations.length] ? 0 : -Infinity }), extends both arrays, recurses or returns {states: ..., observations: ...}
- hmm(n): calls hmmRecur(n, [true], [])
Define a model function returning hmm(3).states. End with var ANSWER = Infer({ model, method: "enumerate", maxExecutions: 500 });.
groundtruth code
///fold:
var transition = function(s) {
return s ? flip(0.7) : flip(0.3)
}
var observeState = cache(function(s) {
return Bernoulli({p: s ? .9 : .1})
})
var trueObs = [false, false, false]
///
var hmmRecur = function(n, states, observations){
var newState = transition(states[states.length-1])
var newObs = sampleWithFactor(
observeState(newState),
function(v){return v==trueObs[observations.length] ? 0 : -Infinity})
var newStates = states.concat([newState])
var newObservations = observations.concat([newObs])
return ((n==1) ?
{ states: newStates, observations: newObservations } :
hmmRecur(n-1, newStates, newObservations));
}
var hmm = function(n) {
return hmmRecur(n,[true],[])
}
var model = function(){
var r = hmm(3)
return r.states
}
viz.table(Infer({ model, method: 'enumerate', maxExecutions: 500 }))
var ANSWER = (Infer({ model, method: 'enumerate', maxExecutions: 500 }));
no run
groundtruth output
[true,false,false,false]  0.8297
[true,true,false,false]   0.0922
[true,false,false,true]   0.0395
[true,false,true,false]   0.0169
[true,true,true,false]    0.0102
[true,true,false,true]    0.0044
[true,false,true,true]    0.0044
[true,true,true,true]     0.0027
raw JSON
{
"__kind": "distribution",
"probs": [
0.8296918550634872,
0.09218798389594308,
0.03950913595540415,
0.01693248683803036,
0.010243109321771443,
0.004389903995044901,
0.004389903995044901,
0.002655620935274078
],
"support": [
[
true,
false,
false,
false
],
[
true,
true,
false,
false
],
[
true,
false,
false,
true
],
[
true,
false,
true,
false
],
[
true,
true,
true,
false
],
[
true,
true,
false,
true
],
[
true,
false,
true,
true
],
[
true,
true,
true,
true
]
]
}
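Editorial note: sampleWithFactor(dist, scoreFn) is semantically `var v = sample(dist); factor(scoreFn(v))`, but the factor is applied at the moment of sampling, so enumeration can reweight (here: prune) each observation immediately. The per-value weight of one such step can be sketched in plain JavaScript (not WebPPL) as exp(dist.score(v) + scoreFn(v)):

```javascript
// One observation step: the distribution is Bernoulli(p) with p = 0.9
// or 0.1 depending on the state; the score function adds -Infinity
// unless v equals the true observation, so that mass drops to zero.
function sampleWithFactorWeights(p, scoreFn) {
  const support = [true, false];
  return support.map((v) => {
    const logw = Math.log(v ? p : 1 - p) + scoreFn(v); // score(v) + factor
    return [v, Math.exp(logw)]; // exp(-Infinity) === 0
  });
}
const trueOb = false;
const scoreFn = (v) => (v === trueOb ? 0 : -Infinity);
// state = true => Bernoulli(0.9): surviving mass sits on v = false
console.log(sampleWithFactorWeights(0.9, scoreFn)); // [[true, 0], [false, ~0.1]]
```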
dippl-04-factorseq/atom-5
prompt
system base instructions
You are a WebPPL code generator. Given an exercise, produce a single WebPPL program that binds the answer to a top-level variable named `ANSWER`.
Answer format (strict): emit exactly one fenced code block.
```js
<your WebPPL program ending with: var ANSWER = <expression>;>
```
The last statement of your program MUST be `var ANSWER = <expression>;` where `<expression>` is the answer the prompt asks for - typically an `Infer({...}, model)` for a distribution, a numeric/array value, or an object literal `{key: value, ...}` for a record of multiple sub-answers.
Do not write prose, explanations, or multiple code blocks. Do not use `return` at the top level - WebPPL doesn't allow it (return is only for function bodies).
system WebPPL primer
WebPPL is a probabilistic programming language with JavaScript-like syntax. A few quirks to remember:
Control flow: WebPPL is a functional subset of JavaScript. There are no `for` or `while` loops. Iterate with `map(fn, list)`, `mapData({data: list}, fn)`, `repeat(n, fn)`, or recursion. `_.range(start, end)` produces integer ranges.
Functions: define with `var f = function(args) { ...; return value; }`. The last expression of a function is auto-returned only if it is a bare expression statement; otherwise use explicit `return`. Memoize with `mem(fn)` so repeated calls with the same arguments return the same value within an inference run.
Random primitives - lowercase samples directly, uppercase constructs a Distribution object. They are NOT interchangeable:
flip(p) -> boolean (sample, no `sample()` needed)
uniform(a, b) -> number (sample)
gaussian(mu, sg) -> number (sample)
beta(a, b) -> number (sample)
dirichlet(alpha) -> tensor (sample; alpha must be a Vector)
randomInteger(n) -> int 0..n-1 (sample)
uniformDrift({a, b, width}) -> sample (drift kernel; do NOT wrap in sample())
dirichletDrift({alpha, conc.}) -> sample (drift kernel; do NOT wrap in sample())
Distribution *constructors* (used with `sample(D)`, `observe(D, val)`, or as return value of inference):
Bernoulli({p}) Beta({a, b}) Gaussian({mu, sigma})
Uniform({a, b}) Categorical({vs, ps}) Binomial({p, n})
Dirichlet({alpha}) Multinomial({ps, n}) Poisson({mu})
Common gotchas:
- `Dirichlet({alpha: ...})` requires a Vector, not a JS array. Use `ones([n, 1])` or `Vector([1, 1, ...])`.
- WebPPL only supports `var`, not `let` / `const`.
- WebPPL is *single-assignment*: `var X = ...;` only. You can't declare `var X;` and assign later, and you can't reassign `X = ...` after the declaration. Use ternaries or recursion to express conditional bindings.
- Array methods like `.fill`, `.indexOf`, `.map`, `.forEach`, `.concat` may fail. Prefer `repeat(n, fn)`, `_.indexOf(arr, x)`, `map(fn, arr)`, `mapData({data: arr}, fn)`, and `arr1.concat(arr2)` only at the top of a returned expression.
- Always end top-level statements with `;`. WebPPL inherits JS ASI rules, so `var x = f()
[a, b]` parses as `var x = f()[a, b]` (subscript), not two statements.
Inference: `Infer({method: ..., samples: N, ...}, modelFn)` runs `modelFn` under the chosen method and returns a Distribution over its return value. Methods: `'enumerate'`, `'rejection'`, `'forward'`, `'MCMC'`, `'SMC'`. For MCMC, optional kernels: `kernel: {HMC: {steps, stepSize}}`; the drift kernels above can replace `uniform`/`dirichlet` calls inside the model.
Conditioning: `condition(bool)` zeros out worlds where bool is false. `observe(dist, value)` factors in `dist.score(value)`. `factor(score)` adds `score` to the log-probability directly.
Tensors / utilities: `Vector([a, b, ...])`, `T.get(vec, i)`, `ones([rows, cols])`, `_.range`, `_.flatten`, `_.zipObject`, `_.fromPairs`, `_.includes`, `_.parseInt`, `_.uniq`, `_.merge`. Arrays use `Array.isArray`, `Object.keys` works on plain objects.
Display: there's no `viz`, `print`, `display`, etc. that affects the answer - those are browser-only. The answer is the value of your program's last expression.
user message
In WebPPL, demonstrate the technique of inserting canceling heuristic factors to guide enumeration. Here is the original model:
var binomial = function(){
var a = sample(Bernoulli({ p: 0.1 }))
var b = sample(Bernoulli({ p: 0.9 }))
var c = sample(Bernoulli({ p: 0.1 }))
factor((a||b||c) ? 0 : -10)
return a + b + c
}
Rewrite this as binomialHeuristic by splitting the single end-of-model factor into three interleaved factors that cancel correctly:
- After sampling a: factor(a ? 0 : -1)
- After sampling b: factor(((a||b) ? 0 : -1) - (a ? 0 : -1))
- After sampling c: factor(((a||b||c) ? 0 : -10) - ((a||b) ? 0 : -1))
Run both models with Infer({ model: ..., method: "enumerate", maxExecutions: 2 }). End with var ANSWER = { original: Infer({model: binomial, method: "enumerate", maxExecutions: 2}), heuristic: Infer({model: binomialHeuristic, method: "enumerate", maxExecutions: 2}) };.
groundtruth code
var binomial = function(){
var a = sample(Bernoulli({ p: 0.1 }))
factor(a ? 0 : -1)
var b = sample(Bernoulli({ p: 0.9 }))
factor(((a||b)?0:-1) - (a?0:-1))
var c = sample(Bernoulli({ p: 0.1 }))
factor(((a||b||c) ? 0:-10) - ((a||b)?0:-1))
return a + b + c
}
viz(Infer({ model: binomial, method: 'enumerate', maxExecutions: 2 }))
var ANSWER = ({ original: Infer({model: function(){ var a = sample(Bernoulli({ p: 0.1 })); var b = sample(Bernoulli({ p: 0.9 })); var c = sample(Bernoulli({ p: 0.1 })); factor((a||b||c) ? 0 : -10); return a + b + c }, method: 'enumerate', maxExecutions: 2}), heuristic: Infer({model: binomial, method: 'enumerate', maxExecutions: 2}) });
no run
groundtruth output
{
"original": {
"__kind": "distribution",
"probs": [
0.8999999999999999,
0.10000000000000002
],
"support": [
1,
2
]
},
"heuristic": {
"__kind": "distribution",
"probs": [
0.8999999999999999,
0.10000000000000002
],
"support": [
1,
2
]
}
}
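Editorial sanity check: the heuristic factors are sound because they telescope — summed over the three steps, the intermediate (a ? 0 : -1) and ((a||b) ? 0 : -1) terms cancel, leaving exactly the original ((a||b||c) ? 0 : -10). A plain-JavaScript check (not WebPPL) over all eight executions:

```javascript
// Verify that the three interleaved factor increments always sum to
// the single end-of-model factor, for every (a, b, c) assignment.
let allEqual = true;
for (const a of [false, true]) {
  for (const b of [false, true]) {
    for (const c of [false, true]) {
      const f1 = a ? 0 : -1;
      const f2 = ((a || b) ? 0 : -1) - (a ? 0 : -1);
      const f3 = ((a || b || c) ? 0 : -10) - ((a || b) ? 0 : -1);
      const original = (a || b || c) ? 0 : -10;
      allEqual = allEqual && (f1 + f2 + f3 === original);
    }
  }
}
console.log(allEqual); // true
```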
05-particlefilter 4 atoms 4
dippl-05-particlefilter/atom-1
prompt
system base instructions
You are a WebPPL code generator. Given an exercise, produce a single WebPPL program that binds the answer to a top-level variable named `ANSWER`.
Answer format (strict): emit exactly one fenced code block.
```js
<your WebPPL program ending with: var ANSWER = <expression>;>
```
The last statement of your program MUST be `var ANSWER = <expression>;` where `<expression>` is the answer the prompt asks for - typically an `Infer({...}, model)` for a distribution, a numeric/array value, or an object literal `{key: value, ...}` for a record of multiple sub-answers.
Do not write prose, explanations, or multiple code blocks. Do not use `return` at the top level - WebPPL doesn't allow it (return is only for function bodies).
system WebPPL primer
WebPPL is a probabilistic programming language with JavaScript-like syntax. A few quirks to remember:
Control flow: WebPPL is a functional subset of JavaScript. There are no `for` or `while` loops. Iterate with `map(fn, list)`, `mapData({data: list}, fn)`, `repeat(n, fn)`, or recursion. `_.range(start, end)` produces integer ranges.
Functions: define with `var f = function(args) { ...; return value; }`. The last expression of a function is auto-returned only if it is a bare expression statement; otherwise use explicit `return`. Memoize with `mem(fn)` so repeated calls with the same arguments return the same value within an inference run.
Random primitives - lowercase samples directly, uppercase constructs a Distribution object. They are NOT interchangeable:
flip(p) -> boolean (sample, no `sample()` needed)
uniform(a, b) -> number (sample)
gaussian(mu, sg) -> number (sample)
beta(a, b) -> number (sample)
dirichlet(alpha) -> tensor (sample; alpha must be a Vector)
randomInteger(n) -> int 0..n-1 (sample)
uniformDrift({a, b, width}) -> sample (drift kernel; do NOT wrap in sample())
dirichletDrift({alpha, conc.}) -> sample (drift kernel; do NOT wrap in sample())
Distribution *constructors* (used with `sample(D)`, `observe(D, val)`, or as return value of inference):
Bernoulli({p}) Beta({a, b}) Gaussian({mu, sigma})
Uniform({a, b}) Categorical({vs, ps}) Binomial({p, n})
Dirichlet({alpha}) Multinomial({ps, n}) Poisson({mu})
Common gotchas:
- `Dirichlet({alpha: ...})` requires a Vector, not a JS array. Use `ones([n, 1])` or `Vector([1, 1, ...])`.
- WebPPL only supports `var`, not `let` / `const`.
- WebPPL is *single-assignment*: `var X = ...;` only. You can't declare `var X;` and assign later, and you can't reassign `X = ...` after the declaration. Use ternaries or recursion to express conditional bindings.
- Array methods like `.fill`, `.indexOf`, `.map`, `.forEach`, `.concat` may fail. Prefer `repeat(n, fn)`, `_.indexOf(arr, x)`, `map(fn, arr)`, `mapData({data: arr}, fn)`, and `arr1.concat(arr2)` only at the top of a returned expression.
- Always end top-level statements with `;`. WebPPL inherits JS ASI rules, so `var x = f()
[a, b]` parses as `var x = f()[a, b]` (subscript), not two statements.
Inference: `Infer({method: ..., samples: N, ...}, modelFn)` runs `modelFn` under the chosen method and returns a Distribution over its return value. Methods: `'enumerate'`, `'rejection'`, `'forward'`, `'MCMC'`, `'SMC'`. For MCMC, optional kernels: `kernel: {HMC: {steps, stepSize}}`; the drift kernels above can replace `uniform`/`dirichlet` calls inside the model.
Conditioning: `condition(bool)` zeros out worlds where bool is false. `observe(dist, value)` factors in `dist.score(value)`. `factor(score)` adds `score` to the log-probability directly.
Tensors / utilities: `Vector([a, b, ...])`, `T.get(vec, i)`, `ones([rows, cols])`, `_.range`, `_.flatten`, `_.zipObject`, `_.fromPairs`, `_.includes`, `_.parseInt`, `_.uniq`, `_.merge`. Arrays use `Array.isArray`, `Object.keys` works on plain objects.
Display: there's no `viz`, `print`, `display`, etc. that affects the answer - those are browser-only. The answer is the value of your program's last expression. user message
In WebPPL, implement a simple HMM with binary states and soft observation factors.
Define hmm(states, observations) recursively:
- prevState is the last element of states
- Sample state from Bernoulli({p: prevState ? .9 : .1})
- Apply factor((state == observations[0]) ? 0 : -2) to soft-match the state to the first observation
- If observations is empty, return states; otherwise recurse with states.concat([state]) and observations.slice(1)
Set var observations = [true, true, true, true] and var startState = false. Compute the posterior and assign to ANSWER. End with var ANSWER = Infer({ model() { return hmm([startState], observations) } });.groundtruth code
var hmm = function(states, observations){
  var prevState = states[states.length - 1];
  var state = sample(Bernoulli({p: prevState ? .9 : .1}));
  factor((state == observations[0]) ? 0 : -2);
  if (observations.length == 0) {
    return states;
  } else {
    return hmm(states.concat([state]), observations.slice(1));
  }
}
var observations = [true, true, true, true];
var startState = false;
viz.table(Infer({
  model() {
    return hmm([startState], observations)
  }
}));
var ANSWER = (Infer({ model() { return hmm([startState], observations) } }));
no run
groundtruth output
[false,true,true,true,true]  0.8454
[false,false,true,true,true]  0.1144
[false,false,false,true,true]  0.0155
[false,true,true,true,false]  0.0127
[false,false,false,false,false]  0.0026
[false,false,false,false,true]  0.0021
[false,false,true,true,false]  0.0017
[false,true,true,false,false]  0.0017
[false,true,false,true,true]  0.0014
[false,true,true,false,true]  0.0014
[false,false,false,true,false]  0.0002
[false,false,true,false,false]  0.0002
… 4 more
raw JSON
{
  "__kind": "distribution",
  "probs": [
    0.002552337415047737, 0.002095484927020818, 0.00023283165855786852,
    0.015483655680220422, 0.0002328316585578681, 0.000191156242965684,
    0.001720406186691157, 0.11440960043767502, 0.0002328316585578681,
    0.000191156242965684, 0.000021239582551742652, 0.0014124642029342566,
    0.001720406186691157, 0.0014124642029342566, 0.01271217782640834,
    0.8453789558902199
  ],
  "support": [
    [false, false, false, false, false],
    [false, false, false, false, true],
    [false, false, false, true, false],
    [false, false, false, true, true],
    [false, false, true, false, false],
    [false, false, true, false, true],
    [false, false, true, true, false],
    [false, false, true, true, true],
    [false, true, false, false, false],
    [false, true, false, false, true],
    [false, true, false, true, false],
    [false, true, false, true, true],
    [false, true, true, false, false],
    [false, true, true, false, true],
    [false, true, true, true, false],
    [false, true, true, true, true]
  ]
}
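The groundtruth probabilities can be checked outside WebPPL. A plain-JavaScript brute-force sketch (not WebPPL) that enumerates the 2^4 appended state sequences directly; the fifth sampled state is never appended and its factor multiplies every execution by the same e^-2, so it cancels under normalization:

```javascript
var observations = [true, true, true, true];
var startState = false;

// weight = product of Bernoulli transition probs times the soft factors
var weightOf = function (states) {
  var w = 1;
  var prev = startState;
  for (var t = 0; t < states.length; t++) {
    var pTrue = prev ? 0.9 : 0.1;                              // transition
    w *= states[t] ? pTrue : 1 - pTrue;
    w *= (states[t] === observations[t]) ? 1 : Math.exp(-2);   // soft factor
    prev = states[t];
  }
  return w;
};

var seqs = [];
for (var i = 0; i < 16; i++) {
  seqs.push([!!(i & 8), !!(i & 4), !!(i & 2), !!(i & 1)]);
}
var z = seqs.reduce(function (acc, s) { return acc + weightOf(s); }, 0);
var posterior = seqs.map(function (s) {
  return { states: [startState].concat(s), p: weightOf(s) / z };
});
posterior.sort(function (a, b) { return b.p - a.p; });
// posterior[0] is [false,true,true,true,true] with p close to 0.8454
```

Each mismatch costs a factor e^-2, which is exactly the 7.39x gap between the top two rows of the table above.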
dippl-05-particlefilter/atom-2
prompt
system base instructions + WebPPL primer (identical to the copy above) user message
In WebPPL, implement a two-dimensional Gaussian random walk.
Define:
- init(dim): returns an array of dim samples from gaussian(200, 1) using repeat
- transition(pos): maps over pos, replacing each coordinate x with a sample from gaussian(x, 10)
- gaussianRandomWalk(n, dim): recursively builds a list of positions; base case (n==1) is [init(dim)]; recursive case appends transition(last(prevStates)) to the previous state list
Sample a walk of 10 steps in 2 dimensions and assign to ANSWER. End with var ANSWER = gaussianRandomWalk(10, 2);.groundtruth code
///fold:
var drawLines = function(canvas, start, positions){
  if (positions.length == 0) { return []; }
  var next = positions[0];
  canvas.line(start[0], start[1], next[0], next[1], 4, 0.2);
  drawLines(canvas, next, positions.slice(1));
  return;
}
var last = function(xs){
  return xs[xs.length - 1];
}
///
var init = function(dim){
  return repeat(dim, function(){ return gaussian(200, 1) });
}
var transition = function(pos){
  return map(
    function(x){ return gaussian(x, 10); },
    pos
  );
};
var gaussianRandomWalk = function(n, dim) {
  var prevStates = (n==1) ? [init(dim)] : gaussianRandomWalk(n-1, dim);
  var newState = transition(last(prevStates));
  return prevStates.concat([newState]);
};
var positions = gaussianRandomWalk(100, 2);
// Draw model output
var canvas = Draw(400, 400, true);
drawLines(canvas, positions[0], positions.slice(1));
var ANSWER = (gaussianRandomWalk(10, 2));
no run
groundtruth output
[
  [199.17928338368102, 200.61813592881856],
  [206.332471268273, 205.76981773009456],
  [194.45226382776647, 228.722882998382],
  [194.63248222039357, 213.93462415440072],
  [197.58443582153956, 207.14322423225457],
  [186.46856530180185, 221.23984160343517],
  [191.8485041339506, 235.97123891123283],
  [175.32888201870907, 242.90594421989812],
  [165.53919390047477, 231.53174733453062],
  [163.49342446214862, 222.65846090391753],
  [152.58366211523042, 223.31601714135357]
]
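The list above has 11 positions: the n == 1 base case contributes the initial point plus one transition, and each of the nine remaining levels appends one more. A plain-JavaScript sketch of the same recursion (not WebPPL), with a Box-Muller sampler standing in for the gaussian primitive; values will differ from the logged run:

```javascript
// Box-Muller transform: two uniforms -> one standard normal
var gaussian = function (mu, sigma) {
  var u = 1 - Math.random();            // avoid log(0)
  var v = Math.random();
  var z = Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
  return mu + sigma * z;
};
var last = function (xs) { return xs[xs.length - 1]; };
var init = function (dim) {
  var p = [];
  for (var i = 0; i < dim; i++) { p.push(gaussian(200, 1)); }
  return p;
};
var transition = function (pos) {
  return pos.map(function (x) { return gaussian(x, 10); });
};
var gaussianRandomWalk = function (n, dim) {
  var prevStates = (n === 1) ? [init(dim)] : gaussianRandomWalk(n - 1, dim);
  return prevStates.concat([transition(last(prevStates))]);
};
var walk = gaussianRandomWalk(10, 2);   // 11 positions of 2 coordinates each
```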
dippl-05-particlefilter/atom-3
prompt
system base instructions + WebPPL primer (identical to the copy above) user message
In WebPPL, implement a two-dimensional semi-Markov random walk with momentum.
Define:
- last(xs): returns xs[xs.length - 1]
- secondLast(xs): returns xs[xs.length - 2]
- init(dim): returns an array of dim samples from gaussian(200, 1) using repeat
- transition(lastPos, secondLastPos): uses map2 over both position arrays; each new coordinate is gaussian(lastX + (lastX - secondLastX) * 0.7, 3)
- semiMarkovWalk(n, dim): base case (n==2) returns [init(dim), init(dim)]; recursive case appends transition(last(prevStates), secondLast(prevStates))
Sample a walk of 10 steps in 2 dimensions and assign to ANSWER. End with var ANSWER = semiMarkovWalk(10, 2);.groundtruth code
///fold:
var drawLines = function(canvas, start, positions){
  if (positions.length == 0) { return []; }
  var next = positions[0];
  canvas.line(start[0], start[1], next[0], next[1], 4, 0.2);
  drawLines(canvas, next, positions.slice(1));
  return;
}
var last = function(xs){
  return xs[xs.length - 1];
}
var secondLast = function(xs){
  return xs[xs.length - 2];
}
///
var init = function(dim){
  return repeat(dim, function(){ return gaussian(200, 1) });
}
var transition = function(lastPos, secondLastPos){
  return map2(
    function(lastX, secondLastX){
      var momentum = (lastX - secondLastX) * .7;
      return gaussian(lastX + momentum, 3);
    },
    lastPos,
    secondLastPos
  );
};
var semiMarkovWalk = function(n, dim) {
  var prevStates = (n==2) ? [init(dim), init(dim)] : semiMarkovWalk(n-1, dim);
  var newState = transition(last(prevStates), secondLast(prevStates));
  return prevStates.concat([newState]);
};
var positions = semiMarkovWalk(80, 2);
// Draw model output
var canvas = Draw(400, 400, true);
drawLines(canvas, positions[0], positions.slice(1));
var ANSWER = (semiMarkovWalk(10, 2));
no run
groundtruth output
[
  [199.93368884269614, 198.7971535448458],
  [200.74140332990797, 200.5000707293415],
  [203.27398444923335, 205.45165770487736],
  [205.53498761612093, 205.57242586513922],
  [210.2262306731615, 203.12919097347503],
  [219.7454753501496, 200.06936815326307],
  [227.89892253358266, 193.96084823654147],
  [233.0446425610827, 188.95130292648278],
  [234.6491870008044, 189.55977201957575],
  [233.1907583077067, 195.8293851649039],
  [229.01684679182674, 197.49875640440874]
]
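Again 11 positions: the n == 2 base case yields three (two inits plus one transition), and levels 3 through 10 append eight more. A plain-JavaScript sketch of the momentum walk (not WebPPL), with a Box-Muller normal sampler; values will differ from the logged run:

```javascript
// Box-Muller transform: two uniforms -> one standard normal
var gaussian = function (mu, sigma) {
  var u = 1 - Math.random();
  var v = Math.random();
  return mu + sigma * Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
};
var last = function (xs) { return xs[xs.length - 1]; };
var secondLast = function (xs) { return xs[xs.length - 2]; };
var init = function (dim) {
  var p = [];
  for (var i = 0; i < dim; i++) { p.push(gaussian(200, 1)); }
  return p;
};
var transition = function (lastPos, secondLastPos) {
  return lastPos.map(function (x, i) {
    var momentum = (x - secondLastPos[i]) * 0.7;   // carry 70% of last step
    return gaussian(x + momentum, 3);
  });
};
var semiMarkovWalk = function (n, dim) {
  var prevStates = (n === 2) ? [init(dim), init(dim)] : semiMarkovWalk(n - 1, dim);
  return prevStates.concat([transition(last(prevStates), secondLast(prevStates))]);
};
var walk = semiMarkovWalk(10, 2);       // 11 positions of 2 coordinates each
```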
dippl-05-particlefilter/atom-4
prompt
system base instructions + WebPPL primer (identical to the copy above) user message
In WebPPL, implement a two-component Gaussian mixture model in 2D.
Define:
- makeGaussian(dim): samples means (array of dim values from uniform(20, 380)) and stds (array of dim values from uniform(5, 50)), and returns a thunk that calls map2(gaussian, means, stds) to produce a 2D point
- var mixtureWeight = uniform(0, 1)
- var gaussian1 = makeGaussian(2) and var gaussian2 = makeGaussian(2)
- gaussianMixture(): returns gaussian1() if flip(mixtureWeight), else gaussian2()
Generate 20 points and assign to ANSWER. End with var ANSWER = repeat(20, gaussianMixture);.groundtruth code
///fold:
var drawPoints = function(canvas, points){
  if (points.length > 0) {
    var next = points[0];
    canvas.circle(next[0], next[1], 2, "black", "white");
    drawPoints(canvas, points.slice(1));
  }
}
///
var makeGaussian = function(dim){
  var means = repeat(dim, function(){uniform(20, 380)});
  var stds = repeat(dim, function(){uniform(5, 50)});
  return function(){
    return map2(gaussian, means, stds);
  }
}
var mixtureWeight = uniform(0, 1);
var gaussian1 = makeGaussian(2);
var gaussian2 = makeGaussian(2);
var gaussianMixture = function(){
  if (flip(mixtureWeight)) {
    return gaussian1();
  } else {
    return gaussian2();
  }
}
var points = repeat(100, gaussianMixture);
// Draw model output
var canvas = Draw(400, 400, true);
drawPoints(canvas, points);
var ANSWER = (repeat(20, gaussianMixture));
no run
groundtruth output
[
  [262.12868030402575, 356.65138561784795],
  [256.762516631989, 373.149997528529],
  [279.43208358642073, 379.3504373407656],
  [268.20991914052377, 379.1861504587717],
  [240.01737337420593, 362.1200459058964],
  [278.2243045406285, 395.95901435996393],
  [259.12327929449265, 382.77304597112555],
  [255.49766709501353, 349.1809580980835],
  [287.1086571860336, 341.0140129101485],
  [279.7674404325199, 415.98481734669775],
  [250.334045010173, 345.67319992267414],
  [258.83348592202105, 333.0936802621058],
  [254.71612050019274, 369.99552616113766],
  [260.42136168852096, 370.20110344309893],
  [275.4651659824036, 375.9737560492925],
  [287.62507553910484, 316.8286894048985],
  [262.0275835845372, 328.73415233052134],
  [257.2858422700161, 450.1157480595846],
  [283.6172911606304, 362.6042842170142],
  [253.8276847408976, 383.284705108851]
]
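A plain-JavaScript sketch of the mixture (not WebPPL), with Box-Muller normals and Math.random() standing in for uniform and flip. The logged point cloud came from one particular draw of means, stds, and mixture weight, so values will differ:

```javascript
var gaussian = function (mu, sigma) {
  var u = 1 - Math.random();
  var v = Math.random();
  return mu + sigma * Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
};
var uniform = function (a, b) { return a + (b - a) * Math.random(); };
var makeGaussian = function (dim) {
  var means = [], stds = [];
  for (var i = 0; i < dim; i++) {
    means.push(uniform(20, 380));
    stds.push(uniform(5, 50));
  }
  // thunk: each call draws a fresh point from this fixed component
  return function () {
    return means.map(function (m, i) { return gaussian(m, stds[i]); });
  };
};
var mixtureWeight = Math.random();
var gaussian1 = makeGaussian(2);
var gaussian2 = makeGaussian(2);
var gaussianMixture = function () {
  return (Math.random() < mixtureWeight) ? gaussian1() : gaussian2();
};
var points = [];
for (var k = 0; k < 20; k++) { points.push(gaussianMixture()); }
```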
06-mcmc 2 atoms 2
dippl-06-mcmc/atom-1
prompt
system base instructions + WebPPL primer (identical to the copy above) user message
In WebPPL, define a function skewBinomial (no arguments) that samples three independent fair coin flips a, b, c from Bernoulli({p: 0.5}), applies factor((a|b) ? 0 : -1) to downweight executions where neither a nor b is true, and returns a + b + c. Compute the exact marginal distribution using enumeration. End with var ANSWER = Infer({ model: skewBinomial });.groundtruth code
var skewBinomial = function(){
  var a = sample(Bernoulli({p: 0.5}))
  var b = sample(Bernoulli({p: 0.5}))
  var c = sample(Bernoulli({p: 0.5}))
  factor( (a|b)?0:-1 )
  return a + b + c
}
viz(Infer({ model: skewBinomial }));
var ANSWER = (Infer({ model: skewBinomial }));
no run
groundtruth output
2  0.4454
1  0.3515
3  0.1485
0  0.0546
raw JSON
{
  "__kind": "distribution",
  "probs": [0.05461588628651796, 0.3515386287621727, 0.4453841137134821, 0.14846137123782735],
  "support": [0, 1, 2, 3]
}
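The enumerated distribution can be verified by hand: of the eight equally likely triples, only the two with neither a nor b true are downweighted, by e^-1. A plain-JavaScript check (not WebPPL):

```javascript
// Enumerate all 8 triples, weight each by its factor, and normalize.
var weights = { 0: 0, 1: 0, 2: 0, 3: 0 };
for (var i = 0; i < 8; i++) {
  var a = (i & 4) ? 1 : 0, b = (i & 2) ? 1 : 0, c = (i & 1) ? 1 : 0;
  var w = 0.125 * ((a | b) ? 1 : Math.exp(-1));   // factor((a|b) ? 0 : -1)
  weights[a + b + c] += w;
}
var z = weights[0] + weights[1] + weights[2] + weights[3];
var probs = [0, 1, 2, 3].map(function (k) { return weights[k] / z; });
// probs matches the raw JSON above: [0.0546, 0.3515, 0.4454, 0.1485]
```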
dippl-06-mcmc/atom-2
prompt
system base instructions + WebPPL primer (identical to the copy above) user message
In WebPPL, define a function skewBinomial (no arguments) that samples three independent fair coin flips a, b, c from Bernoulli({p: 0.5}), applies factor((a|b) ? 0 : -1) to downweight executions where neither a nor b is true, and returns a + b + c. Approximate the marginal distribution using MCMC with 1000 samples and 200 burn-in steps. End with var ANSWER = Infer({ model: skewBinomial, method: "MCMC", samples: 1000, burn: 200 });.groundtruth code
var skewBinomial = function(){
  var a = sample(Bernoulli({p: 0.5}))
  var b = sample(Bernoulli({p: 0.5}))
  var c = sample(Bernoulli({p: 0.5}))
  factor( (a|b)?0:-1 )
  return a + b + c
}
viz(Infer({ model: skewBinomial }));
var ANSWER = (Infer({ model: skewBinomial, method: 'MCMC', samples: 1000, burn: 200 }));
no run
groundtruth output
2  0.4390
1  0.3500
3  0.1630
0  0.0480
raw JSON
{
  "__kind": "distribution",
  "probs": [0.048, 0.35, 0.43899999999999995, 0.163],
  "support": [0, 1, 2, 3]
}
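The MCMC estimate above wobbles around the exact [0.0546, 0.3515, 0.4454, 0.1485]. A plain-JavaScript Metropolis-Hastings sketch of the same posterior (not WebPPL, and not necessarily the kernel WebPPL's MCMC uses): an independence sampler that proposes fresh fair flips, so with the proposal equal to the prior the acceptance ratio reduces to the factor-weight ratio w'/w:

```javascript
// Unnormalized factor weight of a state [a, b, c]
var weight = function (s) { return (s[0] | s[1]) ? 1 : Math.exp(-1); };
var flip3 = function () {
  return [Math.random() < 0.5 ? 1 : 0,
          Math.random() < 0.5 ? 1 : 0,
          Math.random() < 0.5 ? 1 : 0];
};
var state = flip3();
var counts = [0, 0, 0, 0];
var burn = 200, samples = 50000;        // more samples than the exercise, to
for (var i = 0; i < burn + samples; i++) {  // tighten the estimate
  var prop = flip3();
  if (Math.random() < weight(prop) / weight(state)) { state = prop; }
  if (i >= burn) { counts[state[0] + state[1] + state[2]] += 1; }
}
var est = counts.map(function (c) { return c / samples; });
// est approaches the exact marginal as samples grows
```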