ppl-gym

/ exercises (v2)
76 atoms · 75 scoreable (99%) · showing 76
bucket: TV=0 41 · TV<.05 7 · TV<.5 8 · TV<1 10 · TV=1 3 · val+ 4 · val- 2 · shape! 1
04.1-agents-as-programs · 9 atoms · 7✓ · 0✗
probmods2-agents-as-programs/ex1.a · distribution · 8 runs · TV=0.00 · TV<.05
prompt
system base instructions used by every run
You are a WebPPL code generator. Given an exercise, produce a single WebPPL program that binds the answer to a top-level variable named `ANSWER`.

Answer format (strict): emit exactly one fenced code block.

```js
<your WebPPL program ending with: var ANSWER = <expression>;>
```

The last statement of your program MUST be `var ANSWER = <expression>;` where `<expression>` is the answer the prompt asks for - typically an `Infer({...}, model)` for a distribution, a numeric/array value, or an object literal `{key: value, ...}` for a record of multiple sub-answers.

Do not write prose, explanations, or multiple code blocks. Do not use `return` at the top level - WebPPL doesn't allow it (return is only for function bodies).
system WebPPL primer appended to system on `*-primer-*` runs only
WebPPL is a probabilistic programming language with JavaScript-like syntax. A few quirks to remember:

Control flow: WebPPL is a functional subset of JavaScript. There are no `for` or `while` loops. Iterate with `map(fn, list)`, `mapData({data: list}, fn)`, `repeat(n, fn)`, or recursion. `_.range(start, end)` produces integer ranges.

Functions: define with `var f = function(args) { ...; return value; }`. The last expression of a function is auto-returned only if it is a bare expression statement; otherwise use explicit `return`. Memoize with `mem(fn)` so repeated calls with the same arguments return the same value within an inference run.

Random primitives - lowercase samples directly, uppercase constructs a Distribution object. They are NOT interchangeable:

  flip(p)            -> boolean             (sample, no `sample()` needed)
  uniform(a, b)      -> number              (sample)
  gaussian(mu, sg)   -> number              (sample)
  beta(a, b)         -> number              (sample)
  dirichlet(alpha)   -> tensor              (sample; alpha must be a Vector)
  randomInteger(n)   -> int 0..n-1          (sample)
  uniformDrift({a, b, width})        -> sample  (drift kernel; do NOT wrap in sample())
  dirichletDrift({alpha, conc.})     -> sample  (drift kernel; do NOT wrap in sample())

Distribution *constructors* (used with `sample(D)`, `observe(D, val)`, or as return value of inference):
  Bernoulli({p})              Beta({a, b})              Gaussian({mu, sigma})
  Uniform({a, b})             Categorical({vs, ps})     Binomial({p, n})
  Dirichlet({alpha})          Multinomial({ps, n})      Poisson({mu})

Common gotchas:
- `Dirichlet({alpha: ...})` requires a Vector, not a JS array. Use `ones([n, 1])` or `Vector([1, 1, ...])`.
- WebPPL only supports `var`, not `let` / `const`.
- WebPPL is *single-assignment*: `var X = ...;` only. You can't declare `var X;` and assign later, and you can't reassign `X = ...` after the declaration. Use ternaries or recursion to express conditional bindings.
- Array methods like `.fill`, `.indexOf`, `.map`, `.forEach`, `.concat` may fail. Prefer `repeat(n, fn)`, `_.indexOf(arr, x)`, `map(fn, arr)`, `mapData({data: arr}, fn)`, and `arr1.concat(arr2)` only at the top of a returned expression.
- Always end top-level statements with `;`. WebPPL inherits JS ASI rules, so `var x = f()
[a, b]` parses as `var x = f()[a, b]` (subscript), not two statements.

Inference: `Infer({method: ..., samples: N, ...}, modelFn)` runs `modelFn` under the chosen method and returns a Distribution over its return value. Methods: `'enumerate'`, `'rejection'`, `'forward'`, `'MCMC'`, `'SMC'`. For MCMC, optional kernels: `kernel: {HMC: {steps, stepSize}}`; the drift kernels above can replace `uniform`/`dirichlet` calls inside the model.

Conditioning: `condition(bool)` zeros out worlds where bool is false. `observe(dist, value)` factors in `dist.score(value)`. `factor(score)` adds `score` to the log-probability directly.

Tensors / utilities: `Vector([a, b, ...])`, `T.get(vec, i)`, `ones([rows, cols])`, `_.range`, `_.flatten`, `_.zipObject`, `_.fromPairs`, `_.includes`, `_.parseInt`, `_.uniq`, `_.merge`. Arrays use `Array.isArray`, `Object.keys` works on plain objects.

Display: there's no `viz`, `print`, `display`, etc. that affects the answer - those are browser-only. The answer is the value of your program's last expression.
user message
Modify the factor(...) line in the coin-flipping model so that the soft condition makes heads happen with approximately 95% probability:
Infer({method: 'enumerate'}, function () {
  var A = flip()
  factor(A) // edit this line
  return A
});
Hint: a factor of c gives weight exp(c). End your program with the Infer(...).
groundtruth code
var ANSWER = (Infer({method: 'enumerate'}, function () {
  var A = flip();
  factor(A*3);
  return A;
}));
generated code
sonnet-46-primer-v3 TV<.05 · haiku-45-noprimer-v3 TV<.05 · haiku-45-primer-v3 TV<.05 · haiku-45-think-noprimer-v3 TV<.05 · haiku-45-think-primer-v3 TV<.05 · sonnet-46-noprimer-v3 TV<.05 · sonnet-46-think-noprimer-v3 TV<.05 · sonnet-46-think-primer-v3 TV<.05
var ANSWER = Infer({method: 'enumerate'}, function () {
  var A = flip();
  factor(A ? Math.log(19) : 0);
  return A;
});
var ANSWER = Infer({method: 'enumerate'}, function () {
  var A = flip()
  factor(A ? Math.log(19) : Math.log(1))
  return A
});
var ANSWER = Infer({method: 'enumerate'}, function () {
  var A = flip()
  factor(A ? Math.log(0.95 / 0.05) : Math.log(0.05 / 0.95))
  return A
});
Infer({method: 'enumerate'}, function () {
  var A = flip()
  factor(A ? Math.log(19) : 0)
  return A
});

var ANSWER = Infer({method: 'enumerate'}, function () {
  var A = flip()
  factor(A ? Math.log(19) : 0)
  return A
});
var ANSWER = Infer({method: 'enumerate'}, function () {
  var A = flip()
  factor(A ? Math.log(19) : 0)
  return A
});
var ANSWER = Infer({method: 'enumerate'}, function () {
  var A = flip()
  factor(A ? Math.log(19) : 0)
  return A
});
var ANSWER = Infer({method: 'enumerate'}, function () {
  var A = flip()
  factor(A ? Math.log(19) : 0)
  return A
});
var ANSWER = Infer({method: 'enumerate'}, function () {
  var A = flip();
  factor(A ? Math.log(19) : 0);
  return A;
});
groundtruth output
true    0.9526
false   0.0474
raw JSON
{
  "__kind": "distribution",
  "probs": [
    0.04742587317756678,
    0.9525741268224333
  ],
  "support": [
    false,
    true
  ]
}
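The groundtruth above has a closed form that can be checked by hand. This is a plain-JS sanity check (run under node, not the WebPPL runtime): with two equally likely worlds and `factor(c)` applied only when `A` is true, the posterior is P(A) = e^c / (e^c + 1).

```js
// Plain JS, not WebPPL: closed-form posterior for a two-point factor model.
var pHeads = function(c) { return Math.exp(c) / (Math.exp(c) + 1); };

console.log(pHeads(3).toFixed(4));             // "0.9526" -- the groundtruth's factor(A*3)
console.log(pHeads(Math.log(19)).toFixed(2));  // "0.95"   -- the exact factor the models chose
```

The generated solutions' `Math.log(19)` hits 95% exactly; the groundtruth's `3` is an approximation (log 19 ≈ 2.944), which is why all eight runs land in the TV<.05 bucket rather than TV=0.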
feedback
not signed
probmods2-agents-as-programs/ex1.b · distribution · 8 runs · TV=0.10 · TV<.5
prompt
system base instructions used by every run (identical to ex1.a above)
system WebPPL primer appended to system on `*-primer-*` runs only (identical to ex1.a above)
user message
Three coins are flipped. Use factor to favor outcomes where the number of heads (true's) equals 2:
Infer({}, function() {
    var a = flip(0.5);
    var b = flip(0.5);
    var c = flip(0.5);
    factor(...);  // fill in
    return a;
})
End your program with the Infer(...).
groundtruth code
var ANSWER = (Infer({}, function() {
    var a = flip(0.5);
    var b = flip(0.5);
    var c = flip(0.5);
    factor(1*((a+b+c)==2));
    return a;
}));
generated code
sonnet-46-primer-v3 TV<.5 · haiku-45-noprimer-v3 TV<.5 · haiku-45-primer-v3 TV<.5 · haiku-45-think-noprimer-v3 TV=0 · haiku-45-think-primer-v3 TV=0 · sonnet-46-noprimer-v3 TV<.5 · sonnet-46-think-noprimer-v3 TV<.5 · sonnet-46-think-primer-v3 TV<.5
var ANSWER = Infer({}, function() {
    var a = flip(0.5);
    var b = flip(0.5);
    var c = flip(0.5);
    var numHeads = (a ? 1 : 0) + (b ? 1 : 0) + (c ? 1 : 0);
    factor(numHeads === 2 ? 0 : -Infinity);
    return a;
});
var ANSWER = Infer({}, function() {
    var a = flip(0.5);
    var b = flip(0.5);
    var c = flip(0.5);
    var numHeads = (a ? 1 : 0) + (b ? 1 : 0) + (c ? 1 : 0);
    factor(numHeads === 2 ? 2 : 0);
    return a;
});
var ANSWER = Infer({}, function() {
    var a = flip(0.5);
    var b = flip(0.5);
    var c = flip(0.5);
    var numHeads = (a ? 1 : 0) + (b ? 1 : 0) + (c ? 1 : 0);
    factor(numHeads === 2 ? 0 : -Infinity);
    return a;
});
var ANSWER = Infer({}, function() {
    var a = flip(0.5);
    var b = flip(0.5);
    var c = flip(0.5);
    var numHeads = (a ? 1 : 0) + (b ? 1 : 0) + (c ? 1 : 0);
    factor(numHeads === 2 ? 1 : 0);
    return a;
});
var ANSWER = Infer({}, function() {
    var a = flip(0.5);
    var b = flip(0.5);
    var c = flip(0.5);
    var numHeads = (a ? 1 : 0) + (b ? 1 : 0) + (c ? 1 : 0);
    factor(numHeads === 2 ? 1 : 0);
    return a;
});
var ANSWER = Infer({}, function() {
    var a = flip(0.5);
    var b = flip(0.5);
    var c = flip(0.5);
    var numHeads = (a ? 1 : 0) + (b ? 1 : 0) + (c ? 1 : 0);
    factor(numHeads === 2 ? 0 : -Infinity);
    return a;
});
var ANSWER = Infer({}, function() {
    var a = flip(0.5);
    var b = flip(0.5);
    var c = flip(0.5);
    var numHeads = (a ? 1 : 0) + (b ? 1 : 0) + (c ? 1 : 0);
    factor(numHeads === 2 ? 0 : -Infinity);
    return a;
});
var ANSWER = Infer({}, function() {
    var a = flip(0.5);
    var b = flip(0.5);
    var c = flip(0.5);
    var numHeads = (a ? 1 : 0) + (b ? 1 : 0) + (c ? 1 : 0);
    factor(numHeads === 2 ? 0 : -Infinity);
    return a;
});
groundtruth output
true    0.5653
false   0.4347
raw JSON
{
  "__kind": "distribution",
  "probs": [
    0.43469015541197054,
    0.5653098445880296
  ],
  "support": [
    false,
    true
  ]
}
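The spread across runs has a simple explanation. A plain-JS enumeration (not WebPPL; note the reassignment, which WebPPL would forbid) of the eight coin triples under the groundtruth's soft factor:

```js
// Weight each of the 8 outcomes by exp(factor), where the groundtruth's
// factor(1*((a+b+c)==2)) contributes 1 iff exactly two coins are heads.
var wTrue = 0;
var wFalse = 0;
[true, false].forEach(function(a) {
  [true, false].forEach(function(b) {
    [true, false].forEach(function(c) {
      var heads = (a ? 1 : 0) + (b ? 1 : 0) + (c ? 1 : 0);
      var w = Math.exp(heads === 2 ? 1 : 0);
      if (a) { wTrue += w; } else { wFalse += w; }
    });
  });
});
var pTrue = wTrue / (wTrue + wFalse);  // (2 + 2e) / (5 + 3e)
console.log(pTrue.toFixed(4));         // "0.5653" -- matches the groundtruth
```

The five runs that wrote `factor(numHeads === 2 ? 0 : -Infinity)` turned the soft factor into a hard condition, which gives P(true) = 2/3 ≈ 0.6667 instead, about 0.10 away from the groundtruth's 0.5653; only the `? 1 : 0` runs reproduce it exactly, hence their TV=0 badges.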
feedback
not signed
probmods2-agents-as-programs/ex2.a · distribution · 8 runs · TV=0.00 · TV<.05
prompt
system base instructions used by every run (identical to ex1.a above)
system WebPPL primer appended to system on `*-primer-*` runs only (identical to ex1.a above)
user message
Ultimatum game: a proposer allocates $10 between themselves and a responder in $1 increments. The responder accepts or rejects. If accepted, both get the split; if rejected, both get $0. Assume the responder is a strict utilitarian (accepts any offer >= $1). The proposer is a soft maximizer who wants to keep as much of the $10 as possible. End your program with the Infer(...) over the proposer's offer.
var responder = function(offer) { /* fill in */ }
Infer({method: "enumerate"}, function(){
  // sample offer, compute reward = responder(offer) ? (10-offer) : 0
  // factor(reward)
  return offer
})
groundtruth code
var responder = function(offer) {    
    return (offer>0 ? true : false);
};
var ANSWER = (Infer({method: "enumerate"}, function(){
    var offer = uniformDraw([0,1,2,3,4,5,6,7,8,9,10]);
    var reward = responder(offer) ? (10 - offer) : 0;
    factor(reward);
    return offer;
}));
generated code
sonnet-46-primer-v3 TV<.05 · haiku-45-noprimer-v3 TV=0 · haiku-45-primer-v3 TV=0 · haiku-45-think-noprimer-v3 TV=0 · haiku-45-think-primer-v3 TV=0 · sonnet-46-noprimer-v3 TV=0 · sonnet-46-think-noprimer-v3 TV=0 · sonnet-46-think-primer-v3 TV=0
var responder = function(offer) {
  return offer >= 1;
};

var ANSWER = Infer({method: "enumerate"}, function() {
  var offer = randomInteger(10) + 1; // offers from 1 to 10
  var reward = responder(offer) ? (10 - offer) : 0;
  factor(reward);
  return offer;
});
var responder = function(offer) {
  return offer >= 1;
};

var ANSWER = Infer({method: "enumerate"}, function(){
  var offer = uniformDraw([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]);
  var accepted = responder(offer);
  var reward = accepted ? (10 - offer) : 0;
  factor(reward);
  return offer;
});
var responder = function(offer) {
  return offer >= 1;
};

var ANSWER = Infer({method: "enumerate"}, function(){
  var offer = randomInteger(11); // 0 to 10
  var accepted = responder(offer);
  var reward = accepted ? (10 - offer) : 0;
  factor(reward);
  return offer;
});
var responder = function(offer) {
  return offer >= 1;
}

var ANSWER = Infer({method: "enumerate"}, function(){
  var offer = uniformDraw([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]);
  var accepted = responder(offer);
  var reward = accepted ? (10 - offer) : 0;
  factor(reward);
  return offer;
});
var responder = function(offer) {
  return offer >= 1;
};

var ANSWER = Infer({method: "enumerate"}, function(){
  var offer = randomInteger(11);
  var accepted = responder(offer);
  var reward = accepted ? (10 - offer) : 0;
  factor(reward);
  return offer;
});
var responder = function(offer) {
  return offer >= 1;
};

var ANSWER = Infer({method: "enumerate"}, function() {
  var offer = sample(RandomInteger({n: 11})); // 0 to 10
  var accepted = responder(offer);
  var reward = accepted ? (10 - offer) : 0;
  factor(reward);
  return offer;
});
var responder = function(offer) {
  return offer >= 1;
};

var ANSWER = Infer({method: "enumerate"}, function() {
  var offer = uniformDraw([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]);
  var accepted = responder(offer);
  var reward = accepted ? (10 - offer) : 0;
  factor(reward);
  return offer;
});
var responder = function(offer) {
  return offer >= 1;
};

var ANSWER = Infer({method: "enumerate"}, function() {
  var offer = randomInteger(11); // 0 through 10
  var reward = responder(offer) ? (10 - offer) : 0;
  factor(reward);
  return offer;
});
groundtruth output
1     0.6321
2     0.2325
3     0.0855
4     0.0315
5     0.0116
6     0.0043
7     0.0016
8     0.0006
9     0.0002
0     0.0001
10    0.0001
raw JSON
{
  "__kind": "distribution",
  "probs": [
    0.00007800733050947138,
    0.6320999460843584,
    0.2325365749300126,
    0.08554542523717426,
    0.03147040323102506,
    0.011577314354069453,
    0.004259055934841188,
    0.0015668191172272911,
    0.0005764005412623084,
    0.00021204590901049492,
    0.00007800733050947138
  ],
  "support": [
    0,
    1,
    2,
    3,
    4,
    5,
    6,
    7,
    8,
    9,
    10
  ]
}
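A plain-JS check (not WebPPL) of the groundtruth: the utilitarian responder deterministically accepts any offer ≥ $1, so each offer k is weighted by exp(reward), i.e. e^(10−k) when accepted and e^0 when not.

```js
// Plain JS, not WebPPL: exact softmax-style weights for the proposer's offer.
var offers = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
var weights = offers.map(function(k) {
  return Math.exp(k >= 1 ? 10 - k : 0);   // reward = 10-k if accepted, else 0
});
var Z = weights.reduce(function(s, w) { return s + w; }, 0);
var probs = weights.map(function(w) { return w / Z; });

console.log(probs[1].toFixed(4));  // "0.6321" -- keep $9, offer the minimal $1
console.log(probs[0].toFixed(4));  // "0.0001" -- offering $0 and $10 both earn weight e^0
```

The groundtruth's `offer>0` and the generated `offer >= 1` are equivalent on integer offers, so seven runs score TV=0; the one TV<.05 run (`randomInteger(10) + 1`) merely dropped offer 0, whose mass is only ≈ 0.00008, from the support.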
feedback
not signed
probmods2-agents-as-programs/ex2.b · distribution · 8 runs · TV=0.00 · TV=0
prompt
system base instructions used by every run (identical to ex1.a above)
system WebPPL primer appended to system on `*-primer-*` runs only (identical to ex1.a above)
user message
Modified ultimatum game: the responder accepts in proportion to the fraction of $10 allocated to her, raised to a power alpha = 2 (a spitefulness parameter):
var alpha = 2;
var responder = function(offer, alpha) {
  var p = Math.pow(offer/10, alpha);
  return flip(p);
}
Use offers 0..10 in $1 increments. End with the Infer(...) over the proposer's offer (factor on reward).
groundtruth code
var alpha = 2;

var responder = function(offer, alpha) {    
    var p = Math.pow(offer/10,alpha);
    return flip(p);
};
var ANSWER = (Infer({method: "enumerate"}, function(){
    var offer = uniformDraw([0,1,2,3,4,5,6,7,8,9,10]);
    var reward = responder(offer,alpha) ? (10 - offer) : 0;
    factor(reward);
    return offer;
}));
generated code
sonnet-46-primer-v3 TV=0 · haiku-45-noprimer-v3 TV=0 · haiku-45-primer-v3 TV=0 · haiku-45-think-noprimer-v3 TV=0 · haiku-45-think-primer-v3 TV=0 · sonnet-46-noprimer-v3 TV=0 · sonnet-46-think-noprimer-v3 TV=0 · sonnet-46-think-primer-v3 TV=0
var alpha = 2;

var responder = function(offer, alpha) {
  var p = Math.pow(offer / 10, alpha);
  return flip(p);
};

var model = function() {
  var offer = randomInteger(11); // 0..10
  var accepted = responder(offer, alpha);
  var reward = accepted ? (10 - offer) : 0;
  factor(reward);
  return offer;
};

var ANSWER = Infer({method: 'enumerate'}, model);
var alpha = 2;

var responder = function(offer, alpha) {
  var p = Math.pow(offer/10, alpha);
  return flip(p);
}

var proposer = function() {
  var offer = uniformDraw([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]);
  var accepted = responder(offer, alpha);
  var proposerReward = accepted ? (10 - offer) : 0;
  factor(proposerReward);
  return offer;
}

var ANSWER = Infer({method: "enumerate"}, proposer);
var alpha = 2;

var responder = function(offer, alpha) {
  var p = Math.pow(offer/10, alpha);
  return flip(p);
};

var proposer = function() {
  var offer = randomInteger(11); // 0..10
  var accepted = responder(offer, alpha);
  var proposerReward = accepted ? (10 - offer) : 0;
  factor(proposerReward);
  return offer;
};

var ANSWER = Infer({method: 'enumerate'}, proposer);
var alpha = 2;

var responder = function(offer, alpha) {
  var p = Math.pow(offer/10, alpha);
  return flip(p);
};

var model = function() {
  var offer = uniformDraw([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]);
  
  var accepts = responder(offer, alpha);
  
  var proposerReward = accepts ? (10 - offer) : 0;
  
  factor(proposerReward);
  
  return offer;
};

var ANSWER = Infer({method: 'enumerate'}, model);
var alpha = 2;

var responder = function(offer, alpha) {
  var p = Math.pow(offer/10, alpha);
  return flip(p);
};

var model = function() {
  var offer = randomInteger(11); // offers 0..10
  var accepted = responder(offer, alpha);
  var proposerReward = accepted ? (10 - offer) : 0;
  factor(proposerReward); // factor on the proposer's reward
  return offer;
};

var ANSWER = Infer({method: 'enumerate'}, model);
var alpha = 2;

var responder = function(offer, alpha) {
  var p = Math.pow(offer / 10, alpha);
  return flip(p);
};

var model = function() {
  var offer = randomInteger(11); // 0..10
  var accepted = responder(offer, alpha);
  var reward = accepted ? (10 - offer) : 0;
  factor(reward);
  return offer;
};

var ANSWER = Infer({method: 'enumerate'}, model);
var alpha = 2;

var responder = function(offer, alpha) {
  var p = Math.pow(offer / 10, alpha);
  return flip(p);
};

var proposerModel = function() {
  var offer = randomInteger(11); // 0, 1, ..., 10
  var accepted = responder(offer, alpha);
  var reward = accepted ? (10 - offer) : 0;
  factor(reward);
  return offer;
};

var ANSWER = Infer({method: 'enumerate'}, proposerModel);
var alpha = 2;

var responder = function(offer, alpha) {
  var p = Math.pow(offer / 10, alpha);
  return flip(p);
};

var model = function() {
  var offer = randomInteger(11);   // 0, 1, ..., 10
  var accepted = responder(offer, alpha);
  var reward = accepted ? (10 - offer) : 0;
  factor(reward);
  return offer;
};

var ANSWER = Infer({method: 'enumerate'}, model);
groundtruth output
2     0.2700
3     0.2237
1     0.1842
4     0.1469
5     0.0850
6     0.0456
7     0.0233
8     0.0114
9     0.0054
0     0.0022
10    0.0022
raw JSON
{
  "__kind": "distribution",
  "probs": [
    0.00224620121364029,
    0.18423530872499344,
    0.26998961109724834,
    0.2237373289286295,
    0.1468759683091889,
    0.08502610543432941,
    0.0455874038888365,
    0.02325247975156222,
    0.011430908774237995,
    0.005372482663693508,
    0.00224620121364029
  ],
  "support": [
    0,
    1,
    2,
    3,
    4,
    5,
    6,
    7,
    8,
    9,
    10
  ]
}
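Enumeration in the groundtruth integrates out the responder's flip, so each offer k carries weight p·e^(10−k) + (1−p)·e^0 with acceptance probability p = (k/10)^alpha. A plain-JS check (not WebPPL):

```js
// Plain JS, not WebPPL: marginal weight per offer after summing over accept/reject.
var alpha = 2;
var offers = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
var weights = offers.map(function(k) {
  var p = Math.pow(k / 10, alpha);        // responder's acceptance probability
  return p * Math.exp(10 - k) + (1 - p);  // accepted branch + rejected branch (e^0 = 1)
});
var Z = weights.reduce(function(s, w) { return s + w; }, 0);

console.log((weights[2] / Z).toFixed(4));  // "0.2700" -- the mode shifts up to $2
console.log((weights[1] / Z).toFixed(4));  // "0.1842"
```

All eight runs score TV=0 here: the spiteful responder accepts a $1 offer only 1% of the time, so the proposer's modal offer moves from $1 (in ex2.a) to $2.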
feedback
not signed
probmods2-agents-as-programs/ex2.d · distribution · 8 runs · TV=0.83 · TV<1
prompt
system base instructions used by every run (identical to ex1.a above)
system WebPPL primer appended to system on `*-primer-*` runs only (identical to ex1.a above)
[a, b]` parses as `var x = f()[a, b]` (subscript), not two statements.

Inference: `Infer({method: ..., samples: N, ...}, modelFn)` runs `modelFn` under the chosen method and returns a Distribution over its return value. Methods: `'enumerate'`, `'rejection'`, `'forward'`, `'MCMC'`, `'SMC'`. For MCMC, optional kernels: `kernel: {HMC: {steps, stepSize}}`; the drift kernels above can replace `uniform`/`dirichlet` calls inside the model.

Conditioning: `condition(bool)` zeros out worlds where bool is false. `observe(dist, value)` factors in `dist.score(value)`. `factor(score)` adds `score` to the log-probability directly.

Tensors / utilities: `Vector([a, b, ...])`, `T.get(vec, i)`, `ones([rows, cols])`, `_.range`, `_.flatten`, `_.zipObject`, `_.fromPairs`, `_.includes`, `_.parseInt`, `_.uniq`, `_.merge`. Arrays use `Array.isArray`, `Object.keys` works on plain objects.

Display: there's no `viz`, `print`, `display`, etc. that affects the answer - those are browser-only. The answer is the value of your program's last expression.
user message
In the ultimatum game (responder accepts with probability (offer/10)^alpha), the proposer doesn't know alpha but believes it is uniformly distributed on [0.5, 5]. The proposer offered $2 and the responder rejected it. End with the Infer(...) returning the posterior over alpha. Use MCMC with 50000 samples.
groundtruth code
var responder = function(offer, alpha) {    
    var p = Math.pow(offer/10,alpha);
    return flip(p);
};
var ANSWER = (Infer({method: "MCMC", samples:50000}, function(){
    var alpha = uniform(0.5,5);
    var offer = 2;
    var reward = responder(offer, alpha) ? (10 - offer) : 0;
    condition(reward==0);
    return alpha;
}));
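The MCMC posterior above also has a simple analytic form: a single rejection of offer $2 has likelihood `1 - 0.2^alpha`, so the posterior density is proportional to `1 - 0.2^alpha` on [0.5, 5]. A quadrature sketch in plain JavaScript (not WebPPL) that cross-checks the normalizer and posterior mean:

```javascript
// Analytic cross-check for the ex2.d posterior over alpha.
// Posterior density ∝ (1 - 0.2^alpha) on [0.5, 5] under a uniform prior.
const a0 = 0.5, a1 = 5, n = 100000;
const h = (a1 - a0) / n;
const like = a => 1 - Math.pow(0.2, a);   // P(reject offer=2 | alpha)

// Trapezoid rule for the normalizer Z and the first moment m.
let Z = 0, m = 0;
for (let i = 0; i <= n; i++) {
  const a = a0 + i * h;
  const w = (i === 0 || i === n) ? 0.5 : 1;
  Z += w * like(a) * h;
  m += w * a * like(a) * h;
}
const posteriorMean = m / Z;   // ≈ 2.86
```

The mean sits slightly above the prior mean of 2.75, as expected: a rejection is marginally more likely under larger alpha.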
generated code
sonnet-46-primer-v3 TV<1 haiku-45-noprimer-v3 fail haiku-45-primer-v3 TV<.5 haiku-45-think-noprimer-v3 fail haiku-45-think-primer-v3 fail sonnet-46-noprimer-v3 TV<1 sonnet-46-think-noprimer-v3 TV<1 sonnet-46-think-primer-v3 TV<1
var model = function() {
  var alpha = uniform(0.5, 5);
  
  // Probability that responder accepts offer of $2 out of $10
  var pAccept = Math.pow(2 / 10, alpha);
  
  // Responder rejected, so observe false (not accepted)
  observe(Bernoulli({p: pAccept}), false);
  
  return alpha;
};

var ANSWER = Infer({method: 'MCMC', samples: 50000}, model);
Timeout after 60s
var model = function() {
  // Prior over alpha: uniform on [0.5, 5]
  var alpha = uniform(0.5, 5);
  
  // Proposer offers $2
  var offer = 2;
  
  // Probability that responder accepts given alpha
  var acceptProb = Math.pow(offer / 10, alpha);
  
  // Responder rejected the offer
  // So we observe that acceptance did NOT happen
  observe(Bernoulli({p: acceptProb}), 0);
  
  return alpha;
};

var ANSWER = Infer({method: "MCMC", samples: 50000}, model);
var model = function() {
  // Prior over alpha: uniform on [0.5, 5]
  var alpha = uniform(0.5, 5);
  
  // Proposer offers $2
  var offer = 2;
  
  // Responder accepts with probability (offer/10)^alpha
  var acceptProb = Math.pow(offer / 10, alpha);
  
  // Responder rejected, so we observe acceptance = false
  var accepted = flip(acceptProb);
  observe(Bernoulli({p: acceptProb}), false);
  
  return alpha;
};

var ANSWER = Infer({method: 'MCMC', samples: 50000}, model);
Timeout after 60s
var model = function() {
  // Prior: alpha uniformly distributed on [0.5, 5]
  var alpha = uniform(0.5, 5);
  
  // Proposer's offer
  var offer = 2;
  
  // Responder accepts with probability (offer/10)^alpha
  var acceptProb = Math.pow(offer / 10, alpha);
  
  // Observe: responder rejected (Bernoulli outcome 0 = did not accept)
  observe(Bernoulli({p: acceptProb}), 0);
  
  return alpha;
};

var ANSWER = Infer({method: 'MCMC', samples: 50000}, model);
Timeout after 60s
var model = function() {
  // Prior: alpha is uniformly distributed on [0.5, 5]
  var alpha = uniform(0.5, 5);
  
  // Proposer offers $2
  var offer = 2;
  
  // Responder accepts with probability (offer/10)^alpha
  var acceptProb = Math.pow(offer / 10, alpha);
  
  // Observe that responder rejected (0 = reject, 1 = accept)
  observe(Bernoulli({p: acceptProb}), 0);
  
  return alpha;
};

var ANSWER = Infer({method: 'MCMC', samples: 50000}, model);
var model = function() {
  // Sample alpha uniformly from [0.5, 5]
  var alpha = uniform(0.5, 5);
  
  // The offer is $2 out of $10
  var offer = 2;
  var total = 10;
  
  // Probability of acceptance: (offer/total)^alpha
  var pAccept = Math.pow(offer / total, alpha);
  
  // The responder rejected (observe rejection)
  observe(Bernoulli({p: pAccept}), false);
  
  return alpha;
};

var ANSWER = Infer({method: 'MCMC', samples: 50000}, model);
var model = function() {
  // Prior: alpha ~ Uniform(0.5, 5)
  var alpha = uniform(0.5, 5);

  // Acceptance probability given offer of $2 out of $10
  var offer = 2;
  var acceptProb = Math.pow(offer / 10, alpha);

  // The responder rejected (observed = false)
  observe(Bernoulli({ p: acceptProb }), false);

  return alpha;
};

var ANSWER = Infer({ method: 'MCMC', samples: 50000, burn: 5000 }, model);
var model = function() {
  // Prior: alpha ~ Uniform(0.5, 5)
  var alpha = uniform(0.5, 5);

  // Acceptance probability under the responder's rule
  var offer = 2;
  var total = 10;
  var acceptProb = Math.pow(offer / total, alpha);

  // The responder rejected the offer -> observe false
  observe(Bernoulli({p: acceptProb}), false);

  return alpha;
};

var ANSWER = Infer({method: 'MCMC', samples: 50000}, model);
groundtruth output
alpha               prob
2.803515914093304   0.0004
2.199415269664413   0.0003
3.9744200551259605  0.0003
3.2865929174085866  0.0003
0.9316806913782403  0.0003
3.1226028815643443  0.0003
1.2397302630878606  0.0003
4.962405404553709   0.0003
4.509961973822861   0.0003
4.056372248672389   0.0003
3.1954177466065805  0.0003
4.670288423567221   0.0003
… 23914 more
raw JSON
{
  "__kind": "distribution",
  "probs": [ /* ≈24k MCMC sample weights, each a small multiple of 0.00002 (truncated) */ ],
  "support": [ /* ≈24k sampled alpha values (truncated) */ ]
}
... (1220413 more chars truncated)
feedback
not signed
probmods2-agents-as-programs/ex2.e distribution 8 runs TV=0.00 TV=0
prompt
system base instructions used by every run
(identical to the base instructions shown under ex2.d above)
system WebPPL primer appended to system on `*-primer-*` runs only
(identical to the WebPPL primer shown under ex2.d above)
user message
Ultimatum game with uncertain alpha. Responder accepts with probability Math.pow(offer/10, alpha); the proposer doesn't know alpha but believes it is uniform on [0.5, 5]. Setup: in round 1, the proposer offered $2 and the responder rejected. In round 2, what should the proposer offer to maximize expected payoff? Two-stage inference: 1. proposer1: Infer (MCMC, 50000 samples) the posterior over alpha given that round 1's offer of $2 was rejected. 2. End your program with an outer Infer (forward, 1000 samples) that samples an alpha2 from proposer1, then runs an inner Infer (MCMC, 5000 samples) over offers 0..10 with factor(reward2) where reward2 = responder(offer2, alpha2) ? (10 - offer2) : 0. Sample one round-2 offer from that inner posterior and return it. The outer Infer is the distribution over the chosen round-2 offer.
groundtruth code
var responder = function(offer, alpha) {    
    var p = Math.pow(offer/10,alpha);
    return flip(p);
};

var proposer1 = Infer({method: "MCMC", samples:50000}, function(){
    var alpha = uniform(0.5,5);
    var offer1 = 2;
    var reward1 = responder(offer1, alpha) ? (10 - offer1) : 0;
    condition(reward1==0);
    return alpha;
});
var ANSWER = (Infer({method: "forward", samples:1000}, function(){
     var alpha2 = sample(proposer1);
     var proposer2 = Infer({method: "MCMC", samples:5000}, function(){
       var offer2 = uniformDraw([0,1,2,3,4,5,6,7,8,9,10]);
       var reward2 = responder(offer2, alpha2) ? (10 - offer2) : 0;
       factor(reward2);
       return offer2;
      });
      return sample(proposer2);
}));
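Both MCMC stages of the groundtruth can be replaced by exact computation for a cross-check: for a fixed alpha the inner posterior over offers is proportional to `pAccept * e^(10-offer) + (1 - pAccept)` with `pAccept = (offer/10)^alpha`, and the outer stage is an integral of that distribution against the round-1 posterior density, which is proportional to `1 - 0.2^alpha` on [0.5, 5]. A deterministic sketch in plain JavaScript (not WebPPL):

```javascript
// Deterministic cross-check for ex2.e: exact inner posterior per alpha,
// marginalized over the alpha posterior by trapezoid-rule quadrature.
const innerDist = alpha => {
  const w = [];
  for (let o = 0; o <= 10; o++) {
    const p = Math.pow(o / 10, alpha);
    w.push(p * Math.exp(10 - o) + (1 - p));   // factor(reward2), marginalized
  }
  const Z = w.reduce((a, b) => a + b, 0);
  return w.map(x => x / Z);
};

const n = 2000, a0 = 0.5, a1 = 5, h = (a1 - a0) / n;
const marginal = new Array(11).fill(0);
let Z = 0;
for (let i = 0; i <= n; i++) {
  const a = a0 + i * h;
  // Posterior weight of this alpha: prior is uniform, likelihood of the
  // round-1 rejection is 1 - 0.2^alpha.
  const wt = ((i === 0 || i === n) ? 0.5 : 1) * (1 - Math.pow(0.2, a)) * h;
  Z += wt;
  const d = innerDist(a);
  for (let o = 0; o <= 10; o++) marginal[o] += wt * d[o];
}
const offerProbs = marginal.map(x => x / Z);
// Offers 1-4 carry most of the mass, matching the shape of the
// Monte Carlo groundtruth distribution.
```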
generated code
sonnet-46-primer-v3 TV=0 haiku-45-noprimer-v3 TV<.5 haiku-45-primer-v3 fail haiku-45-think-noprimer-v3 TV<.05 haiku-45-think-primer-v3 TV<.05 sonnet-46-noprimer-v3 TV=0 sonnet-46-think-noprimer-v3 TV<.05 sonnet-46-think-primer-v3 TV<.05
var responder = function(offer, alpha) {
  return flip(Math.pow(offer / 10, alpha));
};

// Stage 1: Infer posterior over alpha given rejection of offer=2
var proposer1 = Infer({method: 'MCMC', samples: 50000}, function() {
  var alpha = uniform(0.5, 5);
  var accepted = responder(2, alpha);
  condition(!accepted);
  return alpha;
});

// Stage 2: For each alpha2 sampled from posterior, find best offer
var ANSWER = Infer({method: 'forward', samples: 1000}, function() {
  var alpha2 = sample(proposer1);
  
  var innerDist = Infer({method: 'MCMC', samples: 5000}, function() {
    var offer2 = randomInteger(11); // 0..10
    var accepted = responder(offer2, alpha2);
    var reward2 = accepted ? (10 - offer2) : 0;
    factor(reward2);
    return offer2;
  });
  
  var chosenOffer = sample(innerDist);
  return chosenOffer;
});
// Stage 1: Infer posterior over alpha given round 1 rejection
var proposer1 = Infer({method: "MCMC", samples: 50000}, function() {
  var alpha = uniform({a: 0.5, b: 5});
  
  // Round 1: offer of $2 was rejected
  var offer1 = 2;
  var acceptProb1 = Math.pow(offer1 / 10, alpha);
  
  // Responder rejected, so we condition on rejection
  factor(Math.log(1 - acceptProb1));
  
  return alpha;
});

// Stage 2: Outer inference over round 2 offers
var ANSWER = Infer({method: "forward", samples: 1000}, function() {
  // Sample alpha from posterior
  var alpha2 = sample(proposer1);
  
  // Inner inference: find best offer for round 2
  var bestOffer = sample(Infer({method: "MCMC", samples: 5000}, function() {
    var offer2 = uniformDraw(_.range(0, 11)); // offers 0..10
    
    // Responder accepts with probability Math.pow(offer2/10, alpha2)
    var acceptProb2 = Math.pow(offer2 / 10, alpha2);
    
    // Expected payoff: if accepted, proposer gets (10 - offer2); if rejected, gets 0
    var reward2 = acceptProb2 * (10 - offer2);
    
    factor(reward2);
    
    return offer2;
  }));
  
  return bestOffer;
});
Timeout after 60s
// Stage 1: Infer posterior over alpha given round 1 rejection
var proposer1 = Infer({method: 'MCMC', samples: 50000}, function() {
  var alpha = uniform(0.5, 5);
  var offer1 = 2;
  var acceptProb = Math.pow(offer1 / 10, alpha);
  observe(Bernoulli({p: acceptProb}), 0); // 0 = rejection
  return alpha;
});

// Stage 2: Outer inference over round 2 offers
var ANSWER = Infer({method: 'forward', samples: 1000}, function() {
  var alpha2 = sample(proposer1);
  
  // Inner inference: find best offer in round 2
  var innerInference = Infer({method: 'MCMC', samples: 5000}, function() {
    var offer2 = randomInteger(11); // offers 0..10
    var acceptProb2 = Math.pow(offer2 / 10, alpha2);
    var accepted = flip(acceptProb2);
    var reward2 = accepted ? (10 - offer2) : 0;
    factor(reward2);
    return offer2;
  });
  
  // Sample one offer from the inner posterior
  var chosenOffer = sample(innerInference);
  return chosenOffer;
});
// Stage 1: Infer posterior over alpha from round 1 rejection
var proposer1 = Infer(
  {method: "MCMC", samples: 50000},
  function() {
    var alpha = uniform({a: 0.5, b: 5});
    
    // Round 1: offer = 2, responder rejected
    var acceptProb1 = Math.pow(2/10, alpha);
    var rejectProb1 = 1 - acceptProb1;
    
    factor(Math.log(rejectProb1));
    
    return alpha;
  }
);

// Helper function for responder
var responder = function(offer, alpha) {
  var acceptProb = Math.pow(offer/10, alpha);
  return flip(acceptProb);
};

// Stage 2: Proposer chooses offer in round 2
var answer = Infer(
  {method: "forward", samples: 1000},
  function() {
    var alpha2 = sample(proposer1);
    
    // Inner Infer: find best offer for this alpha2
    var offer2Dist = Infer(
      {method: "MCMC", samples: 5000},
      function() {
        var offer2 = uniformDraw(_.range(0, 11));
        
        var accepted = responder(offer2, alpha2);
        var reward2 = accepted ? (10 - offer2) : 0;
        
        factor(reward2);
        
        return offer2;
      }
    );
    
    return sample(offer2Dist);
  }
);

var ANSWER = answer;
// Stage 1: Infer posterior over alpha given round 1 rejection
var proposer1 = Infer({method: 'MCMC', samples: 50000}, function() {
  var alpha = uniform(0.5, 5);
  
  // Round 1: offer of $2 was rejected
  var offer1 = 2;
  var acceptProb1 = Math.pow(offer1 / 10, alpha);
  var rejectProb1 = 1 - acceptProb1;
  
  // Observe the rejection
  factor(Math.log(rejectProb1));
  
  return alpha;
});

// Stage 2: Outer forward inference over round 2 offers
var ANSWER = Infer({method: 'forward', samples: 1000}, function() {
  var alpha2 = sample(proposer1);
  
  // Inner MCMC inference to find best offer given alpha2
  var offerPosterior = Infer({method: 'MCMC', samples: 5000}, function() {
    var offer2 = randomInteger(11); // offers 0-10
    
    // Responder's acceptance probability
    var acceptProb2 = Math.pow(offer2 / 10, alpha2);
    
    // Simulate responder's acceptance
    var responderAccepts = flip(acceptProb2);
    
    // Proposer's payoff
    var reward2 = responderAccepts ? (10 - offer2) : 0;
    
    // Factor by reward to bias toward high-payoff offers
    factor(reward2);
    
    return offer2;
  });
  
  // Sample one round-2 offer from the posterior
  var chosenOffer = sample(offerPosterior);
  return chosenOffer;
});
// Responder accepts with probability Math.pow(offer/10, alpha)
var responder = function(offer, alpha) {
  return flip(Math.pow(offer / 10, alpha));
};

// Stage 1: Infer posterior over alpha given round 1 offer of $2 was rejected
var proposer1 = Infer({method: 'MCMC', samples: 50000}, function() {
  var alpha = uniform(0.5, 5);
  var accepted = flip(Math.pow(2 / 10, alpha));
  condition(!accepted);
  return alpha;
});

// Stage 2: Outer Infer samples alpha2 from proposer1, then inner Infer over offers
var ANSWER = Infer({method: 'forward', samples: 1000}, function() {
  var alpha2 = sample(proposer1);
  
  var innerPosterior = Infer({method: 'MCMC', samples: 5000}, function() {
    var offer2 = randomInteger(11); // 0..10
    var accepted = responder(offer2, alpha2);
    var reward2 = accepted ? (10 - offer2) : 0;
    factor(reward2);
    return offer2;
  });
  
  return sample(innerPosterior);
});
var responder = function(offer, alpha) {
  var acceptProb = Math.pow(offer / 10, alpha);
  return flip(acceptProb);
};

var proposer1 = Infer({method: 'MCMC', samples: 50000}, function() {
  var alpha = uniform(0.5, 5);
  var acceptProb = Math.pow(2 / 10, alpha);
  observe(Bernoulli({p: acceptProb}), false);
  return alpha;
});

var ANSWER = Infer({method: 'forward', samples: 1000}, function() {
  var alpha2 = sample(proposer1);

  var innerPosterior = Infer({method: 'MCMC', samples: 5000}, function() {
    var offer2 = randomInteger(11);
    var accepted = responder(offer2, alpha2);
    var reward2 = accepted ? (10 - offer2) : 0;
    factor(reward2);
    return offer2;
  });

  return sample(innerPosterior);
});
// Step 1: Posterior over alpha given that offer=$2 was rejected
var proposer1 = Infer({method: 'MCMC', samples: 50000}, function() {
  var alpha = uniform(0.5, 5);
  var acceptProb = Math.pow(2 / 10, alpha);
  observe(Bernoulli({p: acceptProb}), false);
  return alpha;
});

// Step 2: Outer forward inference over the chosen round-2 offer
var ANSWER = Infer({method: 'forward', samples: 1000}, function() {
  // Draw alpha from updated posterior
  var alpha2 = sample(proposer1);

  // Inner MCMC over offers 0..10, weighted by expected reward
  var innerDist = Infer({method: 'MCMC', samples: 5000}, function() {
    var offer2 = randomInteger(11); // integers 0..10
    var acceptProb2 = Math.pow(offer2 / 10, alpha2);
    var accepted = flip(acceptProb2);
    var reward2 = accepted ? (10 - offer2) : 0;
    factor(reward2);
    return offer2;
  });

  // Sample one round-2 offer from the inner posterior
  return sample(innerDist);
});
groundtruth output
offer  prob
2      0.1920
3      0.1710
4      0.1430
1      0.1400
5      0.1180
6      0.0890
7      0.0580
8      0.0470
9      0.0180
0      0.0130
10     0.0110
raw JSON
{
  "__kind": "distribution",
  "probs": [
    0.012999999999999998,
    0.14,
    0.192,
    0.17100000000000004,
    0.143,
    0.11800000000000002,
    0.089,
    0.057999999999999996,
    0.047,
    0.018000000000000002,
    0.011000000000000005
  ],
  "support": [
    0,
    1,
    2,
    3,
    4,
    5,
    6,
    7,
    8,
    9,
    10
  ]
}
feedback
not signed
probmods2-agents-as-programs/ex3 distribution 8 runs TV=0.00 TV=0
prompt
system base instructions used by every run
(identical to the base instructions shown under ex2.d above)
system WebPPL primer appended to system on `*-primer-*` runs only
(identical to the WebPPL primer shown under ex2.d above)
user message
Prisoner's Dilemma. Two thieves are interrogated separately. If a thief confesses she gets a lenient sentence (lenient = 6 years). If she doesn't but the other does, she gets 10 years. If neither confesses, both go free. With lenient = 6, use factor(percentYearsFreedom) (where percentYearsFreedom = (10 - years_in_jail) / 10). The other thief flips uniformly to decide whether to rat. End your program with the Infer(...) returning the posterior over the focal thief's rat decision.
groundtruth code
var thiefRats = function(){
  return flip();
};

var lenient = 6;
var ANSWER = (Infer({}, function(){
  var otherThiefRats = thiefRats();
  var IRat = thiefRats();
  var years = (otherThiefRats? 
              (IRat? lenient : 10) : 
              (IRat? lenient : 0));
  var percentYearsFreedom = (10-years)/10;
  factor(percentYearsFreedom);
  return IRat;
}));
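This model has only four discrete worlds (two fair flips), so the posterior can be enumerated by hand: each world has prior 1/4 and `factor(percentYearsFreedom)` multiplies its weight by `e^percent`. A cross-check sketch in plain JavaScript (not WebPPL):

```javascript
// Exact enumeration of the ex3 prisoner's-dilemma groundtruth model.
const lenient = 6;
let wRat = 0, wStay = 0;

// Enumerate the four (otherThiefRats, IRat) worlds; factor adds
// (10 - years)/10 to the log-weight, i.e. multiplies by e^((10-years)/10).
for (const other of [true, false]) {
  for (const iRat of [true, false]) {
    const years = other ? (iRat ? lenient : 10) : (iRat ? lenient : 0);
    const weight = Math.exp((10 - years) / 10);
    if (iRat) wRat += weight; else wStay += weight;
  }
}
const pRat = wRat / (wRat + wStay);
// pRat ≈ 0.445: a soft preference for staying quiet, since the
// no-confession world carries the largest freedom bonus.
```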
generated code
sonnet-46-primer-v3 TV=0 haiku-45-noprimer-v3 fail haiku-45-primer-v3 TV=0 haiku-45-think-noprimer-v3 fail haiku-45-think-primer-v3 TV<.05 sonnet-46-noprimer-v3 TV=1 sonnet-46-think-noprimer-v3 TV=1 sonnet-46-think-primer-v3 TV=0
var model = function() {
  // Focal thief's decision: rat or not
  var rat = flip(0.5);
  
  // Other thief flips uniformly
  var otherRat = flip(0.5);
  
  var lenient = 6;
  
  // Compute years in jail for focal thief
  var years = rat ? lenient : (otherRat ? 10 : 0);
  
  // percentYearsFreedom = (10 - years) / 10
  var percentYearsFreedom = (10 - years) / 10;
  
  factor(percentYearsFreedom);
  
  return rat;
};

var ANSWER = Infer({method: 'enumerate'}, model);
Error: Line 40: Did you mean var yearsInJail = ?
var lenient = 6;

var model = function() {
  // The other thief's decision (uniform random)
  var otherRats = flip(0.5);
  
  // The focal thief's decision
  var iRat = flip(0.5);
  
  // Determine years in jail for the focal thief
  var yearsInJail;
  if (iRat && otherRats) {
    // Both confess: both get lenient sentence
    yearsInJail = lenient;
  } else if (iRat && !otherRats) {
    // I confess, other doesn't: I get lenient, other gets 10
    yearsInJail = lenient;
  } else if (!iRat && otherRats) {
    // I don't confess, other does: I get 10, other gets lenient
    yearsInJail = 10;
  } else {
    // Neither confesses: both go free
    yearsInJail = 0;
  }
  
  // Calculate percent years of freedom and apply factor
  var percentYearsFreedom = (10 - yearsInJail) / 10;
  factor(percentYearsFreedom);
  
  return iRat;
};

var ANSWER = Infer({method: "enumerate"}, model);
var lenient = 6;

var model = function() {
  // The other thief's decision: flip uniformly
  var otherRats = flip(0.5);
  
  // The focal thief's decision: we want to infer this
  var focalRats = flip(0.5);
  
  // Determine sentences based on decisions
  var focalYears = (function() {
    if (focalRats && otherRats) {
      // Both confess: both get lenient sentence
      return lenient;
    } else if (focalRats && !otherRats) {
      // Focal confesses, other doesn't: focal gets lenient, other gets 10
      return lenient;
    } else if (!focalRats && otherRats) {
      // Focal doesn't confess, other does: focal gets 10, other gets lenient
      return 10;
    } else {
      // Neither confesses: both go free
      return 0;
    }
  })();
  
  // Calculate percent years of freedom and factor it in
  var percentYearsFreedom = (10 - focalYears) / 10;
  factor(percentYearsFreedom);
  
  return focalRats;
};

var ANSWER = Infer({method: 'enumerate'}, model);
Error: Line 38: Did you mean var focalSentence = ?
var model = function() {
  // The other thief flips uniformly to decide whether to rat
  var otherRats = flip(0.5);
  
  // The focal thief's decision (what we want the posterior over)
  var focalRats = flip(0.5);
  
  // Determine the focal thief's sentence based on both decisions
  var focalSentence;
  if (focalRats) {
    // If focal thief confesses, gets lenient sentence
    focalSentence = 6;
  } else {
    // If focal thief doesn't confess
    if (otherRats) {
      // Other thief rats, focal gets the harsh sentence
      focalSentence = 10;
    } else {
      // Neither confesses, both go free
      focalSentence = 0;
    }
  }
  
  // Calculate percent years freedom utility
  var percentYearsFreedom = (10 - focalSentence) / 10;
  
  // Weight by utility
  factor(percentYearsFreedom);
  
  return focalRats;
};

var ANSWER = Infer({method: "enumerate"}, model);
var model = function() {
  var focal_rats = flip(0.5);
  var other_rats = flip(0.5);
  
  var years_in_jail = focal_rats ? 6 : (other_rats ? 10 : 0);
  var percentYearsFreedom = (10 - years_in_jail) / 10;
  
  factor(percentYearsFreedom);
  
  return focal_rats;
};

var ANSWER = Infer({method: 'MCMC', samples: 10000}, model);
var model = function() {
  // Focal thief's decision
  var focalRats = flip(0.5);
  
  // Other thief flips uniformly
  var otherRats = flip(0.5);
  
  var lenient = 6;
  
  // Determine years in jail for focal thief
  var years = focalRats ? lenient : (otherRats ? 10 : 0);
  
  // percentYearsFreedom = (10 - years) / 10
  var percentYearsFreedom = (10 - years) / 10;
  
  factor(percentYearsFreedom);
  
  return focalRats ? "rat" : "silent";
};

var ANSWER = Infer({method: 'enumerate'}, model);
var lenient = 6;

var model = function() {
  // Focal thief decides whether to rat (uniform prior)
  var ratDecision = flip(0.5);

  // Other thief flips uniformly to decide whether to rat
  var otherRats = flip(0.5);

  // Compute years in jail for focal thief
  var years = ratDecision
    ? lenient                   // confessed → lenient (6 yrs) regardless of other
    : (otherRats ? 10 : 0);    // didn't confess: 10 if other did, 0 if neither did

  // Score by percentage of freedom (higher is better)
  var percentYearsFreedom = (10 - years) / 10;
  factor(percentYearsFreedom);

  return ratDecision ? "rat" : "stay_silent";
};

var ANSWER = Infer({method: 'enumerate'}, model);
var model = function() {
  var lenient = 6;

  // Focal thief's decision (what we're inferring)
  var focalRats = flip(0.5);

  // Other thief flips uniformly
  var otherRats = flip(0.5);

  // Compute years in jail for the focal thief
  var years = focalRats
    ? lenient                    // confessing always yields lenient sentence
    : (otherRats ? 10 : 0);     // not confessing: 10 if other confesses, 0 if neither does

  // Utility: percentage of years free (out of 10)
  var percentYearsFreedom = (10 - years) / 10;

  factor(percentYearsFreedom);

  return focalRats;
};

var ANSWER = Infer({method: 'enumerate'}, model);
groundtruth output
false  0.5548
true   0.4452
raw JSON
{
  "__kind": "distribution",
  "probs": [
    0.5548075180609188,
    0.44519248193908123
  ],
  "support": [
    false,
    true
  ]
}
feedback
not signed
probmods2-agents-as-programs/ex4.a record(alpha_001, alpha_1, alpha_4, alpha_10) 8 runs TV=0.00 TV=0
prompt
system base instructions used by every run
system WebPPL primer appended to system on `*-primer-*` runs only
user message
Implement the Frank & Goodman RSA model. The world has three objects represented as records {shape, color}: {shape: "square", color: "blue"}, {shape: "circle", color: "blue"}, {shape: "square", color: "green"}, drawn uniformly. Possible utterances: 'blue', 'green', 'square', 'circle'. Truth function: color/shape utterances must match the corresponding attribute, otherwise true. Build: - literalListener(utterance): Infer over meaningPrior conditional on truth - speaker(obj, alpha): uniformDraw utterance, factor alpha * literalListener(utterance).score(obj) - pragmaticListener(utterance, alpha): Infer over meaningPrior conditional on speaker(obj, alpha) Return an object literal with pragmaticListener('blue', alpha) for four alphas, keyed as: - alpha_001 (alpha=0.01) - alpha_1 (alpha=1) - alpha_4 (alpha=4) - alpha_10 (alpha=10)
groundtruth code
var meaningPrior = function() {
  uniformDraw([
    {shape: "square", color: "blue"},
    {shape: "circle", color: "blue"},
    {shape: "square", color: "green"}
  ])
};

var utterances = ["blue","green","square","circle"];

var meaning = function(utterance, obj){
  (utterance === "blue" || utterance === "green") ? utterance === obj.color :
  (utterance === "circle" || utterance === "square") ? utterance === obj.shape :
  true
};

var literalListener = function(utterance){
  return Infer({model: function(){
    var obj = meaningPrior();
    condition(meaning(utterance, obj));
    return obj;
  }});
};

var speaker = function(obj,alpha){
  return Infer({model: function(){
    var utterance = uniformDraw(utterances);
    factor(alpha * literalListener(utterance).score(obj));
    return utterance;
  }});
};

var pragmaticListener = function(utterance,alpha){
  return Infer({model: function(){
    var obj = meaningPrior();
    observe(speaker(obj,alpha),utterance);
    return obj;
  }});
};
var ANSWER = (({
  alpha_001: pragmaticListener("blue", 0.01),
  alpha_1: pragmaticListener("blue", 1),
  alpha_4: pragmaticListener("blue", 4),
  alpha_10: pragmaticListener("blue", 10)
}));
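The nested `Infer` calls reduce to closed-form enumeration: the literal listener renormalizes the uniform prior over objects the utterance is true of, the speaker's `factor` raises those probabilities to the power `alpha`, and the pragmatic listener reweights the object prior by speaker probabilities. A plain-JavaScript sketch of that arithmetic (not WebPPL) reproduces the `alpha_1` and `alpha_4` rows of the groundtruth output:

```javascript
const objects = [
  { shape: "square", color: "blue" },
  { shape: "circle", color: "blue" },
  { shape: "square", color: "green" },
];
const utterances = ["blue", "green", "square", "circle"];
const meaning = (u, o) =>
  u === "blue" || u === "green" ? o.color === u : o.shape === u;
const normalize = (ws) => {
  const z = ws.reduce((a, b) => a + b, 0);
  return ws.map((w) => w / z);
};

// L0(obj | u): uniform object prior restricted to objects where u is true.
const literal = (u) => normalize(objects.map((o) => (meaning(u, o) ? 1 : 0)));

// S1(u | obj): uniform utterance prior, weight L0(obj | u)^alpha.
const speaker = (oi, alpha) =>
  normalize(utterances.map((u) => Math.pow(literal(u)[oi], alpha)));

// L1(obj | u): uniform object prior, weight S1(u | obj).
const pragmatic = (u, alpha) => {
  const ui = utterances.indexOf(u);
  return normalize(objects.map((_, oi) => speaker(oi, alpha)[ui]));
};

// Probabilities over [square-blue, circle-blue, square-green]:
console.log(pragmatic("blue", 1)); // ≈ [0.6, 0.4, 0]
console.log(pragmatic("blue", 4)); // ≈ [0.8947, 0.1053, 0]
```

At `alpha = 4` the square-blue mass is `0.5 / (0.5 + 1/17) ≈ 0.8947`, matching the groundtruth distribution.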
generated code
sonnet-46-primer-v3 TV=0 haiku-45-noprimer-v3 TV=0 haiku-45-primer-v3 TV=0 haiku-45-think-noprimer-v3 TV=0 haiku-45-think-primer-v3 TV<.05 sonnet-46-noprimer-v3 TV=0 sonnet-46-think-noprimer-v3 TV=0 sonnet-46-think-primer-v3 TV=0
var objects = [
  {shape: "square", color: "blue"},
  {shape: "circle", color: "blue"},
  {shape: "square", color: "green"}
];

var utterances = ['blue', 'green', 'square', 'circle'];

var meaningPrior = function() {
  return uniformDraw(objects);
};

var meaning = function(utterance, obj) {
  if (utterance === 'blue' || utterance === 'green') {
    return obj.color === utterance;
  } else if (utterance === 'square' || utterance === 'circle') {
    return obj.shape === utterance;
  } else {
    return true;
  }
};

var literalListener = mem(function(utterance) {
  return Infer({method: 'enumerate'}, function() {
    var obj = meaningPrior();
    condition(meaning(utterance, obj));
    return obj;
  });
});

var speaker = function(obj, alpha) {
  return Infer({method: 'enumerate'}, function() {
    var utterance = uniformDraw(utterances);
    factor(alpha * literalListener(utterance).score(obj));
    return utterance;
  });
};

var pragmaticListener = function(utterance, alpha) {
  return Infer({method: 'enumerate'}, function() {
    var obj = meaningPrior();
    observe(speaker(obj, alpha), utterance);
    return obj;
  });
};

var ANSWER = {
  alpha_001: pragmaticListener('blue', 0.01),
  alpha_1: pragmaticListener('blue', 1),
  alpha_4: pragmaticListener('blue', 4),
  alpha_10: pragmaticListener('blue', 10)
};
// Define the world
var objects = [
  {shape: "square", color: "blue"},
  {shape: "circle", color: "blue"},
  {shape: "square", color: "green"}
];

var utterances = ['blue', 'green', 'square', 'circle'];

// Truth function
var truth = function(utterance, obj) {
  if (utterance === 'blue') return obj.color === 'blue';
  if (utterance === 'green') return obj.color === 'green';
  if (utterance === 'square') return obj.shape === 'square';
  if (utterance === 'circle') return obj.shape === 'circle';
  return true;
};

// Literal listener
var literalListener = function(utterance) {
  return Infer({model: function() {
    var obj = uniformDraw(objects);
    condition(truth(utterance, obj));
    return obj;
  }});
};

// Speaker
var speaker = function(obj, alpha) {
  return Infer({model: function() {
    var utterance = uniformDraw(utterances);
    var listener = literalListener(utterance);
    factor(alpha * listener.score(obj));
    return utterance;
  }});
};

// Pragmatic listener
var pragmaticListener = function(utterance, alpha) {
  return Infer({model: function() {
    var obj = uniformDraw(objects);
    var sp = speaker(obj, alpha);
    observe(sp, utterance);
    return obj;
  }});
};

// Compute pragmatic listener for 'blue' at different alphas
var alpha_001 = pragmaticListener('blue', 0.01);
var alpha_1 = pragmaticListener('blue', 1);
var alpha_4 = pragmaticListener('blue', 4);
var alpha_10 = pragmaticListener('blue', 10);

var ANSWER = {
  alpha_001: alpha_001,
  alpha_1: alpha_1,
  alpha_4: alpha_4,
  alpha_10: alpha_10
};
// Define the world
var objects = [
  {shape: "square", color: "blue"},
  {shape: "circle", color: "blue"},
  {shape: "square", color: "green"}
];

var utterances = ['blue', 'green', 'square', 'circle'];

// Truth function: utterance is true if it matches the object's attribute
var truth = function(utterance, obj) {
  if (utterance === 'blue' || utterance === 'green') {
    return utterance === obj.color;
  } else if (utterance === 'square' || utterance === 'circle') {
    return utterance === obj.shape;
  }
  return false;
};

// Literal listener: infer over objects given utterance, conditioned on truth
var literalListener = function(utterance) {
  return Infer({method: 'enumerate'}, function() {
    var obj = uniformDraw(objects);
    condition(truth(utterance, obj));
    return obj;
  });
};

// Speaker: choose utterance, factor by how well it communicates the object
var speaker = function(obj, alpha) {
  return Infer({method: 'enumerate'}, function() {
    var utterance = uniformDraw(utterances);
    var listener = literalListener(utterance);
    factor(alpha * listener.score(obj));
    return utterance;
  });
};

// Pragmatic listener: infer over objects given utterance and speaker rationality
var pragmaticListener = function(utterance, alpha) {
  return Infer({method: 'enumerate'}, function() {
    var obj = uniformDraw(objects);
    var speakerDist = speaker(obj, alpha);
    observe(speakerDist, utterance);
    return obj;
  });
};

// Compute pragmatic listener for 'blue' at different alpha values
var result_001 = pragmaticListener('blue', 0.01);
var result_1 = pragmaticListener('blue', 1);
var result_4 = pragmaticListener('blue', 4);
var result_10 = pragmaticListener('blue', 10);

var ANSWER = {
  alpha_001: result_001,
  alpha_1: result_1,
  alpha_4: result_4,
  alpha_10: result_10
};
// Objects in the world
var objects = [
  {shape: "square", color: "blue"},
  {shape: "circle", color: "blue"},
  {shape: "square", color: "green"}
];

// Possible utterances
var utterances = ['blue', 'green', 'square', 'circle'];

// Truth function: utterance is true iff it matches the relevant attribute
var meaning = function(utterance, obj) {
  if (utterance === 'blue' || utterance === 'green') {
    return utterance === obj.color;
  } else if (utterance === 'square' || utterance === 'circle') {
    return utterance === obj.shape;
  }
  return false;
};

// Literal listener: infer objects conditional on utterance being true
var literalListener = function(utterance) {
  return Infer({method: 'enumerate'}, function() {
    var obj = uniformDraw(objects);
    condition(meaning(utterance, obj));
    return obj;
  });
};

// Speaker: choose utterance, factor by how well literal listener would identify obj
var speaker = function(obj, alpha) {
  return Infer({method: 'enumerate'}, function() {
    var utterance = uniformDraw(utterances);
    var ll = literalListener(utterance);
    factor(alpha * ll.score(obj));
    return utterance;
  });
};

// Pragmatic listener: infer objects conditional on speaker having said utterance
var pragmaticListener = function(utterance, alpha) {
  return Infer({method: 'enumerate'}, function() {
    var obj = uniformDraw(objects);
    var s = speaker(obj, alpha);
    factor(s.score(utterance));
    return obj;
  });
};

var ANSWER = {
  alpha_001: pragmaticListener('blue', 0.01),
  alpha_1: pragmaticListener('blue', 1),
  alpha_4: pragmaticListener('blue', 4),
  alpha_10: pragmaticListener('blue', 10)
};
var objects = [
  {shape: "square", color: "blue"},
  {shape: "circle", color: "blue"},
  {shape: "square", color: "green"}
];

var utterances = ['blue', 'green', 'square', 'circle'];

var truth = function(utterance, object) {
  return (utterance === 'blue' && object.color === 'blue') ||
         (utterance === 'green' && object.color === 'green') ||
         (utterance === 'square' && object.shape === 'square') ||
         (utterance === 'circle' && object.shape === 'circle');
};

var meaningPrior = function() {
  return categorical({vs: objects});
};

var literalListener = function(utterance) {
  return Infer({method: 'enumerate'}, function() {
    var obj = meaningPrior();
    condition(truth(utterance, obj));
    return obj;
  });
};

var speaker = function(obj, alpha) {
  return Infer({method: 'enumerate'}, function() {
    var utterance = categorical({vs: utterances});
    var listener = literalListener(utterance);
    factor(alpha * listener.score(obj));
    return utterance;
  });
};

var pragmaticListener = function(utterance, alpha) {
  return Infer({method: 'enumerate'}, function() {
    var obj = meaningPrior();
    var speakerDist = speaker(obj, alpha);
    observe(speakerDist, utterance);
    return obj;
  });
};

var ANSWER = {
  alpha_001: pragmaticListener('blue', 0.01),
  alpha_1: pragmaticListener('blue', 1),
  alpha_4: pragmaticListener('blue', 4),
  alpha_10: pragmaticListener('blue', 10)
};
var objects = [
  {shape: "square", color: "blue"},
  {shape: "circle", color: "blue"},
  {shape: "square", color: "green"}
];

var utterances = ['blue', 'green', 'square', 'circle'];

var meaningPrior = function() {
  return uniformDraw(objects);
};

var truthFn = function(utterance, obj) {
  if (utterance === 'blue' || utterance === 'green') {
    return obj.color === utterance;
  } else if (utterance === 'square' || utterance === 'circle') {
    return obj.shape === utterance;
  } else {
    return true;
  }
};

var literalListener = function(utterance) {
  return Infer({method: 'enumerate'}, function() {
    var obj = meaningPrior();
    condition(truthFn(utterance, obj));
    return obj;
  });
};

var speaker = function(obj, alpha) {
  return Infer({method: 'enumerate'}, function() {
    var utterance = uniformDraw(utterances);
    factor(alpha * literalListener(utterance).score(obj));
    return utterance;
  });
};

var pragmaticListener = function(utterance, alpha) {
  return Infer({method: 'enumerate'}, function() {
    var obj = meaningPrior();
    observe(speaker(obj, alpha), utterance);
    return obj;
  });
};

var ANSWER = {
  alpha_001: pragmaticListener('blue', 0.01),
  alpha_1: pragmaticListener('blue', 1),
  alpha_4: pragmaticListener('blue', 4),
  alpha_10: pragmaticListener('blue', 10)
};
var objects = [
  {shape: "square", color: "blue"},
  {shape: "circle", color: "blue"},
  {shape: "square", color: "green"}
];

var utterances = ['blue', 'green', 'square', 'circle'];

var meaningPrior = function() {
  return uniformDraw(objects);
};

var utterancePrior = function() {
  return uniformDraw(utterances);
};

var meaning = function(utterance, obj) {
  if (utterance === "blue" || utterance === "green") {
    return obj.color === utterance;
  } else if (utterance === "square" || utterance === "circle") {
    return obj.shape === utterance;
  } else {
    return true;
  }
};

var literalListener = function(utterance) {
  return Infer({method: 'enumerate'}, function() {
    var obj = meaningPrior();
    condition(meaning(utterance, obj));
    return obj;
  });
};

var speaker = function(obj, alpha) {
  return Infer({method: 'enumerate'}, function() {
    var utterance = utterancePrior();
    factor(alpha * literalListener(utterance).score(obj));
    return utterance;
  });
};

var pragmaticListener = function(utterance, alpha) {
  return Infer({method: 'enumerate'}, function() {
    var obj = meaningPrior();
    observe(speaker(obj, alpha), utterance);
    return obj;
  });
};

var ANSWER = {
  alpha_001: pragmaticListener('blue', 0.01),
  alpha_1:   pragmaticListener('blue', 1),
  alpha_4:   pragmaticListener('blue', 4),
  alpha_10:  pragmaticListener('blue', 10)
};
var objects = [
  {shape: "square", color: "blue"},
  {shape: "circle", color: "blue"},
  {shape: "square", color: "green"}
];

var utterances = ['blue', 'green', 'square', 'circle'];

var meaningPrior = function() {
  return uniformDraw(objects);
};

var meaning = function(utterance, obj) {
  if (utterance === 'blue' || utterance === 'green') {
    return obj.color === utterance;
  } else if (utterance === 'square' || utterance === 'circle') {
    return obj.shape === utterance;
  } else {
    return true;
  }
};

var literalListener = mem(function(utterance) {
  return Infer({method: 'enumerate'}, function() {
    var obj = meaningPrior();
    condition(meaning(utterance, obj));
    return obj;
  });
});

var speaker = mem(function(obj, alpha) {
  return Infer({method: 'enumerate'}, function() {
    var utterance = uniformDraw(utterances);
    factor(alpha * literalListener(utterance).score(obj));
    return utterance;
  });
});

var pragmaticListener = mem(function(utterance, alpha) {
  return Infer({method: 'enumerate'}, function() {
    var obj = meaningPrior();
    observe(speaker(obj, alpha), utterance);
    return obj;
  });
});

var ANSWER = {
  alpha_001: pragmaticListener('blue', 0.01),
  alpha_1:   pragmaticListener('blue', 1),
  alpha_4:   pragmaticListener('blue', 4),
  alpha_10:  pragmaticListener('blue', 10)
};
groundtruth output
{
  "alpha_001": {
    "__kind": "distribution",
    "probs": [
      0.4991320654834043,
      0.5008679345165958
    ],
    "support": [
      {
        "shape": "circle",
        "color": "blue"
      },
      {
        "shape": "square",
        "color": "blue"
      }
    ]
  },
  "alpha_1": {
    "__kind": "distribution",
    "probs": [
      0.4000000000000001,
      0.6
    ],
    "support": [
      {
        "shape": "circle",
        "color": "blue"
      },
      {
        "shape": "square",
        "color": "blue"
      }
    ]
  },
  "alpha_4": {
    "__kind": "distribution",
    "probs": [
      0.10526315789473686,
      0.8947368421052633
    ],
    "support": [
      {
        "shape": "circle",
        "color": "blue"
      },
      {
        "shape": "square",
        "color": "blue"
      }
    ]
  },
  "alpha_10": {
    "__kind": "distribution",
    "probs": [
      0.001947419668938659,
      0.9980525803310613
    ],
    "support": [
      {
        "shape": "circle",
        "color": "blue"
      },
      {
        "shape": "square",
        "color": "blue"
      }
    ]
  }
}
feedback
not signed
probmods2-agents-as-programs/ex4.b record(L1, L2) 8 runs TV=0.00 TV=0
prompt
system base instructions used by every run
system WebPPL primer appended to system on `*-primer-*` runs only
user message
Frank & Goodman pragmatic listener / pragmatic speaker / RSA model. The world has three objects represented as records {shape, color}: {shape: "square", color: "blue"}, {shape: "circle", color: "blue"}, {shape: "square", color: "green"}, drawn uniformly. Possible utterances: 'blue', 'green', 'square', 'circle'. Truth function: a color/shape utterance must match the corresponding attribute, otherwise the utterance is true. Build the level-1 stack with alpha = 1: - literalListener(utterance): Infer over meaningPrior conditioning on meaning(utterance, obj) being true. - speaker(obj): Infer over utterances drawn uniformly with factor(alpha * literalListener(utterance).score(obj)). - pragmaticListener(utterance) = L1: Infer over meaningPrior conditioning on observe(speaker(obj), utterance). Build the level-2 stack with alpha = 1: - speaker2(obj): Infer over utterances drawn uniformly with factor(alpha * pragmaticListener(utterance).score(obj)). - listener3(utterance) = L2: Infer over meaningPrior conditioning on observe(speaker2(obj), utterance). End your program with an object literal {L1, L2} where L1 = pragmaticListener('blue') and L2 = listener3('blue').
groundtruth code
var meaningPrior = function() {
  uniformDraw([
    {shape: "square", color: "blue"},
    {shape: "circle", color: "blue"},
    {shape: "square", color: "green"}
  ])
};

var utterances = ["blue","green","square","circle"];

var meaning = function(utterance, obj){
  (utterance === "blue" || utterance === "green") ? utterance === obj.color :
  (utterance === "circle" || utterance === "square") ? utterance === obj.shape :
  true
};

var alpha = 1;

var literalListener = function(utterance){
  return Infer({model: function(){
    var obj = meaningPrior();
    condition(meaning(utterance, obj));
    return obj;
  }});
};

var speaker = function(obj){
  return Infer({model: function(){
    var utterance = uniformDraw(utterances);
    factor(alpha * literalListener(utterance).score(obj));
    return utterance;
  }});
};

var pragmaticListener = function(utterance){
  return Infer({model: function(){
    var obj = meaningPrior();
    observe(speaker(obj),utterance);
    return obj;
  }});
};

var speaker2 = function(obj){
  return Infer({model: function(){
    var utterance = uniformDraw(utterances);
    factor(alpha * pragmaticListener(utterance).score(obj));
    return utterance;
  }});
};

var listener3 = function(utterance){
  return Infer({model: function(){
    var obj = meaningPrior();
    observe(speaker2(obj),utterance);
    return obj;
  }});
};
var ANSWER = (({
  L1: pragmaticListener("blue"),
  L2: listener3("blue")
}));
generated code
sonnet-46-primer-v3 TV=0 haiku-45-noprimer-v3 TV=0 haiku-45-primer-v3 TV=0 haiku-45-think-noprimer-v3 TV=0 haiku-45-think-primer-v3 TV=0 sonnet-46-noprimer-v3 TV=0 sonnet-46-think-noprimer-v3 TV=0 sonnet-46-think-primer-v3 TV=0
var objects = [
  {shape: "square", color: "blue"},
  {shape: "circle", color: "blue"},
  {shape: "square", color: "green"}
];

var utterances = ['blue', 'green', 'square', 'circle'];

var meaning = function(utterance, obj) {
  if (utterance === 'blue' || utterance === 'green') {
    return obj.color === utterance;
  } else {
    return obj.shape === utterance;
  }
};

var meaningPrior = function() {
  return uniformDraw(objects);
};

var alpha = 1;

var literalListener = mem(function(utterance) {
  return Infer({method: 'enumerate'}, function() {
    var obj = meaningPrior();
    condition(meaning(utterance, obj));
    return obj;
  });
});

var speaker = mem(function(obj) {
  return Infer({method: 'enumerate'}, function() {
    var utterance = uniformDraw(utterances);
    factor(alpha * literalListener(utterance).score(obj));
    return utterance;
  });
});

var pragmaticListener = mem(function(utterance) {
  return Infer({method: 'enumerate'}, function() {
    var obj = meaningPrior();
    observe(speaker(obj), utterance);
    return obj;
  });
});

var speaker2 = mem(function(obj) {
  return Infer({method: 'enumerate'}, function() {
    var utterance = uniformDraw(utterances);
    factor(alpha * pragmaticListener(utterance).score(obj));
    return utterance;
  });
});

var listener3 = mem(function(utterance) {
  return Infer({method: 'enumerate'}, function() {
    var obj = meaningPrior();
    observe(speaker2(obj), utterance);
    return obj;
  });
});

var L1 = pragmaticListener('blue');
var L2 = listener3('blue');

var ANSWER = {L1: L1, L2: L2};
// Define the world
var objects = [
  {shape: "square", color: "blue"},
  {shape: "circle", color: "blue"},
  {shape: "square", color: "green"}
];

var meaningPrior = function() {
  uniformDraw(objects);
};

var utterances = ['blue', 'green', 'square', 'circle'];

// Truth function: utterance is true if it matches the object's attribute
var meaning = function(utterance, obj) {
  return (utterance === obj.color) || (utterance === obj.shape);
};

// Level 0: Literal Listener
var literalListener = function(utterance) {
  Infer({method: "enumerate"}, function() {
    var obj = meaningPrior();
    condition(meaning(utterance, obj));
    return obj;
  });
};

// Level 1: Speaker (S1)
var speaker = function(obj) {
  Infer({method: "enumerate"}, function() {
    var utterance = uniformDraw(utterances);
    var alpha = 1;
    factor(alpha * literalListener(utterance).score(obj));
    return utterance;
  });
};

// Level 1: Pragmatic Listener (L1)
var pragmaticListener = function(utterance) {
  Infer({method: "enumerate"}, function() {
    var obj = meaningPrior();
    observe(speaker(obj), utterance);
    return obj;
  });
};

// Level 2: Speaker (S2)
var speaker2 = function(obj) {
  Infer({method: "enumerate"}, function() {
    var utterance = uniformDraw(utterances);
    var alpha = 1;
    factor(alpha * pragmaticListener(utterance).score(obj));
    return utterance;
  });
};

// Level 2: Listener (L2)
var listener3 = function(utterance) {
  Infer({method: "enumerate"}, function() {
    var obj = meaningPrior();
    observe(speaker2(obj), utterance);
    return obj;
  });
};

// Compute L1 and L2 for 'blue'
var L1 = pragmaticListener('blue');
var L2 = listener3('blue');

var ANSWER = {L1: L1, L2: L2};
// Define the world objects
var objects = [
  {shape: "square", color: "blue"},
  {shape: "circle", color: "blue"},
  {shape: "square", color: "green"}
];

// Uniform prior over objects
var meaningPrior = function() {
  return uniformDraw(objects);
};

// Truth function: utterance is true if it matches the object's attribute
var meaning = function(utterance, obj) {
  return (utterance === obj.color) || (utterance === obj.shape);
};

// Level 0: Literal Listener
var literalListener = mem(function(utterance) {
  return Infer({method: 'enumerate'}, function() {
    var obj = meaningPrior();
    condition(meaning(utterance, obj));
    return obj;
  });
});

// Level 1: Speaker (pragmatic)
var alpha = 1;
var speaker = mem(function(obj) {
  return Infer({method: 'enumerate'}, function() {
    var utterance = uniformDraw(['blue', 'green', 'square', 'circle']);
    factor(alpha * literalListener(utterance).score(obj));
    return utterance;
  });
});

// Level 1: Pragmatic Listener (L1)
var pragmaticListener = mem(function(utterance) {
  return Infer({method: 'enumerate'}, function() {
    var obj = meaningPrior();
    observe(speaker(obj), utterance);
    return obj;
  });
});

// Level 2: Speaker2 (pragmatic, reasoning about L1)
var speaker2 = mem(function(obj) {
  return Infer({method: 'enumerate'}, function() {
    var utterance = uniformDraw(['blue', 'green', 'square', 'circle']);
    factor(alpha * pragmaticListener(utterance).score(obj));
    return utterance;
  });
});

// Level 2: Listener3 (L2, reasoning about speaker2)
var listener3 = mem(function(utterance) {
  return Infer({method: 'enumerate'}, function() {
    var obj = meaningPrior();
    observe(speaker2(obj), utterance);
    return obj;
  });
});

// Compute L1 and L2 for utterance 'blue'
var L1 = pragmaticListener('blue');
var L2 = listener3('blue');

var ANSWER = {L1: L1, L2: L2};
var objects = [
  {shape: "square", color: "blue"},
  {shape: "circle", color: "blue"},
  {shape: "square", color: "green"}
];

var utterances = ['blue', 'green', 'square', 'circle'];

var meaning = function(utterance, obj) {
  if (utterance === 'blue') return obj.color === 'blue';
  if (utterance === 'green') return obj.color === 'green';
  if (utterance === 'square') return obj.shape === 'square';
  if (utterance === 'circle') return obj.shape === 'circle';
  return false;
};

var meaningPrior = function() {
  return uniformDraw(objects);
};

var alpha = 1;

// Level 1: Literal Listener
var literalListener = function(utterance) {
  return Infer({method: 'enumerate'}, function() {
    var obj = meaningPrior();
    condition(meaning(utterance, obj));
    return obj;
  });
};

// Level 1: Speaker
var speaker = function(obj) {
  return Infer({method: 'enumerate'}, function() {
    var utterance = uniformDraw(utterances);
    factor(alpha * literalListener(utterance).score(obj));
    return utterance;
  });
};

// Level 1: Pragmatic Listener
var pragmaticListener = function(utterance) {
  return Infer({method: 'enumerate'}, function() {
    var obj = meaningPrior();
    observe(speaker(obj), utterance);
    return obj;
  });
};

// Level 2: Speaker 2
var speaker2 = function(obj) {
  return Infer({method: 'enumerate'}, function() {
    var utterance = uniformDraw(utterances);
    factor(alpha * pragmaticListener(utterance).score(obj));
    return utterance;
  });
};

// Level 2: Listener 3
var listener3 = function(utterance) {
  return Infer({method: 'enumerate'}, function() {
    var obj = meaningPrior();
    observe(speaker2(obj), utterance);
    return obj;
  });
};

var L1 = pragmaticListener('blue');
var L2 = listener3('blue');

var ANSWER = {L1: L1, L2: L2};
var objects = [
  {shape: "square", color: "blue"},
  {shape: "circle", color: "blue"},
  {shape: "square", color: "green"}
];

var utterances = ['blue', 'green', 'square', 'circle'];

var meaningPrior = function() {
  return objects[randomInteger(objects.length)];
};

var meaning = function(utterance, obj) {
  if (utterance === 'blue') return obj.color === 'blue';
  if (utterance === 'green') return obj.color === 'green';
  if (utterance === 'square') return obj.shape === 'square';
  if (utterance === 'circle') return obj.shape === 'circle';
  return false;
};

var alpha = 1;

var literalListener = mem(function(utterance) {
  return Infer({method: 'enumerate'}, function() {
    var obj = meaningPrior();
    condition(meaning(utterance, obj));
    return obj;
  });
});

var speaker = mem(function(obj) {
  return Infer({method: 'enumerate'}, function() {
    var utterance = utterances[randomInteger(utterances.length)];
    factor(alpha * literalListener(utterance).score(obj));
    return utterance;
  });
});

var pragmaticListener = mem(function(utterance) {
  return Infer({method: 'enumerate'}, function() {
    var obj = meaningPrior();
    observe(speaker(obj), utterance);
    return obj;
  });
});

var speaker2 = mem(function(obj) {
  return Infer({method: 'enumerate'}, function() {
    var utterance = utterances[randomInteger(utterances.length)];
    factor(alpha * pragmaticListener(utterance).score(obj));
    return utterance;
  });
});

var listener3 = mem(function(utterance) {
  return Infer({method: 'enumerate'}, function() {
    var obj = meaningPrior();
    observe(speaker2(obj), utterance);
    return obj;
  });
});

var L1 = pragmaticListener('blue');
var L2 = listener3('blue');

var ANSWER = {L1: L1, L2: L2};
var objects = [
  {shape: "square", color: "blue"},
  {shape: "circle", color: "blue"},
  {shape: "square", color: "green"}
];

var utterances = ['blue', 'green', 'square', 'circle'];

var meaning = function(utterance, obj) {
  if (utterance === 'blue' || utterance === 'green') {
    return obj.color === utterance;
  } else {
    return obj.shape === utterance;
  }
};

var meaningPrior = function() {
  return uniformDraw(objects);
};

var alpha = 1;

var literalListener = cache(function(utterance) {
  return Infer({method: 'enumerate'}, function() {
    var obj = meaningPrior();
    condition(meaning(utterance, obj));
    return obj;
  });
});

var speaker = cache(function(obj) {
  return Infer({method: 'enumerate'}, function() {
    var utterance = uniformDraw(utterances);
    factor(alpha * literalListener(utterance).score(obj));
    return utterance;
  });
});

var pragmaticListener = cache(function(utterance) {
  return Infer({method: 'enumerate'}, function() {
    var obj = meaningPrior();
    observe(speaker(obj), utterance);
    return obj;
  });
});

var speaker2 = cache(function(obj) {
  return Infer({method: 'enumerate'}, function() {
    var utterance = uniformDraw(utterances);
    factor(alpha * pragmaticListener(utterance).score(obj));
    return utterance;
  });
});

var listener3 = cache(function(utterance) {
  return Infer({method: 'enumerate'}, function() {
    var obj = meaningPrior();
    observe(speaker2(obj), utterance);
    return obj;
  });
});

var L1 = pragmaticListener('blue');
var L2 = listener3('blue');

var ANSWER = {L1: L1, L2: L2};
var objects = [
  {shape: "square", color: "blue"},
  {shape: "circle", color: "blue"},
  {shape: "square", color: "green"}
];

var utterances = ['blue', 'green', 'square', 'circle'];
var alpha = 1;

var meaning = function(utterance, obj) {
  if (utterance === 'blue' || utterance === 'green') {
    return obj.color === utterance;
  } else {
    return obj.shape === utterance;
  }
};

var meaningPrior = function() {
  return uniformDraw(objects);
};

// Level-1 stack
var literalListener = cache(function(utterance) {
  return Infer({method: 'enumerate'}, function() {
    var obj = meaningPrior();
    condition(meaning(utterance, obj));
    return obj;
  });
});

var speaker = cache(function(obj) {
  return Infer({method: 'enumerate'}, function() {
    var utterance = uniformDraw(utterances);
    factor(alpha * literalListener(utterance).score(obj));
    return utterance;
  });
});

var pragmaticListener = cache(function(utterance) {
  return Infer({method: 'enumerate'}, function() {
    var obj = meaningPrior();
    observe(speaker(obj), utterance);
    return obj;
  });
});

// Level-2 stack
var speaker2 = cache(function(obj) {
  return Infer({method: 'enumerate'}, function() {
    var utterance = uniformDraw(utterances);
    factor(alpha * pragmaticListener(utterance).score(obj));
    return utterance;
  });
});

var listener3 = cache(function(utterance) {
  return Infer({method: 'enumerate'}, function() {
    var obj = meaningPrior();
    observe(speaker2(obj), utterance);
    return obj;
  });
});

var L1 = pragmaticListener('blue');
var L2 = listener3('blue');

var ANSWER = {L1: L1, L2: L2};
var objects = [
  {shape: "square", color: "blue"},
  {shape: "circle", color: "blue"},
  {shape: "square", color: "green"}
];

var utterances = ['blue', 'green', 'square', 'circle'];

var meaning = function(utterance, obj) {
  if (utterance === 'blue' || utterance === 'green') {
    return obj.color === utterance;
  } else {
    return obj.shape === utterance;
  }
};

var meaningPrior = function() {
  return uniformDraw(objects);
};

var alpha = 1;

// Level-1 stack
var literalListener = mem(function(utterance) {
  return Infer({method: 'enumerate'}, function() {
    var obj = meaningPrior();
    condition(meaning(utterance, obj));
    return obj;
  });
});

var speaker = mem(function(obj) {
  return Infer({method: 'enumerate'}, function() {
    var utterance = uniformDraw(utterances);
    factor(alpha * literalListener(utterance).score(obj));
    return utterance;
  });
});

var pragmaticListener = mem(function(utterance) {
  return Infer({method: 'enumerate'}, function() {
    var obj = meaningPrior();
    observe(speaker(obj), utterance);
    return obj;
  });
});

// Level-2 stack
var speaker2 = mem(function(obj) {
  return Infer({method: 'enumerate'}, function() {
    var utterance = uniformDraw(utterances);
    factor(alpha * pragmaticListener(utterance).score(obj));
    return utterance;
  });
});

var listener3 = mem(function(utterance) {
  return Infer({method: 'enumerate'}, function() {
    var obj = meaningPrior();
    observe(speaker2(obj), utterance);
    return obj;
  });
});

var L1 = pragmaticListener('blue');
var L2 = listener3('blue');

var ANSWER = {L1: L1, L2: L2};
groundtruth output
{
  "L1": {
    "__kind": "distribution",
    "probs": [
      0.4000000000000001,
      0.6
    ],
    "support": [
      {
        "shape": "circle",
        "color": "blue"
      },
      {
        "shape": "square",
        "color": "blue"
      }
    ]
  },
  "L2": {
    "__kind": "distribution",
    "probs": [
      0.3636363636363638,
      0.6363636363636364
    ],
    "support": [
      {
        "shape": "circle",
        "color": "blue"
      },
      {
        "shape": "square",
        "color": "blue"
      }
    ]
  }
}
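The L1 = 0.4/0.6 and L2 = 0.364/0.636 figures can be cross-checked without a WebPPL runtime: everything here is discrete, so the whole RSA stack reduces to a few normalizations. A minimal sketch in plain Node-runnable JavaScript (the helper names `speakerFrom` and `listenerFrom` are my own, not part of the harness):

```javascript
// Exact-enumeration cross-check of the RSA groundtruth numbers above.
const objects = [
  { shape: "square", color: "blue" },
  { shape: "circle", color: "blue" },
  { shape: "square", color: "green" },
];
const utterances = ["blue", "green", "square", "circle"];

// Truth function: a color/shape utterance must match the corresponding attribute.
const meaning = (u, o) =>
  (u === "blue" || u === "green") ? o.color === u : o.shape === u;

const normalize = (ps) => {
  const z = ps.reduce((a, b) => a + b, 0);
  return ps.map((p) => p / z);
};

// L0(obj | u): uniform over the objects the utterance is true of.
const literal = (u) => normalize(objects.map((o) => (meaning(u, o) ? 1 : 0)));

// S(u | obj) with alpha = 1: proportional to listener(obj | u) under a uniform
// utterance prior (factor(1 * score) just multiplies in that probability).
const speakerFrom = (listener) => (oi) =>
  normalize(utterances.map((u) => listener(u)[oi]));

// L(obj | u): proportional to prior(obj) * S(u | obj); the object prior is uniform.
const listenerFrom = (speaker) => (u) => {
  const ui = utterances.indexOf(u);
  return normalize(objects.map((_, oi) => speaker(oi)[ui]));
};

const s1 = speakerFrom(literal);
const L1 = listenerFrom(s1);   // pragmaticListener
const s2 = speakerFrom(L1);
const L2 = listenerFrom(s2);   // listener3

// Probabilities are in `objects` order: blue square, blue circle, green square.
console.log(L1("blue"));       // ~ [0.6, 0.4, 0]
console.log(L2("blue"));       // ~ [7/11, 4/11, 0], i.e. [0.6364, 0.3636, 0]
```

Both listener calls put zero mass on the green square, matching the two-element supports in the groundtruth JSON.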
feedback
not signed
05-observing-sequences 8 atoms 2✓ · 1✗
probmods2-observing-sequences/ex1.a distribution 8 runs TV=1.00 TV<1
prompt
system base instructions used by every run
You are a WebPPL code generator. Given an exercise, produce a single WebPPL program that binds the answer to a top-level variable named `ANSWER`.

Answer format (strict): emit exactly one fenced code block.

```js
<your WebPPL program ending with: var ANSWER = <expression>;>
```

The last statement of your program MUST be `var ANSWER = <expression>;` where `<expression>` is the answer the prompt asks for - typically an `Infer({...}, model)` for a distribution, a numeric/array value, or an object literal `{key: value, ...}` for a record of multiple sub-answers.

Do not write prose, explanations, or multiple code blocks. Do not use `return` at the top level - WebPPL doesn't allow it (return is only for function bodies).
system WebPPL primer appended to system on `*-primer-*` runs only
WebPPL is a probabilistic programming language with JavaScript-like syntax. A few quirks to remember:

Control flow: WebPPL is a functional subset of JavaScript. There are no `for` or `while` loops. Iterate with `map(fn, list)`, `mapData({data: list}, fn)`, `repeat(n, fn)`, or recursion. `_.range(start, end)` produces integer ranges.

Functions: define with `var f = function(args) { ...; return value; }`. The last expression of a function is auto-returned only if it is a bare expression statement; otherwise use explicit `return`. Memoize with `mem(fn)` so repeated calls with the same arguments return the same value within an inference run.

Random primitives - lowercase samples directly, uppercase constructs a Distribution object. They are NOT interchangeable:

  flip(p)            -> boolean             (sample, no `sample()` needed)
  uniform(a, b)      -> number              (sample)
  gaussian(mu, sg)   -> number              (sample)
  beta(a, b)         -> number              (sample)
  dirichlet(alpha)   -> tensor              (sample; alpha must be a Vector)
  randomInteger(n)   -> int 0..n-1          (sample)
  uniformDrift({a, b, width})        -> sample  (drift kernel; do NOT wrap in sample())
  dirichletDrift({alpha, conc.})     -> sample  (drift kernel; do NOT wrap in sample())

Distribution *constructors* (used with `sample(D)`, `observe(D, val)`, or as return value of inference):
  Bernoulli({p})              Beta({a, b})              Gaussian({mu, sigma})
  Uniform({a, b})             Categorical({vs, ps})     Binomial({p, n})
  Dirichlet({alpha})          Multinomial({ps, n})      Poisson({mu})

Common gotchas:
- `Dirichlet({alpha: ...})` requires a Vector, not a JS array. Use `ones([n, 1])` or `Vector([1, 1, ...])`.
- WebPPL only supports `var`, not `let` / `const`.
- WebPPL is *single-assignment*: `var X = ...;` only. You can't declare `var X;` and assign later, and you can't reassign `X = ...` after the declaration. Use ternaries or recursion to express conditional bindings.
- Array methods like `.fill`, `.indexOf`, `.map`, `.forEach`, `.concat` may fail. Prefer `repeat(n, fn)`, `_.indexOf(arr, x)`, `map(fn, arr)`, `mapData({data: arr}, fn)`, and `arr1.concat(arr2)` only at the top of a returned expression.
- Always end top-level statements with `;`. WebPPL inherits JS ASI rules, so `var x = f()
[a, b]` parses as `var x = f()[a, b]` (subscript), not two statements.

Inference: `Infer({method: ..., samples: N, ...}, modelFn)` runs `modelFn` under the chosen method and returns a Distribution over its return value. Methods: `'enumerate'`, `'rejection'`, `'forward'`, `'MCMC'`, `'SMC'`. For MCMC, optional kernels: `kernel: {HMC: {steps, stepSize}}`; the drift kernels above can replace `uniform`/`dirichlet` calls inside the model.

Conditioning: `condition(bool)` zeros out worlds where bool is false. `observe(dist, value)` factors in `dist.score(value)`. `factor(score)` adds `score` to the log-probability directly.

Tensors / utilities: `Vector([a, b, ...])`, `T.get(vec, i)`, `ones([rows, cols])`, `_.range`, `_.flatten`, `_.zipObject`, `_.fromPairs`, `_.includes`, `_.parseInt`, `_.uniq`, `_.merge`. Arrays use `Array.isArray`, `Object.keys` works on plain objects.

Display: there's no `viz`, `print`, `display`, etc. that affects the answer - those are browser-only. The answer is the value of your program's last expression.
user message
We have a vocabulary {dogs, cats, chase, sleep, stop} and a bigram model with Dirichlet priors over per-word transition distributions. Given the scaffold:
Infer({method:'MCMC', burn:10000, samples: 50000, onlyMAP:false}, function() {
  let vocab = ['dogs', 'cats', 'chase', 'sleep', 'stop'];
  var wordToDistribution = mem(function(word) {
    return dirichletDrift({alpha:ones([vocab.length,1]), concentration:10})
  })
  var transition = function(word) {
    return categorical({ps: wordToDistribution(word), vs: vocab})
  }
  // ...your code here...
})
Someone says 'dogs chase cats'. Determine how likely 'chase' is to be followed by each word. Use a recursive generateSentence that emits until 'stop'. Condition on the observation. End with the Infer(...) returning the posterior over transition('chase').
groundtruth code
var comparray = function(arr1,arr2){
  return (JSON.stringify(arr1) === JSON.stringify(arr2));
};
var ANSWER = (Infer({method:'MCMC', burn:10000, samples: 50000, onlyMAP:false}, function() {
  let vocab = ['dogs', 'cats', 'chase', 'sleep', 'stop'];
  var wordToDistribution = mem(function(word) {
    return dirichletDrift({alpha:ones([vocab.length,1]), concentration:10});
  });
  var transition = function(word) {
    return categorical({ps: wordToDistribution(word), vs: vocab});
  };
  let obs = ['dogs', 'chase', 'cats'];
  let generateSentence = function(lastState, sentence) {
    let word = transition(lastState);
    if (word == 'stop') return [];
    return [word].concat(generateSentence(word, sentence));
  };
  condition(comparray(obs, generateSentence('start')));
  return transition('chase');
}));
generated code
sonnet-46-primer-v3 TV<1 haiku-45-noprimer-v3 TV<.05 haiku-45-primer-v3 TV<.5 haiku-45-think-noprimer-v3 TV<.05 haiku-45-think-primer-v3 fail sonnet-46-noprimer-v3 TV<.05 sonnet-46-think-noprimer-v3 TV<.05 sonnet-46-think-primer-v3 TV<.05
var ANSWER = Infer({method:'MCMC', burn:10000, samples: 50000, onlyMAP:false}, function() {
  var vocab = ['dogs', 'cats', 'chase', 'sleep', 'stop'];
  var wordToDistribution = mem(function(word) {
    return dirichletDrift({alpha:ones([vocab.length,1]), concentration:10});
  });
  var transition = function(word) {
    return categorical({ps: wordToDistribution(word), vs: vocab});
  };

  var generateSentence = function(word, sentence) {
    var nextWord = transition(word);
    var newSentence = sentence.concat([nextWord]);
    return (nextWord === 'stop') ? newSentence : generateSentence(nextWord, newSentence);
  };

  var observeSentence = function(words) {
    // words is the sequence after the start, ending with 'stop'
    // We observe each transition: start -> words[0], words[0] -> words[1], ...
    // The observed sentence is 'dogs chase cats stop'
    // transitions: dogs->chase, chase->cats, cats->stop
    observe(Categorical({ps: wordToDistribution('dogs'), vs: vocab}), 'chase');
    observe(Categorical({ps: wordToDistribution('chase'), vs: vocab}), 'cats');
    observe(Categorical({ps: wordToDistribution('cats'), vs: vocab}), 'stop');
  };

  observeSentence(['chase', 'cats', 'stop']);

  return wordToDistribution('chase');
});
var ANSWER = Infer({method:'MCMC', burn:10000, samples: 50000, onlyMAP:false}, function() {
  let vocab = ['dogs', 'cats', 'chase', 'sleep', 'stop'];
  var wordToDistribution = mem(function(word) {
    return dirichletDrift({alpha:ones([vocab.length,1]), concentration:10})
  })
  var transition = function(word) {
    return categorical({ps: wordToDistribution(word), vs: vocab})
  }
  
  var generateSentence = function(word) {
    if (word === 'stop') {
      return []
    } else {
      var nextWord = transition(word)
      return [nextWord].concat(generateSentence(nextWord))
    }
  }
  
  var sentence = ['dogs'].concat(generateSentence('dogs'))
  
  condition(sentence[0] === 'dogs' && sentence[1] === 'chase' && sentence[2] === 'cats')
  
  return transition('chase')
})
var ANSWER = Infer({method:'MCMC', burn:10000, samples: 50000, onlyMAP:false}, function() {
  var vocab = ['dogs', 'cats', 'chase', 'sleep', 'stop'];
  var wordToDistribution = mem(function(word) {
    return dirichletDrift({alpha:ones([vocab.length,1]), concentration:10});
  });
  var transition = function(word) {
    return categorical({ps: wordToDistribution(word), vs: vocab});
  };
  
  var generateSentence = function(word) {
    if (word === 'stop') {
      return [word];
    } else {
      var nextWord = transition(word);
      return [word].concat(generateSentence(nextWord));
    }
  };
  
  var sentence = generateSentence('dogs');
  observe(Categorical({vs: sentence, ps: ones([sentence.length, 1])}), 'cats');
  
  var idx_chase = _.indexOf(sentence, 'chase');
  condition(idx_chase >= 0 && idx_chase < sentence.length - 1);
  
  return transition('chase');
});
var ANSWER = Infer({method:'MCMC', burn:10000, samples: 50000, onlyMAP:false}, function() {
  let vocab = ['dogs', 'cats', 'chase', 'sleep', 'stop'];
  var wordToDistribution = mem(function(word) {
    return dirichletDrift({alpha:ones([vocab.length,1]), concentration:10})
  })
  var transition = function(word) {
    return categorical({ps: wordToDistribution(word), vs: vocab})
  }
  
  var generateSentence = function(word) {
    var nextWord = transition(word);
    if (nextWord === 'stop') {
      return [word];
    } else {
      return [word].concat(generateSentence(nextWord));
    }
  }
  
  // Generate a sentence starting from 'dogs'
  var sentence = generateSentence('dogs');
  
  // Condition on observing the sentence "dogs chase cats"
  condition(_.isEqual(sentence, ['dogs', 'chase', 'cats']));
  
  // Return the posterior distribution over transition('chase')
  return transition('chase');
});
Timeout after 60s
var ANSWER = Infer({method:'MCMC', burn:10000, samples: 50000, onlyMAP:false}, function() {
  var vocab = ['dogs', 'cats', 'chase', 'sleep', 'stop'];
  var wordToDistribution = mem(function(word) {
    return dirichletDrift({alpha: ones([vocab.length,1]), concentration:10})
  });
  var transition = mem(function(word) {
    return categorical({ps: wordToDistribution(word), vs: vocab})
  });
  
  var generateSentence = function(currentWord) {
    var nextWord = transition(currentWord);
    return (nextWord === 'stop') ? [currentWord] : [currentWord].concat(generateSentence(nextWord));
  };
  
  // Generate a sentence starting with 'dogs'
  var sentence = generateSentence('dogs');
  
  // Condition on observing the sequence "dogs" -> "chase" -> "cats"
  condition(sentence[0] === 'dogs' && sentence[1] === 'chase' && sentence[2] === 'cats');
  
  // Return the posterior distribution over what comes after 'chase'
  return transition('chase');
});
var ANSWER = Infer({method:'MCMC', burn:10000, samples: 50000, onlyMAP:false}, function() {
  var vocab = ['dogs', 'cats', 'chase', 'sleep', 'stop'];
  
  var wordToDistribution = mem(function(word) {
    return dirichletDrift({alpha: ones([vocab.length, 1]), concentration: 10});
  });
  
  var transition = function(word) {
    return categorical({ps: wordToDistribution(word), vs: vocab});
  };
  
  var generateSentence = function(currentWord, words) {
    var nextWord = transition(currentWord);
    if (nextWord === 'stop') {
      return words.concat(['stop']);
    } else {
      return generateSentence(nextWord, words.concat([nextWord]));
    }
  };
  
  var sentence = generateSentence('dogs', ['dogs']);
  
  // Condition on observing 'dogs chase cats stop'
  condition(sentence[0] === 'dogs' &&
            sentence[1] === 'chase' &&
            sentence[2] === 'cats' &&
            sentence[3] === 'stop' &&
            sentence.length === 4);
  
  return transition('chase');
});
var ANSWER = Infer({method:'MCMC', burn:10000, samples: 50000, onlyMAP:false}, function() {
  var vocab = ['dogs', 'cats', 'chase', 'sleep', 'stop'];

  var wordToDistribution = mem(function(word) {
    return dirichletDrift({alpha: ones([vocab.length, 1]), concentration: 10});
  });

  var transition = function(word) {
    return categorical({ps: wordToDistribution(word), vs: vocab});
  };

  var generateSentence = function(currentWord, wordsSoFar) {
    var nextWord = transition(currentWord);
    if (nextWord === 'stop') {
      return wordsSoFar.concat(['stop']);
    } else {
      return generateSentence(nextWord, wordsSoFar.concat([nextWord]));
    }
  };

  // Generate a sentence starting from 'dogs'
  var sentence = generateSentence('dogs', ['dogs']);

  // Condition on the observation 'dogs chase cats'
  // The sentence should contain at least ['dogs', 'chase', 'cats', ...]
  condition(sentence[0] === 'dogs');
  condition(sentence[1] === 'chase');
  condition(sentence[2] === 'cats');

  // Return the posterior over transition('chase')
  return transition('chase');
});
var ANSWER = Infer({method:'MCMC', burn:10000, samples:50000, onlyMAP:false}, function() {
  var vocab = ['dogs', 'cats', 'chase', 'sleep', 'stop'];

  var wordToDistribution = mem(function(word) {
    return dirichletDrift({alpha: ones([vocab.length, 1]), concentration: 10});
  });

  var transition = function(word) {
    return categorical({ps: wordToDistribution(word), vs: vocab});
  };

  // Recursive sentence generator: starts from `word`, emits words until 'stop'
  var generateSentence = function(word) {
    if (word === 'stop') {
      return ['stop'];
    } else {
      var next = transition(word);
      return [word].concat(generateSentence(next));
    }
  };

  // Instead of generating and comparing (slow rejection), we soft-condition by
  // observing each bigram transition in the sentence 'dogs chase cats (stop)'.
  // This directly scores the Dirichlet-drawn distributions against the data.
  var observeTransition = function(from, to) {
    observe(Categorical({ps: wordToDistribution(from), vs: vocab}), to);
  };

  // Observed sentence: dogs -> chase -> cats -> stop
  observeTransition('dogs',  'chase');
  observeTransition('chase', 'cats');
  observeTransition('cats',  'stop');

  // Return a sample from the posterior predictive next-word after 'chase'
  return transition('chase');
});
groundtruth output
cats   0.3430
chase  0.1761
sleep  0.1641
dogs   0.1613
stop   0.1556
raw JSON
{
  "__kind": "distribution",
  "probs": [
    0.34299999999999997,
    0.17606000000000002,
    0.16127999999999998,
    0.16408,
    0.15558
  ],
  "support": [
    "cats",
    "chase",
    "dogs",
    "sleep",
    "stop"
  ]
}
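The MCMC estimate has a closed-form sanity check: conditioning on the exact sentence factorizes into independent categorical observations (start to dogs, dogs to chase, chase to cats, cats to stop), and `dirichletDrift({alpha: ones([5,1]), ...})` encodes a symmetric Dirichlet(1,...,1) prior on each row (the concentration only shapes the MH proposal). Under that factorization the posterior predictive for the word after 'chase' is (1 + count)/(5 + 1): cats 1/3 and every other word 1/6, consistent with the sampled values above (cats 0.343, others 0.156 to 0.176) up to MCMC noise. A sketch of the arithmetic in plain JavaScript, assuming that factorization holds:

```javascript
// Conjugate Dirichlet-categorical check for the 'chase' row.
const vocab = ["dogs", "cats", "chase", "sleep", "stop"];
const alpha = 1;            // symmetric Dirichlet(1,...,1) prior on each row
const counts = { cats: 1 }; // transitions observed out of 'chase': chase -> cats

const total = Object.values(counts).reduce((a, b) => a + b, 0);
// Posterior predictive: (alpha + count) / (V * alpha + total observations)
const predictive = Object.fromEntries(
  vocab.map((w) => [w, (alpha + (counts[w] || 0)) / (vocab.length * alpha + total)])
);
console.log(predictive); // cats: 1/3, every other word: 1/6
```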
feedback
not signed
probmods2-observing-sequences/ex1.b distribution 8 runs TV=0.12 TV<.5
prompt
user message
Bigram sentence model over vocabulary ['dogs', 'cats', 'chase', 'sleep', 'stop']. Each word has its own transition distribution drawn from a dirichletDrift({alpha: ones([5,1]), concentration: 10}). A sentence is generated by starting at the special token 'start', repeatedly applying transition(prevWord) to draw the next word, terminating when 'stop' is drawn (and emitting 'stop' so that the observed sentence's length matches without undefined). Helper:
var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};
Use Infer({method: 'MCMC', burn: 10000, samples: 50000, onlyMAP: false}, ...). Observe the sentence ['dogs', 'chase', 'cats', 'stop']. Then, in a second sentence, the first word is 'dogs'. End your program with the Infer(...) returning the marginal distribution over the second word of this new sentence.
groundtruth code
var comparray = function(arr1,arr2){
  return (JSON.stringify(arr1) === JSON.stringify(arr2));
};
var ANSWER = (Infer({method:'MCMC', burn:10000, samples: 50000, onlyMAP: false}, function() {
  let vocab = ['dogs', 'cats', 'chase', 'sleep', 'stop'];
  var wordToDistribution = mem(function(word) {
    return dirichletDrift({alpha:ones([vocab.length,1]), concentration:10});
  });
  var transition = function(word) {
    return categorical({ps: wordToDistribution(word), vs: vocab});
  };
  let generateSentence = function(lastState, sentence) {
    let word = transition(lastState);
    if (word == 'stop') return ['stop'];
    return [word].concat(generateSentence(word, sentence));
  };
  let obs = ['dogs', 'chase', 'cats', 'stop'];
  condition(comparray(obs, generateSentence('start')));
  let newSentence = generateSentence('start');
  condition(newSentence[0] == 'dogs');
  return newSentence[1];
}));
generated code
sonnet-46-primer-v3 TV<.5 · haiku-45-noprimer-v3 fail · haiku-45-primer-v3 fail · haiku-45-think-noprimer-v3 fail · haiku-45-think-primer-v3 TV<.05 · sonnet-46-noprimer-v3 fail · sonnet-46-think-noprimer-v3 fail · sonnet-46-think-primer-v3 TV=0
var vocab = ['dogs', 'cats', 'chase', 'sleep', 'stop'];
var n = vocab.length;

var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};

var model = function() {
  // Each word (including 'start') has its own transition distribution
  // Words that can be "previous": start, dogs, cats, chase, sleep, stop
  var prevWords = ['start', 'dogs', 'cats', 'chase', 'sleep', 'stop'];
  
  // Draw transition distributions for each previous word
  var transitionDists = mem(function(word) {
    return dirichletDrift({alpha: ones([n, 1]), concentration: 10});
  });
  
  var transition = function(prevWord) {
    var probs = transitionDists(prevWord);
    var idx = sample(Categorical({vs: vocab, ps: probs}));
    return idx;
  };
  
  // Generate a sentence starting from 'start'
  var generateSentence = function(prevWord, acc) {
    var nextWord = transition(prevWord);
    var newAcc = acc.concat([nextWord]);
    if (nextWord === 'stop') {
      return newAcc;
    } else {
      return generateSentence(nextWord, newAcc);
    }
  };
  
  // Observe the sentence ['dogs', 'chase', 'cats', 'stop']
  var observed = ['dogs', 'chase', 'cats', 'stop'];
  var generated = generateSentence('start', []);
  condition(comparray(generated, observed));
  
  // Second sentence: first word is 'dogs', return second word
  var secondWord = transition('dogs');
  return secondWord;
};

var ANSWER = Infer({method: 'MCMC', burn: 10000, samples: 50000, onlyMAP: false}, model);
Error: cpsInnerStatement
var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};

var vocab = ['dogs', 'cats', 'chase', 'sleep', 'stop'];
var vocabSize = vocab.length;

var model = function() {
  // Create transition distributions for each word
  var transitions = {};
  
  for (var i = 0; i < vocabSize; i++) {
    var word = vocab[i];
    transitions[word] = sample(dirichletDrift({alpha: ones([5, 1]), concentration: 10}));
  }
  
  // Also need transition from 'start'
  transitions['start'] = sample(dirichletDrift({alpha: ones([5, 1]), concentration: 10}));
  
  // Function to generate a sentence
  var generateSentence = function() {
    var sentence = [];
    var currentWord = 'start';
    
    while (true) {
      var nextWordDist = transitions[currentWord];
      var nextWord = categorical({ps: nextWordDist, vs: vocab});
      sentence.push(nextWord);
      
      if (nextWord === 'stop') {
        break;
      }
      currentWord = nextWord;
    }
    
    return sentence;
  };
  
  // Observe the first sentence
  var sentence1 = generateSentence();
  observe(comparray(sentence1, ['dogs', 'chase', 'cats', 'stop']), true);
  
  // Generate second sentence starting with 'dogs'
  var sentence2 = [];
  sentence2.push('dogs');
  
  var currentWord = 'dogs';
  var secondWord = categorical({ps: transitions[currentWord], vs: vocab});
  sentence2.push(secondWord);
  
  return secondWord;
};

var ANSWER = Infer({method: 'MCMC', burn: 10000, samples: 50000, onlyMAP: false}, model);
Error: cpsInnerStatement
var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};

var vocab = ['dogs', 'cats', 'chase', 'sleep', 'stop'];
var observedSentence = ['dogs', 'chase', 'cats', 'stop'];

var model = function() {
  // Draw transition distributions for each word
  var transitionDists = {};
  map(function(word) {
    transitionDists[word] = dirichletDrift({
      alpha: ones([5, 1]),
      concentration: 10
    });
  }, vocab);
  
  // Also need transition distribution from 'start'
  transitionDists['start'] = dirichletDrift({
    alpha: ones([5, 1]),
    concentration: 10
  });
  
  // Function to get next word given previous word
  var transition = function(prevWord) {
    var probs = transitionDists[prevWord];
    var probsArray = [
      T.get(probs, 0),
      T.get(probs, 1),
      T.get(probs, 2),
      T.get(probs, 3),
      T.get(probs, 4)
    ];
    return categorical({vs: vocab, ps: probsArray});
  };
  
  // Generate a sentence
  var generateSentence = function() {
    var sentence = [];
    var prevWord = 'start';
    var done = false;
    var maxSteps = 100;
    var step = 0;
    
    while (!done && step < maxSteps) {
      var nextWord = transition(prevWord);
      sentence.push(nextWord);
      if (nextWord === 'stop') {
        done = true;
      }
      prevWord = nextWord;
      step = step + 1;
    }
    
    return sentence;
  };
  
  // Observe the first sentence
  var sentence1 = generateSentence();
  observe(Categorical({vs: [true, false], ps: [1, 0]}), 
          comparray(sentence1, observedSentence));
  
  // Generate second sentence starting with 'dogs'
  var sentence2 = [];
  sentence2.push('dogs');
  var prevWord = 'dogs';
  var done = false;
  var maxSteps = 100;
  var step = 0;
  
  while (!done && step < maxSteps) {
    var nextWord = transition(prevWord);
    sentence2.push(nextWord);
    if (nextWord === 'stop') {
      done = true;
    }
    prevWord = nextWord;
    step = step + 1;
  }
  
  // Return the second word of the second sentence
  return sentence2[1];
};

var ANSWER = Infer({
  method: 'MCMC',
  burn: 10000,
  samples: 50000,
  onlyMAP: false
}, model);
Error: cpsInnerStatement
var vocabulary = ['dogs', 'cats', 'chase', 'sleep', 'stop'];

var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};

var model = function() {
  // Draw transition distributions for each word
  var transitions = {};
  
  // Transition from 'start'
  transitions['start'] = dirichletDrift({alpha: ones([5, 1]), concentration: 10});
  
  // Transition from each word in vocabulary
  vocabulary.forEach(function(word) {
    transitions[word] = dirichletDrift({alpha: ones([5, 1]), concentration: 10});
  });
  
  // Function to get next word given previous word
  var transition = function(prevWord) {
    var dist = transitions[prevWord];
    return categorical({ps: dist, vs: vocabulary});
  };
  
  // Function to generate a sentence
  var generateSentence = function() {
    var sentence = [];
    var currentWord = 'start';
    while (true) {
      currentWord = transition(currentWord);
      sentence.push(currentWord);
      if (currentWord === 'stop') {
        break;
      }
    }
    return sentence;
  };
  
  // Observe the sentence
  var observedSentence = ['dogs', 'chase', 'cats', 'stop'];
  var generatedSentence = generateSentence();
  condition(comparray(generatedSentence, observedSentence));
  
  // Generate second word of new sentence starting with 'dogs'
  var secondWord = transition('dogs');
  
  return secondWord;
};

var ANSWER = Infer({method: 'MCMC', burn: 10000, samples: 50000, onlyMAP: false}, model);
var vocab = ['dogs', 'cats', 'chase', 'sleep', 'stop'];
var observedSentence = ['dogs', 'chase', 'cats', 'stop'];

var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};

var model = function() {
  // Memoized transition distribution for each word
  var transitionProbs = mem(function(word) {
    var alpha = ones([5, 1]);
    return dirichletDrift({alpha: alpha, concentration: 10});
  });
  
  // Observe the sentence by conditioning on each transition
  // 'dogs' comes after 'start'
  observe(Categorical({vs: vocab, ps: transitionProbs('start')}), observedSentence[0]);
  // 'chase' comes after 'dogs'
  observe(Categorical({vs: vocab, ps: transitionProbs('dogs')}), observedSentence[1]);
  // 'cats' comes after 'chase'
  observe(Categorical({vs: vocab, ps: transitionProbs('chase')}), observedSentence[2]);
  // 'stop' comes after 'cats'
  observe(Categorical({vs: vocab, ps: transitionProbs('cats')}), observedSentence[3]);
  
  // In a new sentence, first word is 'dogs', sample the second word
  var secondWord = categorical({vs: vocab, ps: transitionProbs('dogs')});
  
  return secondWord;
};

var ANSWER = Infer({method: 'MCMC', burn: 10000, samples: 50000, onlyMAP: false}, model);
Error: Parameter "vs" missing from Categorical distribution.
var vocab = ['dogs', 'cats', 'chase', 'sleep', 'stop'];
var vocabSize = vocab.length;

var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};

var indexOf = function(word) {
  return vocab.indexOf(word);
};

var ANSWER = Infer({method: 'MCMC', burn: 10000, samples: 50000, onlyMAP: false}, function() {
  // Each word in vocab has its own transition distribution
  // We also need a transition from 'start'
  // Total: 6 transition distributions (start + 5 vocab words)
  
  // Transition distributions for each "from" token
  // from: start, dogs, cats, chase, sleep, stop
  var allTokens = ['start', 'dogs', 'cats', 'chase', 'sleep', 'stop'];
  
  var transitionDists = mapObject(function(key, val) {
    return dirichletDrift({alpha: ones([vocabSize, 1]), concentration: 10});
  }, {
    'start': 0,
    'dogs': 0,
    'cats': 0,
    'chase': 0,
    'sleep': 0,
    'stop': 0
  });
  
  var transition = function(prevWord) {
    var probs = transitionDists[prevWord];
    return vocab[categorical({ps: probs})];
  };
  
  // Generate a sentence starting from 'start', collect words until 'stop'
  var generateSentence = function(prevWord, acc) {
    var nextWord = transition(prevWord);
    var newAcc = acc.concat([nextWord]);
    if (nextWord === 'stop') {
      return newAcc;
    } else {
      return generateSentence(nextWord, newAcc);
    }
  };
  
  // Observe the sentence ['dogs', 'chase', 'cats', 'stop']
  var observed = ['dogs', 'chase', 'cats', 'stop'];
  var generated = generateSentence('start', []);
  condition(comparray(generated, observed));
  
  // Second sentence: first word is 'dogs', return second word
  var secondWord = transition('dogs');
  return secondWord;
});
Error: Parameter "vs" missing from Categorical distribution.
var vocab = ['dogs', 'cats', 'chase', 'sleep', 'stop'];
var allWords = ['start', 'dogs', 'cats', 'chase', 'sleep', 'stop'];
var n = vocab.length;

var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};

var model = function() {
  // Draw one transition distribution per source word
  var getTransition = mem(function(word) {
    return dirichletDrift({alpha: ones([n, 1]), concentration: 10});
  });

  // Observed sentence 1: ['dogs', 'chase', 'cats', 'stop']
  // Condition on each bigram transition
  observe(Categorical({ps: getTransition('start'), vs: vocab}), 'dogs');
  observe(Categorical({ps: getTransition('dogs'),  vs: vocab}), 'chase');
  observe(Categorical({ps: getTransition('chase'), vs: vocab}), 'cats');
  observe(Categorical({ps: getTransition('cats'),  vs: vocab}), 'stop');

  // Second sentence: first word is 'dogs'
  // Condition on transition from 'start' yielding 'dogs'
  observe(Categorical({ps: getTransition('start'), vs: vocab}), 'dogs');

  // Sample the second word of the new sentence from transition('dogs')
  var secondWord = vocab[categorical({ps: getTransition('dogs')})];
  return secondWord;
};

var ANSWER = Infer({method: 'MCMC', burn: 10000, samples: 50000, onlyMAP: false}, model);
var vocab = ['dogs', 'cats', 'chase', 'sleep', 'stop'];

var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};

var model = function() {
  // Each state gets its own transition distribution (memoized = shared within a run)
  var transitionDist = mem(function(word) {
    return dirichletDrift({alpha: ones([5, 1]), concentration: 10});
  });

  var transition = function(prevWord) {
    var probs = transitionDist(prevWord);
    return vocab[discrete(probs)];
  };

  // Generate a sentence from prevWord, accumulating words including 'stop'
  var generateSentence = function(prevWord, acc) {
    var nextWord = transition(prevWord);
    var newAcc = acc.concat([nextWord]);
    return nextWord === 'stop' ? newAcc : generateSentence(nextWord, newAcc);
  };

  // Observe the first sentence: ['dogs', 'chase', 'cats', 'stop']
  var sent1 = generateSentence('start', []);
  condition(comparray(sent1, ['dogs', 'chase', 'cats', 'stop']));

  // Second sentence: condition that first word is 'dogs', return second word
  var sent2 = generateSentence('start', []);
  condition(sent2[0] === 'dogs');

  return sent2[1];
};

var ANSWER = Infer({method: 'MCMC', burn: 10000, samples: 50000, onlyMAP: false}, model);
groundtruth output
chase 0.2939
stop 0.1988
sleep 0.1828
cats 0.1805
dogs 0.1440
raw JSON
{
  "__kind": "distribution",
  "probs": [
    0.19878,
    0.144,
    0.29392000000000007,
    0.18052,
    0.18278
  ],
  "support": [
    "stop",
    "dogs",
    "chase",
    "cats",
    "sleep"
  ]
}
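A rough analytic cross-check on this marginal, in plain JavaScript rather than WebPPL (an editorial sketch, not part of the recorded run). It assumes, as a simplification, that only the first sentence's single dogs->chase bigram constrains transition('dogs'); the recorded MCMC run also factors in the second sentence's own generation and termination, so the numbers will not match exactly.

```js
// Editorial sketch: Dirichlet-categorical posterior predictive for the
// word following 'dogs', counting only the observed bigram dogs -> chase
// from the first sentence. Plain JavaScript, so forEach is fine here.
var vocab = ['dogs', 'cats', 'chase', 'sleep', 'stop'];
var alpha = 1;                        // symmetric prior, as in ones([5,1])
var counts = { chase: 1 };            // dogs -> chase seen once
var total = alpha * vocab.length + 1; // prior mass + 1 observation

var predictive = {};
vocab.forEach(function(w) {
  predictive[w] = (alpha + (counts[w] || 0)) / total;
});
// chase: 2/6 ~ 0.333; every other word: 1/6 ~ 0.167. Same ordering and
// rough magnitude as the MCMC estimate above (chase ~ 0.294).
```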
feedback
not signed
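One pattern across these runs is worth noting: every `cpsInnerStatement` failure above comes from a program that uses a `for`/`while` loop or mutates state in place (`sentence.push(...)`, reassigning `currentWord`), exactly the constructs the primer rules out. The passing runs instead thread an accumulator through recursion. The shape of that fix, sketched in plain JavaScript with a hypothetical deterministic `transition` stub standing in for the random draw:

```js
// Hypothetical stub: a fixed next-word table stands in for the WebPPL
// categorical({ps: ..., vs: vocab}) draw.
var script = { start: 'dogs', dogs: 'chase', chase: 'cats', cats: 'stop' };
var transition = function(prev) { return script[prev]; };

// Single-assignment accumulator recursion: no push, no reassignment.
var generateSentence = function(prev, acc) {
  var word = transition(prev);
  var next = acc.concat([word]);      // fresh array each step
  return word === 'stop' ? next : generateSentence(word, next);
};

generateSentence('start', []);        // ['dogs', 'chase', 'cats', 'stop']
```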
probmods2-observing-sequences/ex1.c distribution 8 runs TV=0.00 TV=0
prompt
system base instructions used by every run
(identical to the base instructions shown above)
system WebPPL primer appended to system on `*-primer-*` runs only
(identical to the primer shown above)
user message
Bigram sentence model over vocabulary ['dogs', 'cats', 'chase', 'sleep', 'stop']. Each word has its own transition distribution drawn from a dirichletDrift({alpha: ones([5,1]), concentration: 10}). A sentence is generated by starting at the special token 'start', repeatedly applying transition(prevWord) to draw the next word, terminating when 'stop' is drawn (and emitting 'stop' so that the observed sentence's length matches without undefined). Helper:
var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};
Use Infer({method: 'MCMC', burn: 10000, samples: 50000, onlyMAP: false}, ...). Observe the sentence ['dogs', 'chase', 'cats', 'stop']. Then, in a second sentence, the second word is 'chase'. End your program with the Infer(...) returning the marginal distribution over the FIRST word of this new sentence.
groundtruth code
var comparray = function(arr1,arr2){
  return (JSON.stringify(arr1) === JSON.stringify(arr2));
};
var ANSWER = (Infer({method:'MCMC', burn:10000, samples: 50000, onlyMAP: false}, function() {
  let vocab = ['dogs', 'cats', 'chase', 'sleep', 'stop'];
  var wordToDistribution = mem(function(word) {
    return dirichletDrift({alpha:ones([vocab.length,1]), concentration:10});
  });
  var transition = function(word) {
    return categorical({ps: wordToDistribution(word), vs: vocab});
  };
  let generateSentence = function(lastState, sentence) {
    let word = transition(lastState);
    if (word == 'stop') return ['stop'];
    return [word].concat(generateSentence(word, sentence));
  };
  let obs = ['dogs', 'chase', 'cats', 'stop'];
  condition(comparray(obs, generateSentence('start')));
  let newSentence = generateSentence('start');
  condition(newSentence[1] == 'chase');
  return newSentence[0];
}));
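A hedged back-of-envelope for this marginal, in plain JavaScript (editorial sketch; it ignores every factor beyond the first sentence's bigrams, including the rest of the second sentence, so it will not match the MCMC numbers exactly). Under that simplification, Bayes gives P(first = w | second = 'chase') proportional to P(start -> w) * P(w -> 'chase'), with 'stop' excluded as a first word since the sentence would then have no second word:

```js
// Editorial sketch: P(first = w | second = 'chase') under Dirichlet(1)
// predictives updated only by the first sentence's bigrams
// (start->dogs, dogs->chase, chase->cats, cats->stop).
var vocab = ['dogs', 'cats', 'chase', 'sleep', 'stop'];
var seen = { start: 'dogs', dogs: 'chase', chase: 'cats', cats: 'stop' };

// Posterior predictive of `next` after `prev`: (1 + count) / (5 + n_obs).
var predictive = function(prev, next) {
  var hit = seen[prev] === next ? 1 : 0;
  var nObs = seen.hasOwnProperty(prev) ? 1 : 0;
  return (1 + hit) / (vocab.length + nObs);
};

// 'stop' cannot be the first word: there would be no second word.
var firstWords = vocab.filter(function(w) { return w !== 'stop'; });
var weights = firstWords.map(function(w) {
  return predictive('start', w) * predictive(w, 'chase');
});
var z = weights.reduce(function(a, b) { return a + b; }, 0);
var posterior = {};
firstWords.forEach(function(w, i) { posterior[w] = weights[i] / z; });
// 'dogs' dominates under these assumptions: 5/9 ~ 0.56;
// cats = chase = 5/36 ~ 0.14; sleep = 1/6 ~ 0.17.
```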
generated code
sonnet-46-primer-v3 TV=0 · haiku-45-noprimer-v3 fail · haiku-45-primer-v3 fail · haiku-45-think-noprimer-v3 fail · haiku-45-think-primer-v3 fail · sonnet-46-noprimer-v3 fail · sonnet-46-think-noprimer-v3 TV<.5 · sonnet-46-think-primer-v3 TV<.5
var vocab = ['dogs', 'cats', 'chase', 'sleep', 'stop'];
var vocabSize = vocab.length;

var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};

var indexOf = function(word) {
  return _.indexOf(vocab, word);
};

var model = function() {
  // Each word (including 'start') has a transition distribution over vocab
  // Words that can be "previous": start, dogs, cats, chase, sleep, stop
  // We need transition from 'start' and from each vocab word
  var allPrev = ['start'].concat(vocab);
  
  // Draw transition distributions for each previous word
  var transitionDists = mem(function(prevWord) {
    return dirichletDrift({alpha: ones([5, 1]), concentration: 10});
  });
  
  var transition = function(prevWord) {
    var probs = transitionDists(prevWord);
    var idx = sample(Categorical({vs: vocab, ps: probs}));
    return idx;
  };
  
  // Generate a sentence starting from 'start'
  var generateSentence = function(prevWord, acc) {
    var nextWord = transition(prevWord);
    var newAcc = acc.concat([nextWord]);
    if (nextWord === 'stop') {
      return newAcc;
    } else {
      return generateSentence(nextWord, newAcc);
    }
  };
  
  // Observe the first sentence: ['dogs', 'chase', 'cats', 'stop']
  var observed1 = ['dogs', 'chase', 'cats', 'stop'];
  var generated1 = generateSentence('start', []);
  condition(comparray(generated1, observed1));
  
  // Generate a second sentence and condition on second word being 'chase'
  var generated2 = generateSentence('start', []);
  condition(generated2.length >= 2 && generated2[1] === 'chase');
  
  // Return the first word of the second sentence
  return generated2[0];
};

var ANSWER = Infer({method: 'MCMC', burn: 10000, samples: 50000, onlyMAP: false}, model);
Error: cpsInnerStatement
var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};

var vocab = ['dogs', 'cats', 'chase', 'sleep', 'stop'];

var model = function() {
  // Draw transition distributions for each word
  var transitions = {};
  transitions['start'] = sample(dirichletDrift({alpha: ones([5, 1]), concentration: 10}));
  transitions['dogs'] = sample(dirichletDrift({alpha: ones([5, 1]), concentration: 10}));
  transitions['cats'] = sample(dirichletDrift({alpha: ones([5, 1]), concentration: 10}));
  transitions['chase'] = sample(dirichletDrift({alpha: ones([5, 1]), concentration: 10}));
  transitions['sleep'] = sample(dirichletDrift({alpha: ones([5, 1]), concentration: 10}));
  transitions['stop'] = sample(dirichletDrift({alpha: ones([5, 1]), concentration: 10}));
  
  var transition = function(prevWord) {
    return categorical({ps: transitions[prevWord], vs: vocab});
  };
  
  // Generate first sentence
  var sentence1 = [];
  var word = transition('start');
  sentence1.push(word);
  while (word !== 'stop') {
    word = transition(word);
    sentence1.push(word);
  }
  
  // Observe first sentence
  observe(comparray(sentence1, ['dogs', 'chase', 'cats', 'stop']), true);
  
  // Generate second sentence
  var sentence2 = [];
  var word2 = transition('start');
  sentence2.push(word2);
  
  // Observe that second word of sentence2 is 'chase'
  var secondWord = transition(word2);
  sentence2.push(secondWord);
  observe(secondWord === 'chase', true);
  
  // Return the first word of the second sentence
  return word2;
};

var ANSWER = Infer({method: 'MCMC', burn: 10000, samples: 50000, onlyMAP: false}, model);
TypeError: T.toArray is not a function
var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};

var vocab = ['dogs', 'cats', 'chase', 'sleep', 'stop'];

var model = function() {
  // Draw transition distributions for each word
  var transition = mem(function(word) {
    var alpha = ones([5, 1]);
    var probs = dirichletDrift({alpha: alpha, concentration: 10});
    return probs;
  });

  // Generate a sentence starting from 'start'
  var generateSentence = function(prevWord) {
    var probs = transition(prevWord);
    var nextWord = categorical({vs: vocab, ps: T.toArray(probs)});
    var sentence = [nextWord];
    if (nextWord === 'stop') {
      return sentence;
    } else {
      return sentence.concat(generateSentence(nextWord));
    }
  };

  // Observe the first sentence
  var sentence1 = generateSentence('start');
  observe(Categorical({vs: [true, false], ps: [1, 0]}), 
          comparray(sentence1, ['dogs', 'chase', 'cats', 'stop']));

  // Generate the second sentence
  var sentence2 = generateSentence('start');
  
  // Condition on the second word being 'chase'
  condition(sentence2.length > 1 && sentence2[1] === 'chase');

  // Return the first word of the second sentence
  return sentence2[0];
};

var ANSWER = Infer({method: 'MCMC', burn: 10000, samples: 50000, onlyMAP: false}, model);
Error: cpsInnerStatement
var vocab = ['dogs', 'cats', 'chase', 'sleep', 'stop'];

var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};

var model = function() {
  // Define transition distributions for each word (including 'start')
  var transitions = {};
  
  // Create transition distributions from dirichletDrift
  var words_and_start = ['start'].concat(vocab);
  for (var i = 0; i < words_and_start.length; i++) {
    var word = words_and_start[i];
    transitions[word] = dirichletDrift({
      alpha: ones([5, 1]),
      concentration: 10
    });
  }
  
  // Function to get next word given previous word
  var transition = function(prevWord) {
    var dist = transitions[prevWord];
    var nextWordIdx = sample(dist);
    return vocab[nextWordIdx];
  };
  
  // Function to generate a sentence
  var generateSentence = function() {
    var sentence = [];
    var currentWord = 'start';
    while (true) {
      var nextWord = transition(currentWord);
      sentence.push(nextWord);
      if (nextWord === 'stop') {
        break;
      }
      currentWord = nextWord;
    }
    return sentence;
  };
  
  // Observe the first sentence
  var sentence1 = generateSentence();
  observe(comparray(sentence1, ['dogs', 'chase', 'cats', 'stop']), true);
  
  // Generate second sentence where second word is 'chase'
  var firstWord = transition('start');
  var secondWord = transition(firstWord);
  observe(secondWord === 'chase', true);
  
  return firstWord;
};

var ANSWER = Infer({method: 'MCMC', burn: 10000, samples: 50000, onlyMAP: false}, model);
Error: Line 39: You tried to assign to a field of transitions, but you can only assign to fields of globalStore
var vocab = ['dogs', 'cats', 'chase', 'sleep', 'stop'];

var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};

var model = function() {
  // Draw transition distributions for each word
  var transitions = {};
  
  var allWords = ['start'].concat(vocab);
  mapData({data: allWords}, function(word) {
    transitions[word] = dirichletDrift({alpha: ones([5, 1]), concentration: 10});
  });
  
  // Function to get next word given current word
  var nextWord = function(word) {
    var probs = transitions[word];
    return categorical({vs: vocab, ps: probs});
  };
  
  // Generate sentence word by word
  var generateSentenceWords = function(maxLength) {
    var rec = function(currentWord, acc, remaining) {
      if (remaining === 0) {
        return acc;
      }
      var next = nextWord(currentWord);
      var newAcc = acc.concat([next]);
      if (next === 'stop') {
        return newAcc;
      } else {
        return rec(next, newAcc, remaining - 1);
      }
    };
    return rec('start', [], maxLength);
  };
  
  // Observe first sentence ['dogs', 'chase', 'cats', 'stop']
  var sentence1 = generateSentenceWords(20);
  condition(comparray(sentence1, ['dogs', 'chase', 'cats', 'stop']));
  
  // Generate second sentence
  var sentence2 = generateSentenceWords(20);
  // Condition that second word is 'chase'
  condition(sentence2.length >= 2 && sentence2[1] === 'chase');
  
  // Return first word of second sentence
  return sentence2[0];
};

var ANSWER = Infer({method: 'MCMC', burn: 10000, samples: 50000, onlyMAP: false}, model);
Error: Parameter "vs" missing from Categorical distribution.
var vocab = ['dogs', 'cats', 'chase', 'sleep', 'stop'];
var vocabSize = vocab.length;

var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};

var indexOf = function(word) {
  return vocab.indexOf(word);
};

var model = function() {
  // Each word (plus 'start') has its own transition distribution
  // 'start' + 5 vocab words = 6 transition distributions
  // But 'stop' doesn't need a transition distribution (terminal)
  // We need transitions from: 'start', 'dogs', 'cats', 'chase', 'sleep'
  // (not from 'stop' since we terminate)
  
  var allSources = ['start', 'dogs', 'cats', 'chase', 'sleep'];
  
  // Draw transition distributions for each source word
  var transitionDists = map(function(src) {
    return dirichletDrift({alpha: ones([vocabSize, 1]), concentration: 10});
  }, allSources);
  
  var getTransDist = function(word) {
    var idx = allSources.indexOf(word);
    return transitionDists[idx];
  };
  
  var transition = function(prevWord) {
    var probs = getTransDist(prevWord);
    return vocab[categorical({ps: probs})];
  };
  
  // Generate a sentence starting from 'start'
  var generateSentence = function(prevWord, acc) {
    var nextWord = transition(prevWord);
    var newAcc = acc.concat([nextWord]);
    if (nextWord === 'stop') {
      return newAcc;
    } else {
      return generateSentence(nextWord, newAcc);
    }
  };
  
  // Observe sentence 1: ['dogs', 'chase', 'cats', 'stop']
  var obs1 = ['dogs', 'chase', 'cats', 'stop'];
  var sent1 = generateSentence('start', []);
  observe(Gaussian({mu: 0, sigma: 0.01}), comparray(sent1, obs1) ? 0 : -10);
  
  // Generate second sentence and observe that second word is 'chase'
  var sent2 = generateSentence('start', []);
  // Second word is sent2[1]
  // We need at least 2 words in sent2
  observe(Gaussian({mu: 0, sigma: 0.01}), (sent2.length >= 2 && sent2[1] === 'chase') ? 0 : -10);
  
  // Return the first word of the second sentence
  return sent2[0];
};

var ANSWER = Infer({method: 'MCMC', burn: 10000, samples: 50000, onlyMAP: false}, model);
var vocab = ['dogs', 'cats', 'chase', 'sleep', 'stop'];

var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};

var model = function() {
  // Each source token gets its own transition distribution over vocab
  var tokens = ['start', 'dogs', 'cats', 'chase', 'sleep', 'stop'];

  var ps = {
    'start': dirichletDrift({alpha: ones([5, 1]), concentration: 10}),
    'dogs':  dirichletDrift({alpha: ones([5, 1]), concentration: 10}),
    'cats':  dirichletDrift({alpha: ones([5, 1]), concentration: 10}),
    'chase': dirichletDrift({alpha: ones([5, 1]), concentration: 10}),
    'sleep': dirichletDrift({alpha: ones([5, 1]), concentration: 10}),
    'stop':  dirichletDrift({alpha: ones([5, 1]), concentration: 10})
  };

  var transition = function(prevWord) {
    return categorical({vs: vocab, ps: ps[prevWord]});
  };

  // ---- Observe sentence 1: ['dogs', 'chase', 'cats', 'stop'] ----
  // start -> dogs -> chase -> cats -> stop
  observe(Categorical({vs: vocab, ps: ps['start']}), 'dogs');
  observe(Categorical({vs: vocab, ps: ps['dogs']}),  'chase');
  observe(Categorical({vs: vocab, ps: ps['chase']}), 'cats');
  observe(Categorical({vs: vocab, ps: ps['cats']}),  'stop');

  // ---- Sentence 2: second word is 'chase' ----
  // Sample the first word, then observe second word = 'chase'
  var word1 = transition('start');
  observe(Categorical({vs: vocab, ps: ps[word1]}), 'chase');

  return word1;
};

var ANSWER = Infer({method: 'MCMC', burn: 10000, samples: 50000, onlyMAP: false}, model);
var vocab = ['dogs', 'cats', 'chase', 'sleep', 'stop'];

var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};

var model = function() {
  // Each "from" token gets its own memoized transition distribution (Dirichlet drift)
  var transitions = mem(function(word) {
    return dirichletDrift({alpha: ones([5, 1]), concentration: 10});
  });

  // Sample next word from the transition distribution of prevWord
  var transition = function(prevWord) {
    var probs = transitions(prevWord);
    return categorical({vs: vocab, ps: probs});
  };

  // Observe the first sentence: ['dogs', 'chase', 'cats', 'stop']
  // This encodes: start->dogs, dogs->chase, chase->cats, cats->stop
  observe(Categorical({vs: vocab, ps: transitions('start')}), 'dogs');
  observe(Categorical({vs: vocab, ps: transitions('dogs')}),  'chase');
  observe(Categorical({vs: vocab, ps: transitions('chase')}), 'cats');
  observe(Categorical({vs: vocab, ps: transitions('cats')}),  'stop');

  // Second sentence: the second word is 'chase'
  // Sample the first word from the start transition
  var firstWord = transition('start');
  // Observe that the word after firstWord is 'chase'
  observe(Categorical({vs: vocab, ps: transitions(firstWord)}), 'chase');

  return firstWord;
};

var ANSWER = Infer({method: 'MCMC', burn: 10000, samples: 50000, onlyMAP: false}, model);
groundtruth output
dogs  0.6192
sleep 0.1439
cats  0.1267
chase 0.1102
raw JSON
{
  "__kind": "distribution",
  "probs": [
    0.61918,
    0.1267,
    0.1439,
    0.11021999999999997
  ],
  "support": [
    "dogs",
    "cats",
    "sleep",
    "chase"
  ]
}
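The bucket labels (TV=0, TV<.05, …) track total variation distance between a run's marginal and this groundtruth. The harness's actual scoring code is not shown here, but as a sketch, TV distance is half the L1 distance over the union of the two supports:

```js
// Total variation distance between two discrete distributions given as
// {value: probability} objects. Missing support entries count as 0.
function tvDistance(p, q) {
  var support = Object.keys(p);
  Object.keys(q).forEach(function (k) {
    if (support.indexOf(k) < 0) { support.push(k); }
  });
  var l1 = support.reduce(function (acc, k) {
    return acc + Math.abs((p[k] || 0) - (q[k] || 0));
  }, 0);
  return l1 / 2;
}

// Groundtruth marginal from the raw JSON above:
var groundtruth = { dogs: 0.61918, cats: 0.1267, sleep: 0.1439, chase: 0.11022 };
tvDistance(groundtruth, groundtruth);  // 0: identical distributions land in TV=0
```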
probmods2-observing-sequences/ex2.a · distribution · 8 runs · TV=0.08 · TV<.5
prompt
system base instructions used by every run
You are a WebPPL code generator. Given an exercise, produce a single WebPPL program that binds the answer to a top-level variable named `ANSWER`.

Answer format (strict): emit exactly one fenced code block.

```js
<your WebPPL program ending with: var ANSWER = <expression>;>
```

The last statement of your program MUST be `var ANSWER = <expression>;` where `<expression>` is the answer the prompt asks for - typically an `Infer({...}, model)` for a distribution, a numeric/array value, or an object literal `{key: value, ...}` for a record of multiple sub-answers.

Do not write prose, explanations, or multiple code blocks. Do not use `return` at the top level - WebPPL doesn't allow it (return is only for function bodies).
system WebPPL primer appended to system on `*-primer-*` runs only
WebPPL is a probabilistic programming language with JavaScript-like syntax. A few quirks to remember:

Control flow: WebPPL is a functional subset of JavaScript. There are no `for` or `while` loops. Iterate with `map(fn, list)`, `mapData({data: list}, fn)`, `repeat(n, fn)`, or recursion. `_.range(start, end)` produces integer ranges.

Functions: define with `var f = function(args) { ...; return value; }`. The last expression of a function is auto-returned only if it is a bare expression statement; otherwise use explicit `return`. Memoize with `mem(fn)` so repeated calls with the same arguments return the same value within an inference run.

Random primitives - lowercase samples directly, uppercase constructs a Distribution object. They are NOT interchangeable:

  flip(p)            -> boolean             (sample, no `sample()` needed)
  uniform(a, b)      -> number              (sample)
  gaussian(mu, sg)   -> number              (sample)
  beta(a, b)         -> number              (sample)
  dirichlet(alpha)   -> tensor              (sample; alpha must be a Vector)
  randomInteger(n)   -> int 0..n-1          (sample)
  uniformDrift({a, b, width})        -> sample  (drift kernel; do NOT wrap in sample())
  dirichletDrift({alpha, conc.})     -> sample  (drift kernel; do NOT wrap in sample())

Distribution *constructors* (used with `sample(D)`, `observe(D, val)`, or as return value of inference):
  Bernoulli({p})              Beta({a, b})              Gaussian({mu, sigma})
  Uniform({a, b})             Categorical({vs, ps})     Binomial({p, n})
  Dirichlet({alpha})          Multinomial({ps, n})      Poisson({mu})

Common gotchas:
- `Dirichlet({alpha: ...})` requires a Vector, not a JS array. Use `ones([n, 1])` or `Vector([1, 1, ...])`.
- WebPPL only supports `var`, not `let` / `const`.
- WebPPL is *single-assignment*: `var X = ...;` only. You can't declare `var X;` and assign later, and you can't reassign `X = ...` after the declaration. Use ternaries or recursion to express conditional bindings.
- Array methods like `.fill`, `.indexOf`, `.map`, `.forEach`, `.concat` may fail. Prefer `repeat(n, fn)`, `_.indexOf(arr, x)`, `map(fn, arr)`, `mapData({data: arr}, fn)`, and `arr1.concat(arr2)` only at the top of a returned expression.
- Always end top-level statements with `;`. WebPPL inherits JS ASI rules, so `var x = f()
[a, b]` parses as `var x = f()[a, b]` (subscript), not two statements.

Inference: `Infer({method: ..., samples: N, ...}, modelFn)` runs `modelFn` under the chosen method and returns a Distribution over its return value. Methods: `'enumerate'`, `'rejection'`, `'forward'`, `'MCMC'`, `'SMC'`. For MCMC, optional kernels: `kernel: {HMC: {steps, stepSize}}`; the drift kernels above can replace `uniform`/`dirichlet` calls inside the model.

Conditioning: `condition(bool)` zeros out worlds where bool is false. `observe(dist, value)` factors in `dist.score(value)`. `factor(score)` adds `score` to the log-probability directly.

Tensors / utilities: `Vector([a, b, ...])`, `T.get(vec, i)`, `ones([rows, cols])`, `_.range`, `_.flatten`, `_.zipObject`, `_.fromPairs`, `_.includes`, `_.parseInt`, `_.uniq`, `_.merge`. Arrays use `Array.isArray`, `Object.keys` works on plain objects.

Display: there's no `viz`, `print`, `display`, etc. that affects the answer - those are browser-only. The answer is the value of your program's last expression.
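The ASI gotcha the primer warns about is plain-JavaScript behavior inherited by WebPPL, and easy to reproduce:

```js
function f() { return [10, 20, 30]; }

// No semicolon after f(), and ASI does not insert one before '[':
// the next line is parsed as a subscript on f()'s result, and '1, 2'
// is the comma operator, so the whole thing is f()[2].
var x = f()
[1, 2];
// x is 30, not a two-element array and not a second statement
```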
user message
Bigram sentence model over vocabulary ['dogs', 'cats', 'chase', 'sleep', 'stop']. Each word has its own transition distribution drawn from a dirichletDrift({alpha: ones([5,1]), concentration: 10}). A sentence is generated by starting at the special token 'start', repeatedly applying transition(prevWord) to draw the next word, terminating when 'stop' is drawn (and emitting 'stop' so that the observed sentence's length matches without undefined). Helper:
var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};
Use Infer({method: 'MCMC', burn: 10000, samples: 50000, onlyMAP: false}, ...). Observe the sentence ['dogs', 'chase', 'cats', 'stop']. Then, in a second sentence, the first word is 'cats'. End your program with the Infer(...) returning the marginal distribution over the second word of this new sentence.
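The `comparray` helper compares arrays via their JSON serialization, which is order-sensitive and exact-match; that is sufficient for the string arrays used in these sentences:

```js
var comparray = function (arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};

comparray(['dogs', 'chase', 'cats', 'stop'], ['dogs', 'chase', 'cats', 'stop']);  // true
comparray(['dogs', 'chase'], ['chase', 'dogs']);  // false: order matters
comparray(['dogs'], ['dogs', 'stop']);            // false: lengths differ
```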
groundtruth code
var comparray = function(arr1,arr2){
  return (JSON.stringify(arr1) === JSON.stringify(arr2));
};
var ANSWER = (Infer({method:'MCMC', burn:10000, samples: 50000, onlyMAP: false}, function() {
  let vocab = ['dogs', 'cats', 'chase', 'sleep', 'stop'];
  var wordToDistribution = mem(function(word) {
    return dirichletDrift({alpha:ones([vocab.length,1]), concentration:10});
  });
  var transition = function(word) {
    return categorical({ps: wordToDistribution(word), vs: vocab});
  };
  let generateSentence = function(lastState, sentence) {
    let word = transition(lastState);
    if (word == 'stop') return ['stop'];
    return [word].concat(generateSentence(word, sentence));
  };
  let obs = ['dogs', 'chase', 'cats', 'stop'];
  condition(comparray(obs, generateSentence('start')));
  let newSentence = generateSentence('start');
  condition(newSentence[0] == 'cats');
  return newSentence[1];
}));
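The groundtruth model hinges on `mem`: `wordToDistribution` must return the *same* sampled transition vector every time it is called with the same word within one run. A plain-JavaScript analogue of that memoization (ignoring WebPPL's per-inference-run scoping; `mem` here is a local sketch, not WebPPL's built-in) looks like:

```js
// Sketch of mem: cache results keyed by the stringified argument list,
// so repeated calls with the same arguments reuse the first result.
function mem(fn) {
  var cache = {};
  return function () {
    var key = JSON.stringify(Array.prototype.slice.call(arguments));
    if (!(key in cache)) { cache[key] = fn.apply(null, arguments); }
    return cache[key];
  };
}

var calls = 0;
var f = mem(function (word) { calls += 1; return word.length; });
f('dogs'); f('dogs'); f('cats');
// calls === 2: the repeated 'dogs' call hit the cache
```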
generated code
sonnet-46-primer-v3 TV<.5
haiku-45-noprimer-v3 fail
haiku-45-primer-v3 fail
haiku-45-think-noprimer-v3 fail
haiku-45-think-primer-v3 fail
sonnet-46-noprimer-v3 fail
sonnet-46-think-noprimer-v3 TV<.5
sonnet-46-think-primer-v3 TV<.5
var vocab = ['dogs', 'cats', 'chase', 'sleep', 'stop'];
var n = vocab.length;

var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};

var model = function() {
  // Each word (including 'start') has a transition distribution over vocab
  var allWords = ['start', 'dogs', 'cats', 'chase', 'sleep', 'stop'];
  
  // Memoized transition distributions
  var transitionDist = mem(function(word) {
    return dirichletDrift({alpha: ones([n, 1]), concentration: 10});
  });
  
  var transition = function(prevWord) {
    var probs = transitionDist(prevWord);
    return vocab[discrete(probs)];
  };
  
  // Generate a sentence starting from 'start'
  var generateSentence = function(prevWord, acc) {
    var nextWord = transition(prevWord);
    var newAcc = acc.concat([nextWord]);
    if (nextWord === 'stop') {
      return newAcc;
    } else {
      return generateSentence(nextWord, newAcc);
    }
  };
  
  // Observe the sentence ['dogs', 'chase', 'cats', 'stop']
  var observed = ['dogs', 'chase', 'cats', 'stop'];
  var generated = generateSentence('start', []);
  condition(comparray(generated, observed));
  
  // Second sentence: first word is 'cats', return second word
  var secondWord = transition('cats');
  return secondWord;
};

var ANSWER = Infer({method: 'MCMC', burn: 10000, samples: 50000, onlyMAP: false}, model);
Error: cpsInnerStatement
var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};

var vocab = ['dogs', 'cats', 'chase', 'sleep', 'stop'];
var vocabSize = vocab.length;

var model = function() {
  // Create transition distributions for each word
  var transitions = {};
  
  for (var i = 0; i < vocabSize; i++) {
    var word = vocab[i];
    transitions[word] = dirichletDrift({
      alpha: ones([vocabSize, 1]),
      concentration: 10
    });
  }
  
  // Add transition distribution for 'start'
  transitions['start'] = dirichletDrift({
    alpha: ones([vocabSize, 1]),
    concentration: 10
  });
  
  // Function to generate a sentence
  var generateSentence = function() {
    var sentence = [];
    var currentWord = 'start';
    
    while (true) {
      var nextDist = transitions[currentWord];
      var nextWord = categorical({ps: nextDist, vs: vocab});
      sentence.push(nextWord);
      
      if (nextWord === 'stop') {
        break;
      }
      currentWord = nextWord;
    }
    
    return sentence;
  };
  
  // Observe the first sentence
  var sentence1 = generateSentence();
  observe(comparray(sentence1, ['dogs', 'chase', 'cats', 'stop']), true);
  
  // Generate second sentence starting with 'cats'
  var sentence2 = [];
  sentence2.push('cats');
  var currentWord = 'cats';
  
  while (true) {
    var nextDist = transitions[currentWord];
    var nextWord = categorical({ps: nextDist, vs: vocab});
    sentence2.push(nextWord);
    
    if (nextWord === 'stop') {
      break;
    }
    currentWord = nextWord;
  }
  
  // Return the second word of the second sentence
  return sentence2[1];
};

var ANSWER = Infer({method: 'MCMC', burn: 10000, samples: 50000, onlyMAP: false}, model);
Error: cpsInnerStatement
var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};

var vocab = ['dogs', 'cats', 'chase', 'sleep', 'stop'];
var observedSentence = ['dogs', 'chase', 'cats', 'stop'];

var model = function() {
  // Draw transition distributions for each word
  var transitionDists = {};
  map(function(word) {
    transitionDists[word] = dirichletDrift({
      alpha: ones([5, 1]),
      concentration: 10
    });
  }, vocab);
  
  // Also need transition from 'start'
  transitionDists['start'] = dirichletDrift({
    alpha: ones([5, 1]),
    concentration: 10
  });
  
  // Helper function to get next word given previous word
  var transition = function(prevWord) {
    var probs = transitionDists[prevWord];
    var probsArray = [];
    for (var i = 0; i < 5; i++) {
      probsArray.push(T.get(probs, i));
    }
    return categorical({vs: vocab, ps: probsArray});
  };
  
  // Generate a sentence starting from 'start'
  var generateSentence = function() {
    var sentence = [];
    var current = 'start';
    var maxLen = 100;
    var len = 0;
    
    while (len < maxLen) {
      var next = transition(current);
      sentence.push(next);
      if (next === 'stop') {
        return sentence;
      }
      current = next;
      len = len + 1;
    }
    return sentence;
  };
  
  // Observe the first sentence
  var sentence1 = generateSentence();
  condition(comparray(sentence1, observedSentence));
  
  // Generate second sentence starting with 'cats'
  var current = 'cats';
  var probs = transitionDists[current];
  var probsArray = [];
  for (var i = 0; i < 5; i++) {
    probsArray.push(T.get(probs, i));
  }
  var secondWord = categorical({vs: vocab, ps: probsArray});
  
  return secondWord;
};

var ANSWER = Infer({
  method: 'MCMC',
  burn: 10000,
  samples: 50000,
  onlyMAP: false
}, model);
Error: cpsInnerStatement
var vocab = ['dogs', 'cats', 'chase', 'sleep', 'stop'];

var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};

var model = function() {
  // Create transition distributions for each word (including 'start')
  var transitions = {};
  var alpha = [1, 1, 1, 1, 1]; // ones([5,1]) equivalent
  
  // Sample transition distribution for 'start'
  transitions['start'] = sample(Dirichlet({alpha: alpha}));
  
  // Sample transition distribution for each vocabulary word
  vocab.forEach(function(word) {
    transitions[word] = sample(Dirichlet({alpha: alpha}));
  });
  
  // Function to generate a sentence
  var generateSentence = function() {
    var words = [];
    var currentWord = 'start';
    
    while (true) {
      var transitionDist = transitions[currentWord];
      var nextWord = sample(Categorical({ps: transitionDist, vs: vocab}));
      words.push(nextWord);
      
      if (nextWord === 'stop') {
        break;
      }
      currentWord = nextWord;
    }
    return words;
  };
  
  // Generate first sentence and condition on observed sentence
  var sentence1 = generateSentence();
  var observedSentence = ['dogs', 'chase', 'cats', 'stop'];
  condition(comparray(sentence1, observedSentence));
  
  // Generate second sentence starting with 'cats'
  var words2 = ['cats'];
  var currentWord = 'cats';
  
  while (true) {
    var transitionDist = transitions[currentWord];
    var nextWord = sample(Categorical({ps: transitionDist, vs: vocab}));
    words2.push(nextWord);
    
    if (nextWord === 'stop') {
      break;
    }
    currentWord = nextWord;
  }
  
  // Return the second word of the new sentence
  return words2[1];
};

var ANSWER = Infer({method: 'MCMC', burn: 10000, samples: 50000, onlyMAP: false}, model);
Error: cpsInnerStatement
var vocab = ['dogs', 'cats', 'chase', 'sleep', 'stop'];

var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};

var model = function() {
  var transition = mem(function(word) {
    var probs = dirichletDrift({alpha: ones([5,1]), concentration: 10});
    return probs;
  });
  
  var generateSentence = function() {
    var sentence = [];
    var word = 'start';
    var count = 0;
    var maxLength = 20;
    while (count < maxLength) {
      var probs = transition(word);
      var nextWord = sample(Categorical({vs: vocab, ps: probs}));
      sentence.push(nextWord);
      word = nextWord;
      if (nextWord === 'stop') break;
      count += 1;
    }
    return sentence;
  };
  
  var sent1 = generateSentence();
  condition(comparray(sent1, ['dogs', 'chase', 'cats', 'stop']));
  
  var sent2Word2 = sample(Categorical({vs: vocab, ps: transition('cats')}));
  return sent2Word2;
};

var ANSWER = Infer({method: 'MCMC', burn: 10000, samples: 50000, onlyMAP: false}, model);
Error: Parameter "vs" missing from Categorical distribution.
var vocab = ['dogs', 'cats', 'chase', 'sleep', 'stop'];
var vocabSize = vocab.length;

var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};

var ones5 = ones([5, 1]);

var model = function() {
  // Each word (plus 'start') has its own transition distribution
  // Words: start, dogs, cats, chase, sleep, stop
  // We need transition distributions for: start, dogs, cats, chase, sleep
  // (stop is terminal, no transitions from it needed)
  
  var transitionDists = {
    'start': dirichletDrift({alpha: ones5, concentration: 10}),
    'dogs': dirichletDrift({alpha: ones5, concentration: 10}),
    'cats': dirichletDrift({alpha: ones5, concentration: 10}),
    'chase': dirichletDrift({alpha: ones5, concentration: 10}),
    'sleep': dirichletDrift({alpha: ones5, concentration: 10}),
    'stop': dirichletDrift({alpha: ones5, concentration: 10})
  };
  
  var transition = function(prevWord) {
    var probs = transitionDists[prevWord];
    return vocab[categorical({ps: probs})];
  };
  
  // Generate a sentence starting from 'start'
  var generateSentence = function(prevWord, acc) {
    var nextWord = transition(prevWord);
    var newAcc = acc.concat([nextWord]);
    if (nextWord === 'stop') {
      return newAcc;
    } else {
      return generateSentence(nextWord, newAcc);
    }
  };
  
  // Observe sentence 1: ['dogs', 'chase', 'cats', 'stop']
  var observed = ['dogs', 'chase', 'cats', 'stop'];
  var generated = generateSentence('start', []);
  observe(Gaussian({mu: 0, sigma: 0.01}), comparray(generated, observed) ? 0 : -10);
  
  // Second sentence: first word is 'cats', get second word
  var secondWord = transition('cats');
  
  return secondWord;
};

var ANSWER = Infer({method: 'MCMC', burn: 10000, samples: 50000, onlyMAP: false}, model);
var vocab = ['dogs', 'cats', 'chase', 'sleep', 'stop'];

var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};

var model = function() {
  // Each word (including 'start') gets its own transition distribution over vocab
  var transStart = dirichletDrift({alpha: ones([5, 1]), concentration: 10});
  var transDogs   = dirichletDrift({alpha: ones([5, 1]), concentration: 10});
  var transCats   = dirichletDrift({alpha: ones([5, 1]), concentration: 10});
  var transChase  = dirichletDrift({alpha: ones([5, 1]), concentration: 10});
  var transSleep  = dirichletDrift({alpha: ones([5, 1]), concentration: 10});
  // 'stop' is a terminal; no outgoing transitions needed

  var transProbs = function(word) {
    if (word === 'start') return transStart;
    if (word === 'dogs')  return transDogs;
    if (word === 'cats')  return transCats;
    if (word === 'chase') return transChase;
    if (word === 'sleep') return transSleep;
  };

  var transition = function(prevWord) {
    return Categorical({ps: transProbs(prevWord), vs: vocab});
  };

  // Observe sentence ['dogs', 'chase', 'cats', 'stop']
  // start -> dogs
  observe(transition('start'), 'dogs');
  // dogs -> chase
  observe(transition('dogs'), 'chase');
  // chase -> cats
  observe(transition('chase'), 'cats');
  // cats -> stop
  observe(transition('cats'), 'stop');

  // Second sentence: first word is 'cats'; return second word
  return sample(transition('cats'));
};

var ANSWER = Infer({method: 'MCMC', burn: 10000, samples: 50000, onlyMAP: false}, model);
var vocab = ['dogs', 'cats', 'chase', 'sleep', 'stop'];

var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};

var model = function() {
  // Each context word gets its own transition distribution over vocab
  var transitionWeights = mem(function(word) {
    return dirichletDrift({alpha: ones([5, 1]), concentration: 10});
  });

  var transition = function(prevWord) {
    var probs = transitionWeights(prevWord);
    return sample(Categorical({vs: vocab, ps: probs}));
  };

  // Generate a sentence by chaining transitions until 'stop'
  var generateSentence = function(prevWord, acc) {
    var nextWord = transition(prevWord);
    var newAcc = acc.concat([nextWord]);
    return nextWord === 'stop' ? newAcc : generateSentence(nextWord, newAcc);
  };

  // Condition on observed sentence ['dogs', 'chase', 'cats', 'stop']
  var observed = ['dogs', 'chase', 'cats', 'stop'];
  var generated = generateSentence('start', []);
  condition(comparray(generated, observed));

  // New sentence: first word is 'cats'; return the second word
  var secondWord = transition('cats');
  return secondWord;
};

var ANSWER = Infer({method: 'MCMC', burn: 10000, samples: 50000, onlyMAP: false}, model);
groundtruth output
stop  0.3065
dogs  0.2073
sleep 0.1960
chase 0.1503
cats  0.1399
raw JSON
{
  "__kind": "distribution",
  "probs": [
    0.19604,
    0.20734000000000002,
    0.3065,
    0.13985999999999998,
    0.15026
  ],
  "support": [
    "sleep",
    "dogs",
    "stop",
    "cats",
    "chase"
  ]
}
probmods2-observing-sequences/ex2.c · samples · 8 runs · TV=1.00 · TV<1
prompt
system base instructions used by every run
system WebPPL primer appended to system on `*-primer-*` runs only
user message
Define a hidden Markov model for sentence generation: words have parts of speech (N for nouns 'dogs','cats'; V for verbs 'chase','sleep'; 'stop'). Markov transitions are between POS, not words; words are then drawn given POS. Use Dirichlet-drift priors for the per-POS transition distributions. End your program with generateSentence('start') (a single sample - the harness will rerun your program multiple times to estimate the distribution).
groundtruth code
var drawWord = function(pos){
  return (pos=="N") ? uniformDraw(['dogs','cats']) :
         (pos=="V") ? uniformDraw(['chase','sleep']) : 
         'stop';
};
var POS = ["N", "V", "stop"];

var posToDistribution = mem(function(pos) {
  return dirichletDrift({alpha:ones([POS.length,1]), concentration:10});
});

var transition = function(pos) {
  return categorical({ps: posToDistribution(pos), vs: POS});
};

var generateSentence = function(lastPOS) {
  var nextPOS = transition(lastPOS);
  var word = drawWord(nextPOS);
  return (word == 'stop') ? [word] : [word].concat(generateSentence(nextPOS));
};
var ANSWER = (generateSentence("start"));
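The key structural point of this exercise is that the Markov chain runs over POS tags, with words emitted per tag. A deterministic plain-JavaScript sketch of that two-level structure (the transition tables and the scripted `draw` are hypothetical stand-ins, chosen so the walk is reproducible; the real model samples transitions from Dirichlet-drift priors):

```js
// POS-level transitions and per-POS vocabularies (hypothetical tables).
var POS = { start: ['N'], N: ['V', 'stop'], V: ['N', 'stop'] };
var WORDS = { N: ['dogs', 'cats'], V: ['chase', 'sleep'] };

// Generate a sentence: Markov step over POS tags, then emit a word
// given the tag; terminate (emitting 'stop') when 'stop' is drawn.
function generate(pos, draw) {
  var next = draw(POS[pos]);        // transition between POS, not words
  if (next === 'stop') { return ['stop']; }
  var word = draw(WORDS[next]);     // emission: word given POS
  return [word].concat(generate(next, draw));
}

// Scripted 'draw' picks a fixed option index at each call, so the
// walk below is deterministic: start -> N('dogs') -> V('sleep') -> stop.
var script = [0, 0, 0, 1, 1];
var i = 0;
var draw = function (opts) { return opts[script[i++] % opts.length]; };

var sentence = generate('start', draw);  // ['dogs', 'sleep', 'stop']
```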
generated code
sonnet-46-primer-v3 TV<1
haiku-45-noprimer-v3 TV<1
haiku-45-primer-v3 fail
haiku-45-think-noprimer-v3 fail
haiku-45-think-primer-v3 TV<1
sonnet-46-noprimer-v3 TV<1
sonnet-46-think-noprimer-v3 TV<1
sonnet-46-think-primer-v3 TV<1
// HMC/MCMC with Dirichlet-drift priors for HMM sentence generation

// POS tags and vocabulary
var posStates = ['N', 'V', 'stop'];
var words = {
  'N': ['dogs', 'cats'],
  'V': ['chase', 'sleep'],
  'stop': ['stop']
};

// Dirichlet priors for transitions from each POS (including 'start')
// From 'start': can go to N or V (not stop directly)
// From 'N': can go to V or stop
// From 'V': can go to N or stop

// We use mem to get consistent parameters within a run
var transitionDist = mem(function(fromState) {
  if (fromState === 'start') {
    // Can transition to N or V
    var ps = dirichlet(Vector([1, 1]));
    return Categorical({vs: ['N', 'V'], ps: ps});
  } else if (fromState === 'N') {
    // Can transition to V or stop
    var ps = dirichlet(Vector([1, 1]));
    return Categorical({vs: ['V', 'stop'], ps: ps});
  } else if (fromState === 'V') {
    // Can transition to N or stop
    var ps = dirichlet(Vector([1, 1]));
    return Categorical({vs: ['N', 'stop'], ps: ps});
  } else {
    // stop -> stop (absorbing)
    return Categorical({vs: ['stop'], ps: Vector([1])});
  }
});

var emissionDist = mem(function(pos) {
  if (pos === 'stop') {
    return Categorical({vs: ['stop'], ps: Vector([1])});
  }
  var vocabSize = words[pos].length;
  var ps = dirichlet(ones([vocabSize, 1]));
  return Categorical({vs: words[pos], ps: ps});
});

var generateSentence = function(currentState) {
  if (currentState === 'stop') {
    return [];
  }
  var nextState = sample(transitionDist(currentState));
  if (nextState === 'stop') {
    return [];
  }
  var word = sample(emissionDist(nextState));
  return [word].concat(generateSentence(nextState));
};

var ANSWER = generateSentence('start');
// Hidden Markov Model for sentence generation with parts of speech

var posTransitionPrior = function(pos) {
  // Dirichlet-drift priors for transitions from each POS
  var alpha = 1.5; // concentration parameter
  
  if (pos === 'start') {
    return Categorical({ps: [0.7, 0.3], vs: ['N', 'V']});
  } else if (pos === 'N') {
    // From noun, likely to go to verb or stop
    return Categorical({ps: [0.5, 0.4, 0.1], vs: ['V', 'N', 'stop']});
  } else if (pos === 'V') {
    // From verb, likely to go to noun or stop
    return Categorical({ps: [0.5, 0.3, 0.2], vs: ['N', 'V', 'stop']});
  }
};

var wordPrior = function(pos) {
  // Generate words given part of speech
  if (pos === 'N') {
    return Categorical({ps: [0.5, 0.5], vs: ['dogs', 'cats']});
  } else if (pos === 'V') {
    return Categorical({ps: [0.4, 0.4, 0.2], vs: ['chase', 'sleep', 'stop']});
  }
};

var generateSentence = function(currentPos) {
  if (currentPos === 'stop') {
    return [];
  }
  
  // Sample next POS from transition distribution
  var nextPos = sample(posTransitionPrior(currentPos));
  
  if (nextPos === 'stop') {
    return [];
  }
  
  // Sample word given POS
  var word = sample(wordPrior(nextPos));
  
  // Recursively generate rest of sentence
  var restOfSentence = generateSentence(nextPos);
  
  return [word].concat(restOfSentence);
};

var ANSWER = generateSentence('start');
TypeError: Cannot read properties of undefined (reading 'length')
// Hidden Markov Model for sentence generation
// States: parts of speech (start, N, V, stop)
// Observations: words

var words = {
  'N': ['dogs', 'cats', 'mice'],
  'V': ['chase', 'sleep', 'eat']
};

var posStates = ['N', 'V', 'stop'];

// Memoized transition distribution for each POS
var transitionDist = mem(function(pos) {
  var alpha = Vector([1, 1, 1]); // Dirichlet prior for [N, V, stop]
  var probs = dirichlet({alpha: alpha});
  return Categorical({
    vs: posStates,
    ps: [T.get(probs, 0), T.get(probs, 1), T.get(probs, 2)]
  });
});

// Special transition from 'start' state
var startTransitionDist = function() {
  var alpha = Vector([1, 1]); // Dirichlet prior for [N, V]
  var probs = dirichlet({alpha: alpha});
  return Categorical({
    vs: ['N', 'V'],
    ps: [T.get(probs, 0), T.get(probs, 1)]
  });
};

// Memoized word emission distribution for each POS
var emissionDist = mem(function(pos) {
  var wordList = words[pos];
  var n = wordList.length;
  var alpha = Vector(repeat(n, function() { return 1; }));
  var probs = dirichlet({alpha: alpha});
  var probsList = map(function(i) { return T.get(probs, i); }, _.range(0, n));
  return Categorical({
    vs: wordList,
    ps: probsList
  });
});

// Generate a sentence starting from 'start' state
var generateSentence = function(currentPos) {
  if (currentPos === 'stop') {
    return [];
  }
  var nextPos = (currentPos === 'start') ? 
    sample(startTransitionDist()) : 
    sample(transitionDist(currentPos));
  var word = sample(emissionDist(nextPos));
  var restOfSentence = generateSentence(nextPos);
  return [word].concat(restOfSentence);
};

var ANSWER = generateSentence('start');
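The `TypeError: Cannot read properties of undefined (reading 'length')` flagged above comes from the recursion, not the priors: `transitionDist` can return `'stop'`, and `generateSentence` then calls `emissionDist('stop')`, which reads `words['stop'].length` on an undefined entry (`words` only has `'N'` and `'V'`). A minimal guard, reusing this run's own helpers:

```js
// Sketch: return before emitting once the chain transitions to 'stop',
// since 'stop' has no vocabulary in `words`.
var generateSentence = function(currentPos) {
  if (currentPos === 'stop') {
    return [];
  }
  var nextPos = (currentPos === 'start') ?
    sample(startTransitionDist()) :
    sample(transitionDist(currentPos));
  if (nextPos === 'stop') {
    return [];
  }
  var word = sample(emissionDist(nextPos));
  return [word].concat(generateSentence(nextPos));
};
```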
Error: cpsInnerStatement
var generateSentence = function(startState) {
  // Per-state transition probabilities sampled from Dirichlet distributions
  var transitionProbs = function(state) {
    var nextStates = ['N', 'V', 'stop'];
    
    if (state === 'start') {
      // From start, prefer nouns
      var ps = dirichlet({alpha: [2, 1, 0.5]});
      return {ps: ps, vs: nextStates};
    } else if (state === 'N') {
      // From noun, prefer verb or stop
      var ps = dirichlet({alpha: [0.5, 2, 1]});
      return {ps: ps, vs: nextStates};
    } else if (state === 'V') {
      // From verb, prefer noun or stop
      var ps = dirichlet({alpha: [1.5, 0.5, 1]});
      return {ps: ps, vs: nextStates};
    }
  };
  
  // Emit word given POS
  var emitWord = function(pos) {
    if (pos === 'N') {
      return uniformDraw(['dogs', 'cats']);
    } else if (pos === 'V') {
      return uniformDraw(['chase', 'sleep']);
    }
  };
  
  // Generate sentence as sequence of words
  var state = startState;
  var sentence = [];
  var maxSteps = 50;
  var step = 0;
  
  while (state !== 'stop' && step < maxSteps) {
    var transProbs = transitionProbs(state);
    var nextState = categorical(transProbs);
    
    if (nextState === 'stop') {
      state = 'stop';
    } else {
      sentence.push(emitWord(nextState));
      state = nextState;
    }
    step++;
  }
  
  return sentence.join(' ');
};

var ANSWER = generateSentence('start');
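The `Error: cpsInnerStatement` flagged above is the imperative loop, not the model: WebPPL's CPS transform rejects `while`, `break`, `.push`, and reassignment (per the primer's gotchas). A sketch of the same generator in the recursive style the primer requires, reusing this run's `transitionProbs` and `emitWord` (the depth cap and `join(' ')` are dropped for brevity):

```js
// Recursive replacement for the while loop; each call makes one
// transition, emits one word, and recurses until 'stop'.
var generateSentence = function(state) {
  if (state === 'stop') {
    return [];
  }
  var trans = transitionProbs(state);   // {ps, vs} as defined above
  var nextState = categorical(trans);
  return (nextState === 'stop')
    ? []
    : [emitWord(nextState)].concat(generateSentence(nextState));
};
```

Note that `dirichlet` inside `transitionProbs` would also need a `Vector` alpha (e.g. `Vector([2, 1, 0.5])`) rather than a plain array, per the primer.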
var vocab = {
  'N': ['dogs', 'cats'],
  'V': ['chase', 'sleep', 'stop']
};

// Memoized Dirichlet priors for transition probabilities from each state
var getTransitionProbs = mem(function(state) {
  if (state === 'start') {
    // From start: transition to N or V
    return dirichlet({alpha: Vector([1, 1])});
  } else if (state === 'N') {
    // From N: transition to V or end
    return dirichlet({alpha: Vector([1, 1])});
  } else if (state === 'V') {
    // From V: transition to N or end
    return dirichlet({alpha: Vector([1, 1])});
  }
});

// Memoized Dirichlet priors for emission probabilities for each POS
var getEmissionProbs = mem(function(pos) {
  if (pos === 'N') {
    // Nouns: dogs, cats
    return dirichlet({alpha: Vector([1, 1])});
  } else if (pos === 'V') {
    // Verbs: chase, sleep, stop
    return dirichlet({alpha: Vector([1, 1, 1])});
  }
});

var generateSentence = function(state) {
  if (state === 'end') {
    return [];
  } else if (state === 'start') {
    // Sample next POS state from start
    var probs = getTransitionProbs(state);
    var nextState = sample(Categorical({
      vs: ['N', 'V'],
      ps: [T.get(probs, 0), T.get(probs, 1)]
    }));
    return generateSentence(nextState);
  } else {
    // state is 'N' or 'V' - emit a word
    var emissionProbs = getEmissionProbs(state);
    var words = vocab[state];
    var wordProbs = map(function(i) {
      return T.get(emissionProbs, i);
    }, _.range(0, words.length));
    var word = sample(Categorical({vs: words, ps: wordProbs}));
    
    // Sample next POS state
    var transitionProbs = getTransitionProbs(state);
    var nextStates = (state === 'N') ? ['V', 'end'] : ['N', 'end'];
    var nextStateProbs = map(function(ns) {
      return T.get(transitionProbs, _.indexOf(nextStates, ns));
    }, nextStates);
    var nextState = sample(Categorical({vs: nextStates, ps: nextStateProbs}));
    
    return [word].concat(generateSentence(nextState));
  }
};

var ANSWER = generateSentence('start');
// Hidden Markov Model for sentence generation
// POS tags: 'N' (noun), 'V' (verb), 'stop'
// Words: nouns = ['dogs', 'cats'], verbs = ['chase', 'sleep']

// Dirichlet-drift priors for POS transitions
// From 'start': can go to N or V
// From N: can go to V or stop
// From V: can go to N or stop

var model = function() {
  
  // Per-POS transition distributions using Dirichlet priors
  // start -> {N, V}
  var startTrans = dirichlet(Vector([2, 2]));
  
  // N -> {V, stop}
  var nTrans = dirichlet(Vector([3, 1]));
  
  // V -> {N, stop}
  var vTrans = dirichlet(Vector([2, 2]));
  
  // Word emission distributions
  // N -> {dogs, cats}
  var nEmit = dirichlet(Vector([1, 1]));
  
  // V -> {chase, sleep}
  var vEmit = dirichlet(Vector([1, 1]));
  
  // Transition function: given current POS, sample next POS
  var nextPOS = function(pos) {
    if (pos === 'start') {
      var idx = categorical({ps: startTrans, vs: ['N', 'V']});
      return idx;
    } else if (pos === 'N') {
      var idx = categorical({ps: nTrans, vs: ['V', 'stop']});
      return idx;
    } else if (pos === 'V') {
      var idx = categorical({ps: vTrans, vs: ['N', 'stop']});
      return idx;
    } else {
      return 'stop';
    }
  };
  
  // Emission function: given POS, sample a word
  var emitWord = function(pos) {
    if (pos === 'N') {
      return categorical({ps: nEmit, vs: ['dogs', 'cats']});
    } else if (pos === 'V') {
      return categorical({ps: vEmit, vs: ['chase', 'sleep']});
    } else {
      return null;
    }
  };
  
  // Generate a sentence starting from a given state
  var generateSentence = function(pos) {
    var next = nextPOS(pos);
    if (next === 'stop') {
      return [];
    } else {
      var word = emitWord(next);
      return [word].concat(generateSentence(next));
    }
  };
  
  return generateSentence('start');
};

var ANSWER = model();
// Parts of speech and vocabulary
// POS: 'N' (noun), 'V' (verb), 'stop'
// Words: nouns=['dogs','cats'], verbs=['chase','sleep'], terminal='stop'

// ── Dirichlet-drift priors: sample HMM parameters once per program run ──

// Transition distributions  (from each non-terminal POS)
var transFromStart = dirichlet(Vector([1, 1]));        // → [N, V]
var transFromN     = dirichlet(Vector([1, 1, 2]));     // → [N, V, stop]
var transFromV     = dirichlet(Vector([1, 1, 2]));     // → [N, V, stop]

// Emission distributions (word given POS)
var emitN = dirichlet(Vector([1, 1]));   // dogs, cats
var emitV = dirichlet(Vector([1, 1]));   // chase, sleep

// ── Helpers ──

var nextState = function(state) {
  if (state === 'start') {
    return categorical({ ps: transFromStart, vs: ['N', 'V'] });
  } else if (state === 'N') {
    return categorical({ ps: transFromN, vs: ['N', 'V', 'stop'] });
  } else {  // state === 'V'
    return categorical({ ps: transFromV, vs: ['N', 'V', 'stop'] });
  }
};

var emitWord = function(pos) {
  if (pos === 'N') {
    return categorical({ ps: emitN, vs: ['dogs', 'cats'] });
  } else {  // pos === 'V'
    return categorical({ ps: emitV, vs: ['chase', 'sleep'] });
  }
};

// ── Sentence generator (recursive, depth-limited for safety) ──

var generateSentence = function(state, depth) {
  if (depth > 20) return ['stop'];          // safety cap
  var next = nextState(state);
  if (next === 'stop') return ['stop'];
  var word = emitWord(next);
  return [word].concat(generateSentence(next, depth + 1));
};

var ANSWER = generateSentence('start', 0);
// Memoized per-POS transition probability vectors, sampled from Dirichlet-drift priors
var transitionPs = mem(function(pos) {
  if (pos === 'start') {
    return dirichletDrift({alpha: Vector([1, 1]), concentration: 2});
  } else if (pos === 'N') {
    return dirichletDrift({alpha: Vector([1, 1]), concentration: 2});
  } else if (pos === 'V') {
    return dirichletDrift({alpha: Vector([1, 1]), concentration: 2});
  } else {
    return Vector([1]);
  }
});

// Legal next-POS options for each state
var nextStates = {
  'start': ['N', 'V'],
  'N':     ['V', 'stop'],
  'V':     ['N', 'stop']
};

// Sample next POS given current POS
var transition = function(pos) {
  var states = nextStates[pos];
  var ps = transitionPs(pos);
  return sample(Categorical({vs: states, ps: ps}));
};

// Emit a word given POS (deterministic vocab per POS)
var emitWord = function(pos) {
  if (pos === 'N') {
    return flip(0.5) ? 'dogs' : 'cats';
  } else if (pos === 'V') {
    return flip(0.5) ? 'chase' : 'sleep';
  } else {
    return 'stop';
  }
};

// Recursively generate a sentence from the given POS state
var generateSentence = function(pos) {
  var nextPos = transition(pos);
  if (nextPos === 'stop') {
    return ['stop'];
  } else {
    var word = emitWord(nextPos);
    return [word].concat(generateSentence(nextPos));
  }
};

var ANSWER = generateSentence('start');
groundtruth output
[
  [
    "sleep",
    "chase",
    "stop"
  ],
  [
    "stop"
  ],
  [
    "dogs",
    "chase",
    "dogs",
    "sleep",
    "chase",
    "cats",
    "dogs",
    "sleep",
    "stop"
  ],
  [
    "sleep",
    "stop"
  ],
  [
    "sleep",
    "cats",
    "stop"
  ],
  [
    "sleep",
    "cats",
    "sleep",
    "cats",
    "chase",
    "chase",
    "sleep",
    "cats",
    "cats",
    "sleep",
    "dogs",
    "stop"
  ],
  [
    "sleep",
    "cats",
    "stop"
  ],
  [
    "stop"
  ],
  [
    "stop"
  ],
  [
    "chase",
    "cats",
    "stop"
  ],
  [
    "chase",
    "stop"
  ],
  [
    "chase",
    "stop"
  ],
  [
    "stop"
  ],
  [
    "cats",
    "sleep",
    "stop"
  ],
  [
    "dogs",
    "chase",
    "cats",
    "cats",
    "dogs",
    "cats",
    "stop"
  ],
  [
    "cats",
    "dogs",
    "cats",
    "cats",
    "cats",
    "chase",
    "cats",
    "cats",
    "cats",
    "dogs",
    "cats",
    "stop"
  ],
  [
    "sleep",
    "stop"
  ],
  [
    "chase",
    "stop"
  ],
  [
    "stop"
  ],
  [
    "dogs",
    "chase",
    "stop"
  ],
  [
    "cats",
    "chase",
    "stop"
  ],
  [
    "cats",
    "dogs",
    "sleep",
    "cats",
    "sleep",
    "stop"
  ],
  [
    "cats",
    "stop"
  ],
  [
    "chase",
    "chase",
    "stop"
  ],
  [
    "cats",
    "cats",
    "dogs",
    "cats",
    "chase",
    "chase",
    "dogs",
    "dogs",
    "cats",
    "dogs",
    "dogs",
    "chase",
    "sleep",
    "cats",
    "sleep",
    "sleep",
    "sleep",
    "chase",
    "sleep",
    "sleep",
    "dogs",
    "stop"
  ],
  [
    "stop"
  ],
  [
    "dogs",
    "chase",
    "stop"
  ],
  [
    "dogs",
    "sleep",
    "sleep",
    "stop"
  ],
  [
    "chase",
    "cats",
    "dogs",
    "stop"
  ],
  [
    "cats",
    "cats",
    "sleep",
    "cats",
    "stop"
  ],
  [
    "sleep",
    "cats",
    "dogs",
    "cats",
    "chase",
    "stop"
  ],
  [
    "dogs",
    "chase",
    "sleep",
    "stop"
  ],
  [
    "cats",
    "dogs",
    "dogs",
    "dogs",
    "stop"
  ],
  [
    "stop"
  ],
  [
    "stop"
  ],
  [
    "stop"
  ],
  [
    "stop"
  ],
  [
    "stop"
  ],
  [
    "stop"
  ],
  [
    "dogs",
    "dogs",
    "chase",
    "cats",
    "sleep",
    "cats",
    "chase",
    "sleep",
    "stop"
  ],
  [
    "chase",
    "stop"
  ],
  [
    "sleep",
    "dogs",
    "dogs",
    "stop"
  ],
  [
    "stop"
  ],
  [
    "stop"
  ],
  [
    "cats",
    "dogs",
    "stop"
  ],
  [
    "dogs",
    "cats",
    "stop"
  ],
  [
    "chase",
    "stop"
  ],
  [
    "dogs",
    "stop"
  ],
  [
    "stop"
  ],
  [
    "chase",
    "cats",
    "sleep",
    "chase",
    "sleep",
    "stop"
  ],
  [
    "stop"
  ],
  [
    "stop"
  ],
  [
    "sleep",
    "chase",
    "dogs",
    "stop"
  ],
  [
    "chase",
    "cats",
    "cats",
    "stop"
  ],
  [
    "cats",
    "dogs",
    "cats",
    "dogs",
    "dogs",
    "cats",
    "dogs",
    "dogs",
    "cats",
    "sleep",
    "stop"
  ],
  [
    "cats",
    "chase",
    "sleep",
    "chase",
    "chase",
    "stop"
  ],
  [
    "cats",
    "dogs",
    "chase",
    "dogs",
    "stop"
  ],
  [
    "sleep",
    "sleep",
    "cats",
    "chase",
    "chase",
    "chase",
    "sleep",
    "sleep",
    "chase",
    "chase",
    "sleep",
    "chase",
    "sleep",
    "sleep",
    "chase",
    "sleep",
    "chase",
    "chase",
    "sleep",
    "stop"
  ],
  [
    "stop"
  ],
  [
    "sleep",
    "dogs",
    "sleep",
    "sleep",
    "stop"
  ],
  [
    "stop"
  ],
  [
    "stop"
  ],
  [
    "stop"
  ],
  [
    "stop"
  ],
  [
    "dogs",
    "stop"
  ],
  [
    "dogs",
    "dogs",
    "sleep",
    "cats",
    "sleep",
    "chase",
    "sleep",
    "cats",
    "chase",
    "sleep",
    "dogs",
    "sleep",
    "cats",
    "sleep",
    "dogs",
    "sleep",
    "cats",
    "dogs",
    "sleep",
    "sleep",
    "sleep",
    "chase",
    "cats",
    "chase",
    "cats",
    "sleep",
    "chase",
    "sleep",
    "cats",
    "dogs",
    "stop"
  ],
  [
    "chase",
    "chase",
    "sleep",
    "slee

... (2570 more chars truncated)
feedback
not signed
probmods2-observing-sequences/ex2.d distribution 8 runs TV=0.09 TV<.5
prompt
system base instructions used by every run
You are a WebPPL code generator. Given an exercise, produce a single WebPPL program that binds the answer to a top-level variable named `ANSWER`.

Answer format (strict): emit exactly one fenced code block.

```js
<your WebPPL program ending with: var ANSWER = <expression>;>
```

The last statement of your program MUST be `var ANSWER = <expression>;` where `<expression>` is the answer the prompt asks for - typically an `Infer({...}, model)` for a distribution, a numeric/array value, or an object literal `{key: value, ...}` for a record of multiple sub-answers.

Do not write prose, explanations, or multiple code blocks. Do not use `return` at the top level - WebPPL doesn't allow it (return is only for function bodies).
system WebPPL primer appended to system on `*-primer-*` runs only
WebPPL is a probabilistic programming language with JavaScript-like syntax. A few quirks to remember:

Control flow: WebPPL is a functional subset of JavaScript. There are no `for` or `while` loops. Iterate with `map(fn, list)`, `mapData({data: list}, fn)`, `repeat(n, fn)`, or recursion. `_.range(start, end)` produces integer ranges.

Functions: define with `var f = function(args) { ...; return value; }`. The last expression of a function is auto-returned only if it is a bare expression statement; otherwise use explicit `return`. Memoize with `mem(fn)` so repeated calls with the same arguments return the same value within an inference run.

Random primitives - lowercase samples directly, uppercase constructs a Distribution object. They are NOT interchangeable:

  flip(p)            -> boolean             (sample, no `sample()` needed)
  uniform(a, b)      -> number              (sample)
  gaussian(mu, sg)   -> number              (sample)
  beta(a, b)         -> number              (sample)
  dirichlet(alpha)   -> tensor              (sample; alpha must be a Vector)
  randomInteger(n)   -> int 0..n-1          (sample)
  uniformDrift({a, b, width})        -> sample  (drift kernel; do NOT wrap in sample())
  dirichletDrift({alpha, conc.})     -> sample  (drift kernel; do NOT wrap in sample())

Distribution *constructors* (used with `sample(D)`, `observe(D, val)`, or as return value of inference):
  Bernoulli({p})              Beta({a, b})              Gaussian({mu, sigma})
  Uniform({a, b})             Categorical({vs, ps})     Binomial({p, n})
  Dirichlet({alpha})          Multinomial({ps, n})      Poisson({mu})

Common gotchas:
- `Dirichlet({alpha: ...})` requires a Vector, not a JS array. Use `ones([n, 1])` or `Vector([1, 1, ...])`.
- WebPPL only supports `var`, not `let` / `const`.
- WebPPL is *single-assignment*: `var X = ...;` only. You can't declare `var X;` and assign later, and you can't reassign `X = ...` after the declaration. Use ternaries or recursion to express conditional bindings.
- Array methods like `.fill`, `.indexOf`, `.map`, `.forEach`, `.concat` may fail. Prefer `repeat(n, fn)`, `_.indexOf(arr, x)`, `map(fn, arr)`, `mapData({data: arr}, fn)`, and `arr1.concat(arr2)` only at the top of a returned expression.
- Always end top-level statements with `;`. WebPPL inherits JS ASI rules, so `var x = f()
[a, b]` parses as `var x = f()[a, b]` (subscript), not two statements.

Inference: `Infer({method: ..., samples: N, ...}, modelFn)` runs `modelFn` under the chosen method and returns a Distribution over its return value. Methods: `'enumerate'`, `'rejection'`, `'forward'`, `'MCMC'`, `'SMC'`. For MCMC, optional kernels: `kernel: {HMC: {steps, stepSize}}`; the drift kernels above can replace `uniform`/`dirichlet` calls inside the model.

Conditioning: `condition(bool)` zeros out worlds where bool is false. `observe(dist, value)` factors in `dist.score(value)`. `factor(score)` adds `score` to the log-probability directly.

Tensors / utilities: `Vector([a, b, ...])`, `T.get(vec, i)`, `ones([rows, cols])`, `_.range`, `_.flatten`, `_.zipObject`, `_.fromPairs`, `_.includes`, `_.parseInt`, `_.uniq`, `_.merge`. Arrays use `Array.isArray`, `Object.keys` works on plain objects.

Display: there's no `viz`, `print`, `display`, etc. that affects the answer - those are browser-only. The answer is the value of your program's last expression.
user message
Hidden Markov sentence model. Words have parts of speech: N for nouns {'dogs', 'cats'}; V for verbs {'chase', 'sleep'}; plus 'stop'. Each POS has its own transition distribution drawn from dirichletDrift({alpha: ones([3,1]), concentration: 10}), memoized. A sentence is generated by starting at 'start' POS, transitioning to successive POS tags, drawing a word given each POS via drawWord(pos) (which returns uniformDraw([...]) for N or V, else 'stop'), and appending 'stop' when reached. Helper:
var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};
Observe the sentence ['dogs', 'chase', 'cats', 'stop']. Then, in a second sentence whose first word is 'cats', end your program with the Infer(...) (MCMC, burn: 10000, samples: 1000, lag: 10, onlyMAP: false) returning the marginal distribution over the second word.
groundtruth code
var comparray = function(arr1,arr2){
  return (JSON.stringify(arr1) === JSON.stringify(arr2));
};

var drawWord = function(pos){
  return (pos=="N") ? uniformDraw(['dogs','cats']) :
         (pos=="V") ? uniformDraw(['chase','sleep']) : 
         'stop';
};
var POS = ["N", "V", "stop"];
var ANSWER = (Infer({method:'MCMC', burn:10000, samples: 1000, lag:10, onlyMAP: false}, function() {
  var posToDistribution = mem(function(pos) {
    return dirichletDrift({alpha:ones([POS.length,1]), concentration:10});
  });

  var transition = function(pos) {
    return categorical({ps: posToDistribution(pos), vs: POS});
  };

  let generateSentence = function(lastPOS) {
    let nextPOS = transition(lastPOS);
    let word = drawWord(nextPOS);
    return (word == 'stop') ? [word] : [word].concat(generateSentence(nextPOS));
  };
  let obs = ['dogs', 'chase', 'cats', 'stop'];
  condition(comparray(obs, generateSentence('start')));

  let newSentence = generateSentence('start');
  condition(newSentence[0] == 'cats');
  return newSentence[1];
}));
generated code
sonnet-46-primer-v3 · TV<.5
haiku-45-noprimer-v3 · fail
haiku-45-primer-v3 · fail
haiku-45-think-noprimer-v3 · fail
haiku-45-think-primer-v3 · TV<1
sonnet-46-noprimer-v3 · fail
sonnet-46-think-noprimer-v3 · fail
sonnet-46-think-primer-v3 · TV<.5
var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};

var posLabels = ['N', 'V', 'stop'];
var posIndex = {N: 0, V: 1, stop: 2};

var nouns = ['dogs', 'cats'];
var verbs = ['chase', 'sleep'];

var drawWord = function(pos) {
  if (pos === 'N') return uniformDraw(nouns);
  if (pos === 'V') return uniformDraw(verbs);
  return 'stop';
};

var getPosOfWord = function(word) {
  if (word === 'dogs' || word === 'cats') return 'N';
  if (word === 'chase' || word === 'sleep') return 'V';
  return 'stop';
};

var model = function() {
  // Memoized transition distributions for each POS (including 'start')
  var transitionDist = mem(function(pos) {
    return dirichletDrift({alpha: ones([3, 1]), concentration: 10});
  });

  var transition = function(pos) {
    var probs = transitionDist(pos);
    var idx = sample(Categorical({vs: posLabels, ps: [T.get(probs, 0), T.get(probs, 1), T.get(probs, 2)]}));
    return idx;
  };

  // Generate a sentence from a given starting POS
  var generateSentence = function(currentPos, acc) {
    var nextPos = transition(currentPos);
    var word = drawWord(nextPos);
    if (nextPos === 'stop') {
      return acc.concat(['stop']);
    } else {
      return generateSentence(nextPos, acc.concat([word]));
    }
  };

  // Observe sentence 1: ['dogs', 'chase', 'cats', 'stop']
  // We need to condition on this sentence being generated
  // The sentence is: start -> N(dogs) -> V(chase) -> N(cats) -> stop
  // We observe the transitions and words

  // Sentence 1 observations
  var s1 = ['dogs', 'chase', 'cats', 'stop'];
  var s1pos = map(getPosOfWord, s1); // ['N', 'V', 'N', 'stop']

  // Observe transitions for sentence 1
  // start -> N
  var t1_0 = transitionDist('start');
  observe(Categorical({vs: posLabels, ps: [T.get(t1_0, 0), T.get(t1_0, 1), T.get(t1_0, 2)]}), 'N');
  // N -> V
  var t1_1 = transitionDist('N');
  observe(Categorical({vs: posLabels, ps: [T.get(t1_1, 0), T.get(t1_1, 1), T.get(t1_1, 2)]}), 'V');
  // V -> N
  var t1_2 = transitionDist('V');
  observe(Categorical({vs: posLabels, ps: [T.get(t1_2, 0), T.get(t1_2, 1), T.get(t1_2, 2)]}), 'N');
  // N -> stop (second time N is used, but transitionDist('N') is memoized so same dist)
  observe(Categorical({vs: posLabels, ps: [T.get(t1_1, 0), T.get(t1_1, 1), T.get(t1_1, 2)]}), 'stop');

  // Sentence 2: first word is 'cats' (POS = N)
  // start -> N(cats) -> ?
  var t2_0 = transitionDist('start');
  observe(Categorical({vs: posLabels, ps: [T.get(t2_0, 0), T.get(t2_0, 1), T.get(t2_0, 2)]}), 'N');

  // Now sample the second POS (transition from N)
  var secondPos = transition('N');
  var secondWord = drawWord(secondPos);

  return secondWord;
};

var ANSWER = Infer({method: 'MCMC', burn: 10000, samples: 1000, lag: 10, onlyMAP: false}, model);
Error: cpsInnerStatement
var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};

var nouns = ['dogs', 'cats'];
var verbs = ['chase', 'sleep'];

var drawWord = function(pos) {
  if (pos === 'N') {
    return uniformDraw(nouns);
  } else if (pos === 'V') {
    return uniformDraw(verbs);
  } else {
    return 'stop';
  }
};

var getTransitionDist = function(pos) {
  return memoize(function(fromPos) {
    var transitions = ['N', 'V', 'stop'];
    var probs = dirichletDrift({alpha: ones([3, 1]), concentration: 10});
    return categorical({vs: transitions, ps: probs});
  })(pos);
};

var generateSentence = function() {
  var sentence = [];
  var pos = 'start';
  
  while (true) {
    var nextPos;
    if (pos === 'start') {
      nextPos = uniformDraw(['N', 'V']);
    } else {
      nextPos = getTransitionDist(pos);
    }
    
    var word = drawWord(nextPos);
    sentence.push(word);
    
    if (word === 'stop') {
      break;
    }
    
    pos = nextPos;
  }
  
  return sentence;
};

var model = function() {
  var sentence1 = generateSentence();
  observe(comparray(sentence1, ['dogs', 'chase', 'cats', 'stop']), true);
  
  var sentence2 = [];
  sentence2.push('cats');
  
  var pos = 'N';
  var nextPos = getTransitionDist(pos);
  var word = drawWord(nextPos);
  sentence2.push(word);
  
  return word;
};

var ANSWER = Infer({method: 'MCMC', burn: 10000, samples: 1000, lag: 10, onlyMAP: false}, model);
TypeError: dist.score is not a function
var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};

var drawWord = function(pos) {
  if (pos === 'N') {
    return uniformDraw(['dogs', 'cats']);
  } else if (pos === 'V') {
    return uniformDraw(['chase', 'sleep']);
  } else {
    return 'stop';
  }
};

var getTransitionDist = mem(function(pos) {
  var alpha = ones([3, 1]);
  var transitions = dirichletDrift({alpha: alpha, concentration: 10});
  return transitions;
});

var generateSentence = function() {
  var recur = function(currentPos, sentence) {
    var transitionProbs = getTransitionDist(currentPos);
    var nextPos = categorical({
      vs: ['N', 'V', 'stop'],
      ps: [T.get(transitionProbs, 0), T.get(transitionProbs, 1), T.get(transitionProbs, 2)]
    });
    var word = drawWord(nextPos);
    var newSentence = sentence.concat([word]);
    
    if (nextPos === 'stop') {
      return newSentence;
    } else {
      return recur(nextPos, newSentence);
    }
  };
  
  return recur('start', []);
};

var model = function() {
  var sentence1 = generateSentence();
  observe(sentence1, ['dogs', 'chase', 'cats', 'stop']);
  
  var sentence2 = function() {
    var recur = function(currentPos, sentence) {
      var transitionProbs = getTransitionDist(currentPos);
      var nextPos = categorical({
        vs: ['N', 'V', 'stop'],
        ps: [T.get(transitionProbs, 0), T.get(transitionProbs, 1), T.get(transitionProbs, 2)]
      });
      var word = drawWord(nextPos);
      var newSentence = sentence.concat([word]);
      
      if (nextPos === 'stop') {
        return newSentence;
      } else {
        return recur(nextPos, newSentence);
      }
    };
    
    return recur('V', ['cats']);
  };
  
  var sent2 = sentence2();
  return sent2[1];
};

var ANSWER = Infer({
  method: 'MCMC',
  burn: 10000,
  samples: 1000,
  lag: 10,
  onlyMAP: false
}, model);
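The `TypeError: dist.score is not a function` flagged above is the `observe(sentence1, [...])` call: `observe(dist, val)` scores `val` under a Distribution object, so passing the sampled array as the first argument fails. For an exact-match constraint on a sampled sentence, `condition` with the provided `comparray` helper is the idiom the groundtruth uses; a minimal sketch of the fix inside `model`:

```js
// condition() takes a boolean; comparray tests exact array equality,
// so this zeros out all worlds where the generated sentence differs.
var sentence1 = generateSentence();
condition(comparray(sentence1, ['dogs', 'chase', 'cats', 'stop']));
```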
Error: cpsInnerStatement
var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};

var drawWord = function(pos) {
  if (pos === 'N') {
    return uniformDraw(['dogs', 'cats']);
  } else if (pos === 'V') {
    return uniformDraw(['chase', 'sleep']);
  } else {
    return 'stop';
  }
};

var transitionDist = memoize(function(fromPOS) {
  var alpha = [1, 1, 1];
  return dirichlet({alpha: alpha});
});

var generateSentence = function() {
  var sentence = [];
  var currentPOS = 'start';
  
  while (true) {
    var transDist = transitionDist(currentPOS);
    var nextPOS = categorical({ps: transDist, vs: ['N', 'V', 'stop']});
    
    var word = drawWord(nextPOS);
    sentence.push(word);
    
    if (nextPOS === 'stop') break;
    currentPOS = nextPOS;
  }
  
  return sentence;
};

var model = function() {
  // Generate and observe the first sentence
  var sentence1 = generateSentence();
  var observed = ['dogs', 'chase', 'cats', 'stop'];
  observe(comparray(sentence1, observed), true);
  
  // Generate a second sentence
  var sentence2 = generateSentence();
  
  // Condition on first word being 'cats'
  condition(sentence2[0] === 'cats');
  
  // Return the second word
  return sentence2[1];
};

var ANSWER = Infer({
  method: 'MCMC',
  burn: 10000,
  samples: 1000,
  lag: 10,
  onlyMAP: false
}, model);
var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};

// Memoized function to get transition distribution for a state
var getTransitionDist = mem(function(state) {
  var alpha = ones([3, 1]); // For N, V, stop
  var transitions = dirichletDrift({alpha: alpha, concentration: 10});
  return transitions; // Returns a vector of probabilities [p_N, p_V, p_stop]
});

// Draw a word given a POS
var drawWord = function(pos) {
  if (pos === 'N') {
    return uniformDraw(['dogs', 'cats']);
  } else if (pos === 'V') {
    return uniformDraw(['chase', 'sleep']);
  } else {
    return 'stop';
  }
};

// Index to POS mapping
var indexToPos = function(idx) {
  if (idx === 0) return 'N';
  if (idx === 1) return 'V';
  if (idx === 2) return 'stop';
};

// Generate a sentence
var generateSentence = function() {
  var sentence = [];
  
  var step = function(currentPos) {
    // Get transition probabilities
    var transProbs = getTransitionDist(currentPos);
    var idx = sample(Categorical({
      vs: [0, 1, 2],
      ps: [T.get(transProbs, 0), T.get(transProbs, 1), T.get(transProbs, 2)]
    }));
    var nextPos = indexToPos(idx);
    var word = drawWord(nextPos);
    sentence.push(word);
    
    if (nextPos === 'stop') {
      return sentence;
    } else {
      return step(nextPos);
    }
  };
  
  return step('start');
};

// Model
var model = function() {
  // Generate first sentence and condition on observing ['dogs', 'chase', 'cats', 'stop']
  var sent1 = generateSentence();
  condition(comparray(sent1, ['dogs', 'chase', 'cats', 'stop']));
  
  // Generate second sentence and condition on first word being 'cats'
  var sent2 = generateSentence();
  condition(sent2[0] === 'cats');
  
  // Return the second word of the second sentence
  return sent2[1];
};

var ANSWER = Infer({
  method: 'MCMC',
  burn: 10000,
  samples: 1000,
  lag: 10,
  onlyMAP: false
}, model);
Error: sample() expected a distribution but received "{"dims":[3,1],"length":3,"data":{"0":0.11281629838309856,"1":0.692513462395544,"2":0.19467023922135746}}".
var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};

var posLabels = ['N', 'V', 'stop'];
var posIndex = {N: 0, V: 1, stop: 2};

var nouns = ['dogs', 'cats'];
var verbs = ['chase', 'sleep'];

var drawWord = function(pos) {
  if (pos === 'N') return uniformDraw(nouns);
  if (pos === 'V') return uniformDraw(verbs);
  return 'stop';
};

var wordToPos = function(word) {
  if (nouns.indexOf(word) >= 0) return 'N';
  if (verbs.indexOf(word) >= 0) return 'V';
  return 'stop';
};

var model = function() {
  // Memoized transition distributions for each POS (including 'start')
  var transitionDist = mem(function(pos) {
    return dirichletDrift({alpha: ones([3, 1]), concentration: 10});
  });

  var transition = function(pos) {
    var probs = sample(transitionDist(pos));
    return posLabels[discrete(probs)];
  };

  // Generate a sentence starting from 'start' POS
  var generateSentence = function(currentPos, words) {
    var nextPos = transition(currentPos);
    var word = drawWord(nextPos);
    if (word === 'stop') {
      return words.concat(['stop']);
    } else {
      return generateSentence(nextPos, words.concat([word]));
    }
  };

  // Observe sentence 1: ['dogs', 'chase', 'cats', 'stop']
  var obs1 = ['dogs', 'chase', 'cats', 'stop'];
  
  // Score sentence 1 by conditioning on it
  // We need to factor in the probability of generating this exact sentence
  // Sentence: start -> N(dogs) -> V(chase) -> N(cats) -> stop
  
  // Transition from 'start' to N
  var t1 = transitionDist('start');
  var p1 = sample(t1);
  factor(Math.log(p1.data[posIndex['N']])); // transition to N
  // word 'dogs' from N: prob 1/2
  factor(Math.log(0.5));
  
  // Transition from N to V
  var t2 = transitionDist('N');
  var p2 = sample(t2);
  factor(Math.log(p2.data[posIndex['V']])); // transition to V
  // word 'chase' from V: prob 1/2
  factor(Math.log(0.5));
  
  // Transition from V to N
  var t3 = transitionDist('V');
  var p3 = sample(t3);
  factor(Math.log(p3.data[posIndex['N']])); // transition to N
  // word 'cats' from N: prob 1/2
  factor(Math.log(0.5));
  
  // Transition from N to stop
  var t4 = transitionDist('N');
  // t4 is same as t2 (memoized on 'N'), so p2 is already sampled
  factor(Math.log(p2.data[posIndex['stop']])); // transition to stop
  // word 'stop' from stop: prob 1
  
  // Now generate second sentence, first word is 'cats'
  // First word is 'cats' (N), so transition from 'start' must go to N
  // Use the same transition distributions (memoized)
  
  // Transition from 'start' to N for sentence 2
  factor(Math.log(p1.data[posIndex['N']])); // same dist as sentence 1 start
  // word 'cats' from N: prob 1/2
  factor(Math.log(0.5));
  
  // Now sample the second word
  // Transition from N
  var nextPos2 = posLabels[discrete(p2)]; // p2 is transitionDist('N')
  var secondWord = drawWord(nextPos2);
  
  return secondWord;
};

var ANSWER = Infer({method: 'MCMC', burn: 10000, samples: 1000, lag: 10, onlyMAP: false}, model);
ReferenceError: DirichletDrift is not defined
var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};

var POStags = ['N', 'V', 'stop'];

var transitionDist = mem(function(pos) {
  return sample(DirichletDrift({alpha: ones([3, 1]), concentration: 10}));
});

var sampleNextPos = function(currentPos) {
  var probs = T.toScalars(transitionDist(currentPos));
  return categorical({ps: probs, vs: POStags});
};

var drawWord = function(pos) {
  if (pos === 'N') return uniformDraw(['dogs', 'cats']);
  if (pos === 'V') return uniformDraw(['chase', 'sleep']);
  return 'stop';
};

var generateSentence = function(currentPos, words) {
  var nextPos = sampleNextPos(currentPos);
  var word = drawWord(nextPos);
  if (nextPos === 'stop') {
    return words.concat(['stop']);
  }
  return generateSentence(nextPos, words.concat([word]));
};

var model = function() {
  // Observe first sentence: ['dogs', 'chase', 'cats', 'stop']
  // This means: start->N(dogs)->V(chase)->N(cats)->stop
  var sent1 = generateSentence('start', []);
  condition(comparray(sent1, ['dogs', 'chase', 'cats', 'stop']));

  // Second sentence: first word is 'cats' (noun => POS = N)
  // Start transition from 'start' must lead to N
  var pos1 = sampleNextPos('start');
  condition(pos1 === 'N');
  var word1 = drawWord(pos1);
  condition(word1 === 'cats');

  // Second word
  var pos2 = sampleNextPos(pos1);
  var word2 = drawWord(pos2);

  return word2;
};

var ANSWER = Infer({method: 'MCMC', burn: 10000, samples: 1000, lag: 10, onlyMAP: false}, model);
var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};

var posOptions = ['N', 'V', 'stop'];

var transitionDist = mem(function(pos) {
  return dirichletDrift({alpha: ones([3, 1]), concentration: 10});
});

var drawWord = function(pos) {
  if (pos === 'N') {
    return uniformDraw(['dogs', 'cats']);
  } else if (pos === 'V') {
    return uniformDraw(['chase', 'sleep']);
  } else {
    return 'stop';
  }
};

var nextPos = function(pos) {
  var probs = transitionDist(pos);
  return posOptions[discrete(probs)];
};

var generateSentence = function(curPos, acc) {
  var nPos = nextPos(curPos);
  var word = drawWord(nPos);
  if (nPos === 'stop') {
    return acc.concat(['stop']);
  } else {
    return generateSentence(nPos, acc.concat([word]));
  }
};

var model = function() {
  // Observe first sentence
  var sent1 = generateSentence('start', []);
  condition(comparray(sent1, ['dogs', 'chase', 'cats', 'stop']));

  // Second sentence: first word is 'cats'
  var pos1 = nextPos('start');
  condition(pos1 === 'N');
  var w1 = drawWord(pos1);
  condition(w1 === 'cats');

  // Second word of second sentence
  var pos2 = nextPos(pos1);
  var w2 = drawWord(pos2);

  return w2;
};

var ANSWER = Infer({
  method: 'MCMC',
  burn: 10000,
  samples: 1000,
  lag: 10,
  onlyMAP: false
}, model);
groundtruth output
stop   0.4770
sleep  0.1790
chase  0.1590
cats   0.1100
dogs   0.0750
raw JSON
{
  "__kind": "distribution",
  "probs": [
    0.159,
    0.075,
    0.11000000000000001,
    0.179,
    0.477
  ],
  "support": [
    "chase",
    "dogs",
    "cats",
    "sleep",
    "stop"
  ]
}
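The TV scores used to bucket runs on this page are total variation distances between a run's output distribution and the groundtruth. As a plain-JavaScript sketch (not WebPPL), assuming both distributions come as `{support, probs}` records like the raw JSON above:

```js
// Total variation distance between two discrete distributions,
// each given as {support: [...], probs: [...]}.
// TV = (1/2) * sum over all outcomes of |p(x) - q(x)|.
function tvDistance(p, q) {
  var lookup = function(dist, x) {
    var i = dist.support.indexOf(x);
    return i >= 0 ? dist.probs[i] : 0;
  };
  var outcomes = Array.from(new Set(p.support.concat(q.support)));
  var total = outcomes.reduce(function(acc, x) {
    return acc + Math.abs(lookup(p, x) - lookup(q, x));
  }, 0);
  return total / 2;
}

var groundtruth = {support: ['a', 'b'], probs: [0.7, 0.3]};
var identical = {support: ['b', 'a'], probs: [0.3, 0.7]};
console.log(tvDistance(groundtruth, identical)); // 0 (support order is irrelevant)
```

Support order is irrelevant, so a run can list outcomes in any order and still score TV=0.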
feedback
not signed
probmods2-observing-sequences/ex3.a distribution 8 runs TV=0.00 TV=0
prompt
system base instructions used by every run
You are a WebPPL code generator. Given an exercise, produce a single WebPPL program that binds the answer to a top-level variable named `ANSWER`.

Answer format (strict): emit exactly one fenced code block.

```js
<your WebPPL program ending with: var ANSWER = <expression>;>
```

The last statement of your program MUST be `var ANSWER = <expression>;` where `<expression>` is the answer the prompt asks for - typically an `Infer({...}, model)` for a distribution, a numeric/array value, or an object literal `{key: value, ...}` for a record of multiple sub-answers.

Do not write prose, explanations, or multiple code blocks. Do not use `return` at the top level - WebPPL doesn't allow it (return is only for function bodies).
system WebPPL primer appended to system on `*-primer-*` runs only
WebPPL is a probabilistic programming language with JavaScript-like syntax. A few quirks to remember:

Control flow: WebPPL is a functional subset of JavaScript. There are no `for` or `while` loops. Iterate with `map(fn, list)`, `mapData({data: list}, fn)`, `repeat(n, fn)`, or recursion. `_.range(start, end)` produces integer ranges.

Functions: define with `var f = function(args) { ...; return value; }`. The last expression of a function is auto-returned only if it is a bare expression statement; otherwise use explicit `return`. Memoize with `mem(fn)` so repeated calls with the same arguments return the same value within an inference run.

Random primitives - lowercase samples directly, uppercase constructs a Distribution object. They are NOT interchangeable:

  flip(p)            -> boolean             (sample, no `sample()` needed)
  uniform(a, b)      -> number              (sample)
  gaussian(mu, sg)   -> number              (sample)
  beta(a, b)         -> number              (sample)
  dirichlet(alpha)   -> tensor              (sample; alpha must be a Vector)
  randomInteger(n)   -> int 0..n-1          (sample)
  uniformDrift({a, b, width})        -> sample  (drift kernel; do NOT wrap in sample())
  dirichletDrift({alpha, conc.})     -> sample  (drift kernel; do NOT wrap in sample())

Distribution *constructors* (used with `sample(D)`, `observe(D, val)`, or as return value of inference):
  Bernoulli({p})              Beta({a, b})              Gaussian({mu, sigma})
  Uniform({a, b})             Categorical({vs, ps})     Binomial({p, n})
  Dirichlet({alpha})          Multinomial({ps, n})      Poisson({mu})

Common gotchas:
- `Dirichlet({alpha: ...})` requires a Vector, not a JS array. Use `ones([n, 1])` or `Vector([1, 1, ...])`.
- WebPPL only supports `var`, not `let` / `const`.
- WebPPL is *single-assignment*: `var X = ...;` only. You can't declare `var X;` and assign later, and you can't reassign `X = ...` after the declaration. Use ternaries or recursion to express conditional bindings.
- Array methods like `.fill`, `.indexOf`, `.map`, `.forEach`, `.concat` may fail. Prefer `repeat(n, fn)`, `_.indexOf(arr, x)`, `map(fn, arr)`, `mapData({data: arr}, fn)`, and `arr1.concat(arr2)` only at the top of a returned expression.
- Always end top-level statements with `;`. WebPPL inherits JS ASI rules, so `var x = f()
[a, b]` parses as `var x = f()[a, b]` (subscript), not two statements.

Inference: `Infer({method: ..., samples: N, ...}, modelFn)` runs `modelFn` under the chosen method and returns a Distribution over its return value. Methods: `'enumerate'`, `'rejection'`, `'forward'`, `'MCMC'`, `'SMC'`. For MCMC, optional kernels: `kernel: {HMC: {steps, stepSize}}`; the drift kernels above can replace `uniform`/`dirichlet` calls inside the model.

Conditioning: `condition(bool)` zeros out worlds where bool is false. `observe(dist, value)` factors in `dist.score(value)`. `factor(score)` adds `score` to the log-probability directly.

Tensors / utilities: `Vector([a, b, ...])`, `T.get(vec, i)`, `ones([rows, cols])`, `_.range`, `_.flatten`, `_.zipObject`, `_.fromPairs`, `_.includes`, `_.parseInt`, `_.uniq`, `_.merge`. Arrays use `Array.isArray`, `Object.keys` works on plain objects.

Display: there's no `viz`, `print`, `display`, etc. that affects the answer - those are browser-only. The answer is the value of your program's last expression.
user message
Hidden Markov sentence model with extended vocabulary. POS tags: N for nouns {'dog', 'cat'}, V for verbs {'chases', 'sleeps'}, D for determiners {'the', 'a'}, A for adverbs ({'dilligently'}), plus 'stop'. Per-POS transitions drawn from dirichletDrift({alpha: ones([5,1]), concentration: 10}), memoized. drawWord(pos) returns uniformDraw(...) for the respective list, or 'dilligently' for A, or 'stop' for stop. Helper:
var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};
Use factor(comparray(['the', 'dog', 'chases', 'a', 'cat', 'stop'], generateSentence('start')) * 5) to softly condition on the observed sentence. Then sample five new sentences from generateSentence('start'). End your program with Infer({method: 'MCMC', burn: 10000, samples: 1000, lag: 10, onlyMAP: true}, ...) returning a record {sent1, sent2, sent3, sent4, sent5}.
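The `comparray` helper compares nested arrays by their JSON serialization, so element order and nesting both matter. The same semantics hold in plain JavaScript:

```js
// comparray as defined in the exercise: structural equality via JSON.
var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};

// Equal contents in the same order compare true.
console.log(comparray(['dogs', 'chase', 'cats', 'stop'],
                      ['dogs', 'chase', 'cats', 'stop'])); // true
// Order matters: these are different sentences.
console.log(comparray(['dogs', 'chase'], ['chase', 'dogs'])); // false
// Nested structure is compared too.
console.log(comparray([['the', 'dog'], 'stop'], [['the', 'dog'], 'stop'])); // true
```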
groundtruth code
var comparray = function(arr1,arr2){
  return (JSON.stringify(arr1) === JSON.stringify(arr2));
};

var drawWord = function(pos){
  return (pos=="N") ? uniformDraw(['dog','cat']) :
         (pos=="V") ? uniformDraw(['chases','sleeps']) : 
         (pos=="D") ? uniformDraw(['the','a']) :
         (pos=="A") ? 'dilligently' : 
         'stop';
};
var POS = ["N", "V", "D", "A", "stop"];
var ANSWER = (Infer({method:'MCMC', burn:10000, samples: 1000, lag:10, onlyMAP: true}, function() {
  var posToDistribution = mem(function(pos) {
    return dirichletDrift({alpha:ones([POS.length,1]), concentration:10});
  });

  var transition = function(pos) {
    return categorical({ps: posToDistribution(pos), vs: POS});
  };

  var generateSentence = function(lastPOS) {
    var nextPOS = transition(lastPOS);
    var word = drawWord(nextPOS);
    return (word == 'stop') ? [word] : [word].concat(generateSentence(nextPOS));
  };
  var obs = ['the', 'dog', 'chases', 'a', 'cat', 'stop'];

  factor(comparray(obs, generateSentence('start'))*5);

  var sent1 = generateSentence('start');
  var sent2 = generateSentence('start');
  var sent3 = generateSentence('start');
  var sent4 = generateSentence('start');
  var sent5 = generateSentence('start');
  return {sent1: sent1, sent2: sent2, sent3: sent3, sent4: sent4, sent5: sent5};
}));
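`factor(comparray(obs, ...) * 5)` adds 5 to the log-probability when the generated sentence matches and 0 otherwise (the boolean coerces to 1 or 0), so matching worlds are upweighted by a factor of e^5 rather than required outright. A plain-JavaScript arithmetic check of the implied relative weights:

```js
// Soft conditioning via factor(match * 5): a matching world gets
// log-weight 5, a non-matching world gets log-weight 0.
var matchWeight = Math.exp(5);   // ~148.41
var missWeight = Math.exp(0);    // 1

// Per unit of prior mass, a match is ~148x more likely than a miss.
var pMatch = matchWeight / (matchWeight + missWeight);
console.log(matchWeight / missWeight); // ~148.41
console.log(pMatch); // ~0.9933
```

This is why the exercise's MAP samples mostly, but not always, reflect the observed sentence's transition statistics.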
generated code
sonnet-46-primer-v3 TV=0 · haiku-45-noprimer-v3 TV=0 · haiku-45-primer-v3 TV=1 · haiku-45-think-noprimer-v3 fail · haiku-45-think-primer-v3 fail · sonnet-46-noprimer-v3 fail · sonnet-46-think-noprimer-v3 fail · sonnet-46-think-primer-v3 TV=0
var tags = ['N', 'V', 'D', 'A', 'stop'];

var tagIndex = function(tag) {
  return _.indexOf(tags, tag);
};

var transitionDist = mem(function(fromTag) {
  return dirichletDrift({alpha: ones([5, 1]), concentration: 10});
});

var nextTag = function(fromTag) {
  var probs = transitionDist(fromTag);
  var idx = sample(Categorical({vs: tags, ps: [T.get(probs, 0), T.get(probs, 1), T.get(probs, 2), T.get(probs, 3), T.get(probs, 4)]}));
  return idx;
};

var drawWord = function(pos) {
  return pos === 'N' ? uniformDraw(['dog', 'cat']) :
         pos === 'V' ? uniformDraw(['chases', 'sleeps']) :
         pos === 'D' ? uniformDraw(['the', 'a']) :
         pos === 'A' ? 'dilligently' :
         'stop';
};

var generateSentence = function(currentTag) {
  var next = nextTag(currentTag);
  return next === 'stop' ? ['stop'] : [drawWord(next)].concat(generateSentence(next));
};

var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};

var model = function() {
  factor(comparray(['the', 'dog', 'chases', 'a', 'cat', 'stop'], generateSentence('start')) * 5);
  
  var sent1 = generateSentence('start');
  var sent2 = generateSentence('start');
  var sent3 = generateSentence('start');
  var sent4 = generateSentence('start');
  var sent5 = generateSentence('start');
  
  return {sent1: sent1, sent2: sent2, sent3: sent3, sent4: sent4, sent5: sent5};
};

var ANSWER = Infer({method: 'MCMC', burn: 10000, samples: 1000, lag: 10, onlyMAP: true}, model);
var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};

var posTagList = ['N', 'V', 'D', 'A', 'stop'];

var getTransitionDist = mem(function(pos) {
  return dirichletDrift({alpha: ones([5, 1]), concentration: 10});
});

var drawWord = function(pos) {
  if (pos === 'N') {
    return uniformDraw(['dog', 'cat']);
  } else if (pos === 'V') {
    return uniformDraw(['chases', 'sleeps']);
  } else if (pos === 'D') {
    return uniformDraw(['the', 'a']);
  } else if (pos === 'A') {
    return 'dilligently';
  } else if (pos === 'stop') {
    return 'stop';
  }
};

var generateSentence = function(currentPos) {
  if (currentPos === 'start') {
    var nextPos = categorical({ps: getTransitionDist('start'), vs: posTagList});
    return generateSentence(nextPos);
  } else if (currentPos === 'stop') {
    return ['stop'];
  } else {
    var word = drawWord(currentPos);
    var nextPos = categorical({ps: getTransitionDist(currentPos), vs: posTagList});
    var restSentence = generateSentence(nextPos);
    return [word].concat(restSentence);
  }
};

var model = function() {
  var sentence = generateSentence('start');
  var observedSentence = ['the', 'dog', 'chases', 'a', 'cat', 'stop'];
  factor(comparray(observedSentence, sentence) ? 5 : 0);
  
  var sent1 = generateSentence('start');
  var sent2 = generateSentence('start');
  var sent3 = generateSentence('start');
  var sent4 = generateSentence('start');
  var sent5 = generateSentence('start');
  
  return {sent1: sent1, sent2: sent2, sent3: sent3, sent4: sent4, sent5: sent5};
};

var ANSWER = Infer({method: 'MCMC', burn: 10000, samples: 1000, lag: 10, onlyMAP: true}, model);
var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};

var nouns = ['dog', 'cat'];
var verbs = ['chases', 'sleeps'];
var determiners = ['the', 'a'];
var adverbs = ['dilligently'];
var tags = ['N', 'V', 'D', 'A', 'stop'];

var drawWord = function(pos) {
  if (pos === 'N') {
    return uniformDraw(nouns);
  } else if (pos === 'V') {
    return uniformDraw(verbs);
  } else if (pos === 'D') {
    return uniformDraw(determiners);
  } else if (pos === 'A') {
    return 'dilligently';
  } else if (pos === 'stop') {
    return 'stop';
  }
};

var transitionDist = mem(function(pos) {
  var alpha = ones([5, 1]);
  var probs = dirichletDrift({alpha: alpha, concentration: 10});
  return Categorical({vs: tags, ps: probs});
});

var generateSentence = function(currentPos) {
  if (currentPos === 'stop') {
    return [];
  }
  
  var nextPos = sample(transitionDist(currentPos));
  var word = drawWord(nextPos);
  
  if (nextPos === 'stop') {
    return ['stop'];
  } else {
    var rest = generateSentence(nextPos);
    return [word].concat(rest);
  }
};

var model = function() {
  var sent1 = generateSentence('start');
  factor(comparray(['the', 'dog', 'chases', 'a', 'cat', 'stop'], sent1) ? 5 : 0);
  
  var sent2 = generateSentence('start');
  var sent3 = generateSentence('start');
  var sent4 = generateSentence('start');
  var sent5 = generateSentence('start');
  
  return {
    sent1: sent1,
    sent2: sent2,
    sent3: sent3,
    sent4: sent4,
    sent5: sent5
  };
};

var ANSWER = Infer({method: 'MCMC', burn: 10000, samples: 1000, lag: 10, onlyMAP: true}, model);
Error: cpsInnerStatement
var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};

var indexToPos = ['N', 'V', 'D', 'A', 'stop'];

// Memoized transition distribution
var getTransitions = mem(function(currentPos) {
  var transitions = dirichletDrift({
    alpha: [1, 1, 1, 1, 1],
    concentration: 10
  });
  return transitions;
});

// Draw a word given a POS tag
var drawWord = function(pos) {
  if (pos === 'N') {
    return uniformDraw(['dog', 'cat']);
  } else if (pos === 'V') {
    return uniformDraw(['chases', 'sleeps']);
  } else if (pos === 'D') {
    return uniformDraw(['the', 'a']);
  } else if (pos === 'A') {
    return 'dilligently';
  } else if (pos === 'stop') {
    return 'stop';
  }
};

// Generate a sentence
var generateSentence = function(startPos) {
  var sentence = [];
  var currentPos = startPos;
  
  while (true) {
    // Get transition probabilities for current POS
    var transitions = getTransitions(currentPos);
    
    // Sample next POS tag
    var nextPos = categorical({ps: transitions, vs: indexToPos});
    
    // Draw a word for this POS
    var word = drawWord(nextPos);
    sentence.push(word);
    
    // Stop if we've generated the stop token
    if (nextPos === 'stop') {
      break;
    }
    
    currentPos = nextPos;
  }
  
  return sentence;
};

// Define the model
var model = function() {
  var sent1 = generateSentence('start');
  var sent2 = generateSentence('start');
  var sent3 = generateSentence('start');
  var sent4 = generateSentence('start');
  var sent5 = generateSentence('start');
  
  // Soft condition on observed sentence
  factor(comparray(['the', 'dog', 'chases', 'a', 'cat', 'stop'], sent1) * 5);
  
  return {sent1: sent1, sent2: sent2, sent3: sent3, sent4: sent4, sent5: sent5};
};

var ANSWER = Infer({method: 'MCMC', burn: 10000, samples: 1000, lag: 10, onlyMAP: true}, model);
TypeError: T.toArray is not a function
var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};

// Memoized transition function for each POS tag
// Returns a distribution over [N, V, D, A, stop]
var getTransition = mem(function(pos) {
  return dirichletDrift({alpha: ones([5, 1]), concentration: 10});
});

// Draw a word given a POS tag
var drawWord = function(pos) {
  if (pos === 'N') {
    return uniformDraw(['dog', 'cat']);
  } else if (pos === 'V') {
    return uniformDraw(['chases', 'sleeps']);
  } else if (pos === 'D') {
    return uniformDraw(['the', 'a']);
  } else if (pos === 'A') {
    return 'dilligently';
  } else if (pos === 'stop') {
    return 'stop';
  }
};

// Map indices to POS tags
var indexToPOS = ['N', 'V', 'D', 'A', 'stop'];

// Generate a sentence recursively
var generateSentence = function(state) {
  if (state === 'start') {
    // From start, sample first POS tag
    var transitions = getTransition('start');
    var posIndex = sample(Categorical({vs: _.range(0, 5), ps: T.toArray(transitions)}));
    var pos = indexToPOS[posIndex];
    return generateSentence(pos);
  } else if (state === 'stop') {
    return ['stop'];
  } else {
    // Sample next POS tag
    var transitions = getTransition(state);
    var nextIndex = sample(Categorical({vs: _.range(0, 5), ps: T.toArray(transitions)}));
    var nextPOS = indexToPOS[nextIndex];
    
    // Draw word for current POS
    var word = drawWord(state);
    
    // Recursively generate rest of sentence
    var rest = generateSentence(nextPOS);
    return [word].concat(rest);
  }
};

// Model function
var model = function() {
  // Generate a sentence and softly condition on observed sentence
  var sent = generateSentence('start');
  var observed = ['the', 'dog', 'chases', 'a', 'cat', 'stop'];
  factor(comparray(observed, sent) * 5);
  
  // Sample 5 new sentences
  var sent1 = generateSentence('start');
  var sent2 = generateSentence('start');
  var sent3 = generateSentence('start');
  var sent4 = generateSentence('start');
  var sent5 = generateSentence('start');
  
  return {sent1: sent1, sent2: sent2, sent3: sent3, sent4: sent4, sent5: sent5};
};

var ANSWER = Infer({method: 'MCMC', burn: 10000, samples: 1000, lag: 10, onlyMAP: true}, model);
Error: Parameter "alpha" should be of type "vector (0, Infinity)".
var ones = function(dims) {
  return repeat(dims[0], function() { return 1; });
};

var posOrder = ['start', 'N', 'V', 'D', 'A', 'stop'];
var posIndex = {start: 0, N: 1, V: 2, D: 3, A: 4, stop: 5};

var nounWords = ['dog', 'cat'];
var verbWords = ['chases', 'sleeps'];
var detWords = ['the', 'a'];
var advWords = ['dilligently'];

var getTransition = mem(function(pos) {
  return dirichletDrift({alpha: ones([5, 1]), concentration: 10});
});

var nextPOS = function(pos) {
  var probs = sample(getTransition(pos));
  var tags = ['N', 'V', 'D', 'A', 'stop'];
  return tags[discrete(probs)];
};

var drawWord = function(pos) {
  if (pos === 'N') {
    return uniformDraw(nounWords);
  } else if (pos === 'V') {
    return uniformDraw(verbWords);
  } else if (pos === 'D') {
    return uniformDraw(detWords);
  } else if (pos === 'A') {
    return 'dilligently';
  } else if (pos === 'stop') {
    return 'stop';
  }
};

var generateSentence = function(pos) {
  var next = nextPOS(pos);
  if (next === 'stop') {
    return ['stop'];
  } else {
    var word = drawWord(next);
    return [word].concat(generateSentence(next));
  }
};

var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};

var ANSWER = Infer({method: 'MCMC', burn: 10000, samples: 1000, lag: 10, onlyMAP: true}, function() {
  factor(comparray(['the', 'dog', 'chases', 'a', 'cat', 'stop'], generateSentence('start')) * 5);
  
  var sent1 = generateSentence('start');
  var sent2 = generateSentence('start');
  var sent3 = generateSentence('start');
  var sent4 = generateSentence('start');
  var sent5 = generateSentence('start');
  
  return {sent1: sent1, sent2: sent2, sent3: sent3, sent4: sent4, sent5: sent5};
});
Error: sample() expected a distribution but received "{"dims":[5,1],"length":5,"data":{"0":0.03199880024831679,"1":0.19642197333240166,"2":0.05521555119618678,"3":0.6723021334704087,"4":0.04406154175268608}}".
var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};

var posList = ['N', 'V', 'D', 'A', 'stop'];

var getTransition = mem(function(pos) {
  return sample(dirichletDrift({alpha: ones([5, 1]), concentration: 10}));
});

var nextPos = function(pos) {
  var probs = getTransition(pos);
  return categorical({ps: probs, vs: posList});
};

var drawWord = function(pos) {
  if (pos === 'N') return uniformDraw(['dog', 'cat']);
  if (pos === 'V') return uniformDraw(['chases', 'sleeps']);
  if (pos === 'D') return uniformDraw(['the', 'a']);
  if (pos === 'A') return 'dilligently';
  if (pos === 'stop') return 'stop';
};

var generateSentence = function(pos) {
  if (pos === 'stop') return ['stop'];
  var next = nextPos(pos);
  if (pos === 'start') {
    return generateSentence(next);
  } else {
    var word = drawWord(pos);
    return [word].concat(generateSentence(next));
  }
};

var model = function() {
  factor(comparray(['the', 'dog', 'chases', 'a', 'cat', 'stop'],
                   generateSentence('start')) * 5);

  var sent1 = generateSentence('start');
  var sent2 = generateSentence('start');
  var sent3 = generateSentence('start');
  var sent4 = generateSentence('start');
  var sent5 = generateSentence('start');

  return {sent1: sent1, sent2: sent2, sent3: sent3, sent4: sent4, sent5: sent5};
};

var ANSWER = Infer({method: 'MCMC', burn: 10000, samples: 1000, lag: 10, onlyMAP: true}, model);
var comparray = function(arr1, arr2) {
  return JSON.stringify(arr1) === JSON.stringify(arr2);
};

var tags = ['N', 'V', 'D', 'A', 'stop'];

var getTransition = mem(function(pos) {
  return dirichletDrift({alpha: ones([5, 1]), concentration: 10});
});

var nextTag = function(pos) {
  var probs = getTransition(pos);
  var idx = sample(Categorical({vs: _.range(5), ps: probs}));
  return tags[idx];
};

var drawWord = function(pos) {
  return pos === 'N' ? uniformDraw(['dog', 'cat']) :
         pos === 'V' ? uniformDraw(['chases', 'sleeps']) :
         pos === 'D' ? uniformDraw(['the', 'a']) :
         pos === 'A' ? 'dilligently' :
         'stop';
};

var generateSentence = function(pos) {
  var next = nextTag(pos);
  if (next === 'stop') {
    return ['stop'];
  }
  var word = drawWord(next);
  return [word].concat(generateSentence(next));
};

var model = function() {
  factor(comparray(['the', 'dog', 'chases', 'a', 'cat', 'stop'],
                   generateSentence('start')) * 5);

  var sent1 = generateSentence('start');
  var sent2 = generateSentence('start');
  var sent3 = generateSentence('start');
  var sent4 = generateSentence('start');
  var sent5 = generateSentence('start');

  return {
    sent1: sent1,
    sent2: sent2,
    sent3: sent3,
    sent4: sent4,
    sent5: sent5
  };
};

var ANSWER = Infer({method: 'MCMC', burn: 10000, samples: 1000, lag: 10, onlyMAP: true}, model);
groundtruth output
{"sent1":["stop"],"sent2":["sleeps","…   1.0000
raw JSON
{
  "__kind": "distribution",
  "probs": [
    1
  ],
  "support": [
    {
      "sent1": [
        "stop"
      ],
      "sent2": [
        "sleeps",
        "a",
        "stop"
      ],
      "sent3": [
        "stop"
      ],
      "sent4": [
        "stop"
      ],
      "sent5": [
        "stop"
      ]
    }
  ]
}
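With `onlyMAP: true`, `Infer` returns a distribution whose support is the single highest-scoring sample, which is why the raw JSON above has one entry with probability 1. A plain-JavaScript sketch of extracting the MAP element from a `{support, probs}` record (`mapEstimate` is a hypothetical helper, not a WebPPL built-in):

```js
// Pick the support element with the largest probability.
function mapEstimate(dist) {
  var bestIndex = dist.probs.reduce(function(best, p, i) {
    return p > dist.probs[best] ? i : best;
  }, 0);
  return dist.support[bestIndex];
}

// The ex3.a groundtruth distribution from earlier on this page.
var d = {support: ['chase', 'dogs', 'cats', 'sleep', 'stop'],
         probs: [0.159, 0.075, 0.11, 0.179, 0.477]};
console.log(mapEstimate(d)); // 'stop'
```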
feedback
not signed
probmods2-observing-sequences/ex3.b distribution 8 runs TV=1.00 TV=1
prompt
user message
Use a phrase structure grammar instead of an HMM. Vocabulary: D='the','a'; N='cat','dog'; V='chases','sleeps'; A='diligently'. Production rules: AP -> A; NP -> [D, N]; VP -> [V, AP] | [V, NP]; S -> [NP, VP]. Condition on [['the', 'dog'], ['chases', ['a', 'cat']]] being a sentence (using comparray and S()), then sample 5 new sentences. Use MCMC with onlyMAP=true. End with the Infer(...) returning {sent1...sent5}.
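Because every production in this grammar draws uniformly from a small finite set, its sentence space can be enumerated exactly, which makes hard conditioning tractable. A plain-JavaScript enumeration (assuming uniform choices, as the exercise specifies):

```js
// Enumerate every sentence the grammar can produce.
// AP -> A; NP -> [D, N]; VP -> [V, AP] | [V, NP]; S -> [NP, VP].
var D = ['the', 'a'];
var N = ['cat', 'dog'];
var V = ['chases', 'sleeps'];
var A = ['diligently'];

var NPs = [];
D.forEach(function(d) { N.forEach(function(n) { NPs.push([d, n]); }); });

var VPs = [];
V.forEach(function(v) {
  A.forEach(function(a) { VPs.push([v, a]); });      // VP -> [V, AP]
  NPs.forEach(function(np) { VPs.push([v, np]); });  // VP -> [V, NP]
});

var Ss = [];
NPs.forEach(function(np) { VPs.forEach(function(vp) { Ss.push([np, vp]); }); });

console.log(Ss.length); // 4 NPs * (1 + 4) VPs per verb * 2 verbs = 40 sentences
var obs = JSON.stringify([['the', 'dog'], ['chases', ['a', 'cat']]]);
console.log(Ss.some(function(s) { return JSON.stringify(s) === obs; })); // true
```

The observed parse is one of only 40 possible sentences, so rejection-style conditioning on exact equality is cheap here.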
groundtruth code
var comparray = function(arr1,arr2){
  return (JSON.stringify(arr1) === JSON.stringify(arr2));
};

var uniformDraw = function (xs) {return xs[randomInteger(xs.length)]};

var D  = function() {return uniformDraw(['the', 'a'])};
var N  = function() {return uniformDraw(['cat', 'dog'])};
var V  = function() {return uniformDraw(['chases', 'sleeps'])};
var A  = function() {return uniformDraw(['diligently'])};
var AP = function() {return uniformDraw([A()])};
var NP = function() {return [D(), N()]};
var VP = function() {return uniformDraw([[V(), AP()],
                                         [V(), NP()]])};
var S  = function() {return [NP(), VP()]};
var ANSWER = (Infer({method:'MCMC', burn:10000, samples: 1000, onlyMAP: true}, function() {
  var obs = [['the', 'dog'], ['chases', ['a', 'cat']]];
  condition(comparray(obs, S()));

  var sent1 = S();
  var sent2 = S();
  var sent3 = S();
  var sent4 = S();
  var sent5 = S();
  return {sent1: sent1, sent2: sent2, sent3: sent3, sent4: sent4, sent5: sent5};
}));
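Hard conditioning with `condition(comparray(obs, S()))` works here because the observed parse has non-negligible prior probability under the grammar. A plain-JavaScript arithmetic check of that prior, under the groundtruth's uniform choices:

```js
// Prior probability of [['the','dog'], ['chases', ['a','cat']]]:
var pNP = (1 / 2) * (1 / 2);        // choose D, then N: 1/4
var pVP = (1 / 2) * (1 / 2) * pNP;  // branch [V, NP], then V, then NP: 1/16
var pS = pNP * pVP;                 // NP for the subject, then VP: 1/64
console.log(pS); // 0.015625
```

Roughly 1 in 64 prior samples satisfies the condition, so MCMC (or even rejection) finds consistent states quickly.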
generated code
sonnet-46-primer-v3 TV=1 · haiku-45-noprimer-v3 fail · haiku-45-primer-v3 fail · haiku-45-think-noprimer-v3 fail · haiku-45-think-primer-v3 fail · sonnet-46-noprimer-v3 fail · sonnet-46-think-noprimer-v3 TV=1 · sonnet-46-think-primer-v3 TV=1
var D = ['the', 'a'];
var N = ['cat', 'dog'];
var V = ['chases', 'sleeps'];
var A = ['diligently'];

var AP = function() {
  return sample(Categorical({vs: A, ps: [1/A.length, 1/A.length].slice(0, A.length)}));
};

var apSample = function() {
  return A[randomInteger(A.length)];
};

var npSample = function() {
  return [D[randomInteger(D.length)], N[randomInteger(N.length)]];
};

var vpSample = function() {
  var choice = flip(0.5);
  return choice ? [V[randomInteger(V.length)], apSample()] : [V[randomInteger(V.length)], npSample()];
};

var sSample = function() {
  return [npSample(), vpSample()];
};

var comparray = function(a, b) {
  if (Array.isArray(a) && Array.isArray(b)) {
    if (a.length !== b.length) return false;
    return _.every(map2(function(x, y) { return comparray(x, y); }, a, b));
  } else {
    return a === b;
  }
};

var observed = [['the', 'dog'], ['chases', ['a', 'cat']]];

var model = function() {
  condition(comparray(sSample(), observed));
  var sent1 = sSample();
  var sent2 = sSample();
  var sent3 = sSample();
  var sent4 = sSample();
  var sent5 = sSample();
  return {sent1: sent1, sent2: sent2, sent3: sent3, sent4: sent4, sent5: sent5};
};

var ANSWER = Infer({method: 'MCMC', samples: 1000, onlyMAP: true}, model);
Error: cpsInnerStatement
var D = ['the', 'a'];
var N = ['cat', 'dog'];
var V = ['chases', 'sleeps'];
var A = ['diligently'];

var AP = function() {
  return uniformDraw(A);
};

var NP = function() {
  return [uniformDraw(D), uniformDraw(N)];
};

var VP = function() {
  if (flip(0.5)) {
    return [uniformDraw(V), AP()];
  } else {
    return [uniformDraw(V), NP()];
  }
};

var S = function() {
  return [NP(), VP()];
};

var comparray = function(a, b) {
  if (a.length !== b.length) return false;
  for (var i = 0; i < a.length; i++) {
    if (Array.isArray(a[i]) && Array.isArray(b[i])) {
      if (!comparray(a[i], b[i])) return false;
    } else if (a[i] !== b[i]) {
      return false;
    }
  }
  return true;
};

var model = function() {
  var sent1 = S();
  var sent2 = S();
  var sent3 = S();
  var sent4 = S();
  var sent5 = S();
  
  var targetSent = [['the', 'dog'], ['chases', ['a', 'cat']]];
  condition(comparray(sent1, targetSent));
  
  return {sent1: sent1, sent2: sent2, sent3: sent3, sent4: sent4, sent5: sent5};
};

var ANSWER = Infer({method: 'MCMC', samples: 1000, onlyMAP: true}, model);
Timeout after 60s
var D = ['the', 'a'];
var N = ['cat', 'dog'];
var V = ['chases', 'sleeps'];
var A = ['diligently'];

var AP = function() {
  return [uniformDraw(A)];
};

var NP = function() {
  return [uniformDraw(D), uniformDraw(N)];
};

var VP = function() {
  var choice = flip(0.5);
  if (choice) {
    return [uniformDraw(V)].concat(AP());
  } else {
    return [uniformDraw(V)].concat(NP());
  }
};

var S = function() {
  return NP().concat(VP());
};

var comparray = function(a, b) {
  if (!Array.isArray(a) || !Array.isArray(b)) {
    return a === b;
  }
  if (a.length !== b.length) {
    return false;
  }
  return all(function(i) {
    return comparray(a[i], b[i]);
  }, _.range(0, a.length));
};

var model = function() {
  var sent = S();
  var target = [['the', 'dog'], ['chases', ['a', 'cat']]];
  condition(comparray(sent, target));
  return sent;
};

var posterior = Infer({method: 'MCMC', samples: 5000, onlyMAP: true}, model);

var samples = repeat(5, function() {
  return sample(posterior);
});

var ANSWER = {
  sent1: samples[0],
  sent2: samples[1],
  sent3: samples[2],
  sent4: samples[3],
  sent5: samples[4]
};
ReferenceError: comparray is not defined
var D = function() { return uniformDraw(['the', 'a']); };
var N = function() { return uniformDraw(['cat', 'dog']); };
var V = function() { return uniformDraw(['chases', 'sleeps']); };
var A = function() { return 'diligently'; };

var AP = function() { 
  return [A()];
};

var NP = function() { 
  return [D(), N()];
};

var VP = function() { 
  return flip() ? [V(), AP()] : [V(), NP()];
};

var S = function() { 
  return [NP(), VP()];
};

var model = function() {
  var sentence = S();
  var target = [['the', 'dog'], ['chases', ['a', 'cat']]];
  condition(comparray(sentence, target));
  return sentence;
};

var posterior = Infer({method: 'MCMC', samples: 5000, onlyMAP: true}, model);

var sent1 = sample(posterior);
var sent2 = sample(posterior);
var sent3 = sample(posterior);
var sent4 = sample(posterior);
var sent5 = sample(posterior);

var ANSWER = {sent1: sent1, sent2: sent2, sent3: sent3, sent4: sent4, sent5: sent5};
ReferenceError: comparray is not defined
var D = function() { return uniformDraw(['the', 'a']); };
var N = function() { return uniformDraw(['cat', 'dog']); };
var V = function() { return uniformDraw(['chases', 'sleeps']); };
var A = function() { return uniformDraw(['diligently']); };

var AP = function() {
  return A();
};

var NP = function() {
  return [D(), N()];
};

var VP = function() {
  if (flip()) {
    return [V(), AP()];
  } else {
    return [V(), NP()];
  }
};

var S = function() {
  return [NP(), VP()];
};

var model = function() {
  var observedSentence = [['the', 'dog'], ['chases', ['a', 'cat']]];
  var sampledSentence = S();
  condition(comparray(sampledSentence, observedSentence));
  
  var sent1 = S();
  var sent2 = S();
  var sent3 = S();
  var sent4 = S();
  var sent5 = S();
  
  return {sent1: sent1, sent2: sent2, sent3: sent3, sent4: sent4, sent5: sent5};
};

var ANSWER = Infer({method: 'MCMC', samples: 1, onlyMAP: true}, model);
Error: cpsInnerStatement
// Phrase structure grammar

// Helper to compare arrays deeply
var comparray = function(a, b) {
  if (a.length !== b.length) return false;
  var result = true;
  for (var i = 0; i < a.length; i++) {
    if (Array.isArray(a[i]) && Array.isArray(b[i])) {
      if (!comparray(a[i], b[i])) result = false;
    } else {
      if (a[i] !== b[i]) result = false;
    }
  }
  return result;
};

// Grammar rules
var D = function() {
  return uniformDraw(['the', 'a']);
};

var N = function() {
  return uniformDraw(['cat', 'dog']);
};

var V = function() {
  return uniformDraw(['chases', 'sleeps']);
};

var A = function() {
  return 'diligently';
};

var AP = function() {
  return A();
};

var NP = function() {
  return [D(), N()];
};

var VP = function() {
  var choice = flip();
  if (choice) {
    // VP -> [V, AP]
    return [V(), AP()];
  } else {
    // VP -> [V, NP]
    return [V(), NP()];
  }
};

var S = function() {
  return [NP(), VP()];
};

// Target sentence
var target = [['the', 'dog'], ['chases', ['a', 'cat']]];

var model = function() {
  var sentence = S();
  condition(comparray(sentence, target));
  
  // Sample 5 new sentences (unconditioned)
  var sent1 = S();
  var sent2 = S();
  var sent3 = S();
  var sent4 = S();
  var sent5 = S();
  
  return {
    sent1: sent1,
    sent2: sent2,
    sent3: sent3,
    sent4: sent4,
    sent5: sent5
  };
};

var ANSWER = Infer({method: 'MCMC', samples: 1000, onlyMAP: true}, model);
// Deep array comparison
var comparray = function(a, b) {
  if (Array.isArray(a) && Array.isArray(b)) {
    if (a.length !== b.length) return false;
    return all(function(i) { return comparray(a[i], b[i]); }, _.range(a.length));
  }
  return a === b;
};

// Lexical categories
var D = function() { return uniformDraw(['the', 'a']); };
var N = function() { return uniformDraw(['cat', 'dog']); };
var V = function() { return uniformDraw(['chases', 'sleeps']); };
var A = function() { return 'diligently'; };

// Phrase-structure rules
var AP = function() { return A(); };
var NP = function() { return [D(), N()]; };
var VP = function() {
  return flip() ? [V(), AP()] : [V(), NP()];
};
var S = function() { return [NP(), VP()]; };

// Target sentence (parse tree form)
var target = [['the', 'dog'], ['chases', ['a', 'cat']]];

var model = function() {
  // Condition on target being a well-formed sentence
  condition(comparray(S(), target));

  // Sample 5 new sentences from the posterior
  var sent1 = S();
  var sent2 = S();
  var sent3 = S();
  var sent4 = S();
  var sent5 = S();

  return { sent1: sent1, sent2: sent2, sent3: sent3, sent4: sent4, sent5: sent5 };
};

var ANSWER = Infer({ method: 'MCMC', onlyMAP: true, samples: 5000 }, model);
var comparray = function(a, b) {
  if (Array.isArray(a) && Array.isArray(b)) {
    if (a.length !== b.length) return false;
    return _.every(_.range(a.length), function(i) {
      return comparray(a[i], b[i]);
    });
  }
  return a === b;
};

// Lexical rules
var D = function() { return uniformDraw(['the', 'a']); };
var N = function() { return uniformDraw(['cat', 'dog']); };
var V = function() { return uniformDraw(['chases', 'sleeps']); };
var A = function() { return 'diligently'; };

// Phrase structure rules
// AP -> A
var AP = function() { return A(); };

// NP -> [D, N]
var NP = function() { return [D(), N()]; };

// VP -> [V, AP] | [V, NP]
var VP = function() {
  return flip(0.5) ? [V(), AP()] : [V(), NP()];
};

// S -> [NP, VP]
var S = function() { return [NP(), VP()]; };

var target = [['the', 'dog'], ['chases', ['a', 'cat']]];

var model = function() {
  // Condition on the target being a grammatical sentence
  condition(comparray(S(), target));

  // Sample 5 new sentences from the grammar
  var sent1 = S();
  var sent2 = S();
  var sent3 = S();
  var sent4 = S();
  var sent5 = S();

  return {
    sent1: sent1,
    sent2: sent2,
    sent3: sent3,
    sent4: sent4,
    sent5: sent5
  };
};

var ANSWER = Infer({method: 'MCMC', samples: 5000, onlyMAP: true}, model);
groundtruth output
{"sent1":[["the","dog"],["sleeps","di…1.0000
raw JSON
{
  "__kind": "distribution",
  "probs": [
    1
  ],
  "support": [
    {
      "sent1": [
        [
          "the",
          "dog"
        ],
        [
          "sleeps",
          "diligently"
        ]
      ],
      "sent2": [
        [
          "the",
          "cat"
        ],
        [
          "sleeps",
          "diligently"
        ]
      ],
      "sent3": [
        [
          "a",
          "dog"
        ],
        [
          "chases",
          "diligently"
        ]
      ],
      "sent4": [
        [
          "the",
          "cat"
        ],
        [
          "sleeps",
          [
            "a",
            "cat"
          ]
        ]
      ],
      "sent5": [
        [
          "the",
          "cat"
        ],
        [
          "sleeps",
          "diligently"
        ]
      ]
    }
  ]
}
feedback
not signed
bayesian-data-analysis 1 atom 1✓ · 0✗
probmods2-bayesian-data-analysis/ex1.2 distribution 8 runs TV=0.00 TV=0
prompt
system base instructions used by every run
You are a WebPPL code generator. Given an exercise, produce a single WebPPL program that binds the answer to a top-level variable named `ANSWER`.

Answer format (strict): emit exactly one fenced code block.

```js
<your WebPPL program ending with: var ANSWER = <expression>;>
```

The last statement of your program MUST be `var ANSWER = <expression>;` where `<expression>` is the answer the prompt asks for - typically an `Infer({...}, model)` for a distribution, a numeric/array value, or an object literal `{key: value, ...}` for a record of multiple sub-answers.

Do not write prose, explanations, or multiple code blocks. Do not use `return` at the top level - WebPPL doesn't allow it (return is only for function bodies).
system WebPPL primer appended to system on `*-primer-*` runs only
WebPPL is a probabilistic programming language with JavaScript-like syntax. A few quirks to remember:

Control flow: WebPPL is a functional subset of JavaScript. There are no `for` or `while` loops. Iterate with `map(fn, list)`, `mapData({data: list}, fn)`, `repeat(n, fn)`, or recursion. `_.range(start, end)` produces integer ranges.

Functions: define with `var f = function(args) { ...; return value; }`. The last expression of a function is auto-returned only if it is a bare expression statement; otherwise use explicit `return`. Memoize with `mem(fn)` so repeated calls with the same arguments return the same value within an inference run.

Random primitives - lowercase samples directly, uppercase constructs a Distribution object. They are NOT interchangeable:

  flip(p)            -> boolean             (sample, no `sample()` needed)
  uniform(a, b)      -> number              (sample)
  gaussian(mu, sg)   -> number              (sample)
  beta(a, b)         -> number              (sample)
  dirichlet(alpha)   -> tensor              (sample; alpha must be a Vector)
  randomInteger(n)   -> int 0..n-1          (sample)
  uniformDrift({a, b, width})        -> sample  (drift kernel; do NOT wrap in sample())
  dirichletDrift({alpha, conc.})     -> sample  (drift kernel; do NOT wrap in sample())

Distribution *constructors* (used with `sample(D)`, `observe(D, val)`, or as return value of inference):
  Bernoulli({p})              Beta({a, b})              Gaussian({mu, sigma})
  Uniform({a, b})             Categorical({vs, ps})     Binomial({p, n})
  Dirichlet({alpha})          Multinomial({ps, n})      Poisson({mu})

Common gotchas:
- `Dirichlet({alpha: ...})` requires a Vector, not a JS array. Use `ones([n, 1])` or `Vector([1, 1, ...])`.
- WebPPL only supports `var`, not `let` / `const`.
- WebPPL is *single-assignment*: `var X = ...;` only. You can't declare `var X;` and assign later, and you can't reassign `X = ...` after the declaration. Use ternaries or recursion to express conditional bindings.
- Array methods like `.fill`, `.indexOf`, `.map`, `.forEach`, `.concat` may fail. Prefer `repeat(n, fn)`, `_.indexOf(arr, x)`, `map(fn, arr)`, `mapData({data: arr}, fn)`, and `arr1.concat(arr2)` only at the top of a returned expression.
- Always end top-level statements with `;`. WebPPL inherits JS ASI rules, so `var x = f()
[a, b]` parses as `var x = f()[a, b]` (subscript), not two statements.

Inference: `Infer({method: ..., samples: N, ...}, modelFn)` runs `modelFn` under the chosen method and returns a Distribution over its return value. Methods: `'enumerate'`, `'rejection'`, `'forward'`, `'MCMC'`, `'SMC'`. For MCMC, optional kernels: `kernel: {HMC: {steps, stepSize}}`; the drift kernels above can replace `uniform`/`dirichlet` calls inside the model.

Conditioning: `condition(bool)` zeros out worlds where bool is false. `observe(dist, value)` factors in `dist.score(value)`. `factor(score)` adds `score` to the log-probability directly.

Tensors / utilities: `Vector([a, b, ...])`, `T.get(vec, i)`, `ones([rows, cols])`, `_.range`, `_.flatten`, `_.zipObject`, `_.fromPairs`, `_.includes`, `_.parseInt`, `_.uniq`, `_.merge`. Arrays use `Array.isArray`, `Object.keys` works on plain objects.

Display: there's no `viz`, `print`, `display`, etc. that affects the answer - those are browser-only. The answer is the value of your program's last expression.
user message
Given this binomial model where predictive distributions are computed for the same number of attempts as the observed data:
var k = 1; var n = 20;
var priorDist = Uniform({a: 0, b: 1});
var model = function() {
  var p = sample(priorDist);
  observe(Binomial({p: p, n: n}), k);
  var posteriorPredictive = binomial(p, n);
  var prior_p = sample(priorDist);
  var priorPredictive = binomial(prior_p, n);
  return {prior: prior_p, priorPredictive, posterior: p, posteriorPredictive};
}
Infer({method: "MCMC", samples: 2500, lag: 50}, model);
Predictive distributions can be used to predict the outcome of a *different* experiment with new_n != n. Modify the model so the posterior predictive uses new_n = 5 attempts (while observed data is still k=1 success in n=20 attempts). Use Beta(a=1, b=1) as the prior. End with the Infer(...) call.
groundtruth code
var k = 1;
var n = 20;
var new_n = 5;
var priorDist = Beta({a: 1, b: 1});

var model = function() {
   var p = sample(priorDist);
   observe(Binomial({p : p, n: n}), k);
   var posteriorPredictive = binomial(p, new_n);
   var prior_p = sample(priorDist);
   var priorPredictive = binomial(prior_p, n);
   return {
       prior: prior_p, priorPredictive : priorPredictive,
       posterior : p, posteriorPredictive : posteriorPredictive
   };
};
var ANSWER = (Infer({method: "MCMC", samples: 2500, lag: 50}, model));
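Because Beta(1, 1) is conjugate to the Binomial, the exact posterior here is Beta(1+k, 1+n-k) = Beta(2, 20), and the posterior predictive over new_n = 5 attempts is Beta-Binomial(5, 2, 20). A plain-JavaScript sketch (helper names are illustrative, not part of the harness) computes this pmf exactly, without sampling:

```javascript
// Exact Beta-Binomial posterior predictive for the conjugate model:
// p ~ Beta(1, 1), k = 1 success in n = 20  =>  posterior Beta(2, 20).
// P(x | n, a, b) = C(n, x) * B(x + a, n - x + b) / B(a, b)
var fact = function (m) {
  var acc = 1;
  for (var i = 2; i <= m; i++) { acc *= i; }
  return acc;
};
// Beta function for integer arguments: B(a, b) = (a-1)!(b-1)! / (a+b-1)!
var betaFn = function (a, b) { return fact(a - 1) * fact(b - 1) / fact(a + b - 1); };
var choose = function (n, x) { return fact(n) / (fact(x) * fact(n - x)); };

var betaBinomialPmf = function (x, n, a, b) {
  return choose(n, x) * betaFn(x + a, n - x + b) / betaFn(a, b);
};

var a = 2, b = 20, newN = 5; // posterior Beta(2, 20), 5 new attempts
var pmf = [];
for (var x = 0; x <= newN; x++) { pmf.push(betaBinomialPmf(x, newN, a, b)); }
// pmf sums to 1; the predictive mean is newN * a / (a + b) = 10/22
```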
generated code
sonnet-46-primer-v3 TV=0 haiku-45-noprimer-v3 TV=0 haiku-45-primer-v3 TV=0 haiku-45-think-noprimer-v3 TV=0 haiku-45-think-primer-v3 TV=0 sonnet-46-noprimer-v3 TV=0 sonnet-46-think-noprimer-v3 TV<1 sonnet-46-think-primer-v3 TV=0
var k = 1;
var n = 20;
var new_n = 5;
var priorDist = Beta({a: 1, b: 1});

var model = function() {
  var p = sample(priorDist);
  observe(Binomial({p: p, n: n}), k);
  var posteriorPredictive = binomial(p, new_n);
  var prior_p = sample(priorDist);
  var priorPredictive = binomial(prior_p, n);
  return {prior: prior_p, priorPredictive, posterior: p, posteriorPredictive};
};

var ANSWER = Infer({method: "MCMC", samples: 2500, lag: 50}, model);
var k = 1; var n = 20; var new_n = 5;
var priorDist = Beta({a: 1, b: 1});
var model = function() {
  var p = sample(priorDist);
  observe(Binomial({p: p, n: n}), k);
  var posteriorPredictive = binomial(p, new_n);
  var prior_p = sample(priorDist);
  var priorPredictive = binomial(prior_p, n);
  return {prior: prior_p, priorPredictive, posterior: p, posteriorPredictive};
}
var ANSWER = Infer({method: "MCMC", samples: 2500, lag: 50}, model);
var k = 1; var n = 20; var new_n = 5;
var priorDist = Beta({a: 1, b: 1});
var model = function() {
  var p = sample(priorDist);
  observe(Binomial({p: p, n: n}), k);
  var posteriorPredictive = binomial(p, new_n);
  var prior_p = sample(priorDist);
  var priorPredictive = binomial(prior_p, n);
  return {prior: prior_p, priorPredictive, posterior: p, posteriorPredictive};
};
var ANSWER = Infer({method: "MCMC", samples: 2500, lag: 50}, model);
var k = 1; var n = 20; var new_n = 5;
var priorDist = Beta({a: 1, b: 1});
var model = function() {
  var p = sample(priorDist);
  observe(Binomial({p: p, n: n}), k);
  var posteriorPredictive = binomial(p, new_n);
  var prior_p = sample(priorDist);
  var priorPredictive = binomial(prior_p, n);
  return {prior: prior_p, priorPredictive, posterior: p, posteriorPredictive};
}
var ANSWER = Infer({method: "MCMC", samples: 2500, lag: 50}, model);
var k = 1; 
var n = 20;
var new_n = 5;
var priorDist = Beta({a: 1, b: 1});
var model = function() {
  var p = sample(priorDist);
  observe(Binomial({p: p, n: n}), k);
  var posteriorPredictive = sample(Binomial({p: p, n: new_n}));
  var prior_p = sample(priorDist);
  var priorPredictive = sample(Binomial({p: prior_p, n: n}));
  return {prior: prior_p, priorPredictive, posterior: p, posteriorPredictive};
}
var ANSWER = Infer({method: "MCMC", samples: 2500, lag: 50}, model);
var k = 1;
var n = 20;
var new_n = 5;
var priorDist = Beta({a: 1, b: 1});

var model = function() {
  var p = sample(priorDist);
  observe(Binomial({p: p, n: n}), k);
  var posteriorPredictive = binomial(p, new_n);
  var prior_p = sample(priorDist);
  var priorPredictive = binomial(prior_p, n);
  return {prior: prior_p, priorPredictive: priorPredictive, posterior: p, posteriorPredictive: posteriorPredictive};
};

var ANSWER = Infer({method: "MCMC", samples: 2500, lag: 50}, model);
var k = 1;
var n = 20;
var new_n = 5;

var priorDist = Beta({a: 1, b: 1});

var model = function() {
  var p = sample(priorDist);
  observe(Binomial({p: p, n: n}), k);
  var posteriorPredictive = binomial(p, new_n);
  var prior_p = sample(priorDist);
  var priorPredictive = binomial(prior_p, new_n);
  return {prior: prior_p, priorPredictive, posterior: p, posteriorPredictive};
};

var ANSWER = Infer({method: "MCMC", samples: 2500, lag: 50}, model);
var k = 1;
var n = 20;
var new_n = 5;

var priorDist = Beta({a: 1, b: 1});

var model = function() {
  var p = sample(priorDist);
  observe(Binomial({p: p, n: n}), k);
  var posteriorPredictive = sample(Binomial({p: p, n: new_n}));
  var prior_p = sample(priorDist);
  var priorPredictive = sample(Binomial({p: prior_p, n: n}));
  return {prior: prior_p, priorPredictive: priorPredictive, posterior: p, posteriorPredictive: posteriorPredictive};
};

var ANSWER = Infer({method: "MCMC", samples: 2500, lag: 50}, model);
groundtruth output
{"prior":0.9838947372415268,"priorPre…0.0008
{"prior":0.008386507187340549,"priorP…0.0008
{"prior":0.6835795295486877,"priorPre…0.0004
{"prior":0.4625728897988316,"priorPre…0.0004
{"prior":0.459280074975201,"priorPred…0.0004
{"prior":0.4227631773615444,"priorPre…0.0004
{"prior":0.5666763577505871,"priorPre…0.0004
{"prior":0.5006186233546616,"priorPre…0.0004
{"prior":0.2075469012194067,"priorPre…0.0004
{"prior":0.10303381271114298,"priorPr…0.0004
{"prior":0.490870602206004,"priorPred…0.0004
{"prior":0.8177265342929015,"priorPre…0.0004
… 2486 more
raw JSON
{
  "__kind": "distribution",
  "probs": [
    0.0004000000000000001,
    … (the same 1/2500 weight repeated for every retained sample; list truncated)

... (432745 more chars truncated)
feedback
not signed
conditional-dependence 2 atoms 2✓ · 0✗
probmods2-conditional-dependence/ex1.a record(prior, death, deathAndCold, deathAndNoCold) 8 runs TV=0.00 TV<.05
prompt
system base instructions used by every run
system WebPPL primer appended to system on `*-primer-*` runs only
user message
Imagine that you are an epidemiologist determining people's cause of death. There are two main diseases: cancer (rare, P = 0.00001, often fatal: P(death|cancer) = 0.9) and the common cold (P = 0.2, rarely fatal: P(death|cold) = 0.00006). Very rarely, people die of other causes (P = 0.000000001). Return an object literal with four marginals over cancer:
- prior: unconditional
- death: given death
- deathAndCold: given death && cold
- deathAndNoCold: given death && !cold
groundtruth code
var ANSWER = (({
  prior: Infer({method: 'enumerate'}, function() {
var cancer = flip(0.00001);
var cold = flip(0.2);
var death_by_cancer = cancer && flip(0.9);
var death_by_cold = cold && flip(0.00006);
var other_death = flip(0.000000001);
var death = death_by_cancer || death_by_cold || other_death;
    return cancer;
  }),
  death: Infer({method: 'enumerate'}, function() {
var cancer = flip(0.00001);
var cold = flip(0.2);
var death_by_cancer = cancer && flip(0.9);
var death_by_cold = cold && flip(0.00006);
var other_death = flip(0.000000001);
var death = death_by_cancer || death_by_cold || other_death;
    condition(death);
    return cancer;
  }),
  deathAndCold: Infer({method: 'enumerate'}, function() {
var cancer = flip(0.00001);
var cold = flip(0.2);
var death_by_cancer = cancer && flip(0.9);
var death_by_cold = cold && flip(0.00006);
var other_death = flip(0.000000001);
var death = death_by_cancer || death_by_cold || other_death;
    condition(death && cold);
    return cancer;
  }),
  deathAndNoCold: Infer({method: 'enumerate'}, function() {
var cancer = flip(0.00001);
var cold = flip(0.2);
var death_by_cancer = cancer && flip(0.9);
var death_by_cold = cold && flip(0.00006);
var other_death = flip(0.000000001);
var death = death_by_cancer || death_by_cold || other_death;
    condition(death && !cold);
    return cancer;
  })
}));
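The enumeration the groundtruth asks WebPPL for can be reproduced exactly in plain JavaScript by summing over all 32 assignments to the five independent flips. A sketch (a cross-check only; the flip names like `dCancer` are illustrative):

```javascript
// Exact enumeration of the cause-of-death model: weight every assignment
// to the five flips, then form the conditionals over `cancer`.
var flips = [
  { name: 'cancer', p: 0.00001 },
  { name: 'cold', p: 0.2 },
  { name: 'dCancer', p: 0.9 },        // death-by-cancer coin (counts only if cancer)
  { name: 'dCold', p: 0.00006 },      // death-by-cold coin (counts only if cold)
  { name: 'dOther', p: 0.000000001 }
];

var totals = { death: 0, deathCancer: 0, deathCold: 0, deathColdCancer: 0,
               deathNoCold: 0, deathNoColdCancer: 0 };

for (var mask = 0; mask < 32; mask++) {
  var v = {};
  var w = 1;
  for (var i = 0; i < flips.length; i++) {
    var on = (mask >> i) & 1;
    v[flips[i].name] = !!on;
    w *= on ? flips[i].p : 1 - flips[i].p;
  }
  var death = (v.cancer && v.dCancer) || (v.cold && v.dCold) || v.dOther;
  if (!death) { continue; }
  totals.death += w;
  if (v.cancer) { totals.deathCancer += w; }
  if (v.cold) {
    totals.deathCold += w;
    if (v.cancer) { totals.deathColdCancer += w; }
  } else {
    totals.deathNoCold += w;
    if (v.cancer) { totals.deathNoColdCancer += w; }
  }
}

var pCancerGivenDeath = totals.deathCancer / totals.death;
var pCancerGivenDeathCold = totals.deathColdCancer / totals.deathCold;
var pCancerGivenDeathNoCold = totals.deathNoColdCancer / totals.deathNoCold;
// Explaining away: learning the victim also had a cold lowers
// P(cancer | death) from roughly 0.43 to roughly 0.13, while ruling the
// cold out pushes it near 1.
```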
generated code
sonnet-46-primer-v3 TV<.05 haiku-45-noprimer-v3 TV<.05 haiku-45-primer-v3 fail haiku-45-think-noprimer-v3 fail haiku-45-think-primer-v3 TV<.05 sonnet-46-noprimer-v3 TV<.05 sonnet-46-think-noprimer-v3 TV=0 sonnet-46-think-primer-v3 TV=0
var model = function() {
  var cancer = flip(0.00001);
  var cold = flip(0.2);
  
  var pDeath = (cancer ? 0.9 : 0) + 
               (cold ? 0.00006 : 0) + 
               0.000000001;
  // Clamp to [0,1]
  var pDeathClamped = Math.min(1, pDeath);
  
  var death = flip(pDeathClamped);
  
  return {cancer: cancer, cold: cold, death: death};
};

var prior = Infer({method: 'enumerate'}, function() {
  var cancer = flip(0.00001);
  return cancer;
});

var death = Infer({method: 'enumerate'}, function() {
  var r = model();
  condition(r.death);
  return r.cancer;
});

var deathAndCold = Infer({method: 'enumerate'}, function() {
  var r = model();
  condition(r.death && r.cold);
  return r.cancer;
});

var deathAndNoCold = Infer({method: 'enumerate'}, function() {
  var r = model();
  condition(r.death && !r.cold);
  return r.cancer;
});

var ANSWER = {
  prior: prior,
  death: death,
  deathAndCold: deathAndCold,
  deathAndNoCold: deathAndNoCold
};
var model = function() {
  var cancer = flip(0.00001);
  var cold = flip(0.2);
  var other = flip(0.000000001);
  
  var deathFromCancer = cancer && flip(0.9);
  var deathFromCold = cold && flip(0.00006);
  var deathFromOther = other;
  
  var death = deathFromCancer || deathFromCold || deathFromOther;
  
  return {
    cancer: cancer,
    cold: cold,
    death: death
  };
};

var prior = Infer({method: "enumerate"}, function() {
  return model().cancer;
});

var givenDeath = Infer({method: "enumerate"}, function() {
  var m = model();
  condition(m.death);
  return m.cancer;
});

var givenDeathAndCold = Infer({method: "enumerate"}, function() {
  var m = model();
  condition(m.death && m.cold);
  return m.cancer;
});

var givenDeathAndNoCold = Infer({method: "enumerate"}, function() {
  var m = model();
  condition(m.death && !m.cold);
  return m.cancer;
});

var ANSWER = {
  prior: prior,
  death: givenDeath,
  deathAndCold: givenDeathAndCold,
  deathAndNoCold: givenDeathAndNoCold
};
Timeout after 60s
var model = function() {
  var cancer = flip(0.00001);
  var cold = flip(0.2);
  var other = flip(0.000000001);
  
  var deathFromCancer = cancer && flip(0.9);
  var deathFromCold = cold && flip(0.00006);
  var deathFromOther = other;
  
  var death = deathFromCancer || deathFromCold || deathFromOther;
  
  return {
    cancer: cancer,
    cold: cold,
    death: death
  };
};

var prior = Infer({method: 'forward', samples: 100000}, function() {
  return model().cancer;
});

var givenDeath = Infer({method: 'rejection', samples: 100000}, function() {
  var s = model();
  condition(s.death);
  return s.cancer;
});

var givenDeathAndCold = Infer({method: 'rejection', samples: 100000}, function() {
  var s = model();
  condition(s.death && s.cold);
  return s.cancer;
});

var givenDeathAndNoCold = Infer({method: 'rejection', samples: 100000}, function() {
  var s = model();
  condition(s.death && !s.cold);
  return s.cancer;
});

var ANSWER = {
  prior: prior,
  death: givenDeath,
  deathAndCold: givenDeathAndCold,
  deathAndNoCold: givenDeathAndNoCold
};
Error: sample() expected a distribution but received "{"cancer":false,"death":false,"cold":false}".
var model = function() {
  // Prior probabilities
  var cancer = flip(0.00001);
  var cold = flip(0.2);
  
  // Death outcomes from each disease
  var deathFromCancer = cancer && flip(0.9);
  var deathFromCold = cold && flip(0.00006);
  var deathFromOther = flip(0.000000001);
  
  // Overall death
  var death = deathFromCancer || deathFromCold || deathFromOther;
  
  return {cancer: cancer, death: death, cold: cold};
};

// Prior: unconditional marginal over cancer
var prior = Infer({method: 'enumerate'}, function() {
  return sample(model()).cancer;
});

// Given death
var death = Infer({method: 'enumerate'}, function() {
  var s = sample(model());
  condition(s.death);
  return s.cancer;
});

// Given death and cold
var deathAndCold = Infer({method: 'enumerate'}, function() {
  var s = sample(model());
  condition(s.death && s.cold);
  return s.cancer;
});

// Given death and not cold
var deathAndNoCold = Infer({method: 'enumerate'}, function() {
  var s = sample(model());
  condition(s.death && !s.cold);
  return s.cancer;
});

var ANSWER = {
  prior: prior,
  death: death,
  deathAndCold: deathAndCold,
  deathAndNoCold: deathAndNoCold
};
var model = function() {
  var cancer = flip(0.00001);
  var cold = flip(0.2);
  
  // P(death) depends on which disease is present
  // If cancer: 0.9 probability of death
  // Else if cold: 0.00006 probability of death  
  // Else: 0.000000001 probability of death (other causes)
  var pDeath = cancer ? 0.9 : (cold ? 0.00006 : 0.000000001);
  var death = flip(pDeath);
  
  return {cancer: cancer, cold: cold, death: death};
};

var prior = Infer({method: 'enumerate'}, function() {
  return model().cancer;
});

var deathPosterior = Infer({method: 'enumerate'}, function() {
  var result = model();
  condition(result.death);
  return result.cancer;
});

var deathAndColdPosterior = Infer({method: 'enumerate'}, function() {
  var result = model();
  condition(result.death && result.cold);
  return result.cancer;
});

var deathAndNoColdPosterior = Infer({method: 'enumerate'}, function() {
  var result = model();
  condition(result.death && !result.cold);
  return result.cancer;
});

var ANSWER = {
  prior: prior,
  death: deathPosterior,
  deathAndCold: deathAndColdPosterior,
  deathAndNoCold: deathAndNoColdPosterior
};
var model = function() {
  var cancer = flip(0.00001);
  var cold = flip(0.2);
  
  var pDeath = (cancer ? 0.9 : 0) + 
               (cold ? 0.00006 : 0) + 
               0.000000001;
  // Cap at 1
  var pDeathCapped = Math.min(pDeath, 1);
  
  var death = flip(pDeathCapped);
  
  return {cancer: cancer, cold: cold, death: death};
};

var prior = Infer({method: 'enumerate'}, function() {
  var result = model();
  return result.cancer;
});

var death = Infer({method: 'enumerate'}, function() {
  var result = model();
  condition(result.death === true);
  return result.cancer;
});

var deathAndCold = Infer({method: 'enumerate'}, function() {
  var result = model();
  condition(result.death === true && result.cold === true);
  return result.cancer;
});

var deathAndNoCold = Infer({method: 'enumerate'}, function() {
  var result = model();
  condition(result.death === true && result.cold === false);
  return result.cancer;
});

var ANSWER = {
  prior: prior,
  death: death,
  deathAndCold: deathAndCold,
  deathAndNoCold: deathAndNoCold
};
var model = function() {
  var cancer = flip(0.00001);
  var cold   = flip(0.2);
  var deathFromCancer = cancer ? flip(0.9)         : false;
  var deathFromCold   = cold   ? flip(0.00006)     : false;
  var deathFromOther  = flip(0.000000001);
  var death = deathFromCancer || deathFromCold || deathFromOther;
  return { cancer: cancer, cold: cold, death: death };
};

var prior = Infer({ method: 'enumerate' }, function() {
  var r = model();
  return r.cancer;
});

var death = Infer({ method: 'enumerate' }, function() {
  var r = model();
  condition(r.death === true);
  return r.cancer;
});

var deathAndCold = Infer({ method: 'enumerate' }, function() {
  var r = model();
  condition(r.death === true && r.cold === true);
  return r.cancer;
});

var deathAndNoCold = Infer({ method: 'enumerate' }, function() {
  var r = model();
  condition(r.death === true && r.cold === false);
  return r.cancer;
});

var ANSWER = {
  prior:         prior,
  death:         death,
  deathAndCold:  deathAndCold,
  deathAndNoCold: deathAndNoCold
};
var makeModel = function(condFn) {
  return Infer({method: 'enumerate'}, function() {
    var cancer = flip(0.00001);
    var cold   = flip(0.2);

    // Death can arise from any of three independent sources
    var diedOfCancer = cancer ? flip(0.9)         : false;
    var diedOfCold   = cold   ? flip(0.00006)     : false;
    var diedOfOther  = flip(0.000000001);

    var death = diedOfCancer || diedOfCold || diedOfOther;

    condition(condFn(death, cold));

    return cancer;
  });
};

var prior          = makeModel(function(death, cold) { return true; });
var death          = makeModel(function(death, cold) { return death; });
var deathAndCold   = makeModel(function(death, cold) { return death && cold; });
var deathAndNoCold = makeModel(function(death, cold) { return death && !cold; });

var ANSWER = {
  prior:          prior,
  death:          death,
  deathAndCold:   deathAndCold,
  deathAndNoCold: deathAndNoCold
};
groundtruth output
{
  "prior": {
    "__kind": "distribution",
    "probs": [
      0.99999,
      0.000009999999999999999
    ],
    "support": [
      false,
      true
    ]
  },
  "death": {
    "__kind": "distribution",
    "probs": [
      0.5714462028565476,
      0.4285537971434522
    ],
    "support": [
      false,
      true
    ]
  },
  "deathAndCold": {
    "__kind": "distribution",
    "probs": [
      0.8695652172463794,
      0.1304347827536201
    ],
    "support": [
      false,
      true
    ]
  },
  "deathAndNoCold": {
    "__kind": "distribution",
    "probs": [
      0.00011109765592710606,
      0.9998889023440727
    ],
    "support": [
      false,
      true
    ]
  }
}
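The marginals above can be reproduced outside WebPPL by exact enumeration of the eight worlds. Below is a minimal plain-JavaScript sketch (not part of the harness; `posteriorCancer` and the constant names are invented here for illustration):

```javascript
// Exact enumeration of the cancer/cold death model.
// All rates mirror the groundtruth WebPPL program above.
const P_CANCER = 0.00001, P_COLD = 0.2, P_OTHER = 0.000000001;

// P(death | cancer, cold): death occurs if any independent cause fires,
// i.e. a noisy-or: 1 - product of "cause did not kill" probabilities.
const pDeath = (cancer, cold) =>
  1 - (1 - (cancer ? 0.9 : 0)) * (1 - (cold ? 0.00006 : 0)) * (1 - P_OTHER);

// Posterior P(cancer = true | cond) by summing over all worlds.
function posteriorCancer(cond) {
  let num = 0, den = 0;
  for (const cancer of [false, true]) {
    for (const cold of [false, true]) {
      const prior =
        (cancer ? P_CANCER : 1 - P_CANCER) * (cold ? P_COLD : 1 - P_COLD);
      for (const death of [false, true]) {
        const p = prior * (death ? pDeath(cancer, cold) : 1 - pDeath(cancer, cold));
        if (cond(death, cold)) {
          den += p;
          if (cancer) num += p;
        }
      }
    }
  }
  return num / den;
}

console.log(posteriorCancer((death, cold) => death));           // ≈ 0.42855
console.log(posteriorCancer((death, cold) => death && cold));   // ≈ 0.13043
console.log(posteriorCancer((death, cold) => death && !cold));  // ≈ 0.99989
```

Note that several generated runs above model death with an additive hazard (0.9 + 0.00006 + 1e-9, clamped to 1) instead of this OR of independent flips; for rates this small the two likelihoods agree to within roughly 1e-5, which is consistent with those runs landing in the TV<.05 bucket rather than TV=0.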
probmods2-conditional-dependence/ex1.b record(prior, death, deathAndCancer, deathAndNoCancer) 8 runs TV=0.00 TV<.05
prompt
system base instructions used by every run
You are a WebPPL code generator. Given an exercise, produce a single WebPPL program that binds the answer to a top-level variable named `ANSWER`.

Answer format (strict): emit exactly one fenced code block.

```js
<your WebPPL program ending with: var ANSWER = <expression>;>
```

The last statement of your program MUST be `var ANSWER = <expression>;` where `<expression>` is the answer the prompt asks for - typically an `Infer({...}, model)` for a distribution, a numeric/array value, or an object literal `{key: value, ...}` for a record of multiple sub-answers.

Do not write prose, explanations, or multiple code blocks. Do not use `return` at the top level - WebPPL doesn't allow it (return is only for function bodies).
system WebPPL primer appended to system on `*-primer-*` runs only
WebPPL is a probabilistic programming language with JavaScript-like syntax. A few quirks to remember:

Control flow: WebPPL is a functional subset of JavaScript. There are no `for` or `while` loops. Iterate with `map(fn, list)`, `mapData({data: list}, fn)`, `repeat(n, fn)`, or recursion. `_.range(start, end)` produces integer ranges.

Functions: define with `var f = function(args) { ...; return value; }`. The last expression of a function is auto-returned only if it is a bare expression statement; otherwise use explicit `return`. Memoize with `mem(fn)` so repeated calls with the same arguments return the same value within an inference run.

Random primitives - lowercase samples directly, uppercase constructs a Distribution object. They are NOT interchangeable:

  flip(p)            -> boolean             (sample, no `sample()` needed)
  uniform(a, b)      -> number              (sample)
  gaussian(mu, sg)   -> number              (sample)
  beta(a, b)         -> number              (sample)
  dirichlet(alpha)   -> tensor              (sample; alpha must be a Vector)
  randomInteger(n)   -> int 0..n-1          (sample)
  uniformDrift({a, b, width})        -> sample  (drift kernel; do NOT wrap in sample())
  dirichletDrift({alpha, conc.})     -> sample  (drift kernel; do NOT wrap in sample())

Distribution *constructors* (used with `sample(D)`, `observe(D, val)`, or as return value of inference):
  Bernoulli({p})              Beta({a, b})              Gaussian({mu, sigma})
  Uniform({a, b})             Categorical({vs, ps})     Binomial({p, n})
  Dirichlet({alpha})          Multinomial({ps, n})      Poisson({mu})

Common gotchas:
- `Dirichlet({alpha: ...})` requires a Vector, not a JS array. Use `ones([n, 1])` or `Vector([1, 1, ...])`.
- WebPPL only supports `var`, not `let` / `const`.
- WebPPL is *single-assignment*: `var X = ...;` only. You can't declare `var X;` and assign later, and you can't reassign `X = ...` after the declaration. Use ternaries or recursion to express conditional bindings.
- Array methods like `.fill`, `.indexOf`, `.map`, `.forEach`, `.concat` may fail. Prefer `repeat(n, fn)`, `_.indexOf(arr, x)`, `map(fn, arr)`, `mapData({data: arr}, fn)`, and `arr1.concat(arr2)` only at the top of a returned expression.
- Always end top-level statements with `;`. WebPPL inherits JS ASI rules, so `var x = f()
[a, b]` parses as `var x = f()[a, b]` (subscript), not two statements.

Inference: `Infer({method: ..., samples: N, ...}, modelFn)` runs `modelFn` under the chosen method and returns a Distribution over its return value. Methods: `'enumerate'`, `'rejection'`, `'forward'`, `'MCMC'`, `'SMC'`. For MCMC, optional kernels: `kernel: {HMC: {steps, stepSize}}`; the drift kernels above can replace `uniform`/`dirichlet` calls inside the model.

Conditioning: `condition(bool)` zeros out worlds where bool is false. `observe(dist, value)` factors in `dist.score(value)`. `factor(score)` adds `score` to the log-probability directly.

Tensors / utilities: `Vector([a, b, ...])`, `T.get(vec, i)`, `ones([rows, cols])`, `_.range`, `_.flatten`, `_.zipObject`, `_.fromPairs`, `_.includes`, `_.parseInt`, `_.uniq`, `_.merge`. Arrays use `Array.isArray`, `Object.keys` works on plain objects.

Display: there's no `viz`, `print`, `display`, etc. that affects the answer - those are browser-only. The answer is the value of your program's last expression.
user message
Imagine that you are an epidemiologist determining people's cause of death. There are two main diseases: cancer (rare, P = 0.00001; often fatal, P(death|cancer) = 0.9) and the common cold (P = 0.2; rarely fatal, P(death|cold) = 0.00006). Very rarely, people die of other causes (P = 0.000000001). Return an object literal with four marginals over cold:
- prior: unconditional
- death: given death
- deathAndCancer: given death && cancer
- deathAndNoCancer: given death && !cancer
groundtruth code
var ANSWER = (({
  prior: Infer({method: 'enumerate'}, function() {
    var cancer = flip(0.00001);
    var cold = flip(0.2);
    var death_by_cancer = cancer && flip(0.9);
    var death_by_cold = cold && flip(0.00006);
    var other_death = flip(0.000000001);
    var death = death_by_cancer || death_by_cold || other_death;
    return cold;
  }),
  death: Infer({method: 'enumerate'}, function() {
    var cancer = flip(0.00001);
    var cold = flip(0.2);
    var death_by_cancer = cancer && flip(0.9);
    var death_by_cold = cold && flip(0.00006);
    var other_death = flip(0.000000001);
    var death = death_by_cancer || death_by_cold || other_death;
    condition(death);
    return cold;
  }),
  deathAndCancer: Infer({method: 'enumerate'}, function() {
    var cancer = flip(0.00001);
    var cold = flip(0.2);
    var death_by_cancer = cancer && flip(0.9);
    var death_by_cold = cold && flip(0.00006);
    var other_death = flip(0.000000001);
    var death = death_by_cancer || death_by_cold || other_death;
    condition(death && cancer);
    return cold;
  }),
  deathAndNoCancer: Infer({method: 'enumerate'}, function() {
    var cancer = flip(0.00001);
    var cold = flip(0.2);
    var death_by_cancer = cancer && flip(0.9);
    var death_by_cold = cold && flip(0.00006);
    var other_death = flip(0.000000001);
    var death = death_by_cancer || death_by_cold || other_death;
    condition(death && !cancer);
    return cold;
  })
}));
generated code
sonnet-46-primer-v3 TV<.05 · haiku-45-noprimer-v3 TV<.05 · haiku-45-primer-v3 TV<.05 · haiku-45-think-noprimer-v3 TV<.05 · haiku-45-think-primer-v3 TV<1 · sonnet-46-noprimer-v3 fail · sonnet-46-think-noprimer-v3 TV<.05 · sonnet-46-think-primer-v3 TV<.05
var model = function() {
  var cancer = flip(0.00001);
  var cold = flip(0.2);
  
  var pDeath = (cancer ? 0.9 : 0) + (cold ? 0.00006 : 0) + 0.000000001;
  // Cap at 1 to be safe
  var pDeathCapped = Math.min(pDeath, 1);
  var death = flip(pDeathCapped);
  
  return {cancer: cancer, cold: cold, death: death};
};

var prior = Infer({method: 'enumerate'}, function() {
  var r = model();
  return r.cold;
});

var death = Infer({method: 'enumerate'}, function() {
  var r = model();
  condition(r.death);
  return r.cold;
});

var deathAndCancer = Infer({method: 'enumerate'}, function() {
  var r = model();
  condition(r.death && r.cancer);
  return r.cold;
});

var deathAndNoCancer = Infer({method: 'enumerate'}, function() {
  var r = model();
  condition(r.death && !r.cancer);
  return r.cold;
});

var ANSWER = {
  prior: prior,
  death: death,
  deathAndCancer: deathAndCancer,
  deathAndNoCancer: deathAndNoCancer
};
var model = function() {
  var cancer = flip(0.00001);
  var cold = flip(0.2);
  var other = flip(0.000000001);
  
  var deathFromCancer = cancer && flip(0.9);
  var deathFromCold = cold && flip(0.00006);
  var deathFromOther = other;
  
  var death = deathFromCancer || deathFromCold || deathFromOther;
  
  return {
    cancer: cancer,
    cold: cold,
    death: death
  };
};

var prior = Infer({method: "enumerate"}, function() {
  var s = model();
  return s.cold;
});

var givenDeath = Infer({method: "enumerate"}, function() {
  var s = model();
  condition(s.death);
  return s.cold;
});

var givenDeathAndCancer = Infer({method: "enumerate"}, function() {
  var s = model();
  condition(s.death && s.cancer);
  return s.cold;
});

var givenDeathAndNoCancer = Infer({method: "enumerate"}, function() {
  var s = model();
  condition(s.death && !s.cancer);
  return s.cold;
});

var ANSWER = {
  prior: prior,
  death: givenDeath,
  deathAndCancer: givenDeathAndCancer,
  deathAndNoCancer: givenDeathAndNoCancer
};
var ANSWER = {
  prior: Infer({method: 'enumerate'}, function() {
    return flip(0.2);
  }),
  
  death: Infer({method: 'enumerate'}, function() {
    var cold = flip(0.2);
    var cancer = flip(0.00001);
    var other = flip(0.000000001);
    
    var deathFromCold = cold && flip(0.00006);
    var deathFromCancer = cancer && flip(0.9);
    var deathFromOther = other;
    
    var died = deathFromCold || deathFromCancer || deathFromOther;
    condition(died);
    
    return cold;
  }),
  
  deathAndCancer: Infer({method: 'enumerate'}, function() {
    var cold = flip(0.2);
    var cancer = flip(0.00001);
    var other = flip(0.000000001);
    
    var deathFromCold = cold && flip(0.00006);
    var deathFromCancer = cancer && flip(0.9);
    var deathFromOther = other;
    
    var died = deathFromCold || deathFromCancer || deathFromOther;
    condition(died && cancer);
    
    return cold;
  }),
  
  deathAndNoCancer: Infer({method: 'enumerate'}, function() {
    var cold = flip(0.2);
    var cancer = flip(0.00001);
    var other = flip(0.000000001);
    
    var deathFromCold = cold && flip(0.00006);
    var deathFromCancer = cancer && flip(0.9);
    var deathFromOther = other;
    
    var died = deathFromCold || deathFromCancer || deathFromOther;
    condition(died && !cancer);
    
    return cold;
  })
};
var model = function() {
  // Prior probabilities of diseases
  var hasCancer = flip(0.00001);
  var hasCold = flip(0.2);
  
  // Probability of death from other causes
  var dieFromOtherCauses = flip(0.000000001);
  
  // Determine if person dies
  var dieFromCancer = hasCancer && flip(0.9);
  var dieFromCold = hasCold && flip(0.00006);
  var dies = dieFromCancer || dieFromCold || dieFromOtherCauses;
  
  return {
    cold: hasCold,
    cancer: hasCancer,
    dies: dies
  };
};

// Prior marginal on cold
var prior = Infer({method: "enumerate"}, function() {
  var state = model();
  return state.cold;
});

// Posterior marginal on cold given death
var death = Infer({method: "enumerate"}, function() {
  var state = model();
  condition(state.dies);
  return state.cold;
});

// Posterior marginal on cold given death and cancer
var deathAndCancer = Infer({method: "enumerate"}, function() {
  var state = model();
  condition(state.dies && state.cancer);
  return state.cold;
});

// Posterior marginal on cold given death and no cancer
var deathAndNoCancer = Infer({method: "enumerate"}, function() {
  var state = model();
  condition(state.dies && !state.cancer);
  return state.cold;
});

var ANSWER = {
  prior: prior,
  death: death,
  deathAndCancer: deathAndCancer,
  deathAndNoCancer: deathAndNoCancer
};
var model = function() {
  var cancer = flip(0.00001);
  var cold = flip(0.2);
  var otherDeath = flip(0.000000001);
  
  // Determine if person dies
  var deathFromCancer = cancer && flip(0.9);
  var deathFromCold = cold && flip(0.00006);
  var death = deathFromCancer || deathFromCold || otherDeath;
  
  return {
    cold: cold,
    cancer: cancer,
    death: death
  };
};

// Prior: unconditional distribution over cold
var prior = Infer({method: 'forward', samples: 100000}, function() {
  return model().cold;
});

// Given death
var deathMarginal = Infer({method: 'forward', samples: 100000}, function() {
  var s = model();
  condition(s.death);
  return s.cold;
});

// Given death && cancer
var deathAndCancer = Infer({method: 'forward', samples: 100000}, function() {
  var s = model();
  condition(s.death && s.cancer);
  return s.cold;
});

// Given death && !cancer
var deathAndNoCancer = Infer({method: 'forward', samples: 100000}, function() {
  var s = model();
  condition(s.death && !s.cancer);
  return s.cold;
});

var ANSWER = {
  prior: prior,
  death: deathMarginal,
  deathAndCancer: deathAndCancer,
  deathAndNoCancer: deathAndNoCancer
};
Timeout after 60s
var model = function() {
  var cancer = flip(0.00001);
  var cold = flip(0.2);
  
  var pDeath = (cancer ? 0.9 : 0) + 
               (cold ? 0.00006 : 0) + 
               0.000000001;
  var clampedPDeath = Math.min(pDeath, 1);
  
  var death = flip(clampedPDeath);
  
  return {cancer: cancer, cold: cold, death: death};
};

var prior = Infer({method: 'forward', samples: 100000}, function() {
  var result = model();
  return result.cold;
});

var death = Infer({method: 'rejection', samples: 10000}, function() {
  var result = model();
  condition(result.death);
  return result.cold;
});

var deathAndCancer = Infer({method: 'rejection', samples: 10000}, function() {
  var result = model();
  condition(result.death && result.cancer);
  return result.cold;
});

var deathAndNoCancer = Infer({method: 'rejection', samples: 10000}, function() {
  var result = model();
  condition(result.death && !result.cancer);
  return result.cold;
});

var ANSWER = {
  prior: prior,
  death: death,
  deathAndCancer: deathAndCancer,
  deathAndNoCancer: deathAndNoCancer
};
var priorDist = Infer({method: 'enumerate'}, function() {
  return flip(0.2);
});

var deathDist = Infer({method: 'enumerate'}, function() {
  var cancer = flip(0.00001);
  var cold   = flip(0.2);

  var deathFromCancer = cancer ? flip(0.9)         : false;
  var deathFromCold   = cold   ? flip(0.00006)     : false;
  var deathFromOther  =          flip(0.000000001);

  var death = deathFromCancer || deathFromCold || deathFromOther;

  condition(death);
  return cold;
});

var deathAndCancerDist = Infer({method: 'enumerate'}, function() {
  var cancer = flip(0.00001);
  var cold   = flip(0.2);

  var deathFromCancer = cancer ? flip(0.9)         : false;
  var deathFromCold   = cold   ? flip(0.00006)     : false;
  var deathFromOther  =          flip(0.000000001);

  var death = deathFromCancer || deathFromCold || deathFromOther;

  condition(death && cancer);
  return cold;
});

var deathAndNoCancerDist = Infer({method: 'enumerate'}, function() {
  var cancer = flip(0.00001);
  var cold   = flip(0.2);

  var deathFromCancer = cancer ? flip(0.9)         : false;
  var deathFromCold   = cold   ? flip(0.00006)     : false;
  var deathFromOther  =          flip(0.000000001);

  var death = deathFromCancer || deathFromCold || deathFromOther;

  condition(death && !cancer);
  return cold;
});

var ANSWER = {
  prior:            priorDist,
  death:            deathDist,
  deathAndCancer:   deathAndCancerDist,
  deathAndNoCancer: deathAndNoCancerDist
};
var baseModel = function(condFn) {
  return Infer({method: 'enumerate'}, function() {
    var cancer        = flip(0.00001);
    var cold          = flip(0.2);
    var deathFromCancer = cancer && flip(0.9);
    var deathFromCold   = cold   && flip(0.00006);
    var deathFromOther  = flip(0.000000001);
    var death = deathFromCancer || deathFromCold || deathFromOther;
    condFn(death, cancer, cold);
    return cold;
  });
};

var prior = Infer({method: 'enumerate'}, function() {
  return flip(0.2);
});

var death = baseModel(function(d, c, cold) {
  condition(d);
});

var deathAndCancer = baseModel(function(d, c, cold) {
  condition(d && c);
});

var deathAndNoCancer = baseModel(function(d, c, cold) {
  condition(d && !c);
});

var ANSWER = {
  prior:            prior,
  death:            death,
  deathAndCancer:   deathAndCancer,
  deathAndNoCancer: deathAndNoCancer
};
groundtruth output
{
  "prior": {
    "__kind": "distribution",
    "probs": [
      0.8,
      0.19999999999999993
    ],
    "support": [
      false,
      true
    ]
  },
  "death": {
    "__kind": "distribution",
    "probs": [
      0.34288067383044696,
      0.6571193261695538
    ],
    "support": [
      false,
      true
    ]
  },
  "deathAndCancer": {
    "__kind": "distribution",
    "probs": [
      0.799998933334757,
      0.20000106666524353
    ],
    "support": [
      false,
      true
    ]
  },
  "deathAndNoCancer": {
    "__kind": "distribution",
    "probs": [
      0.00006666111164069112,
      0.9999333388883596
    ],
    "support": [
      false,
      true
    ]
  }
}
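These marginals over cold can likewise be checked by exact enumeration in plain JavaScript (an illustrative sketch, not part of the harness; `posteriorCold` and the constants are invented names):

```javascript
// Same noisy-or death model as the previous exercise,
// but marginalizing over `cold` instead of `cancer`.
const P_CANCER = 0.00001, P_COLD = 0.2, P_OTHER = 0.000000001;

const pDeath = (cancer, cold) =>
  1 - (1 - (cancer ? 0.9 : 0)) * (1 - (cold ? 0.00006 : 0)) * (1 - P_OTHER);

// Posterior P(cold = true | cond) by summing over all worlds.
function posteriorCold(cond) {
  let num = 0, den = 0;
  for (const cancer of [false, true]) {
    for (const cold of [false, true]) {
      const prior =
        (cancer ? P_CANCER : 1 - P_CANCER) * (cold ? P_COLD : 1 - P_COLD);
      for (const death of [false, true]) {
        const p = prior * (death ? pDeath(cancer, cold) : 1 - pDeath(cancer, cold));
        if (cond(death, cancer)) {
          den += p;
          if (cold) num += p;
        }
      }
    }
  }
  return num / den;
}

console.log(posteriorCold((death, cancer) => death));            // ≈ 0.65712
console.log(posteriorCold((death, cancer) => death && cancer));  // ≈ 0.20000
console.log(posteriorCold((death, cancer) => death && !cancer)); // ≈ 0.99993
```

The deathAndCancer marginal returning almost exactly the 0.2 prior is the explaining-away pattern this exercise targets: once cancer accounts for the death, observing death carries almost no further evidence about cold.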
conditioning 13 atoms 12✓ · 0✗
probmods2-conditioning/ex1.a value 8 runs val+
prompt
system base instructions used by every run
system WebPPL primer appended to system on `*-primer-*` runs only
user message
I flip a fair coin. Compute the probability that it lands heads. End your program with the numeric answer.
groundtruth code
var model = function() { return flip() ? "H" : "T" };
var ANSWER = (Math.exp(Infer({method:'enumerate'}, model).score('H')));
generated code
sonnet-46-primer-v3 val+ · haiku-45-noprimer-v3 val+ · haiku-45-primer-v3 val+ · haiku-45-think-noprimer-v3 val+ · haiku-45-think-primer-v3 val+ · sonnet-46-noprimer-v3 val+ · sonnet-46-think-noprimer-v3 val+ · sonnet-46-think-primer-v3 val+
var ANSWER = 0.5;
var model = function() {
  var coin = flip(0.5);
  return coin;
};

var dist = Infer({method: 'enumerate'}, model);
var ANSWER = Math.exp(dist.score(true));
var ANSWER = 0.5;
var model = function() {
  return flip(0.5);
};

var dist = Infer({method: "enumerate"}, model);
var ANSWER = Math.exp(dist.score(true));
var model = function() {
  return flip(0.5);
};

var dist = Infer({method: 'enumerate'}, model);
var ANSWER = Math.exp(dist.score(true));
var model = function() {
  var coin = flip(0.5);
  return coin;
};

var dist = Infer({method: 'enumerate'}, model);
var ANSWER = Math.exp(dist.score(true));
var model = function() {
  var coin = flip(0.5);
  return coin;
};

var dist = Infer({ method: 'enumerate' }, model);
var ANSWER = Math.exp(dist.score(true));
var ANSWER = 0.5;
groundtruth output
0.5
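The groundtruth recovers this value from a log-probability via `Math.exp(dist.score('H'))`, since a WebPPL Distribution's `score` returns log P(value). The same roundtrip in plain JavaScript:

```javascript
// dist.score(v) in WebPPL returns log P(v); Math.exp undoes it.
const p = 0.5;               // P(heads) for a fair coin
const score = Math.log(p);   // what dist.score('H') would return
console.log(Math.exp(score)); // prints 0.5 (up to floating-point rounding)
```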
probmods2-conditioning/ex1.b distribution 8 runs TV=0.00 TV=0
prompt
system base instructions used by every run
system WebPPL primer appended to system on `*-primer-*` runs only
WebPPL is a probabilistic programming language with JavaScript-like syntax. A few quirks to remember:

Control flow: WebPPL is a functional subset of JavaScript. There are no `for` or `while` loops. Iterate with `map(fn, list)`, `mapData({data: list}, fn)`, `repeat(n, fn)`, or recursion. `_.range(start, end)` produces integer ranges.

Functions: define with `var f = function(args) { ...; return value; }`. The last expression of a function is auto-returned only if it is a bare expression statement; otherwise use explicit `return`. Memoize with `mem(fn)` so repeated calls with the same arguments return the same value within an inference run.

Random primitives - lowercase samples directly, uppercase constructs a Distribution object. They are NOT interchangeable:

  flip(p)            -> boolean             (sample, no `sample()` needed)
  uniform(a, b)      -> number              (sample)
  gaussian(mu, sg)   -> number              (sample)
  beta(a, b)         -> number              (sample)
  dirichlet(alpha)   -> tensor              (sample; alpha must be a Vector)
  randomInteger(n)   -> int 0..n-1          (sample)
  uniformDrift({a, b, width})        -> sample  (drift kernel; do NOT wrap in sample())
  dirichletDrift({alpha, conc.})     -> sample  (drift kernel; do NOT wrap in sample())

Distribution *constructors* (used with `sample(D)`, `observe(D, val)`, or as return value of inference):
  Bernoulli({p})              Beta({a, b})              Gaussian({mu, sigma})
  Uniform({a, b})             Categorical({vs, ps})     Binomial({p, n})
  Dirichlet({alpha})          Multinomial({ps, n})      Poisson({mu})

Common gotchas:
- `Dirichlet({alpha: ...})` requires a Vector, not a JS array. Use `ones([n, 1])` or `Vector([1, 1, ...])`.
- WebPPL only supports `var`, not `let` / `const`.
- WebPPL is *single-assignment*: `var X = ...;` only. You can't declare `var X;` and assign later, and you can't reassign `X = ...` after the declaration. Use ternaries or recursion to express conditional bindings.
- Array methods like `.fill`, `.indexOf`, `.map`, `.forEach`, `.concat` may fail. Prefer `repeat(n, fn)`, `_.indexOf(arr, x)`, `map(fn, arr)`, `mapData({data: arr}, fn)`, and `arr1.concat(arr2)` only at the top of a returned expression.
- Always end top-level statements with `;`. WebPPL inherits JS ASI rules, so `var x = f()
[a, b]` parses as `var x = f()[a, b]` (subscript), not two statements.

Inference: `Infer({method: ..., samples: N, ...}, modelFn)` runs `modelFn` under the chosen method and returns a Distribution over its return value. Methods: `'enumerate'`, `'rejection'`, `'forward'`, `'MCMC'`, `'SMC'`. For MCMC, optional kernels: `kernel: {HMC: {steps, stepSize}}`; the drift kernels above can replace `uniform`/`dirichlet` calls inside the model.

Conditioning: `condition(bool)` zeros out worlds where bool is false. `observe(dist, value)` factors in `dist.score(value)`. `factor(score)` adds `score` to the log-probability directly.

Tensors / utilities: `Vector([a, b, ...])`, `T.get(vec, i)`, `ones([rows, cols])`, `_.range`, `_.flatten`, `_.zipObject`, `_.fromPairs`, `_.includes`, `_.parseInt`, `_.uniq`, `_.merge`. Arrays use `Array.isArray`, `Object.keys` works on plain objects.

Display: there's no `viz`, `print`, `display`, etc. that affects the answer - those are browser-only. The answer is the value of your program's last expression.
user message
I have a biased coin with P(heads) = 0.9, and a fair coin. Someone hands me one of the two coins, chosen uniformly at random, without telling me which. I flip it three times. Given that the first two flips landed heads, what is the posterior distribution over the third flip (as a boolean: true=heads, false=tails)? End your program with the Infer(...) call returning that distribution.
groundtruth code
var flipCoin = function(coinType) {
  return coinType == "fair" ? flip() : flip(0.9);
}
var model = function() {
  var coinType = flip() ? "fair" : "biased";
  var flip1 = flipCoin(coinType);
  var flip2 = flipCoin(coinType);
  var flip3 = flipCoin(coinType);
  condition(flip1 && flip2);
  return flip3;
};
var ANSWER = (Infer({method:'enumerate'}, model));
generated code
sonnet-46-primer-v3 TV=0 haiku-45-noprimer-v3 TV=0 haiku-45-primer-v3 TV=0 haiku-45-think-noprimer-v3 TV=0 haiku-45-think-primer-v3 TV=0 sonnet-46-noprimer-v3 TV=0 sonnet-46-think-noprimer-v3 TV=0 sonnet-46-think-primer-v3 TV=0
var model = function() {
  // Choose coin uniformly at random
  var biased = flip(0.5);
  var p = biased ? 0.9 : 0.5;
  
  // Condition on first two flips being heads
  observe(Bernoulli({p: p}), true);
  observe(Bernoulli({p: p}), true);
  
  // Return the third flip
  return flip(p);
};

var ANSWER = Infer({method: 'enumerate'}, model);
var model = function() {
  // Prior: choose a coin uniformly at random
  // true = biased coin (P(heads) = 0.9), false = fair coin (P(heads) = 0.5)
  var biasedCoin = flip(0.5);
  
  // Probability of heads depends on which coin we have
  var pHeads = biasedCoin ? 0.9 : 0.5;
  
  // Flip the coin three times
  var flip1 = flip(pHeads);
  var flip2 = flip(pHeads);
  var flip3 = flip(pHeads);
  
  // Condition on observing heads on first two flips
  condition(flip1 === true);
  condition(flip2 === true);
  
  // Return the third flip
  return flip3;
};

var ANSWER = Infer({method: "enumerate"}, model);
var model = function() {
  // Prior: choose which coin (true = biased, false = fair)
  var isBiased = flip(0.5);
  
  // Probability of heads depends on which coin
  var pHeads = isBiased ? 0.9 : 0.5;
  
  // Observe first two flips are heads
  observe(Bernoulli({p: pHeads}), true);
  observe(Bernoulli({p: pHeads}), true);
  
  // Return the third flip
  var thirdFlip = flip(pHeads);
  return thirdFlip;
};

var ANSWER = Infer({method: 'enumerate'}, model);
var model = function() {
  // Sample which coin we have: true = biased (0.9), false = fair (0.5)
  var isBiased = flip(0.5);
  var pHeads = isBiased ? 0.9 : 0.5;
  
  // Flip the coin three times
  var flip1 = flip(pHeads);
  var flip2 = flip(pHeads);
  var flip3 = flip(pHeads);
  
  // Condition on observing heads for the first two flips
  condition(flip1 && flip2);
  
  // Return the third flip
  return flip3;
};

var ANSWER = Infer({method: 'enumerate'}, model);
var model = function() {
  // Choose which coin we have (uniform prior)
  var coinType = flip(0.5) ? 'biased' : 'fair';
  
  // Set probability of heads based on coin type
  var pHeads = (coinType === 'biased') ? 0.9 : 0.5;
  
  // Flip the coin three times
  var flip1 = flip(pHeads);
  var flip2 = flip(pHeads);
  var flip3 = flip(pHeads);
  
  // Condition on observing heads on first two flips
  condition(flip1 === true && flip2 === true);
  
  // Return the third flip as our query
  return flip3;
};

var ANSWER = Infer({method: 'enumerate'}, model);
var model = function() {
  // Choose coin uniformly at random
  var biased = flip(0.5);
  var p = biased ? 0.9 : 0.5;
  
  // Condition on first two flips being heads
  observe(Bernoulli({p: p}), true);
  observe(Bernoulli({p: p}), true);
  
  // Return the third flip
  return flip(p);
};

var ANSWER = Infer({method: 'enumerate'}, model);
var model = function() {
  // Choose a coin uniformly at random
  var isBiased = flip(0.5);
  var p = isBiased ? 0.9 : 0.5;

  // Flip the coin three times
  var flip1 = flip(p);
  var flip2 = flip(p);
  var flip3 = flip(p);

  // Condition on first two flips being heads
  condition(flip1 === true && flip2 === true);

  // Return the third flip
  return flip3;
};

var ANSWER = Infer({method: 'enumerate'}, model);
var model = function() {
  // Choose coin uniformly at random
  var isBiased = flip(0.5);
  var p = isBiased ? 0.9 : 0.5;

  // Flip three times
  var flip1 = flip(p);
  var flip2 = flip(p);
  var flip3 = flip(p);

  // Condition on first two flips being heads
  condition(flip1 === true && flip2 === true);

  // Return the third flip
  return flip3;
};

var ANSWER = Infer({method: 'enumerate'}, model);
groundtruth output
true  0.8057
false 0.1943
raw JSON
{
  "__kind": "distribution",
  "probs": [
    0.19433962264150942,
    0.8056603773584906
  ],
  "support": [
    false,
    true
  ]
}
feedback
not signed
probmods2-conditioning/ex1.c distribution 8 runs TV=0.00 TV=0
prompt
system base instructions used by every run
(identical to the base instructions shown under ex1.b above)
system WebPPL primer appended to system on `*-primer-*` runs only
(identical to the primer shown under ex1.b above)
user message
I have a biased coin with P(heads) = 0.9, and a fair coin. Someone hands me one of the two coins, chosen uniformly at random, without telling me which. I flip it three times. Given that all three flips landed heads, what is the posterior distribution over the coin type (return the string 'fair' or 'biased')? End your program with the Infer(...) call.
groundtruth code
var flipCoin = function(coinType) {
  return coinType == "fair" ? flip() : flip(0.9);
}
var model = function() {
  var coinType = flip() ? "fair" : "biased";
  var flip1 = flipCoin(coinType);
  var flip2 = flipCoin(coinType);
  var flip3 = flipCoin(coinType);
  condition(flip1 && flip2 && flip3);
  return coinType;
};
var ANSWER = (Infer({method:'enumerate'}, model));
generated code
sonnet-46-primer-v3 TV=0 haiku-45-noprimer-v3 TV=0 haiku-45-primer-v3 TV=0 haiku-45-think-noprimer-v3 fail haiku-45-think-primer-v3 TV=0 sonnet-46-noprimer-v3 TV=0 sonnet-46-think-noprimer-v3 TV=0 sonnet-46-think-primer-v3 TV=0
var model = function() {
  var coin = flip(0.5) ? 'biased' : 'fair';
  var p = coin === 'biased' ? 0.9 : 0.5;
  observe(Binomial({p: p, n: 3}), 3);
  return coin;
};

var ANSWER = Infer({method: 'enumerate'}, model);
var model = function() {
  // Prior: equally likely to get either coin
  var coinType = uniformDraw(['fair', 'biased']);
  
  // Likelihood of heads depends on coin type
  var pHeads = coinType === 'fair' ? 0.5 : 0.9;
  
  // Observe three heads
  observe(Bernoulli({p: pHeads}), true);
  observe(Bernoulli({p: pHeads}), true);
  observe(Bernoulli({p: pHeads}), true);
  
  return coinType;
};

var ANSWER = Infer({method: 'enumerate'}, model);
var model = function() {
  // Prior: uniformly choose between fair and biased coin
  var coinType = flip(0.5) ? 'biased' : 'fair';
  
  // Probability of heads depends on coin type
  var pHeads = coinType === 'biased' ? 0.9 : 0.5;
  
  // Flip the coin three times
  var flip1 = flip(pHeads);
  var flip2 = flip(pHeads);
  var flip3 = flip(pHeads);
  
  // Condition on observing three heads
  condition(flip1 && flip2 && flip3);
  
  return coinType;
};

var ANSWER = Infer({method: 'enumerate'}, model);
Error: sample() expected a distribution but received "false".
var model = function() {
  // Prior: uniformly choose which coin we have
  var coinType = uniformDraw(['fair', 'biased']);
  
  // Set the probability of heads based on coin type
  var p = coinType === 'fair' ? 0.5 : 0.9;
  
  // Flip the coin three times
  var flip1 = flip(p);
  var flip2 = flip(p);
  var flip3 = flip(p);
  
  // Observe that all three flips are heads
  observe(flip1 && flip2 && flip3);
  
  // Return the coin type
  return coinType;
};

var ANSWER = Infer({method: 'enumerate'}, model);
var model = function() {
  // Prior: coin type chosen uniformly at random
  var coinType = flip(0.5) ? 'biased' : 'fair';
  
  // Define probability of heads for each coin type
  var pHeads = coinType === 'biased' ? 0.9 : 0.5;
  
  // Observe three heads
  observe(Bernoulli({p: pHeads}), true);
  observe(Bernoulli({p: pHeads}), true);
  observe(Bernoulli({p: pHeads}), true);
  
  return coinType;
};

var ANSWER = Infer({method: 'enumerate'}, model);
var ANSWER = Infer({method: 'enumerate'}, function() {
  var coin = uniformDraw(['fair', 'biased']);
  var pHeads = coin === 'fair' ? 0.5 : 0.9;
  
  // Observe three heads
  observe(Binomial({p: pHeads, n: 3}), 3);
  
  return coin;
});
var ANSWER = Infer({method: 'enumerate'}, function() {
  var coinType = flip(0.5) ? 'biased' : 'fair';
  var pHeads = coinType === 'biased' ? 0.9 : 0.5;
  
  // Three flips all landed heads
  observe(Binomial({p: pHeads, n: 3}), 3);
  
  return coinType;
});
var model = function() {
  var coinType = flip(0.5) ? 'biased' : 'fair';
  var p = coinType === 'biased' ? 0.9 : 0.5;
  observe(Binomial({p: p, n: 3}), 3);
  return coinType;
};

var ANSWER = Infer({method: 'enumerate'}, model);
groundtruth output
biased 0.8536
fair   0.1464
raw JSON
{
  "__kind": "distribution",
  "probs": [
    0.8536299765807963,
    0.14637002341920374
  ],
  "support": [
    "biased",
    "fair"
  ]
}
feedback
not signed
probmods2-conditioning/ex1.d distribution 8 runs TV=0.00 TV<.05
prompt
system base instructions used by every run
(identical to the base instructions shown under ex1.b above)
system WebPPL primer appended to system on `*-primer-*` runs only
(identical to the primer shown under ex1.b above)
user message
I have a biased coin with P(heads) = 0.9, and a fair coin. Someone hands me one of the two coins, chosen uniformly at random, without telling me which. I flip it three times. Given that the first two flips were different, what is the posterior distribution over the third flip (as a boolean: true=heads, false=tails)? End your program with the Infer(...) call.
groundtruth code
var flipCoin = function(coinType) {
  return coinType == "fair" ? flip() : flip(0.9);
}
var model = function() {
  var coinType = flip() ? "fair" : "biased";
  var flip1 = flipCoin(coinType);
  var flip2 = flipCoin(coinType);
  var flip3 = flipCoin(coinType);
  condition(flip1 != flip2);
  return flip3;
};
var ANSWER = (Infer({method:'enumerate'}, model));
generated code
sonnet-46-primer-v3 TV<.05 haiku-45-noprimer-v3 TV<.05 haiku-45-primer-v3 TV<.05 haiku-45-think-noprimer-v3 TV<.05 haiku-45-think-primer-v3 TV<.05 sonnet-46-noprimer-v3 TV<.05 sonnet-46-think-noprimer-v3 TV<.05 sonnet-46-think-primer-v3 TV<.05
var model = function() {
  // Choose coin uniformly at random
  var biased = flip(0.5);
  var p = biased ? 0.9 : 0.5;
  
  // First two flips
  var flip1 = flip(p);
  var flip2 = flip(p);
  
  // Condition on first two flips being different
  condition(flip1 !== flip2);
  
  // Third flip
  var flip3 = flip(p);
  
  return flip3;
};

var ANSWER = Infer({method: 'enumerate'}, model);
var model = function() {
  // Prior: choose a coin uniformly at random
  // true = biased coin (P(heads) = 0.9), false = fair coin (P(heads) = 0.5)
  var biased = flip(0.5);
  
  // Probability of heads depends on which coin
  var pHeads = biased ? 0.9 : 0.5;
  
  // Flip the coin three times
  var flip1 = flip(pHeads);
  var flip2 = flip(pHeads);
  var flip3 = flip(pHeads);
  
  // Condition on first two flips being different
  condition(flip1 !== flip2);
  
  // Return the third flip
  return flip3;
};

var ANSWER = Infer({method: "enumerate"}, model);
var model = function() {
  // Prior: choose a coin uniformly at random
  // true = biased coin (P(heads) = 0.9), false = fair coin (P(heads) = 0.5)
  var isBiased = flip(0.5);
  
  // Probability of heads depends on which coin
  var pHeads = isBiased ? 0.9 : 0.5;
  
  // Flip the coin three times
  var flip1 = flip(pHeads);
  var flip2 = flip(pHeads);
  var flip3 = flip(pHeads);
  
  // Condition on the first two flips being different
  condition(flip1 !== flip2);
  
  // Return the third flip
  return flip3;
};

var ANSWER = Infer({method: 'enumerate'}, model);
var model = function() {
  // Choose which coin we have (uniformly at random)
  var isBiased = flip(0.5);
  
  // Set the probability of heads based on coin type
  var pHeads = isBiased ? 0.9 : 0.5;
  
  // Flip the coin three times
  var flip1 = flip(pHeads);
  var flip2 = flip(pHeads);
  var flip3 = flip(pHeads);
  
  // Condition on first two flips being different
  condition(flip1 !== flip2);
  
  return flip3;
};

var ANSWER = Infer({method: 'enumerate'}, model);
var model = function() {
  // Choose which coin we received (true = biased coin, false = fair coin)
  var isBiased = flip(0.5);
  var p = isBiased ? 0.9 : 0.5;
  
  // Flip the coin three times
  var flip1 = flip(p);
  var flip2 = flip(p);
  var flip3 = flip(p);
  
  // Condition on observing that the first two flips were different
  condition(flip1 !== flip2);
  
  // Return the third flip (true = heads, false = tails)
  return flip3;
};

var ANSWER = Infer({method: 'enumerate'}, model);
var model = function() {
  // Choose coin uniformly at random
  var biased = flip(0.5);
  var pHeads = biased ? 0.9 : 0.5;
  
  // First two flips
  var flip1 = flip(pHeads);
  var flip2 = flip(pHeads);
  
  // Condition on first two flips being different
  condition(flip1 !== flip2);
  
  // Third flip
  var flip3 = flip(pHeads);
  
  return flip3;
};

var ANSWER = Infer({method: 'enumerate'}, model);
var model = function() {
  // Choose a coin uniformly at random
  var isBiased = flip(0.5);
  var pHeads = isBiased ? 0.9 : 0.5;

  // First two flips
  var flip1 = flip(pHeads);
  var flip2 = flip(pHeads);

  // Condition: first two flips were different
  condition(flip1 !== flip2);

  // Third flip
  var flip3 = flip(pHeads);

  return flip3;
};

var ANSWER = Infer({method: 'enumerate'}, model);
var model = function() {
  // Choose coin uniformly at random
  var isBiased = flip(0.5);
  var p = isBiased ? 0.9 : 0.5;

  // Flip three times
  var flip1 = flip(p);
  var flip2 = flip(p);
  var flip3 = flip(p);

  // Condition: first two flips were different
  condition(flip1 !== flip2);

  // Return the third flip
  return flip3;
};

var ANSWER = Infer({method: 'enumerate'}, model);
groundtruth output
true  0.6059
false 0.3941
raw JSON
{
  "__kind": "distribution",
  "probs": [
    0.39411764705882346,
    0.6058823529411763
  ],
  "support": [
    false,
    true
  ]
}
feedback
not signed
probmods2-conditioning/ex2.a record(original, intervention, conditioning) 8 runs TV=0.00 TV=0
prompt
system base instructions used by every run
(identical to the base instructions shown under ex1.b above)
system WebPPL primer appended to system on `*-primer-*` runs only
(identical to the primer shown under ex1.b above)
user message
Consider this generative model of a cough symptom:
var lungCancer = flip(0.01);
var cold = flip(0.2);
var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
Show that *intervening* on lungCancer (setting it to true directly) produces the same marginal of cough as *conditioning* on lungCancer. Return an object literal with three keys, each a distribution over cough: - original: the unconditional marginal - intervention: the marginal under lungCancer = true - conditioning: the marginal under condition(lungCancer)
groundtruth code
var ANSWER = (({
  original: Infer({method: "enumerate"}, function() {
    var lungCancer = flip(0.01);
    var cold = flip(0.2);
    var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
    return cough;
  }),
  intervention: Infer({method: "enumerate"}, function() {
    var lungCancer = true;
    var cold = flip(0.2);
    var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
    return cough;
  }),
  conditioning: Infer({method: "enumerate"}, function() {
    var lungCancer = flip(0.01);
    condition(lungCancer);
    var cold = flip(0.2);
    var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
    return cough;
  })
}));
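Closed-form check (editorial, plain Node.js): because `lungCancer` and `cold` are a priori independent, conditioning on `lungCancer` leaves `cold`'s distribution untouched, so the conditioned marginal of `cough` must equal the intervened one. The helper name below is ours.

```javascript
// cough = A || B with A = cold && flip(0.5) (P(A) = 0.1)
// and B = lungCancer && flip(0.3); A and B are independent, so
// P(cough) = 1 - (1 - P(A)) * (1 - P(B)).
const pCough = (pCancer) => 1 - (1 - 0.2 * 0.5) * (1 - pCancer * 0.3);
const original = pCough(0.01);  // ~0.1027
const intervention = pCough(1); // lungCancer forced to true: 0.37
// Conditioning sends P(lungCancer) to 1 without changing cold: same 0.37.
const conditioning = pCough(1);
```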
generated code
sonnet-46-primer-v3 TV=0 haiku-45-noprimer-v3 TV=0 haiku-45-primer-v3 TV=0 haiku-45-think-noprimer-v3 TV=0 haiku-45-think-primer-v3 TV=0 sonnet-46-noprimer-v3 TV=0 sonnet-46-think-noprimer-v3 TV=0 sonnet-46-think-primer-v3 TV=0
var model = function() {
  var lungCancer = flip(0.01);
  var cold = flip(0.2);
  var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
  return cough;
};

var interventionModel = function() {
  var lungCancer = true;
  var cold = flip(0.2);
  var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
  return cough;
};

var conditioningModel = function() {
  var lungCancer = flip(0.01);
  var cold = flip(0.2);
  var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
  condition(lungCancer);
  return cough;
};

var ANSWER = {
  original: Infer({method: 'enumerate'}, model),
  intervention: Infer({method: 'enumerate'}, interventionModel),
  conditioning: Infer({method: 'enumerate'}, conditioningModel)
};
var model_original = function() {
  var lungCancer = flip(0.01);
  var cold = flip(0.2);
  var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
  return cough;
};

var model_intervention = function() {
  var lungCancer = true;
  var cold = flip(0.2);
  var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
  return cough;
};

var model_conditioning = function() {
  var lungCancer = flip(0.01);
  var cold = flip(0.2);
  var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
  condition(lungCancer);
  return cough;
};

var original = Infer({method: "enumerate"}, model_original);
var intervention = Infer({method: "enumerate"}, model_intervention);
var conditioning = Infer({method: "enumerate"}, model_conditioning);

var ANSWER = {
  original: original,
  intervention: intervention,
  conditioning: conditioning
};
var model_original = function() {
  var lungCancer = flip(0.01);
  var cold = flip(0.2);
  var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
  return cough;
};

var model_intervention = function() {
  var lungCancer = true;
  var cold = flip(0.2);
  var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
  return cough;
};

var model_conditioning = function() {
  var lungCancer = flip(0.01);
  condition(lungCancer);
  var cold = flip(0.2);
  var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
  return cough;
};

var original = Infer({method: 'enumerate'}, model_original);
var intervention = Infer({method: 'enumerate'}, model_intervention);
var conditioning = Infer({method: 'enumerate'}, model_conditioning);

var ANSWER = {
  original: original,
  intervention: intervention,
  conditioning: conditioning
};
var original = Infer({
  model: function() {
    var lungCancer = flip(0.01);
    var cold = flip(0.2);
    var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
    return cough;
  }
});

var intervention = Infer({
  model: function() {
    var lungCancer = true;  // intervention: set directly
    var cold = flip(0.2);
    var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
    return cough;
  }
});

var conditioning = Infer({
  model: function() {
    var lungCancer = flip(0.01);
    var cold = flip(0.2);
    var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
    condition(lungCancer);  // conditioning: observe lungCancer is true
    return cough;
  }
});

var ANSWER = {
  original: original,
  intervention: intervention,
  conditioning: conditioning
};
// Original model: unconditional marginal
var originalModel = function() {
  var lungCancer = flip(0.01);
  var cold = flip(0.2);
  var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
  return cough;
};

// Intervention: set lungCancer = true directly
var interventionModel = function() {
  var lungCancer = true;
  var cold = flip(0.2);
  var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
  return cough;
};

// Conditioning: use condition() to enforce lungCancer = true
var conditioningModel = function() {
  var lungCancer = flip(0.01);
  condition(lungCancer);
  var cold = flip(0.2);
  var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
  return cough;
};

var original = Infer({method: 'enumerate'}, originalModel);
var intervention = Infer({method: 'enumerate'}, interventionModel);
var conditioning = Infer({method: 'enumerate'}, conditioningModel);

var ANSWER = {
  original: original,
  intervention: intervention,
  conditioning: conditioning
};
var model_original = function() {
  var lungCancer = flip(0.01);
  var cold = flip(0.2);
  var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
  return cough;
};

var model_intervention = function() {
  var lungCancer = true; // intervene: set directly
  var cold = flip(0.2);
  var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
  return cough;
};

var model_conditioning = function() {
  var lungCancer = flip(0.01);
  var cold = flip(0.2);
  var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
  condition(lungCancer === true);
  return cough;
};

var original = Infer({ method: 'enumerate' }, model_original);
var intervention = Infer({ method: 'enumerate' }, model_intervention);
var conditioning = Infer({ method: 'enumerate' }, model_conditioning);

var ANSWER = {
  original: original,
  intervention: intervention,
  conditioning: conditioning
};
var original = Infer({method: 'enumerate'}, function() {
  var lungCancer = flip(0.01);
  var cold = flip(0.2);
  var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
  return cough;
});

var intervention = Infer({method: 'enumerate'}, function() {
  var lungCancer = true; // do(lungCancer = true)
  var cold = flip(0.2);
  var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
  return cough;
});

var conditioning = Infer({method: 'enumerate'}, function() {
  var lungCancer = flip(0.01);
  var cold = flip(0.2);
  var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
  condition(lungCancer === true); // observe lungCancer = true
  return cough;
});

var ANSWER = {
  original: original,
  intervention: intervention,
  conditioning: conditioning
};
var original = Infer({method: 'enumerate'}, function() {
  var lungCancer = flip(0.01);
  var cold = flip(0.2);
  var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
  return cough;
});

var intervention = Infer({method: 'enumerate'}, function() {
  var lungCancer = true; // do(lungCancer = true): set directly, ignore prior
  var cold = flip(0.2);
  var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
  return cough;
});

var conditioning = Infer({method: 'enumerate'}, function() {
  var lungCancer = flip(0.01);
  var cold = flip(0.2);
  var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
  condition(lungCancer); // observe lungCancer === true
  return cough;
});

var ANSWER = {
  original: original,
  intervention: intervention,
  conditioning: conditioning
};
groundtruth output
{
  "original": {
    "__kind": "distribution",
    "probs": [
      0.8972999999999999,
      0.1027
    ],
    "support": [
      false,
      true
    ]
  },
  "intervention": {
    "__kind": "distribution",
    "probs": [
      0.63,
      0.37
    ],
    "support": [
      false,
      true
    ]
  },
  "conditioning": {
    "__kind": "distribution",
    "probs": [
      0.6299999999999997,
      0.37
    ],
    "support": [
      false,
      true
    ]
  }
}
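Cross-check (editorial note, not one of the eight runs): all three groundtruth marginals follow from exact enumeration over the model's flips. A minimal sketch in plain Python, assuming only the probabilities quoted in the exercise:

```python
from itertools import product

def marginal_cough(intervene=False, condition=False):
    """Exact enumeration of P(cough) in the lungCancer/cold/cough model.

    intervene=True sets lungCancer = true directly (the do-operator);
    condition=True keeps the flip(0.01) prior but discards worlds
    where lungCancer is false, mimicking condition(lungCancer).
    """
    num = den = 0.0
    for lc, cold, f5, f3 in product([False, True], repeat=4):
        p_lc = (1.0 if lc else 0.0) if intervene else (0.01 if lc else 0.99)
        w = (p_lc
             * (0.2 if cold else 0.8)     # cold = flip(0.2)
             * 0.5                        # flip(0.5) in the cold branch
             * (0.3 if f3 else 0.7))      # flip(0.3) in the cancer branch
        if condition and not lc:          # condition(lungCancer)
            continue
        den += w
        num += w if (cold and f5) or (lc and f3) else 0.0
    return num / den

print(round(marginal_cough(), 4))                  # 0.1027 (original)
print(round(marginal_cough(intervene=True), 2))    # 0.37   (intervention)
print(round(marginal_cough(condition=True), 2))    # 0.37   (conditioning)
```

Intervention and conditioning coincide here because cold is independent of lungCancer, so clamping or observing lungCancer leaves the rest of the model unchanged; the groundtruth's 0.6299999999999997 vs 0.63 differ only by floating-point renormalization.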
feedback
not signed
probmods2-conditioning/ex2.b record(original, intervention, conditioning) 8 runs TV=0.00 TV=0
prompt
system base instructions used by every run
You are a WebPPL code generator. Given an exercise, produce a single WebPPL program that binds the answer to a top-level variable named `ANSWER`.

Answer format (strict): emit exactly one fenced code block.

```js
<your WebPPL program ending with: var ANSWER = <expression>;>
```

The last statement of your program MUST be `var ANSWER = <expression>;` where `<expression>` is the answer the prompt asks for - typically an `Infer({...}, model)` for a distribution, a numeric/array value, or an object literal `{key: value, ...}` for a record of multiple sub-answers.

Do not write prose, explanations, or multiple code blocks. Do not use `return` at the top level - WebPPL doesn't allow it (return is only for function bodies).
system WebPPL primer appended to system on `*-primer-*` runs only
WebPPL is a probabilistic programming language with JavaScript-like syntax. A few quirks to remember:

Control flow: WebPPL is a functional subset of JavaScript. There are no `for` or `while` loops. Iterate with `map(fn, list)`, `mapData({data: list}, fn)`, `repeat(n, fn)`, or recursion. `_.range(start, end)` produces integer ranges.

Functions: define with `var f = function(args) { ...; return value; }`. The last expression of a function is auto-returned only if it is a bare expression statement; otherwise use explicit `return`. Memoize with `mem(fn)` so repeated calls with the same arguments return the same value within an inference run.

Random primitives - lowercase samples directly, uppercase constructs a Distribution object. They are NOT interchangeable:

  flip(p)            -> boolean             (sample, no `sample()` needed)
  uniform(a, b)      -> number              (sample)
  gaussian(mu, sg)   -> number              (sample)
  beta(a, b)         -> number              (sample)
  dirichlet(alpha)   -> tensor              (sample; alpha must be a Vector)
  randomInteger(n)   -> int 0..n-1          (sample)
  uniformDrift({a, b, width})        -> sample  (drift kernel; do NOT wrap in sample())
  dirichletDrift({alpha, conc.})     -> sample  (drift kernel; do NOT wrap in sample())

Distribution *constructors* (used with `sample(D)`, `observe(D, val)`, or as return value of inference):
  Bernoulli({p})              Beta({a, b})              Gaussian({mu, sigma})
  Uniform({a, b})             Categorical({vs, ps})     Binomial({p, n})
  Dirichlet({alpha})          Multinomial({ps, n})      Poisson({mu})

Common gotchas:
- `Dirichlet({alpha: ...})` requires a Vector, not a JS array. Use `ones([n, 1])` or `Vector([1, 1, ...])`.
- WebPPL only supports `var`, not `let` / `const`.
- WebPPL is *single-assignment*: `var X = ...;` only. You can't declare `var X;` and assign later, and you can't reassign `X = ...` after the declaration. Use ternaries or recursion to express conditional bindings.
- Array methods like `.fill`, `.indexOf`, `.map`, `.forEach`, `.concat` may fail. Prefer `repeat(n, fn)`, `_.indexOf(arr, x)`, `map(fn, arr)`, `mapData({data: arr}, fn)`, and `arr1.concat(arr2)` only at the top of a returned expression.
- Always end top-level statements with `;`. WebPPL inherits JS ASI rules, so `var x = f()
[a, b]` parses as `var x = f()[a, b]` (subscript), not two statements.

Inference: `Infer({method: ..., samples: N, ...}, modelFn)` runs `modelFn` under the chosen method and returns a Distribution over its return value. Methods: `'enumerate'`, `'rejection'`, `'forward'`, `'MCMC'`, `'SMC'`. For MCMC, optional kernels: `kernel: {HMC: {steps, stepSize}}`; the drift kernels above can replace `uniform`/`dirichlet` calls inside the model.

Conditioning: `condition(bool)` zeros out worlds where bool is false. `observe(dist, value)` factors in `dist.score(value)`. `factor(score)` adds `score` to the log-probability directly.

Tensors / utilities: `Vector([a, b, ...])`, `T.get(vec, i)`, `ones([rows, cols])`, `_.range`, `_.flatten`, `_.zipObject`, `_.fromPairs`, `_.includes`, `_.parseInt`, `_.uniq`, `_.merge`. Arrays use `Array.isArray`, `Object.keys` works on plain objects.

Display: there's no `viz`, `print`, `display`, etc. that affects the answer - those are browser-only. The answer is the value of your program's last expression.
user message
Using the same lungCancer/cold/cough generative model:
var lungCancer = flip(0.01);
var cold = flip(0.2);
var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
Show that *intervening* on cough (setting cough = true directly) produces a different marginal of lungCancer than *conditioning* on cough does. Return an object literal with three distributions over lungCancer: - original: the unconditional marginal - intervention: replace cough = (...) with cough = true - conditioning: keep the original assignment and add condition(cough)
groundtruth code
var ANSWER = (({
  original: Infer({method: "enumerate"}, function() {
    var lungCancer = flip(0.01);
    var cold = flip(0.2);
    var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
    return lungCancer;
  }),
  intervention: Infer({method: "enumerate"}, function() {
    var lungCancer = flip(0.01);
    var cold = flip(0.2);
    var cough = true;
    return lungCancer;
  }),
  conditioning: Infer({method: "enumerate"}, function() {
    var lungCancer = flip(0.01);
    var cold = flip(0.2);
    var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
    condition(cough);
    return lungCancer;
  })
}));
generated code
sonnet-46-primer-v3 TV=0 haiku-45-noprimer-v3 TV=0 haiku-45-primer-v3 TV=0 haiku-45-think-noprimer-v3 TV=0 haiku-45-think-primer-v3 TV=0 sonnet-46-noprimer-v3 TV=0 sonnet-46-think-noprimer-v3 TV=0 sonnet-46-think-primer-v3 TV=0
var model_original = function() {
  var lungCancer = flip(0.01);
  var cold = flip(0.2);
  var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
  return lungCancer;
};

var model_intervention = function() {
  var lungCancer = flip(0.01);
  var cold = flip(0.2);
  var cough = true;
  return lungCancer;
};

var model_conditioning = function() {
  var lungCancer = flip(0.01);
  var cold = flip(0.2);
  var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
  condition(cough);
  return lungCancer;
};

var ANSWER = {
  original: Infer({method: 'enumerate'}, model_original),
  intervention: Infer({method: 'enumerate'}, model_intervention),
  conditioning: Infer({method: 'enumerate'}, model_conditioning)
};
var model_original = function() {
  var lungCancer = flip(0.01);
  var cold = flip(0.2);
  var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
  return lungCancer;
};

var model_intervention = function() {
  var lungCancer = flip(0.01);
  var cold = flip(0.2);
  var cough = true;
  return lungCancer;
};

var model_conditioning = function() {
  var lungCancer = flip(0.01);
  var cold = flip(0.2);
  var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
  condition(cough);
  return lungCancer;
};

var ANSWER = {
  original: Infer({method: "enumerate"}, model_original),
  intervention: Infer({method: "enumerate"}, model_intervention),
  conditioning: Infer({method: "enumerate"}, model_conditioning)
};
var ANSWER = {
  original: Infer({method: 'enumerate'}, function() {
    var lungCancer = flip(0.01);
    var cold = flip(0.2);
    var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
    return lungCancer;
  }),
  
  intervention: Infer({method: 'enumerate'}, function() {
    var lungCancer = flip(0.01);
    var cold = flip(0.2);
    var cough = true;  // intervene: set cough directly
    return lungCancer;
  }),
  
  conditioning: Infer({method: 'enumerate'}, function() {
    var lungCancer = flip(0.01);
    var cold = flip(0.2);
    var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
    condition(cough);  // condition on observing cough = true
    return lungCancer;
  })
};
var original = Infer({method: 'enumerate'}, function() {
  var lungCancer = flip(0.01);
  var cold = flip(0.2);
  var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
  return lungCancer;
});

var intervention = Infer({method: 'enumerate'}, function() {
  var lungCancer = flip(0.01);
  var cold = flip(0.2);
  var cough = true;  // Direct intervention: set cough to true
  return lungCancer;
});

var conditioning = Infer({method: 'enumerate'}, function() {
  var lungCancer = flip(0.01);
  var cold = flip(0.2);
  var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
  condition(cough);  // Condition on observing cough = true
  return lungCancer;
});

var ANSWER = {
  original: original,
  intervention: intervention,
  conditioning: conditioning
};
var model_original = function() {
  var lungCancer = flip(0.01);
  var cold = flip(0.2);
  var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
  return lungCancer;
};

var model_intervention = function() {
  var lungCancer = flip(0.01);
  var cold = flip(0.2);
  var cough = true;
  return lungCancer;
};

var model_conditioning = function() {
  var lungCancer = flip(0.01);
  var cold = flip(0.2);
  var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
  condition(cough);
  return lungCancer;
};

var original = Infer({method: 'enumerate'}, model_original);
var intervention = Infer({method: 'enumerate'}, model_intervention);
var conditioning = Infer({method: 'enumerate'}, model_conditioning);

var ANSWER = {
  original: original,
  intervention: intervention,
  conditioning: conditioning
};
var original = Infer({method: 'enumerate'}, function() {
  var lungCancer = flip(0.01);
  var cold = flip(0.2);
  var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
  return lungCancer;
});

var intervention = Infer({method: 'enumerate'}, function() {
  var lungCancer = flip(0.01);
  var cold = flip(0.2);
  var cough = true; // intervention: set cough = true directly
  return lungCancer;
});

var conditioning = Infer({method: 'enumerate'}, function() {
  var lungCancer = flip(0.01);
  var cold = flip(0.2);
  var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
  condition(cough);
  return lungCancer;
});

var ANSWER = {
  original: original,
  intervention: intervention,
  conditioning: conditioning
};
var original = Infer({method: 'enumerate'}, function() {
  var lungCancer = flip(0.01);
  var cold = flip(0.2);
  var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
  return lungCancer;
});

var intervention = Infer({method: 'enumerate'}, function() {
  var lungCancer = flip(0.01);
  var cold = flip(0.2);
  var cough = true; // do(cough = true): cut the causal links, set directly
  return lungCancer;
});

var conditioning = Infer({method: 'enumerate'}, function() {
  var lungCancer = flip(0.01);
  var cold = flip(0.2);
  var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
  condition(cough);
  return lungCancer;
});

var ANSWER = {
  original: original,
  intervention: intervention,
  conditioning: conditioning
};
var originalModel = function() {
  var lungCancer = flip(0.01);
  var cold = flip(0.2);
  var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
  return lungCancer;
};

var interventionModel = function() {
  var lungCancer = flip(0.01);
  var cold = flip(0.2);
  // do-operator: cough is set directly, severing its causal parents
  var cough = true;
  return lungCancer;
};

var conditioningModel = function() {
  var lungCancer = flip(0.01);
  var cold = flip(0.2);
  var cough = (cold && flip(0.5)) || (lungCancer && flip(0.3));
  condition(cough);
  return lungCancer;
};

var ANSWER = {
  original:      Infer({method: 'enumerate'}, originalModel),
  intervention:  Infer({method: 'enumerate'}, interventionModel),
  conditioning:  Infer({method: 'enumerate'}, conditioningModel)
};
groundtruth output
{
  "original": {
    "__kind": "distribution",
    "probs": [
      0.99,
      0.010000000000000005
    ],
    "support": [
      false,
      true
    ]
  },
  "intervention": {
    "__kind": "distribution",
    "probs": [
      0.99,
      0.010000000000000005
    ],
    "support": [
      false,
      true
    ]
  },
  "conditioning": {
    "__kind": "distribution",
    "probs": [
      0.963972736124635,
      0.03602726387536519
    ],
    "support": [
      false,
      true
    ]
  }
}
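Cross-check (editorial note): the point of the exercise is visible in the numbers — setting cough = true by hand severs cough from its causes, so lungCancer keeps its prior 0.01, whereas condition(cough) inverts the model and lifts it to ≈ 0.036. A minimal Python enumeration of both quantities, assuming only the probabilities in the exercise:

```python
from itertools import product

def lung_cancer_marginal(condition_on_cough=False):
    """Exact P(lungCancer), optionally conditioned on cough being true."""
    num = den = 0.0
    for lc, cold, f5, f3 in product([False, True], repeat=4):
        w = ((0.01 if lc else 0.99)       # lungCancer = flip(0.01)
             * (0.2 if cold else 0.8)     # cold = flip(0.2)
             * 0.5                        # flip(0.5)
             * (0.3 if f3 else 0.7))      # flip(0.3)
        cough = (cold and f5) or (lc and f3)
        if condition_on_cough and not cough:   # condition(cough)
            continue
        den += w
        num += w if lc else 0.0
    return num / den

# Intervening on cough (cough = true with no condition) cuts the arrow
# from lungCancer, so its marginal stays at the prior:
print(round(lung_cancer_marginal(), 2))                         # 0.01
print(round(lung_cancer_marginal(condition_on_cough=True), 4))  # 0.036
```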
feedback
not signed
probmods2-conditioning/ex4.b distribution 8 runs TV=0.12 TV<.5
prompt
system base instructions used by every run
[same as the base instructions shown above]
system WebPPL primer appended to system on `*-primer-*` runs only
[same as the primer shown above]
user message
Here is a model of how niceness affects whether a person smiles:
var smilesModel = function() {
  var nice = mem(function(person) { flip(.7) });
  var smiles = function(person) {
    return nice(person) ? flip(.8) : flip(.5);
  }
  condition(smiles('alice') && smiles('bob') && smiles('alice'));
  return nice('alice');
}
Extend this into a function extendedSmilesModel that captures two additional factors: 1. People smile 80% of the time if they want something from you, and 50% otherwise. 2. Nice people want something from you 20% of the time; non-nice people 50%. Nice people should still smile more often regardless of whether they want something. Niceness is a stable property of a person; whether they want something can vary. Have extendedSmilesModel return whether Alice smiles today, and end your program with Infer({method: 'enumerate'}, extendedSmilesModel) returning that distribution.
groundtruth code
var extendedSmilesModel = function() {
  var nice = mem(function(person) { flip(.7) });
  var wantsSomething = function(person) {
    return flip(nice(person) ? .2 : .5);
  }
  var smiles = function(person, wants) {
    return (wants ? flip(.8) : flip(.5))
            || (nice(person) ? flip(.8) : flip(.5));
  }
  var wants = wantsSomething('alice');
  return smiles('alice', wants);
};
var ANSWER = (Infer({method: "enumerate"}, extendedSmilesModel));
generated code
sonnet-46-primer-v3 TV<.5 haiku-45-noprimer-v3 TV<.5 haiku-45-primer-v3 TV<.5 haiku-45-think-noprimer-v3 TV<.5 haiku-45-think-primer-v3 TV<.5 sonnet-46-noprimer-v3 TV<.5 sonnet-46-think-noprimer-v3 TV<.5 sonnet-46-think-primer-v3 TV<.5
var extendedSmilesModel = function() {
  var nice = mem(function(person) { return flip(.7); });
  
  var wants = function(person) {
    return nice(person) ? flip(.2) : flip(.5);
  };
  
  var smiles = function(person) {
    var w = wants(person);
    var n = nice(person);
    // Nice people smile more regardless of wanting something
    // wants -> 80% smile; not wants -> 50% smile
    // nice adds extra smile probability
    if (n && w) {
      return flip(.8);
    } else if (n && !w) {
      return flip(.8);
    } else if (!n && w) {
      return flip(.8);
    } else {
      return flip(.5);
    }
  };
  
  return smiles('alice');
};

// Wait, re-reading: "Nice people should still smile more often regardless of whether they want something"
// So niceness affects smile rate independently of wanting.
// Let me re-interpret: 
// - wants -> 80% smile, not wants -> 50% smile (base from wanting)
// - nice people smile more: perhaps nice AND wants -> higher, nice AND not wants -> still higher than non-nice
// A natural interpretation: smile probability depends on BOTH niceness and wanting.
// nice: flip(.8) if nice else flip(.5) [original]
// wants: flip(.8) if wants else flip(.5)
// Combined: nice AND wants -> flip(.8) (both push up), nice AND not wants -> flip(.8) (nice dominates),
// not nice AND wants -> flip(.8) (wants dominates), not nice AND not wants -> flip(.5)
// But that doesn't differentiate niceness well.
// Better: use additive/multiplicative model or just define 4 cases explicitly:
// nice+wants: .9, nice+!wants: .8, !nice+wants: .8, !nice+!wants: .5

var extendedSmilesModel = function() {
  var nice = mem(function(person) { return flip(.7); });
  
  var wants = function(person) {
    return nice(person) ? flip(.2) : flip(.5);
  };
  
  var smiles = function(person) {
    var n = nice(person);
    var w = wants(person);
    if (n && w) {
      return flip(.9);
    } else if (n && !w) {
      return flip(.8);
    } else if (!n && w) {
      return flip(.8);
    } else {
      return flip(.5);
    }
  };
  
  return smiles('alice');
};

var ANSWER = Infer({method: 'enumerate'}, extendedSmilesModel);
var extendedSmilesModel = function() {
  var nice = mem(function(person) { flip(.7) });
  
  var wantsSmth = function(person) {
    return nice(person) ? flip(.2) : flip(.5);
  };
  
  var smiles = function(person) {
    var isNice = nice(person);
    var wants = wantsSmth(person);
    
    if (isNice && wants) {
      return flip(.8);
    } else if (isNice && !wants) {
      return flip(.8);
    } else if (!isNice && wants) {
      return flip(.8);
    } else {
      return flip(.5);
    }
  };
  
  condition(smiles('alice') && smiles('bob') && smiles('alice'));
  return smiles('alice');
};

var ANSWER = Infer({method: 'enumerate'}, extendedSmilesModel);
var extendedSmilesModel = function() {
  var nice = mem(function(person) { flip(.7) });
  
  var wants = function(person) {
    return nice(person) ? flip(.2) : flip(.5);
  };
  
  var smiles = function(person) {
    var isNice = nice(person);
    var doesWant = wants(person);
    
    if (isNice && doesWant) {
      return flip(.8);
    } else if (isNice && !doesWant) {
      return flip(.8);
    } else if (!isNice && doesWant) {
      return flip(.8);
    } else {
      return flip(.5);
    }
  };
  
  condition(smiles('alice') && smiles('bob') && smiles('alice'));
  return smiles('alice');
};

var ANSWER = Infer({method: 'enumerate'}, extendedSmilesModel);
var extendedSmilesModel = function() {
  var nice = mem(function(person) { flip(.7) });
  
  var wantsSmth = function(person) {
    return nice(person) ? flip(.2) : flip(.5);
  }
  
  var smiles = function(person) {
    var n = nice(person);
    var wants = wantsSmth(person);
    
    if (n && wants) {
      return flip(.8);
    } else if (n && !wants) {
      return flip(.7);
    } else if (!n && wants) {
      return flip(.8);
    } else {
      return flip(.5);
    }
  }
  
  condition(smiles('alice') && smiles('bob') && smiles('alice'));
  return smiles('alice');
}

var ANSWER = Infer({method: 'enumerate'}, extendedSmilesModel);
var extendedSmilesModel = function() {
  var nice = mem(function(person) { flip(.7) });
  
  var wants = mem(function(person) { 
    var isNice = nice(person);
    return isNice ? flip(.2) : flip(.5);
  });
  
  var smiles = function(person) {
    var isNice = nice(person);
    var personWants = wants(person);
    
    // Combine niceness and wanting factors
    // Base: 0.5, +0.3 if nice, +0.3 if wants something
    var smileProb = 0.5 + (isNice ? 0.3 : 0) + (personWants ? 0.3 : 0);
    var cappedProb = Math.min(smileProb, 0.99);
    
    return flip(cappedProb);
  };
  
  condition(smiles('alice') && smiles('bob') && smiles('alice'));
  return smiles('alice');
};

var ANSWER = Infer({method: 'enumerate'}, extendedSmilesModel);
var extendedSmilesModel = function() {
  var nice = mem(function(person) { flip(.7) });
  
  var wantsSomething = function(person) {
    return nice(person) ? flip(.2) : flip(.5);
  };
  
  var smiles = function(person) {
    var wants = wantsSomething(person);
    if (nice(person)) {
      return wants ? flip(.8) : flip(.8);
    } else {
      return wants ? flip(.8) : flip(.5);
    }
  };
  
  condition(smiles('alice') && smiles('bob') && smiles('alice'));
  
  return smiles('alice');
};

var ANSWER = Infer({method: 'enumerate'}, extendedSmilesModel);
var extendedSmilesModel = function() {
  // Niceness is a stable property of a person
  var nice = mem(function(person) { return flip(.7); });

  // Whether someone wants something can vary (not memoized by default call)
  var wantsSomething = function(person) {
    return nice(person) ? flip(.2) : flip(.5);
  };

  // Smile probability depends on both niceness and wanting something.
  // Nice people smile more often regardless of whether they want something.
  // Wanting something increases smiling to 80% (vs 50% otherwise).
  var smiles = function(person) {
    var isNice = nice(person);
    var wants = wantsSomething(person);
    if (isNice && wants)   { return flip(.9); }  // nice + wants: highest
    if (isNice && !wants)  { return flip(.8); }  // nice alone: still high
    if (!isNice && wants)  { return flip(.8); }  // wanting alone: 80%
    return flip(.5);                              // baseline
  };

  condition(smiles('alice') && smiles('bob') && smiles('alice'));

  // Return whether Alice smiles today
  return smiles('alice');
};

var ANSWER = Infer({method: 'enumerate'}, extendedSmilesModel);
var extendedSmilesModel = function() {
  // Niceness is a stable property of a person
  var nice = mem(function(person) { return flip(.7); });

  // Wanting something can vary each time (not memoized)
  var wantsSomething = function(person) {
    return nice(person) ? flip(.2) : flip(.5);
  };

  // Smiling depends on both niceness and wanting something
  // Nice people smile more regardless of wanting, wanting also boosts smiling
  var smiles = function(person) {
    var isNice = nice(person);
    var wants = wantsSomething(person);
    return isNice  && wants  ? flip(.9) :   // nice + wants: both boost smiling
           isNice  && !wants ? flip(.8) :   // nice alone keeps it high
           !isNice && wants  ? flip(.8) :   // wanting alone keeps it high
                               flip(.5);    // neither: baseline
  };

  condition(smiles('alice') && smiles('bob') && smiles('alice'));
  return smiles('alice');
};

var ANSWER = Infer({method: 'enumerate'}, extendedSmilesModel);
groundtruth output
true   0.8859
false  0.1141
raw JSON
{
  "__kind": "distribution",
  "probs": [
    0.11409999999999997,
    0.8859000000000001
  ],
  "support": [
    false,
    true
  ]
}
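Cross-check (editorial note): the groundtruth 0.8859 comes from combining the two influences as a disjunction of independent flips — `(wants ? flip(.8) : flip(.5)) || (nice ? flip(.8) : flip(.5))` — while the runs above mostly recombine them with if/else tables, hence the nonzero TV. A minimal Python enumeration of the groundtruth's disjunctive combination:

```python
def p_alice_smiles():
    """Exact P(smiles('alice')) for the groundtruth extendedSmilesModel."""
    total = 0.0
    for nice in (True, False):
        p_nice = 0.7 if nice else 0.3          # nice = flip(0.7)
        p_wants = 0.2 if nice else 0.5         # wantsSomething given niceness
        for wants in (True, False):
            p_w = p_wants if wants else 1.0 - p_wants
            q_wants = 0.8 if wants else 0.5    # first disjunct's flip
            q_nice = 0.8 if nice else 0.5      # second disjunct's flip
            # OR of two independent flips: 1 - P(both false)
            p_smile = 1.0 - (1.0 - q_wants) * (1.0 - q_nice)
            total += p_nice * p_w * p_smile
    return total

print(round(p_alice_smiles(), 4))   # 0.8859
```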
feedback
not signed
probmods2-conditioning/ex4.c distribution 8 runs TV=0.00 TV=0
prompt
system base instructions used by every run
You are a WebPPL code generator. Given an exercise, produce a single WebPPL program that binds the answer to a top-level variable named `ANSWER`.

Answer format (strict): emit exactly one fenced code block.

```js
<your WebPPL program ending with: var ANSWER = <expression>;>
```

The last statement of your program MUST be `var ANSWER = <expression>;` where `<expression>` is the answer the prompt asks for - typically an `Infer({...}, model)` for a distribution, a numeric/array value, or an object literal `{key: value, ...}` for a record of multiple sub-answers.

Do not write prose, explanations, or multiple code blocks. Do not use `return` at the top level - WebPPL doesn't allow it (return is only for function bodies).
system WebPPL primer appended to system on `*-primer-*` runs only
WebPPL is a probabilistic programming language with JavaScript-like syntax. A few quirks to remember:

Control flow: WebPPL is a functional subset of JavaScript. There are no `for` or `while` loops. Iterate with `map(fn, list)`, `mapData({data: list}, fn)`, `repeat(n, fn)`, or recursion. `_.range(start, end)` produces integer ranges.

Functions: define with `var f = function(args) { ...; return value; }`. The last expression of a function is auto-returned only if it is a bare expression statement; otherwise use explicit `return`. Memoize with `mem(fn)` so repeated calls with the same arguments return the same value within an inference run.

Random primitives - lowercase samples directly, uppercase constructs a Distribution object. They are NOT interchangeable:

  flip(p)            -> boolean             (sample, no `sample()` needed)
  uniform(a, b)      -> number              (sample)
  gaussian(mu, sg)   -> number              (sample)
  beta(a, b)         -> number              (sample)
  dirichlet(alpha)   -> tensor              (sample; alpha must be a Vector)
  randomInteger(n)   -> int 0..n-1          (sample)
  uniformDrift({a, b, width})        -> sample  (drift kernel; do NOT wrap in sample())
  dirichletDrift({alpha, conc.})     -> sample  (drift kernel; do NOT wrap in sample())

Distribution *constructors* (used with `sample(D)`, `observe(D, val)`, or as return value of inference):
  Bernoulli({p})              Beta({a, b})              Gaussian({mu, sigma})
  Uniform({a, b})             Categorical({vs, ps})     Binomial({p, n})
  Dirichlet({alpha})          Multinomial({ps, n})      Poisson({mu})

Common gotchas:
- `Dirichlet({alpha: ...})` requires a Vector, not a JS array. Use `ones([n, 1])` or `Vector([1, 1, ...])`.
- WebPPL only supports `var`, not `let` / `const`.
- WebPPL is *single-assignment*: `var X = ...;` only. You can't declare `var X;` and assign later, and you can't reassign `X = ...` after the declaration. Use ternaries or recursion to express conditional bindings.
- Array methods like `.fill`, `.indexOf`, `.map`, `.forEach`, `.concat` may fail. Prefer `repeat(n, fn)`, `_.indexOf(arr, x)`, `map(fn, arr)`, `mapData({data: arr}, fn)`, and `arr1.concat(arr2)` only at the top of a returned expression.
- Always end top-level statements with `;`. WebPPL inherits JS ASI rules, so `var x = f()
[a, b]` parses as `var x = f()[a, b]` (subscript), not two statements.

Inference: `Infer({method: ..., samples: N, ...}, modelFn)` runs `modelFn` under the chosen method and returns a Distribution over its return value. Methods: `'enumerate'`, `'rejection'`, `'forward'`, `'MCMC'`, `'SMC'`. For MCMC, optional kernels: `kernel: {HMC: {steps, stepSize}}`; the drift kernels above can replace `uniform`/`dirichlet` calls inside the model.

Conditioning: `condition(bool)` zeros out worlds where bool is false. `observe(dist, value)` factors in `dist.score(value)`. `factor(score)` adds `score` to the log-probability directly.

Tensors / utilities: `Vector([a, b, ...])`, `T.get(vec, i)`, `ones([rows, cols])`, `_.range`, `_.flatten`, `_.zipObject`, `_.fromPairs`, `_.includes`, `_.parseInt`, `_.uniq`, `_.merge`. Arrays use `Array.isArray`, `Object.keys` works on plain objects.

Display: there's no `viz`, `print`, `display`, etc. that affects the answer - those are browser-only. The answer is the value of your program's last expression.
user message
Given this extendedSmilesModel skeleton:
var extendedSmilesModel = function() {
  var nice = mem(function(person) { flip(.7) });
  var wantsSomething = function(person) {
    return flip(nice(person) ? .2 : .5);
  }
  var smiles = function(person, wants) {
    return (wants ? flip(.8) : flip(.5))
            || (nice(person) ? flip(.8) : flip(.5));
  }
  // ... your code here ...
}
Suppose you've seen Bob five times this week and each time he was *not* smiling. Today, you see him smiling. Modify the model body to compute the posterior probability that Bob wants something from you today. Niceness is stable (use mem as given); whether he wants something varies day-to-day. End your program with Infer({method: 'enumerate'}, extendedSmilesModel).
groundtruth code
var extendedSmilesModel = function() {
  var nice = mem(function(person) { flip(.7) });
  var wantsSomething = function(person) {
    return flip(nice(person) ? .2 : .5);
  }
  var smiles = function(person, wants) {
    return (wants ? flip(.8) : flip(.5))
            || (nice(person) ? flip(.8) : flip(.5));
  }
  var wantsToday = wantsSomething('bob');
  condition(!smiles('bob', wantsSomething('bob')));
  condition(!smiles('bob', wantsSomething('bob')));
  condition(!smiles('bob', wantsSomething('bob')));
  condition(!smiles('bob', wantsSomething('bob')));
  condition(!smiles('bob', wantsSomething('bob')));
  condition(smiles('bob', wantsToday));
  return wantsToday;
};
var ANSWER = (Infer({method: "enumerate"}, extendedSmilesModel));
generated code
sonnet-46-primer-v3 TV=0 · haiku-45-noprimer-v3 fail · haiku-45-primer-v3 fail · haiku-45-think-noprimer-v3 fail · haiku-45-think-primer-v3 TV=0 · sonnet-46-noprimer-v3 TV=0 · sonnet-46-think-noprimer-v3 TV=0 · sonnet-46-think-primer-v3 TV=0
var extendedSmilesModel = function() {
  var nice = mem(function(person) { flip(.7) });
  var wantsSomething = function(person) {
    return flip(nice(person) ? .2 : .5);
  };
  var smiles = function(person, wants) {
    return (wants ? flip(.8) : flip(.5))
            || (nice(person) ? flip(.8) : flip(.5));
  };

  // Observe Bob not smiling five times (niceness is stable, wants varies each day)
  var bobNice = nice("Bob");

  // Each of the 5 past days: wants something varies, smiles is false
  map(function(i) {
    var wants = wantsSomething("Bob");
    condition(smiles("Bob", wants) === false);
  }, _.range(5));

  // Today: observe Bob smiling
  var wantsToday = wantsSomething("Bob");
  condition(smiles("Bob", wantsToday) === true);

  return wantsToday;
};

var ANSWER = Infer({method: 'enumerate'}, extendedSmilesModel);
Error: sample() expected a distribution but received "true".
var extendedSmilesModel = function() {
  var nice = mem(function(person) { flip(.7) });
  var wantsSomething = function(person) {
    return flip(nice(person) ? .2 : .5);
  }
  var smiles = function(person, wants) {
    return (wants ? flip(.8) : flip(.5))
            || (nice(person) ? flip(.8) : flip(.5));
  }
  
  // Observe Bob not smiling five times this week
  observe(smiles("Bob", wantsSomething("Bob")) === false);
  observe(smiles("Bob", wantsSomething("Bob")) === false);
  observe(smiles("Bob", wantsSomething("Bob")) === false);
  observe(smiles("Bob", wantsSomething("Bob")) === false);
  observe(smiles("Bob", wantsSomething("Bob")) === false);
  
  // Today, Bob wants something or not (fresh sample for today)
  var todayWants = flip(nice("Bob") ? .2 : .5);
  
  // Observe Bob smiling today
  observe(smiles("Bob", todayWants) === true);
  
  return todayWants;
}

var ANSWER = Infer({method: 'enumerate'}, extendedSmilesModel);
Error: All paths explored by Enumerate have probability zero.
var extendedSmilesModel = function() {
  var nice = mem(function(person) { flip(.7) });
  var wantsSomething = function(person) {
    return flip(nice(person) ? .2 : .5);
  }
  var smiles = function(person, wants) {
    return (wants ? flip(.8) : flip(.5))
            || (nice(person) ? flip(.8) : flip(.5));
  }
  
  // Observe Bob not smiling five times
  var bob_nice = nice("Bob");
  repeat(5, function() {
    var bob_wants = wantsSomething("Bob");
    var bob_smiles = smiles("Bob", bob_wants);
    observe(Bernoulli({p: bob_smiles ? 1 : 0}), 0);
  });
  
  // Today: observe Bob smiling and infer if he wants something
  var bob_wants_today = wantsSomething("Bob");
  var bob_smiles_today = smiles("Bob", bob_wants_today);
  observe(Bernoulli({p: bob_smiles_today ? 1 : 0}), 1);
  
  return bob_wants_today;
};

var ANSWER = Infer({method: 'enumerate'}, extendedSmilesModel);
Error: cpsInnerStatement
var extendedSmilesModel = function() {
  var nice = mem(function(person) { flip(.7) });
  var wantsSomething = function(person) {
    return flip(nice(person) ? .2 : .5);
  }
  var smiles = function(person, wants) {
    return (wants ? flip(.8) : flip(.5))
            || (nice(person) ? flip(.8) : flip(.5));
  }
  
  // Condition on 5 days of not smiling
  for (var i = 0; i < 5; i++) {
    var wants = wantsSomething("Bob");
    var didSmile = smiles("Bob", wants);
    condition(!didSmile);
  }
  
  // Today: condition on smiling
  var wantsToday = wantsSomething("Bob");
  var smilesToday = smiles("Bob", wantsToday);
  condition(smilesToday);
  
  return wantsToday;
}

var ANSWER = Infer({method: 'enumerate'}, extendedSmilesModel);
var extendedSmilesModel = function() {
  var nice = mem(function(person) { flip(.7) });
  var wantsSomething = function(person) {
    return flip(nice(person) ? .2 : .5);
  }
  var smiles = function(person, wants) {
    return (wants ? flip(.8) : flip(.5))
            || (nice(person) ? flip(.8) : flip(.5));
  }
  
  // Condition on Bob not smiling 5 times in the past
  repeat(5, function() {
    var wantsThatDay = wantsSomething('Bob');
    var smilesThatDay = smiles('Bob', wantsThatDay);
    condition(!smilesThatDay);
  });
  
  // Today: Bob wants something?
  var todayWants = wantsSomething('Bob');
  var todaySmiles = smiles('Bob', todayWants);
  
  // Condition on Bob smiling today
  condition(todaySmiles);
  
  // Return whether Bob wants something today
  return todayWants;
};

var ANSWER = Infer({method: 'enumerate'}, extendedSmilesModel);
var extendedSmilesModel = function() {
  var nice = mem(function(person) { flip(.7) });
  var wantsSomething = function(person) {
    return flip(nice(person) ? .2 : .5);
  };
  var smiles = function(person, wants) {
    return (wants ? flip(.8) : flip(.5))
            || (nice(person) ? flip(.8) : flip(.5));
  };

  // Condition on Bob not smiling 5 times this week
  var bobNice = nice('bob');
  map(function(i) {
    var wants = wantsSomething('bob');
    condition(!smiles('bob', wants));
  }, [1,2,3,4,5]);

  // Today Bob is smiling - compute posterior that he wants something
  var wantsToday = wantsSomething('bob');
  condition(smiles('bob', wantsToday));

  return wantsToday;
};

var ANSWER = Infer({method: 'enumerate'}, extendedSmilesModel);
var extendedSmilesModel = function() {
  var nice = mem(function(person) { flip(.7) });

  var wantsSomething = function(person) {
    return flip(nice(person) ? .2 : .5);
  };

  var smiles = function(person, wants) {
    return (wants ? flip(.8) : flip(.5))
            || (nice(person) ? flip(.8) : flip(.5));
  };

  // Bob's niceness is stable (memoized).
  // Each past day, "wants" is a fresh sample (not memoized).

  // Day 1 – not smiling
  var w1 = wantsSomething('bob');
  condition(!smiles('bob', w1));

  // Day 2 – not smiling
  var w2 = wantsSomething('bob');
  condition(!smiles('bob', w2));

  // Day 3 – not smiling
  var w3 = wantsSomething('bob');
  condition(!smiles('bob', w3));

  // Day 4 – not smiling
  var w4 = wantsSomething('bob');
  condition(!smiles('bob', w4));

  // Day 5 – not smiling
  var w5 = wantsSomething('bob');
  condition(!smiles('bob', w5));

  // Today – smiling; query whether he wants something
  var wantsToday = wantsSomething('bob');
  condition(smiles('bob', wantsToday));

  return wantsToday;
};

var ANSWER = Infer({method: 'enumerate'}, extendedSmilesModel);
var extendedSmilesModel = function() {
  var nice = mem(function(person) { flip(.7) });
  var wantsSomething = function(person) {
    return flip(nice(person) ? .2 : .5);
  };
  var smiles = function(person, wants) {
    return (wants ? flip(.8) : flip(.5))
            || (nice(person) ? flip(.8) : flip(.5));
  };

  // Condition on Bob NOT smiling on each of the 5 previous days.
  // Each day is independent (wants varies day-to-day; nice is memoized/stable).
  map(function(day) {
    var wantsPast = wantsSomething('bob');
    var smiledPast = smiles('bob', wantsPast);
    condition(!smiledPast);
  }, _.range(5));

  // Today Bob IS smiling — infer whether he wants something.
  var wantsToday = wantsSomething('bob');
  var smiledToday = smiles('bob', wantsToday);
  condition(smiledToday);

  return wantsToday;
};

var ANSWER = Infer({method: 'enumerate'}, extendedSmilesModel);
groundtruth output
true   0.5198
false  0.4802
raw JSON
{
  "__kind": "distribution",
  "probs": [
    0.48019560581828696,
    0.5198043941817135
  ],
  "support": [
    false,
    true
  ]
}
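The groundtruth posterior can also be checked analytically from the noisy-OR smile model in the groundtruth code. The sketch below (plain JavaScript, added as a cross-check; it is not one of the recorded runs) marginalizes niceness and the five unobserved past days by hand:

```js
// Cross-check of the 0.5198 groundtruth, assuming the model in the groundtruth code:
// nice ~ flip(.7) (stable), wants ~ flip(nice ? .2 : .5) fresh each day, and
// smiles is a noisy-OR: P(smile) = 1 - (1 - (wants ? .8 : .5)) * (1 - (nice ? .8 : .5)).
var pSmile = function(nice, wants) {
  return 1 - (1 - (wants ? .8 : .5)) * (1 - (nice ? .8 : .5));
};
// P(no smile | nice), marginalizing over that day's wants:
var pNoSmile = function(nice) {
  var pw = nice ? .2 : .5;
  return pw * (1 - pSmile(nice, true)) + (1 - pw) * (1 - pSmile(nice, false));
};
// Unnormalized weight of (nice, wantsToday): prior x five non-smiling days x today's smile.
var term = function(nice, wants) {
  var pw = nice ? .2 : .5;
  return (nice ? .7 : .3) * Math.pow(pNoSmile(nice), 5) *
         (wants ? pw : 1 - pw) * pSmile(nice, wants);
};
var num = term(true, true) + term(false, true);
var den = num + term(true, false) + term(false, false);
var pWantsToday = num / den; // ~0.5198, matching the groundtruth output
```

The five non-smiling mornings make Bob's niceness doubtful, which lifts P(wants) from its prior of 0.7·0.2 + 0.3·0.5 = 0.29 up to about 0.52 once he smiles.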
feedback
not signed
probmods2-conditioning/ex5.a record(rain, sprinkler) 8 runs TV=0.00 TV<.05
prompt
system base instructions used by every run (same as above)
system WebPPL primer appended to system on `*-primer-*` runs only (same as above)
user message
I have a sprinkler in my garden that turns on each morning at random - half the time, independently each day. I live in a city where it rains on 30% of mornings. The lawn gets wet whenever the sprinkler turns on, it rains, or both. One morning, I notice my lawn is wet. Return an object literal with two keys:
- rain: posterior distribution over whether it rained
- sprinkler: posterior distribution over whether the sprinkler turned on
groundtruth code
var ANSWER = (({
  rain: Infer({method: "enumerate"}, function() {
    var sprinkler = flip();
    var rain = flip(0.3);
    var wetLawn = sprinkler || rain;
    condition(wetLawn);
    return rain;
  }),
  sprinkler: Infer({method: "enumerate"}, function() {
    var sprinkler = flip();
    var rain = flip(0.3);
    var wetLawn = sprinkler || rain;
    condition(wetLawn);
    return sprinkler;
  })
}));
generated code
sonnet-46-primer-v3 TV<.05 · haiku-45-noprimer-v3 fail · haiku-45-primer-v3 fail · haiku-45-think-noprimer-v3 TV<.05 · haiku-45-think-primer-v3 TV=0 · sonnet-46-noprimer-v3 TV<.05 · sonnet-46-think-noprimer-v3 TV<.05 · sonnet-46-think-primer-v3 TV<.05
var model = function() {
  var rained = flip(0.3);
  var sprinkler = flip(0.5);
  var wet = rained || sprinkler;
  condition(wet);
  return { rain: rained, sprinkler: sprinkler };
};

var joint = Infer({method: 'enumerate'}, model);

var rainDist = Infer({method: 'enumerate'}, function() {
  var s = sample(joint);
  return s.rain;
});

var sprinklerDist = Infer({method: 'enumerate'}, function() {
  var s = sample(joint);
  return s.sprinkler;
});

var ANSWER = { rain: rainDist, sprinkler: sprinklerDist };
TypeError: dist.score is not a function
var model = function() {
  var sprinkler = flip(0.5);
  var rain = flip(0.3);
  var lawn_wet = sprinkler || rain;
  
  observe(lawn_wet, true);
  
  return {
    sprinkler: sprinkler,
    rain: rain
  };
};

var posterior = Infer({method: "enumerate"}, model);

var rain_dist = Infer({method: "enumerate"}, function() {
  var s = sample(posterior);
  return s.rain;
});

var sprinkler_dist = Infer({method: "enumerate"}, function() {
  var s = sample(posterior);
  return s.sprinkler;
});

var ANSWER = {
  rain: rain_dist,
  sprinkler: sprinkler_dist
};
Error: All paths explored by Enumerate have probability zero.
var model = function() {
  var sprinkler = flip(0.5);
  var rain = flip(0.3);
  var lawn_wet = sprinkler || rain;
  
  observe(Bernoulli({p: lawn_wet ? 1 : 0}), 1);
  
  return {sprinkler: sprinkler, rain: rain};
};

var posterior = Infer({method: 'enumerate'}, model);

var rain_marginal = Infer({method: 'enumerate'}, function() {
  var s = sample(posterior);
  return s.rain;
});

var sprinkler_marginal = Infer({method: 'enumerate'}, function() {
  var s = sample(posterior);
  return s.sprinkler;
});

var ANSWER = {
  rain: rain_marginal,
  sprinkler: sprinkler_marginal
};
var model = function() {
  var sprinkler = flip(0.5);
  var rain = flip(0.3);
  var wet = sprinkler || rain;
  
  condition(wet);
  
  return {sprinkler: sprinkler, rain: rain};
};

var posterior = Infer({method: 'enumerate'}, model);

var ANSWER = {
  rain: marginalize(posterior, 'rain'),
  sprinkler: marginalize(posterior, 'sprinkler')
};
var sprinklerDist = Infer({method: 'enumerate'}, function() {
  var sprinkler = flip(0.5);
  var rain = flip(0.3);
  var wet = sprinkler || rain;
  condition(wet);
  return sprinkler;
});

var rainDist = Infer({method: 'enumerate'}, function() {
  var sprinkler = flip(0.5);
  var rain = flip(0.3);
  var wet = sprinkler || rain;
  condition(wet);
  return rain;
});

var ANSWER = {
  sprinkler: sprinklerDist,
  rain: rainDist
};
var model = function() {
  var rain = flip(0.3);
  var sprinkler = flip(0.5);
  var wet = rain || sprinkler;
  condition(wet);
  return { rain: rain, sprinkler: sprinkler };
};

var posterior = Infer({ method: 'enumerate' }, model);

var rainDist = marginalize(posterior, function(x) { return x.rain; });
var sprinklerDist = marginalize(posterior, function(x) { return x.sprinkler; });

var ANSWER = { rain: rainDist, sprinkler: sprinklerDist };
var model = function() {
  var rained = flip(0.3);
  var sprinkler = flip(0.5);
  var wet = rained || sprinkler;
  condition(wet);
  return { rain: rained, sprinkler: sprinkler };
};

var result = Infer({ method: 'enumerate' }, model);

var rainDist = marginalize(result, function(x) { return x.rain; });
var sprinklerDist = marginalize(result, function(x) { return x.sprinkler; });

var ANSWER = { rain: rainDist, sprinkler: sprinklerDist };
var model = function() {
  var rained = flip(0.3);
  var sprinkler = flip(0.5);
  var wet = rained || sprinkler;
  condition(wet);
  return { rained: rained, sprinkler: sprinkler };
};

var joint = Infer({ method: 'enumerate' }, model);

var rainDist = Infer({ method: 'enumerate' }, function() {
  var s = sample(joint);
  return s.rained;
});

var sprinklerDist = Infer({ method: 'enumerate' }, function() {
  var s = sample(joint);
  return s.sprinkler;
});

var ANSWER = { rain: rainDist, sprinkler: sprinklerDist };
groundtruth output
{
  "rain": {
    "__kind": "distribution",
    "probs": [
      0.46153846153846156,
      0.5384615384615383
    ],
    "support": [
      true,
      false
    ]
  },
  "sprinkler": {
    "__kind": "distribution",
    "probs": [
      0.23076923076923078,
      0.7692307692307692
    ],
    "support": [
      false,
      true
    ]
  }
}
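These marginals have a short closed form; the sketch below (plain JavaScript, added as a cross-check rather than a recorded run) reproduces them:

```js
// Cross-check of the groundtruth marginals: sprinkler ~ flip(.5), rain ~ flip(.3),
// and the lawn is wet iff sprinkler OR rain.
var pWet = 1 - 0.5 * 0.7; // 1 - P(no sprinkler) * P(no rain) = 0.65
// Rain alone guarantees a wet lawn, so P(rain | wet) = P(rain) / P(wet); same for sprinkler.
var pRainGivenWet = 0.3 / pWet;      // 6/13  ~ 0.4615
var pSprinklerGivenWet = 0.5 / pWet; // 10/13 ~ 0.7692
```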
feedback
not signed
probmods2-conditioning/ex5.b distribution 8 runs TV=0.00 TV=0
prompt
system base instructions used by every run (same as above)
system WebPPL primer appended to system on `*-primer-*` runs only (same as above)
user message
I have a sprinkler in my garden that turns on each morning at random - half the time, independently each day. I live in a city where it rains on 30% of mornings. The lawn gets wet whenever the sprinkler turns on, it rains, or both. My neighbor Kelsey has the same kind of sprinkler (independent of mine, also turning on with probability 0.5 each morning). One morning, both my lawn and Kelsey's lawn are wet. End your program with Infer(...) returning the posterior distribution over whether it rained.
groundtruth code
var ANSWER = (Infer({method: "enumerate"}, function() {
  var rain = flip(0.3);
  var mySprinkler = flip();
  var herSprinkler = flip();
  var myLawnIsWet = mySprinkler || rain;
  var herLawnIsWet = herSprinkler || rain;
  condition(myLawnIsWet && herLawnIsWet);
  return rain;
}));
generated code
sonnet-46-primer-v3 TV=0 · haiku-45-noprimer-v3 TV=0 · haiku-45-primer-v3 TV=0 · haiku-45-think-noprimer-v3 TV=0 · haiku-45-think-primer-v3 TV=0 · sonnet-46-noprimer-v3 TV=0 · sonnet-46-think-noprimer-v3 TV=0 · sonnet-46-think-primer-v3 TV=0
var model = function() {
  var rain = flip(0.3);
  var sprinkler = flip(0.5);
  var kelseySprinkler = flip(0.5);
  
  var myLawnWet = rain || sprinkler;
  var kelseyLawnWet = rain || kelseySprinkler;
  
  condition(myLawnWet && kelseyLawnWet);
  
  return rain;
};

var ANSWER = Infer({method: 'enumerate'}, model);
var model = function() {
  // Prior: it rained this morning with probability 0.3
  var rained = flip(0.3);
  
  // My sprinkler turns on with probability 0.5
  var mySprinkerOn = flip(0.5);
  
  // My lawn is wet if sprinkler is on OR it rained
  var myLawnWet = mySprinkerOn || rained;
  
  // Kelsey's sprinkler turns on with probability 0.5 (independent)
  var kelseySprinkerOn = flip(0.5);
  
  // Kelsey's lawn is wet if her sprinkler is on OR it rained
  var kelseyLawnWet = kelseySprinkerOn || rained;
  
  // Condition on observing both lawns wet
  condition(myLawnWet && kelseyLawnWet);
  
  return rained;
};

var ANSWER = Infer({method: "enumerate"}, model);
var model = function() {
  // Prior: it rained this morning with probability 0.3
  var rained = flip(0.3);
  
  // My sprinkler turns on with probability 0.5
  var mySprinkerOn = flip(0.5);
  
  // Kelsey's sprinkler turns on with probability 0.5 (independent)
  var kelseysSprinkerOn = flip(0.5);
  
  // My lawn is wet if sprinkler is on OR it rained
  var myLawnWet = mySprinkerOn || rained;
  
  // Kelsey's lawn is wet if her sprinkler is on OR it rained
  var kelseysLawnWet = kelseysSprinkerOn || rained;
  
  // Observe that both lawns are wet
  condition(myLawnWet && kelseysLawnWet);
  
  return rained;
};

var ANSWER = Infer({method: 'enumerate'}, model);
var model = function() {
  var rain = flip(0.3);
  var sprinkler_mine = flip(0.5);
  var sprinkler_kelsey = flip(0.5);
  
  var my_lawn_wet = sprinkler_mine || rain;
  var kelsey_lawn_wet = sprinkler_kelsey || rain;
  
  condition(my_lawn_wet && kelsey_lawn_wet);
  
  return rain;
};

var ANSWER = Infer({method: "enumerate"}, model);
var model = function() {
  var rained = flip(0.3);
  var mySprinkler = flip(0.5);
  var kelsey_sprinkler = flip(0.5);
  
  var my_lawn_wet = mySprinkler || rained;
  var kelsey_lawn_wet = kelsey_sprinkler || rained;
  
  condition(my_lawn_wet && kelsey_lawn_wet);
  
  return rained;
};

var ANSWER = Infer({method: 'enumerate'}, model);
var model = function() {
  var rain = flip(0.3);
  var sprinkler = flip(0.5);
  var kelseySprinkler = flip(0.5);
  
  var myLawnWet = rain || sprinkler;
  var kelseyLawnWet = rain || kelseySprinkler;
  
  condition(myLawnWet && kelseyLawnWet);
  
  return rain;
};

var ANSWER = Infer({method: 'enumerate'}, model);
var model = function() {
  var rain = flip(0.3);
  var sprinkler = flip(0.5);
  var kelseySprinkler = flip(0.5);

  var myLawnWet = rain || sprinkler;
  var kelseyLawnWet = rain || kelseySprinkler;

  condition(myLawnWet && kelseyLawnWet);

  return rain;
};

var ANSWER = Infer({ method: 'enumerate' }, model);
var model = function() {
  var rain = flip(0.3);
  var mySprinkler = flip(0.5);
  var kelseySprinkler = flip(0.5);

  var myLawnWet = rain || mySprinkler;
  var kelseyLawnWet = rain || kelseySprinkler;

  condition(myLawnWet && kelseyLawnWet);

  return rain;
};

var ANSWER = Infer({method: 'enumerate'}, model);
groundtruth output
true   0.6316
false  0.3684
raw JSON
{
  "__kind": "distribution",
  "probs": [
    0.3684210526315789,
    0.631578947368421
  ],
  "support": [
    false,
    true
  ]
}
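Again there is a one-line analytic check; the sketch below (plain JavaScript, added as a cross-check, not a recorded run) reproduces the posterior:

```js
// Cross-check of the 0.6316 groundtruth. Rain wets both lawns for free;
// without rain, each lawn needs its own independent sprinkler (0.5 * 0.5).
var pBothWetGivenRain = 1;
var pBothWetGivenNoRain = 0.5 * 0.5;
var pRainGivenBothWet =
  0.3 * pBothWetGivenRain /
  (0.3 * pBothWetGivenRain + 0.7 * pBothWetGivenNoRain); // 12/19 ~ 0.6316
```

Compared with ex5.a, the second wet lawn raises P(rain) from 6/13 ≈ 0.46 to 12/19 ≈ 0.63: two independent sprinklers both firing is less likely than a single morning of rain.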
feedback
not signed
probmods2-conditioning/ex5.c distribution 8 runs TV=0.00 TV<.05
prompt
system base instructions used by every run (same as above)
system WebPPL primer appended to system on `*-primer-*` runs only
WebPPL is a probabilistic programming language with JavaScript-like syntax. A few quirks to remember:

Control flow: WebPPL is a functional subset of JavaScript. There are no `for` or `while` loops. Iterate with `map(fn, list)`, `mapData({data: list}, fn)`, `repeat(n, fn)`, or recursion. `_.range(start, end)` produces integer ranges.

Functions: define with `var f = function(args) { ...; return value; }`. The last expression of a function is auto-returned only if it is a bare expression statement; otherwise use explicit `return`. Memoize with `mem(fn)` so repeated calls with the same arguments return the same value within an inference run.

Random primitives - lowercase samples directly, uppercase constructs a Distribution object. They are NOT interchangeable:

  flip(p)            -> boolean             (sample, no `sample()` needed)
  uniform(a, b)      -> number              (sample)
  gaussian(mu, sg)   -> number              (sample)
  beta(a, b)         -> number              (sample)
  dirichlet(alpha)   -> tensor              (sample; alpha must be a Vector)
  randomInteger(n)   -> int 0..n-1          (sample)
  uniformDrift({a, b, width})        -> sample  (drift kernel; do NOT wrap in sample())
  dirichletDrift({alpha, conc.})     -> sample  (drift kernel; do NOT wrap in sample())

Distribution *constructors* (used with `sample(D)`, `observe(D, val)`, or as return value of inference):
  Bernoulli({p})              Beta({a, b})              Gaussian({mu, sigma})
  Uniform({a, b})             Categorical({vs, ps})     Binomial({p, n})
  Dirichlet({alpha})          Multinomial({ps, n})      Poisson({mu})

Common gotchas:
- `Dirichlet({alpha: ...})` requires a Vector, not a JS array. Use `ones([n, 1])` or `Vector([1, 1, ...])`.
- WebPPL only supports `var`, not `let` / `const`.
- WebPPL is *single-assignment*: `var X = ...;` only. You can't declare `var X;` and assign later, and you can't reassign `X = ...` after the declaration. Use ternaries or recursion to express conditional bindings.
- Array methods like `.fill`, `.indexOf`, `.map`, `.forEach`, `.concat` may fail. Prefer `repeat(n, fn)`, `_.indexOf(arr, x)`, `map(fn, arr)`, `mapData({data: arr}, fn)`, and `arr1.concat(arr2)` only at the top of a returned expression.
- Always end top-level statements with `;`. WebPPL inherits JS ASI rules, so `var x = f()
[a, b]` parses as `var x = f()[a, b]` (subscript), not two statements.

Inference: `Infer({method: ..., samples: N, ...}, modelFn)` runs `modelFn` under the chosen method and returns a Distribution over its return value. Methods: `'enumerate'`, `'rejection'`, `'forward'`, `'MCMC'`, `'SMC'`. For MCMC, optional kernels: `kernel: {HMC: {steps, stepSize}}`; the drift kernels above can replace `uniform`/`dirichlet` calls inside the model.

Conditioning: `condition(bool)` zeros out worlds where bool is false. `observe(dist, value)` factors in `dist.score(value)`. `factor(score)` adds `score` to the log-probability directly.

Tensors / utilities: `Vector([a, b, ...])`, `T.get(vec, i)`, `ones([rows, cols])`, `_.range`, `_.flatten`, `_.zipObject`, `_.fromPairs`, `_.includes`, `_.parseInt`, `_.uniq`, `_.merge`. Arrays use `Array.isArray`, `Object.keys` works on plain objects.

Display: there's no `viz`, `print`, `display`, etc. that affects the answer - those are browser-only. The answer is the value of your program's last expression.
user message
I have a sprinkler in my garden that turns on each morning at random - half the time, independently each day. I live in a city where it rains on 30% of mornings. Lawns are wet whenever the sprinkler turns on, it rains, or both. Five people in the area - me, Kelsey, Kevin, Manu, and Josh - all have the same kind of sprinkler (each independent, p=0.5). One morning all five lawns are wet. Use mem so each person's sprinkler is modeled independently. End your program with Infer(...) returning the posterior over whether it rained.
groundtruth code
var ANSWER = (Infer({method: "enumerate"}, function() {
  var rain = flip(0.3);
  var sprinkler = mem(function(person) { return flip() });
  var wetLawn = function(person) { return rain || sprinkler(person) };
  condition(wetLawn("me"));
  condition(wetLawn("Kelsey"));
  condition(wetLawn("Kevin"));
  condition(wetLawn("Manu"));
  condition(wetLawn("Josh"));
  return rain;
}));
generated code
sonnet-46-primer-v3 TV<.05 haiku-45-noprimer-v3 TV=0 haiku-45-primer-v3 fail haiku-45-think-noprimer-v3 TV<1 haiku-45-think-primer-v3 TV<.05 sonnet-46-noprimer-v3 TV<.05 sonnet-46-think-noprimer-v3 TV<.05 sonnet-46-think-primer-v3 TV<.05
var model = function() {
  var rained = flip(0.3);
  
  var sprinklerOn = mem(function(person) {
    return flip(0.5);
  });
  
  var lawnWet = function(person) {
    return sprinklerOn(person) || rained;
  };
  
  var people = ["me", "Kelsey", "Kevin", "Manu", "Josh"];
  
  map(function(person) {
    condition(lawnWet(person));
  }, people);
  
  return rained;
};

var ANSWER = Infer({method: 'enumerate'}, model);
var sprinklerProb = 0.5;
var rainProb = 0.3;

var model = function() {
  var rain = flip(rainProb);
  
  var sprinkler = mem(function(person) {
    return flip(sprinklerProb);
  });
  
  var lawns = ["me", "Kelsey", "Kevin", "Manu", "Josh"];
  
  var lawnWet = function(person) {
    return rain || sprinkler(person);
  };
  
  // Condition on all lawns being wet
  mapData({data: lawns}, function(person) {
    condition(lawnWet(person));
  });
  
  return rain;
};

var ANSWER = Infer({method: "enumerate"}, model);
Error: All paths explored by Enumerate have probability zero.
var model = function() {
  // Whether it rained this morning
  var rained = flip(0.3);
  
  // Each person's sprinkler (memoized so each person has independent sprinkler)
  var sprinkler = mem(function(person) {
    return flip(0.5);
  });
  
  // Each person's lawn is wet if sprinkler OR rain
  var lawnWet = function(person) {
    return sprinkler(person) || rained;
  };
  
  // Observe that all five lawns are wet
  var people = ["me", "Kelsey", "Kevin", "Manu", "Josh"];
  mapData({data: people}, function(person) {
    observe(Bernoulli({p: 1.0}), lawnWet(person) ? 1 : 0);
  });
  
  return rained;
};

var ANSWER = Infer({method: "enumerate"}, model);
var model = function() {
  // Prior: it rains with probability 0.3
  var rained = flip(0.3);
  
  // Memoized function so each person's sprinkler is independent
  var getSprinkler = mem(function(person) {
    return flip(0.5);
  });
  
  // List of five people
  var people = ["me", "Kelsey", "Kevin", "Manu", "Josh"];
  
  // Observe that all five lawns are wet
  people.forEach(function(person) {
    var sprinklerOn = getSprinkler(person);
    var lawnWet = sprinklerOn || rained;
    condition(lawnWet);
  });
  
  return rained;
};

var ANSWER = Infer({model: model});
var model = function() {
  // Prior: did it rain this morning?
  var rained = flip(0.3);
  
  // Each person's sprinkler (independent, p=0.5)
  // Using mem so each person's sprinkler state is constant within a run
  var sprinkler = mem(function(person) {
    return flip(0.5);
  });
  
  var people = ["me", "Kelsey", "Kevin", "Manu", "Josh"];
  
  // Condition on all lawns being wet
  mapData({data: people}, function(person) {
    var sprinklerOn = sprinkler(person);
    var lawWet = sprinklerOn || rained;
    condition(lawWet);
  });
  
  return rained;
};

var ANSWER = Infer({method: 'enumerate'}, model);
var model = function() {
  var rained = flip(0.3);
  
  var sprinklerOn = mem(function(person) {
    return flip(0.5);
  });
  
  var lawnWet = function(person) {
    return sprinklerOn(person) || rained;
  };
  
  var people = ["me", "Kelsey", "Kevin", "Manu", "Josh"];
  
  map(function(person) {
    condition(lawnWet(person));
  }, people);
  
  return rained;
};

var ANSWER = Infer({method: "enumerate"}, model);
var model = function() {
  var rained = flip(0.3);

  var sprinklerOn = mem(function(person) {
    return flip(0.5);
  });

  var isWet = function(person) {
    return sprinklerOn(person) || rained;
  };

  var people = ["me", "Kelsey", "Kevin", "Manu", "Josh"];

  map(function(person) {
    condition(isWet(person));
  }, people);

  return rained;
};

var ANSWER = Infer({method: "enumerate"}, model);
var model = function() {
  var rained = flip(0.3);

  var sprinklerOn = mem(function(person) {
    return flip(0.5);
  });

  var lawnWet = function(person) {
    return sprinklerOn(person) || rained;
  };

  var people = ["me", "Kelsey", "Kevin", "Manu", "Josh"];

  map(function(person) {
    condition(lawnWet(person));
  }, people);

  return rained;
};

var ANSWER = Infer({method: 'enumerate'}, model);
groundtruth output
true   0.9320
false  0.0680
raw JSON
{
  "__kind": "distribution",
  "probs": [
    0.06796116504854369,
    0.9320388349514566
  ],
  "support": [
    false,
    true
  ]
}
feedback
not signed
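The five-lawn groundtruth posterior above follows from the same enumeration in closed form: given rain all lawns are wet, while without rain all five independent sprinklers must fire. A small Python check (not part of the harness):

```python
p_rain = 0.3
p_sprinkler = 0.5
n_people = 5  # me, Kelsey, Kevin, Manu, Josh

# P(all lawns wet | rain) = 1; P(all lawns wet | no rain) = 0.5^5.
joint_rain = p_rain * 1.0
joint_dry = (1 - p_rain) * p_sprinkler ** n_people

posterior_rain = joint_rain / (joint_rain + joint_dry)
# 0.3 / (0.3 + 0.7 * 0.03125) ≈ 0.9320, matching the groundtruth distribution
```

Five coincidentally wet lawns make rain a much better explanation than in the two-lawn version, which is why the posterior jumps from ≈0.63 to ≈0.93.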
probmods2-conditioning/ex6.c distribution 8 runs TV=0.00 TV=0
prompt
system base instructions used by every run
You are a WebPPL code generator. Given an exercise, produce a single WebPPL program that binds the answer to a top-level variable named `ANSWER`.

Answer format (strict): emit exactly one fenced code block.

```js
<your WebPPL program ending with: var ANSWER = <expression>;>
```

The last statement of your program MUST be `var ANSWER = <expression>;` where `<expression>` is the answer the prompt asks for - typically an `Infer({...}, model)` for a distribution, a numeric/array value, or an object literal `{key: value, ...}` for a record of multiple sub-answers.

Do not write prose, explanations, or multiple code blocks. Do not use `return` at the top level - WebPPL doesn't allow it (return is only for function bodies).
system WebPPL primer appended to system on `*-primer-*` runs only
WebPPL is a probabilistic programming language with JavaScript-like syntax. A few quirks to remember:

Control flow: WebPPL is a functional subset of JavaScript. There are no `for` or `while` loops. Iterate with `map(fn, list)`, `mapData({data: list}, fn)`, `repeat(n, fn)`, or recursion. `_.range(start, end)` produces integer ranges.

Functions: define with `var f = function(args) { ...; return value; }`. The last expression of a function is auto-returned only if it is a bare expression statement; otherwise use explicit `return`. Memoize with `mem(fn)` so repeated calls with the same arguments return the same value within an inference run.

Random primitives - lowercase samples directly, uppercase constructs a Distribution object. They are NOT interchangeable:

  flip(p)            -> boolean             (sample, no `sample()` needed)
  uniform(a, b)      -> number              (sample)
  gaussian(mu, sg)   -> number              (sample)
  beta(a, b)         -> number              (sample)
  dirichlet(alpha)   -> tensor              (sample; alpha must be a Vector)
  randomInteger(n)   -> int 0..n-1          (sample)
  uniformDrift({a, b, width})        -> sample  (drift kernel; do NOT wrap in sample())
  dirichletDrift({alpha, conc.})     -> sample  (drift kernel; do NOT wrap in sample())

Distribution *constructors* (used with `sample(D)`, `observe(D, val)`, or as return value of inference):
  Bernoulli({p})              Beta({a, b})              Gaussian({mu, sigma})
  Uniform({a, b})             Categorical({vs, ps})     Binomial({p, n})
  Dirichlet({alpha})          Multinomial({ps, n})      Poisson({mu})

Common gotchas:
- `Dirichlet({alpha: ...})` requires a Vector, not a JS array. Use `ones([n, 1])` or `Vector([1, 1, ...])`.
- WebPPL only supports `var`, not `let` / `const`.
- WebPPL is *single-assignment*: `var X = ...;` only. You can't declare `var X;` and assign later, and you can't reassign `X = ...` after the declaration. Use ternaries or recursion to express conditional bindings.
- Array methods like `.fill`, `.indexOf`, `.map`, `.forEach`, `.concat` may fail. Prefer `repeat(n, fn)`, `_.indexOf(arr, x)`, `map(fn, arr)`, `mapData({data: arr}, fn)`, and `arr1.concat(arr2)` only at the top of a returned expression.
- Always end top-level statements with `;`. WebPPL inherits JS ASI rules, so `var x = f()
[a, b]` parses as `var x = f()[a, b]` (subscript), not two statements.

Inference: `Infer({method: ..., samples: N, ...}, modelFn)` runs `modelFn` under the chosen method and returns a Distribution over its return value. Methods: `'enumerate'`, `'rejection'`, `'forward'`, `'MCMC'`, `'SMC'`. For MCMC, optional kernels: `kernel: {HMC: {steps, stepSize}}`; the drift kernels above can replace `uniform`/`dirichlet` calls inside the model.

Conditioning: `condition(bool)` zeros out worlds where bool is false. `observe(dist, value)` factors in `dist.score(value)`. `factor(score)` adds `score` to the log-probability directly.

Tensors / utilities: `Vector([a, b, ...])`, `T.get(vec, i)`, `ones([rows, cols])`, `_.range`, `_.flatten`, `_.zipObject`, `_.fromPairs`, `_.includes`, `_.parseInt`, `_.uniq`, `_.merge`. Arrays use `Array.isArray`, `Object.keys` works on plain objects.

Display: there's no `viz`, `print`, `display`, etc. that affects the answer - those are browser-only. The answer is the value of your program's last expression.
user message
A machine randomly draws a letter of the word "game" with probabilities {g: 0.05, a: 0.45, m: 0.05, e: 0.45}. Bob's probability of winning given letter h is 1/k^2 where k is the position of that letter in the word "game" (so g=1, a=2, m=3, e=4). We observe that Bob won, but don't know which letter he drew. Fill in the ...'s in the program below to compute p(letter | win), and end your program with the resulting distribution:
var checkVowel = function(letter) { _.includes(['a', 'e', 'i', 'o', 'u'], letter) };
var letterVals = ['g', 'a', 'm', 'e'];
var letterProbs = map(function(letter) { checkVowel(letter) ? 0.45 : 0.05 }, letterVals);
var letters = Categorical({vs: letterVals, ps: letterProbs});

Infer({method: 'enumerate'}, function() {
  var letter = sample(letters);
  var position = letterVals.indexOf(letter) + 1;
  var winProb = 1 / Math.pow(position, 2);
  condition(...);
  return ...;
});
groundtruth code
var checkVowel = function(letter) { _.includes(['a', 'e', 'i', 'o', 'u'], letter) };
var letterVals = ['g', 'a', 'm', 'e'];
var letterProbs = map(function(letter) { checkVowel(letter) ? 0.45 : 0.05 }, letterVals);
var letters = Categorical({vs: letterVals, ps: letterProbs});
var ANSWER = (Infer({method: 'enumerate'}, function() {
  var letter = sample(letters);
  var position = letterVals.indexOf(letter) + 1;
  var winProb = 1 / Math.pow(position, 2);
  condition(flip(winProb));
  return letter;
}));
generated code
sonnet-46-primer-v3 TV=0 haiku-45-noprimer-v3 TV=0 haiku-45-primer-v3 TV=0 haiku-45-think-noprimer-v3 TV=0 haiku-45-think-primer-v3 TV=0 sonnet-46-noprimer-v3 TV=0 sonnet-46-think-noprimer-v3 TV=0 sonnet-46-think-primer-v3 TV=0
var checkVowel = function(letter) { _.includes(['a', 'e', 'i', 'o', 'u'], letter) };
var letterVals = ['g', 'a', 'm', 'e'];
var letterProbs = map(function(letter) { checkVowel(letter) ? 0.45 : 0.05 }, letterVals);
var letters = Categorical({vs: letterVals, ps: letterProbs});

var ANSWER = Infer({method: 'enumerate'}, function() {
  var letter = sample(letters);
  var position = _.indexOf(letterVals, letter) + 1;
  var winProb = 1 / Math.pow(position, 2);
  condition(flip(winProb));
  return letter;
});
var checkVowel = function(letter) { _.includes(['a', 'e', 'i', 'o', 'u'], letter) };
var letterVals = ['g', 'a', 'm', 'e'];
var letterProbs = map(function(letter) { checkVowel(letter) ? 0.45 : 0.05 }, letterVals);
var letters = Categorical({vs: letterVals, ps: letterProbs});

var ANSWER = Infer({method: 'enumerate'}, function() {
  var letter = sample(letters);
  var position = letterVals.indexOf(letter) + 1;
  var winProb = 1 / Math.pow(position, 2);
  condition(flip(winProb));
  return letter;
});
var checkVowel = function(letter) { _.includes(['a', 'e', 'i', 'o', 'u'], letter) };
var letterVals = ['g', 'a', 'm', 'e'];
var letterProbs = map(function(letter) { checkVowel(letter) ? 0.45 : 0.05 }, letterVals);
var letters = Categorical({vs: letterVals, ps: letterProbs});

var ANSWER = Infer({method: 'enumerate'}, function() {
  var letter = sample(letters);
  var position = letterVals.indexOf(letter) + 1;
  var winProb = 1 / Math.pow(position, 2);
  condition(flip(winProb));
  return letter;
});
var checkVowel = function(letter) { _.includes(['a', 'e', 'i', 'o', 'u'], letter) };
var letterVals = ['g', 'a', 'm', 'e'];
var letterProbs = map(function(letter) { checkVowel(letter) ? 0.45 : 0.05 }, letterVals);
var letters = Categorical({vs: letterVals, ps: letterProbs});

var ANSWER = Infer({method: 'enumerate'}, function() {
  var letter = sample(letters);
  var position = letterVals.indexOf(letter) + 1;
  var winProb = 1 / Math.pow(position, 2);
  condition(flip(winProb));
  return letter;
});
var checkVowel = function(letter) { _.includes(['a', 'e', 'i', 'o', 'u'], letter) };
var letterVals = ['g', 'a', 'm', 'e'];
var letterProbs = map(function(letter) { checkVowel(letter) ? 0.45 : 0.05 }, letterVals);
var letters = Categorical({vs: letterVals, ps: letterProbs});

var ANSWER = Infer({method: 'enumerate'}, function() {
  var letter = sample(letters);
  var position = letterVals.indexOf(letter) + 1;
  var winProb = 1 / Math.pow(position, 2);
  condition(flip(winProb));
  return letter;
});
var checkVowel = function(letter) { _.includes(['a', 'e', 'i', 'o', 'u'], letter) };
var letterVals = ['g', 'a', 'm', 'e'];
var letterProbs = map(function(letter) { checkVowel(letter) ? 0.45 : 0.05 }, letterVals);
var letters = Categorical({vs: letterVals, ps: letterProbs});

var ANSWER = Infer({method: 'enumerate'}, function() {
  var letter = sample(letters);
  var position = letterVals.indexOf(letter) + 1;
  var winProb = 1 / Math.pow(position, 2);
  condition(flip(winProb));
  return letter;
});
var checkVowel = function(letter) { _.includes(['a', 'e', 'i', 'o', 'u'], letter) };
var letterVals = ['g', 'a', 'm', 'e'];
var letterProbs = map(function(letter) { checkVowel(letter) ? 0.45 : 0.05 }, letterVals);
var letters = Categorical({vs: letterVals, ps: letterProbs});

var ANSWER = Infer({method: 'enumerate'}, function() {
  var letter = sample(letters);
  var position = letterVals.indexOf(letter) + 1;
  var winProb = 1 / Math.pow(position, 2);
  condition(flip(winProb));
  return letter;
});
var checkVowel = function(letter) { _.includes(['a', 'e', 'i', 'o', 'u'], letter) };
var letterVals = ['g', 'a', 'm', 'e'];
var letterProbs = map(function(letter) { checkVowel(letter) ? 0.45 : 0.05 }, letterVals);
var letters = Categorical({vs: letterVals, ps: letterProbs});

var ANSWER = Infer({method: 'enumerate'}, function() {
  var letter = sample(letters);
  var position = letterVals.indexOf(letter) + 1;
  var winProb = 1 / Math.pow(position, 2);
  condition(flip(winProb));
  return letter;
});
groundtruth output
a  0.5735
g  0.2549
e  0.1434
m  0.0283
raw JSON
{
  "__kind": "distribution",
  "probs": [
    0.14336283185840704,
    0.02831858407079646,
    0.5734513274336283,
    0.25486725663716814
  ],
  "support": [
    "e",
    "m",
    "a",
    "g"
  ]
}
feedback
not signed
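The letter posterior above is just the prior reweighted by the win likelihood 1/k², then normalized. A standalone Python replay of that arithmetic (not part of the harness):

```python
letters = ['g', 'a', 'm', 'e']
prior = {'g': 0.05, 'a': 0.45, 'm': 0.05, 'e': 0.45}

# P(win | letter) = 1/k^2 with k = 1-based position in "game".
joint = {L: prior[L] / (k + 1) ** 2 for k, L in enumerate(letters)}
total = sum(joint.values())
posterior = {L: p / total for L, p in joint.items()}
# posterior: a ≈ 0.5735, g ≈ 0.2549, e ≈ 0.1434, m ≈ 0.0283
```

Note how 'g' beats 'e' despite its 9× smaller prior: its position-1 win probability of 1 outweighs e's 1/16.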
probmods2-conditioning/ex6.d distribution 8 runs TV=0.00 TV=0
prompt
system base instructions used by every run
You are a WebPPL code generator. Given an exercise, produce a single WebPPL program that binds the answer to a top-level variable named `ANSWER`.

Answer format (strict): emit exactly one fenced code block.

```js
<your WebPPL program ending with: var ANSWER = <expression>;>
```

The last statement of your program MUST be `var ANSWER = <expression>;` where `<expression>` is the answer the prompt asks for - typically an `Infer({...}, model)` for a distribution, a numeric/array value, or an object literal `{key: value, ...}` for a record of multiple sub-answers.

Do not write prose, explanations, or multiple code blocks. Do not use `return` at the top level - WebPPL doesn't allow it (return is only for function bodies).
system WebPPL primer appended to system on `*-primer-*` runs only
WebPPL is a probabilistic programming language with JavaScript-like syntax. A few quirks to remember:

Control flow: WebPPL is a functional subset of JavaScript. There are no `for` or `while` loops. Iterate with `map(fn, list)`, `mapData({data: list}, fn)`, `repeat(n, fn)`, or recursion. `_.range(start, end)` produces integer ranges.

Functions: define with `var f = function(args) { ...; return value; }`. The last expression of a function is auto-returned only if it is a bare expression statement; otherwise use explicit `return`. Memoize with `mem(fn)` so repeated calls with the same arguments return the same value within an inference run.

Random primitives - lowercase samples directly, uppercase constructs a Distribution object. They are NOT interchangeable:

  flip(p)            -> boolean             (sample, no `sample()` needed)
  uniform(a, b)      -> number              (sample)
  gaussian(mu, sg)   -> number              (sample)
  beta(a, b)         -> number              (sample)
  dirichlet(alpha)   -> tensor              (sample; alpha must be a Vector)
  randomInteger(n)   -> int 0..n-1          (sample)
  uniformDrift({a, b, width})        -> sample  (drift kernel; do NOT wrap in sample())
  dirichletDrift({alpha, conc.})     -> sample  (drift kernel; do NOT wrap in sample())

Distribution *constructors* (used with `sample(D)`, `observe(D, val)`, or as return value of inference):
  Bernoulli({p})              Beta({a, b})              Gaussian({mu, sigma})
  Uniform({a, b})             Categorical({vs, ps})     Binomial({p, n})
  Dirichlet({alpha})          Multinomial({ps, n})      Poisson({mu})

Common gotchas:
- `Dirichlet({alpha: ...})` requires a Vector, not a JS array. Use `ones([n, 1])` or `Vector([1, 1, ...])`.
- WebPPL only supports `var`, not `let` / `const`.
- WebPPL is *single-assignment*: `var X = ...;` only. You can't declare `var X;` and assign later, and you can't reassign `X = ...` after the declaration. Use ternaries or recursion to express conditional bindings.
- Array methods like `.fill`, `.indexOf`, `.map`, `.forEach`, `.concat` may fail. Prefer `repeat(n, fn)`, `_.indexOf(arr, x)`, `map(fn, arr)`, `mapData({data: arr}, fn)`, and `arr1.concat(arr2)` only at the top of a returned expression.
- Always end top-level statements with `;`. WebPPL inherits JS ASI rules, so `var x = f()
[a, b]` parses as `var x = f()[a, b]` (subscript), not two statements.

Inference: `Infer({method: ..., samples: N, ...}, modelFn)` runs `modelFn` under the chosen method and returns a Distribution over its return value. Methods: `'enumerate'`, `'rejection'`, `'forward'`, `'MCMC'`, `'SMC'`. For MCMC, optional kernels: `kernel: {HMC: {steps, stepSize}}`; the drift kernels above can replace `uniform`/`dirichlet` calls inside the model.

Conditioning: `condition(bool)` zeros out worlds where bool is false. `observe(dist, value)` factors in `dist.score(value)`. `factor(score)` adds `score` to the log-probability directly.

Tensors / utilities: `Vector([a, b, ...])`, `T.get(vec, i)`, `ones([rows, cols])`, `_.range`, `_.flatten`, `_.zipObject`, `_.fromPairs`, `_.includes`, `_.parseInt`, `_.uniq`, `_.merge`. Arrays use `Array.isArray`, `Object.keys` works on plain objects.

Display: there's no `viz`, `print`, `display`, etc. that affects the answer - those are browser-only. The answer is the value of your program's last expression.
user message
Same casino-game setup: letters {g, a, m, e} drawn with probs {0.05, 0.45, 0.05, 0.45}, win prob = 1/k^2 with k = position in "game". Fill in the program below so the posterior distribution is over vowel vs consonant (instead of over the letter itself), given that Bob won:
var checkVowel = function(letter) { _.includes(['a', 'e', 'i', 'o', 'u'], letter) };
var letterVals = ['g', 'a', 'm', 'e'];
var letterProbs = map(function(letter) { checkVowel(letter) ? 0.45 : 0.05 }, letterVals);
var letters = Categorical({vs: letterVals, ps: letterProbs});

Infer({method: 'enumerate'}, function() {
  var letter = sample(letters);
  var position = letterVals.indexOf(letter) + 1;
  var winProb = 1 / Math.pow(position, 2);
  condition(...);
  return ...;
});
End your program with the resulting Infer(...) distribution.
groundtruth code
var checkVowel = function(letter) { _.includes(['a', 'e', 'i', 'o', 'u'], letter) };
var letterVals = ['g', 'a', 'm', 'e'];
var letterProbs = map(function(letter) { checkVowel(letter) ? 0.45 : 0.05 }, letterVals);
var letters = Categorical({vs: letterVals, ps: letterProbs});
var ANSWER = (Infer({method: 'enumerate'}, function() {
  var letter = sample(letters);
  var position = letterVals.indexOf(letter) + 1;
  var winProb = 1 / Math.pow(position, 2);
  condition(flip(winProb));
  return checkVowel(letter) ? 'vowel' : 'consonant';
}));
generated code
sonnet-46-primer-v3 TV=0 haiku-45-noprimer-v3 TV=0 haiku-45-primer-v3 TV=0 haiku-45-think-noprimer-v3 TV=0 haiku-45-think-primer-v3 TV=0 sonnet-46-noprimer-v3 TV=0 sonnet-46-think-noprimer-v3 TV=0 sonnet-46-think-primer-v3 TV=0
var checkVowel = function(letter) { _.includes(['a', 'e', 'i', 'o', 'u'], letter) };
var letterVals = ['g', 'a', 'm', 'e'];
var letterProbs = map(function(letter) { checkVowel(letter) ? 0.45 : 0.05 }, letterVals);
var letters = Categorical({vs: letterVals, ps: letterProbs});

var ANSWER = Infer({method: 'enumerate'}, function() {
  var letter = sample(letters);
  var position = _.indexOf(letterVals, letter) + 1;
  var winProb = 1 / Math.pow(position, 2);
  condition(flip(winProb));
  return checkVowel(letter) ? 'vowel' : 'consonant';
});
var checkVowel = function(letter) { _.includes(['a', 'e', 'i', 'o', 'u'], letter) };
var letterVals = ['g', 'a', 'm', 'e'];
var letterProbs = map(function(letter) { checkVowel(letter) ? 0.45 : 0.05 }, letterVals);
var letters = Categorical({vs: letterVals, ps: letterProbs});

var ANSWER = Infer({method: 'enumerate'}, function() {
  var letter = sample(letters);
  var position = letterVals.indexOf(letter) + 1;
  var winProb = 1 / Math.pow(position, 2);
  condition(flip(winProb));
  return checkVowel(letter) ? 'vowel' : 'consonant';
});
var checkVowel = function(letter) { _.includes(['a', 'e', 'i', 'o', 'u'], letter) };
var letterVals = ['g', 'a', 'm', 'e'];
var letterProbs = map(function(letter) { checkVowel(letter) ? 0.45 : 0.05 }, letterVals);
var letters = Categorical({vs: letterVals, ps: letterProbs});

var ANSWER = Infer({method: 'enumerate'}, function() {
  var letter = sample(letters);
  var position = letterVals.indexOf(letter) + 1;
  var winProb = 1 / Math.pow(position, 2);
  condition(flip(winProb));
  var isVowel = checkVowel(letter);
  return isVowel ? 'vowel' : 'consonant';
});
var checkVowel = function(letter) { _.includes(['a', 'e', 'i', 'o', 'u'], letter) };
var letterVals = ['g', 'a', 'm', 'e'];
var letterProbs = map(function(letter) { checkVowel(letter) ? 0.45 : 0.05 }, letterVals);
var letters = Categorical({vs: letterVals, ps: letterProbs});

var ANSWER = Infer({method: 'enumerate'}, function() {
  var letter = sample(letters);
  var position = letterVals.indexOf(letter) + 1;
  var winProb = 1 / Math.pow(position, 2);
  condition(sample(Bernoulli({p: winProb})));
  return checkVowel(letter) ? 'vowel' : 'consonant';
});
var checkVowel = function(letter) { _.includes(['a', 'e', 'i', 'o', 'u'], letter) };
var letterVals = ['g', 'a', 'm', 'e'];
var letterProbs = map(function(letter) { checkVowel(letter) ? 0.45 : 0.05 }, letterVals);
var letters = Categorical({vs: letterVals, ps: letterProbs});

var ANSWER = Infer({method: 'enumerate'}, function() {
  var letter = sample(letters);
  var position = letterVals.indexOf(letter) + 1;
  var winProb = 1 / Math.pow(position, 2);
  condition(flip(winProb));
  var isVowel = checkVowel(letter);
  return isVowel ? 'vowel' : 'consonant';
});
var checkVowel = function(letter) { _.includes(['a', 'e', 'i', 'o', 'u'], letter) };
var letterVals = ['g', 'a', 'm', 'e'];
var letterProbs = map(function(letter) { checkVowel(letter) ? 0.45 : 0.05 }, letterVals);
var letters = Categorical({vs: letterVals, ps: letterProbs});

var ANSWER = Infer({method: 'enumerate'}, function() {
  var letter = sample(letters);
  var position = letterVals.indexOf(letter) + 1;
  var winProb = 1 / Math.pow(position, 2);
  condition(flip(winProb));
  return checkVowel(letter) ? 'vowel' : 'consonant';
});
var checkVowel = function(letter) { _.includes(['a', 'e', 'i', 'o', 'u'], letter) };
var letterVals = ['g', 'a', 'm', 'e'];
var letterProbs = map(function(letter) { checkVowel(letter) ? 0.45 : 0.05 }, letterVals);
var letters = Categorical({vs: letterVals, ps: letterProbs});

var ANSWER = Infer({method: 'enumerate'}, function() {
  var letter = sample(letters);
  var position = letterVals.indexOf(letter) + 1;
  var winProb = 1 / Math.pow(position, 2);
  condition(flip(winProb));
  return checkVowel(letter) ? 'vowel' : 'consonant';
});
var checkVowel = function(letter) { _.includes(['a', 'e', 'i', 'o', 'u'], letter); };
var letterVals = ['g', 'a', 'm', 'e'];
var letterProbs = map(function(letter) { checkVowel(letter) ? 0.45 : 0.05 }, letterVals);
var letters = Categorical({vs: letterVals, ps: letterProbs});

var ANSWER = Infer({method: 'enumerate'}, function() {
  var letter = sample(letters);
  var position = _.indexOf(letterVals, letter) + 1;
  var winProb = 1 / Math.pow(position, 2);
  condition(flip(winProb));
  return checkVowel(letter) ? 'vowel' : 'consonant';
});
groundtruth output
vowel      0.7168
consonant  0.2832
raw JSON
{
  "__kind": "distribution",
  "probs": [
    0.7168141592920354,
    0.28318584070796465
  ],
  "support": [
    "vowel",
    "consonant"
  ]
}
feedback
not signed
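The vowel/consonant groundtruth is the previous letter posterior collapsed over the `checkVowel` predicate. A Python sketch of that marginalization (offline check, not part of the harness):

```python
letters = ['g', 'a', 'm', 'e']
vowels = {'a', 'e', 'i', 'o', 'u'}

# Reweight each letter's prior by the win likelihood 1/k^2,
# accumulating mass by vowel-vs-consonant instead of by letter.
joint = {'vowel': 0.0, 'consonant': 0.0}
for k, letter in enumerate(letters, start=1):
    prior = 0.45 if letter in vowels else 0.05
    kind = 'vowel' if letter in vowels else 'consonant'
    joint[kind] += prior / k ** 2

total = sum(joint.values())
posterior = {kind: p / total for kind, p in joint.items()}
# posterior['vowel'] ≈ 0.7168, posterior['consonant'] ≈ 0.2832
```

Equivalently, vowel mass is a + e = 0.1125 + 0.028125 = 0.140625 over the same normalizer as before, giving ≈ 0.7168.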
generative-models 10 atoms 6✓ · 3✗
probmods2-generative-models/ex1.b record(p1, p2, p3) 8 runs TV=0.00 TV=0
prompt
system base instructions used by every run
You are a WebPPL code generator. Given an exercise, produce a single WebPPL program that binds the answer to a top-level variable named `ANSWER`.

Answer format (strict): emit exactly one fenced code block.

```js
<your WebPPL program ending with: var ANSWER = <expression>;>
```

The last statement of your program MUST be `var ANSWER = <expression>;` where `<expression>` is the answer the prompt asks for - typically an `Infer({...}, model)` for a distribution, a numeric/array value, or an object literal `{key: value, ...}` for a record of multiple sub-answers.

Do not write prose, explanations, or multiple code blocks. Do not use `return` at the top level - WebPPL doesn't allow it (return is only for function bodies).
system WebPPL primer appended to system on `*-primer-*` runs only
WebPPL is a probabilistic programming language with JavaScript-like syntax. A few quirks to remember:

Control flow: WebPPL is a functional subset of JavaScript. There are no `for` or `while` loops. Iterate with `map(fn, list)`, `mapData({data: list}, fn)`, `repeat(n, fn)`, or recursion. `_.range(start, end)` produces integer ranges.

Functions: define with `var f = function(args) { ...; return value; }`. The last expression of a function is auto-returned only if it is a bare expression statement; otherwise use explicit `return`. Memoize with `mem(fn)` so repeated calls with the same arguments return the same value within an inference run.

Random primitives - lowercase samples directly, uppercase constructs a Distribution object. They are NOT interchangeable:

  flip(p)            -> boolean             (sample, no `sample()` needed)
  uniform(a, b)      -> number              (sample)
  gaussian(mu, sg)   -> number              (sample)
  beta(a, b)         -> number              (sample)
  dirichlet(alpha)   -> tensor              (sample; alpha must be a Vector)
  randomInteger(n)   -> int 0..n-1          (sample)
  uniformDrift({a, b, width})        -> sample  (drift kernel; do NOT wrap in sample())
  dirichletDrift({alpha, conc.})     -> sample  (drift kernel; do NOT wrap in sample())

Distribution *constructors* (used with `sample(D)`, `observe(D, val)`, or as return value of inference):
  Bernoulli({p})              Beta({a, b})              Gaussian({mu, sigma})
  Uniform({a, b})             Categorical({vs, ps})     Binomial({p, n})
  Dirichlet({alpha})          Multinomial({ps, n})      Poisson({mu})

Common gotchas:
- `Dirichlet({alpha: ...})` requires a Vector, not a JS array. Use `ones([n, 1])` or `Vector([1, 1, ...])`.
- WebPPL only supports `var`, not `let` / `const`.
- WebPPL is *single-assignment*: `var X = ...;` only. You can't declare `var X;` and assign later, and you can't reassign `X = ...` after the declaration. Use ternaries or recursion to express conditional bindings.
- Array methods like `.fill`, `.indexOf`, `.map`, `.forEach`, `.concat` may fail. Prefer `repeat(n, fn)`, `_.indexOf(arr, x)`, `map(fn, arr)`, `mapData({data: arr}, fn)`, and `arr1.concat(arr2)` only at the top of a returned expression.
- Always end top-level statements with `;`. WebPPL inherits JS ASI rules, so `var x = f()
[a, b]` parses as `var x = f()[a, b]` (subscript), not two statements.

Inference: `Infer({method: ..., samples: N, ...}, modelFn)` runs `modelFn` under the chosen method and returns a Distribution over its return value. Methods: `'enumerate'`, `'rejection'`, `'forward'`, `'MCMC'`, `'SMC'`. For MCMC, optional kernels: `kernel: {HMC: {steps, stepSize}}`; the drift kernels above can replace `uniform`/`dirichlet` calls inside the model.

Conditioning: `condition(bool)` zeros out worlds where bool is false. `observe(dist, value)` factors in `dist.score(value)`. `factor(score)` adds `score` to the log-probability directly.

Tensors / utilities: `Vector([a, b, ...])`, `T.get(vec, i)`, `ones([rows, cols])`, `_.range`, `_.flatten`, `_.zipObject`, `_.fromPairs`, `_.includes`, `_.parseInt`, `_.uniq`, `_.merge`. Arrays use `Array.isArray`, `Object.keys` works on plain objects.

Display: there's no `viz`, `print`, `display`, etc. that affects the answer - those are browser-only. The answer is the value of your program's last expression.
user message
Verify by sampling 1000 times each that the following three programs have the same marginal distribution: Program 1: flip() ? flip(.7) : flip(.1) Program 2: flip(flip() ? .7 : .1) Program 3: flip(.4) Return an object literal {p1, p2, p3} where each value is an array of 1000 samples from the corresponding program (use repeat(1000, function() {...})).
groundtruth code
var ANSWER = (({
  p1: repeat(1000, function() { return flip() ? flip(.7) : flip(.1); }),
  p2: repeat(1000, function() { return flip(flip() ? .7 : .1); }),
  p3: repeat(1000, function() { return flip(.4); })
}));
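A 1000-sample check is inherently noisy; the agreement the exercise asserts can also be verified exactly by enumerating each program's marginal. A plain-JavaScript sketch (outside WebPPL; variable names ours):

```javascript
// Program 1: flip() ? flip(.7) : flip(.1)
// A fair coin picks which biased coin gets flipped:
const p1 = 0.5 * 0.7 + 0.5 * 0.1;

// Program 2: flip(flip() ? .7 : .1)
// The fair coin picks the weight first; same mixture, same marginal:
const p2 = 0.5 * 0.7 + 0.5 * 0.1;

// Program 3: flip(.4) is Bernoulli(0.4) directly:
const p3 = 0.4;

// All three marginals equal 0.4, so the three sample arrays should
// agree up to sampling noise.
```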
generated code
sonnet-46-primer-v3 TV=0 haiku-45-noprimer-v3 TV=0 haiku-45-primer-v3 TV=0 haiku-45-think-noprimer-v3 TV=0 haiku-45-think-primer-v3 TV=0 sonnet-46-noprimer-v3 TV=0 sonnet-46-think-noprimer-v3 TV=0 sonnet-46-think-primer-v3 TV=0
var p1 = repeat(1000, function() { return flip() ? flip(.7) : flip(.1); });
var p2 = repeat(1000, function() { return flip(flip() ? .7 : .1); });
var p3 = repeat(1000, function() { return flip(.4); });

var ANSWER = {p1: p1, p2: p2, p3: p3};
var p1 = repeat(1000, function() {
  return flip() ? flip(.7) : flip(.1);
});

var p2 = repeat(1000, function() {
  return flip(flip() ? .7 : .1);
});

var p3 = repeat(1000, function() {
  return flip(.4);
});

var ANSWER = {p1: p1, p2: p2, p3: p3};
var p1 = repeat(1000, function() {
  return flip() ? flip(.7) : flip(.1);
});

var p2 = repeat(1000, function() {
  return flip(flip() ? .7 : .1);
});

var p3 = repeat(1000, function() {
  return flip(.4);
});

var ANSWER = {p1: p1, p2: p2, p3: p3};
var p1 = repeat(1000, function() {
  return flip() ? flip(.7) : flip(.1);
});

var p2 = repeat(1000, function() {
  return flip(flip() ? .7 : .1);
});

var p3 = repeat(1000, function() {
  return flip(.4);
});

var ANSWER = {p1: p1, p2: p2, p3: p3};
var p1 = repeat(1000, function() {
  return flip() ? flip(.7) : flip(.1);
});

var p2 = repeat(1000, function() {
  return flip(flip() ? .7 : .1);
});

var p3 = repeat(1000, function() {
  return flip(.4);
});

var ANSWER = {p1: p1, p2: p2, p3: p3};
var p1 = repeat(1000, function() { return flip() ? flip(.7) : flip(.1); });
var p2 = repeat(1000, function() { return flip(flip() ? .7 : .1); });
var p3 = repeat(1000, function() { return flip(.4); });

var ANSWER = {p1: p1, p2: p2, p3: p3};
var p1 = repeat(1000, function() { return flip() ? flip(.7) : flip(.1); });
var p2 = repeat(1000, function() { return flip(flip() ? .7 : .1); });
var p3 = repeat(1000, function() { return flip(.4); });

var ANSWER = {p1: p1, p2: p2, p3: p3};
var p1 = repeat(1000, function() { return flip() ? flip(.7) : flip(.1); });
var p2 = repeat(1000, function() { return flip(flip() ? .7 : .1); });
var p3 = repeat(1000, function() { return flip(.4); });

var ANSWER = {p1: p1, p2: p2, p3: p3};
groundtruth output
{
  "p1": [ ...1000 boolean samples, elided... ],

... (27819 more chars truncated)
feedback
not signed
probmods2-generative-models/ex1.c samples 8 runs shape!
prompt
system base instructions used by every run (identical to the copy shown above)
system WebPPL primer appended to system on `*-primer-*` runs only (identical to the copy shown above)
user message
Write a *new* WebPPL expression with the same marginal distribution as flip(.4) (i.e., true with probability 0.4) that looks structurally different from flip(.4), flip() ? flip(.7) : flip(.1), and flip(flip() ? .7 : .1). End with that expression - the harness will rerun your program multiple times to estimate its marginal.
groundtruth code
var ANSWER = (flip() ? false : flip(.8));
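The groundtruth expression `flip() ? false : flip(.8)` is true only when the outer fair coin comes up false and the inner `flip(.8)` comes up true. A plain-JavaScript enumeration sketch of that calculation (names ours):

```javascript
// flip() ? false : flip(.8) — branch on the outer fair coin:
//   outer true  (prob 0.5): result is false     -> contributes 0
//   outer false (prob 0.5): result is flip(.8)  -> contributes 0.5 * 0.8
const pTrue = 0.5 * 0 + 0.5 * 0.8;  // marginal P(true) = 0.4
```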
generated code
sonnet-46-primer-v3 shape! haiku-45-noprimer-v3 shape! haiku-45-primer-v3 shape! haiku-45-think-noprimer-v3 shape! haiku-45-think-primer-v3 shape! sonnet-46-noprimer-v3 shape! sonnet-46-think-noprimer-v3 TV<.05 sonnet-46-think-primer-v3 shape!
var ANSWER = sample(Bernoulli({p: 0.4}));
var ANSWER = flip() && flip(.5) || flip(.2);
var ANSWER = (function() {
  var x = uniform(0, 1);
  return x < 0.4;
})();
var ANSWER = flip(0.5) && flip(0.8);
var ANSWER = flip(.5) && flip(.8);
var ANSWER = (function() {
  // Use a geometric-like construction: sample from a categorical
  // and map to boolean
  // P(true) = 0.4 using: sample uniform [0,1) < 0.4
  // But structurally different: use a mixture via enumeration
  // Let's use: marginalize over a latent variable
  // Pick a random integer 1-5, return true if <= 2 (2/5 = 0.4)
  var n = randomInteger(5); // 0,1,2,3,4 uniformly
  n < 2
})();

var ANSWER = (function() {
  // Use beta/bernoulli or nested structure
  // Structurally: condition on a joint sample
  var model = Infer({method: 'enumerate'}, function() {
    var a = flip(0.8);
    var b = flip(0.5);
    return a && b;
  });
  return sample(model);
})();
// flip(.5) && flip(.8) => P(true) = 0.5 * 0.8 = 0.4
var ANSWER = Infer({method: 'enumerate'}, function() {
  return flip(.5) && flip(.8);
});
var ANSWER = randomInteger(10) < 4;
groundtruth output
[ ...100 boolean samples (true/false), elided... ]
feedback
not signed
probmods2-generative-models/ex2.b samples 8 runs TV=1.00 TV=1
prompt
system base instructions used by every run (identical to the copy shown above)
system WebPPL primer appended to system on `*-primer-*` runs only (identical to the copy shown above)
user message
Given:
var foo = function() { return flip(); };
[foo(), foo(), foo()];
Each call to foo() independently flips, so the list can have any combination of trues and falses. Modify the program using mem so that [foo(), foo(), foo()] is always either [true, true, true] or [false, false, false]. End with the list expression - the harness will rerun your program multiple times. Use a ; to terminate each top-level statement.
groundtruth code
var foo = mem(function() { return flip(); });
var ANSWER = ([foo(), foo(), foo()]);
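The effect of `mem` here can be mimicked in plain JavaScript with a closure-backed cache — a sketch of the semantics only, not WebPPL's implementation:

```javascript
// A mem-like memoizer: cache results keyed by the call's arguments,
// so repeated calls with the same arguments reuse one sampled value.
function mem(fn) {
  const cache = new Map();
  return (...args) => {
    const key = JSON.stringify(args);
    if (!cache.has(key)) cache.set(key, fn(...args));
    return cache.get(key);
  };
}

const flip = (p = 0.5) => Math.random() < p;
const foo = mem(() => flip());

// All three calls share the single zero-argument cache entry, so the
// list is always [true, true, true] or [false, false, false].
const xs = [foo(), foo(), foo()];
```

In WebPPL the cache is scoped to a single run of the program, so re-executions still vary between runs — which is why the harness observes both all-true and all-false lists.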
generated code
sonnet-46-primer-v3 TV=1 haiku-45-noprimer-v3 TV=1 haiku-45-primer-v3 TV=1 haiku-45-think-noprimer-v3 TV=1 haiku-45-think-primer-v3 fail sonnet-46-noprimer-v3 TV=1 sonnet-46-think-noprimer-v3 TV=1 sonnet-46-think-primer-v3 TV=1
var foo = mem(function() { return flip(); });
var ANSWER = [foo(), foo(), foo()];
var foo = mem(function() { return flip(); });
var ANSWER = [foo(), foo(), foo()];
var foo = mem(function() { return flip(); });
var ANSWER = [foo(), foo(), foo()];
var foo = mem(function() { return flip(); });
var ANSWER = [foo(), foo(), foo()];
ReferenceError: ANSWER is not defined
var foo = mem(function() { return flip(); });
[foo(), foo(), foo()];
var foo = mem(function() { return flip(); });
var ANSWER = [foo(), foo(), foo()];
var foo = mem(function() { return flip(); });
var ANSWER = [foo(), foo(), foo()];
var foo = mem(function() { return flip(); });
var ANSWER = [foo(), foo(), foo()];
groundtruth output
[ ...100 sampled lists, each [true, true, true] or [false, false, false], elided... ]
feedback
not signed
probmods2-generative-models/ex2.c samples 8 runs TV=1.00 TV=1
prompt
system base instructions used by every run (identical to the copy shown above)
system WebPPL primer appended to system on `*-primer-*` runs only (identical to the copy shown above)
user message
Given the memoized program:
var foo = mem(function() { return flip(); });
[foo(), foo(), foo()];
Modify it so that the first two elements are always equal but the third can differ. Hint: pass an argument to foo that distinguishes the two calls you want to be the same. End with the list expression. Use a ; to terminate each top-level statement.
groundtruth code
var foo = mem(function(x) { return flip(); });
var ANSWER = ([foo(0), foo(0), foo(1)]);
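Keying the memoizer on an argument splits the cache: the two `foo(0)` calls share one entry while `foo(1)` gets its own independent flip. Enumerating the two underlying coins in plain JavaScript (a sketch; names ours):

```javascript
// Two independent coins: `a` backs foo(0) (used twice), `b` backs foo(1).
const outcomes = [];
for (const a of [true, false]) {
  for (const b of [true, false]) {
    outcomes.push([a, a, b]);  // the list [foo(0), foo(0), foo(1)]
  }
}
// Four equiprobable lists; positions 0 and 1 always agree,
// position 2 varies freely.
```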
generated code
sonnet-46-primer-v3 TV=1 haiku-45-noprimer-v3 TV=1 haiku-45-primer-v3 TV=1 haiku-45-think-noprimer-v3 TV=1 haiku-45-think-primer-v3 TV=1 sonnet-46-noprimer-v3 TV=1 sonnet-46-think-noprimer-v3 TV=1 sonnet-46-think-primer-v3 TV=1
var foo = mem(function(id) { return flip(); });
var ANSWER = [foo('a'), foo('a'), foo('b')];
var foo = mem(function(x) { return flip(); });
var ANSWER = [foo(0), foo(0), foo(1)];
var foo = mem(function(x) { return flip(); });
var ANSWER = [foo(0), foo(0), foo(1)];
var foo = mem(function(arg) { return flip(); });
var ANSWER = [foo(0), foo(0), foo(1)];
var foo = mem(function(x) { return flip(); });
var ANSWER = [foo(0), foo(0), foo(1)];
var foo = mem(function(id) { return flip(); });
var ANSWER = [foo("a"), foo("a"), foo("b")];
var foo = mem(function(id) { return flip(); });
var ANSWER = [foo(1), foo(1), foo(2)];
var foo = mem(function(id) { return flip(); });
var ANSWER = [foo(1), foo(1), foo(2)];
groundtruth output
[
  [true, true, false],
  [false, false, true],
  [false, false, true],
  [true, true, true],
  [true, true, true],
  [true, true, true],
  [true, true, true],
  [true, true, true],
  [true, true, true],
  [false, false, false],
  [true, true, true],
  [false, false, true],
  [false, false, false],
  [false, false, false],
  [true, true, false],
  [false, false, true],
  [true, true, true],
  [false, false, true],
  [false, false, true],
  [true, true, false],
  [false, false, false],
  [true, true, false],
  [true, true, true],
  [false, false, false],
  [false, false, false],
  [false, false, true],
  [true, true, true],
  [true, true, false],
  [false, false, true],
  [true, true, false],
  [false, false, false],
  [true, true, false],
  [false, false, false],
  [false, false, true],
  [false, false, false],
  [false, false, true],
  [true, true, true],
  [false, false, true],
  [true, true, true],
  [false, false, true],
  [true, true, true],
  [false, false, true],
  [false, false, true],
  [true, true, false],
  [false, false, true],
  [false, false, false],
  [true, true, true],
  [true, true, true],
  [true, true, false],
  [true, true, true],
  [true, true, false],
  [false, false, true],
  [false, false, true],
  [true, true, true],
  [true, true, false],
  [false, false, false],
  [true, true, true],
  [false, false, false],
  [true, true, true],
  [false, false, true],
  [false, false, true],
  [false, false, false],
  [true, true, false],
  [false, false, true],
  [true, true, true],
  [false, false, false],
  [true, true, false],
  [false, false, true],
  [false, false, false],
  [false, false, false],
  [true, true, false],
  [false, false, true],
  [true, true, false],
  [true, true, true],
  [false, false, true],
  [true, true, true],
  [true, true, false],
  [true, true, true],
  [false, false, false],
  [false, false, false],
  [false, false, true],
  [true, true, false],
  [true, true, true],
  [false, false, false],
  [true, true, true],
  [false, false, true],
  [true, true, true],
  [false, false, false],
  [true, true, true],
  [true, true, false],
  [true, true, false],
  [true, true, true],
  [false, false, false],
  [false, false, true],
  [false, false, false],
  [true, true, true],
  [true, true, false],
  [false, false, false],
  [true, true, true],
  [true, true, false]
]
feedback
not signed
probmods2-generative-models/ex4.b distribution 8 runs TV=0.00 TV=0
prompt
system base instructions used by every run
You are a WebPPL code generator. Given an exercise, produce a single WebPPL program that binds the answer to a top-level variable named `ANSWER`.

Answer format (strict): emit exactly one fenced code block.

```js
<your WebPPL program ending with: var ANSWER = <expression>;>
```

The last statement of your program MUST be `var ANSWER = <expression>;` where `<expression>` is the answer the prompt asks for - typically an `Infer({...}, model)` for a distribution, a numeric/array value, or an object literal `{key: value, ...}` for a record of multiple sub-answers.

Do not write prose, explanations, or multiple code blocks. Do not use `return` at the top level - WebPPL doesn't allow it (return is only for function bodies).
system WebPPL primer appended to system on `*-primer-*` runs only
WebPPL is a probabilistic programming language with JavaScript-like syntax. A few quirks to remember:

Control flow: WebPPL is a functional subset of JavaScript. There are no `for` or `while` loops. Iterate with `map(fn, list)`, `mapData({data: list}, fn)`, `repeat(n, fn)`, or recursion. `_.range(start, end)` produces integer ranges.

Functions: define with `var f = function(args) { ...; return value; }`. The last expression of a function is auto-returned only if it is a bare expression statement; otherwise use explicit `return`. Memoize with `mem(fn)` so repeated calls with the same arguments return the same value within an inference run.

Random primitives - lowercase samples directly, uppercase constructs a Distribution object. They are NOT interchangeable:

  flip(p)            -> boolean             (sample, no `sample()` needed)
  uniform(a, b)      -> number              (sample)
  gaussian(mu, sg)   -> number              (sample)
  beta(a, b)         -> number              (sample)
  dirichlet(alpha)   -> tensor              (sample; alpha must be a Vector)
  randomInteger(n)   -> int 0..n-1          (sample)
  uniformDrift({a, b, width})        -> sample  (drift kernel; do NOT wrap in sample())
  dirichletDrift({alpha, conc.})     -> sample  (drift kernel; do NOT wrap in sample())

Distribution *constructors* (used with `sample(D)`, `observe(D, val)`, or as return value of inference):
  Bernoulli({p})              Beta({a, b})              Gaussian({mu, sigma})
  Uniform({a, b})             Categorical({vs, ps})     Binomial({p, n})
  Dirichlet({alpha})          Multinomial({ps, n})      Poisson({mu})

Common gotchas:
- `Dirichlet({alpha: ...})` requires a Vector, not a JS array. Use `ones([n, 1])` or `Vector([1, 1, ...])`.
- WebPPL only supports `var`, not `let` / `const`.
- WebPPL is *single-assignment*: `var X = ...;` only. You can't declare `var X;` and assign later, and you can't reassign `X = ...` after the declaration. Use ternaries or recursion to express conditional bindings.
- Array methods like `.fill`, `.indexOf`, `.map`, `.forEach`, `.concat` may fail. Prefer `repeat(n, fn)`, `_.indexOf(arr, x)`, `map(fn, arr)`, `mapData({data: arr}, fn)`, and `arr1.concat(arr2)` only at the top of a returned expression.
- Always end top-level statements with `;`. WebPPL inherits JS ASI rules, so `var x = f()
[a, b]` parses as `var x = f()[a, b]` (subscript), not two statements.

Inference: `Infer({method: ..., samples: N, ...}, modelFn)` runs `modelFn` under the chosen method and returns a Distribution over its return value. Methods: `'enumerate'`, `'rejection'`, `'forward'`, `'MCMC'`, `'SMC'`. For MCMC, optional kernels: `kernel: {HMC: {steps, stepSize}}`; the drift kernels above can replace `uniform`/`dirichlet` calls inside the model.

Conditioning: `condition(bool)` zeros out worlds where bool is false. `observe(dist, value)` factors in `dist.score(value)`. `factor(score)` adds `score` to the log-probability directly.

Tensors / utilities: `Vector([a, b, ...])`, `T.get(vec, i)`, `ones([rows, cols])`, `_.range`, `_.flatten`, `_.zipObject`, `_.fromPairs`, `_.includes`, `_.parseInt`, `_.uniq`, `_.merge`. Arrays use `Array.isArray`, `Object.keys` works on plain objects.

Display: there's no `viz`, `print`, `display`, etc. that affects the answer - those are browser-only. The answer is the value of your program's last expression.
user message
Given a simple symptoms model:
var allergies = flip(0.3)
var cold = flip(0.2)
var sneeze = cold || allergies
var fever = cold
End with Infer({method: 'forward', samples: 1000}, ...) returning the joint distribution over {sneeze, fever}.
groundtruth code
var ANSWER = (Infer({method: "forward", samples: 1000}, function() {
  var allergies = flip(0.3);
  var cold = flip(0.2);

  var sneeze = cold || allergies;
  var fever = cold;

  return {"sneeze": sneeze, "fever": fever};
}));
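As a cross-check not present in the original record (plain JavaScript, hypothetical variable names): enumerating the four (allergies, cold) worlds gives the exact joint that the 1000 forward samples approximate.

```javascript
// Exact joint over {sneeze, fever} by enumerating both root flips.
const joint = {};
for (const [allergies, pA] of [[true, 0.3], [false, 0.7]]) {
  for (const [cold, pC] of [[true, 0.2], [false, 0.8]]) {
    // Deterministic symptoms given the two root causes.
    const key = JSON.stringify({ sneeze: cold || allergies, fever: cold });
    joint[key] = (joint[key] || 0) + pA * pC; // accumulate world probability
  }
}
console.log(joint);
// Exact values: {F,F} = 0.56, {T,F} = 0.24, {T,T} = 0.20,
// versus the sampled 0.536 / 0.247 / 0.217 in this record.
```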
generated code
sonnet-46-primer-v3 TV=0 haiku-45-noprimer-v3 TV=0 haiku-45-primer-v3 TV=0 haiku-45-think-noprimer-v3 TV=0 haiku-45-think-primer-v3 TV=0 sonnet-46-noprimer-v3 TV=0 sonnet-46-think-noprimer-v3 TV=0 sonnet-46-think-primer-v3 TV=0
var model = function() {
  var allergies = flip(0.3);
  var cold = flip(0.2);
  var sneeze = cold || allergies;
  var fever = cold;
  return {sneeze: sneeze, fever: fever};
};

var ANSWER = Infer({method: 'forward', samples: 1000}, model);
var model = function() {
  var allergies = flip(0.3);
  var cold = flip(0.2);
  var sneeze = cold || allergies;
  var fever = cold;
  return {sneeze: sneeze, fever: fever};
};

var ANSWER = Infer({method: 'forward', samples: 1000}, model);
var model = function() {
  var allergies = flip(0.3);
  var cold = flip(0.2);
  var sneeze = cold || allergies;
  var fever = cold;
  return {sneeze: sneeze, fever: fever};
};

var ANSWER = Infer({method: 'forward', samples: 1000}, model);
var model = function() {
  var allergies = flip(0.3);
  var cold = flip(0.2);
  var sneeze = cold || allergies;
  var fever = cold;
  
  return {sneeze: sneeze, fever: fever};
};

var ANSWER = Infer({method: 'forward', samples: 1000}, model);
var model = function() {
  var allergies = flip(0.3);
  var cold = flip(0.2);
  var sneeze = cold || allergies;
  var fever = cold;
  return {sneeze: sneeze, fever: fever};
};

var ANSWER = Infer({method: 'forward', samples: 1000}, model);
var model = function() {
  var allergies = flip(0.3);
  var cold = flip(0.2);
  var sneeze = cold || allergies;
  var fever = cold;
  return { sneeze: sneeze, fever: fever };
};

var ANSWER = Infer({ method: 'forward', samples: 1000 }, model);
var model = function() {
  var allergies = flip(0.3);
  var cold = flip(0.2);
  var sneeze = cold || allergies;
  var fever = cold;
  return { sneeze: sneeze, fever: fever };
};

var ANSWER = Infer({ method: 'forward', samples: 1000 }, model);
var model = function() {
  var allergies = flip(0.3);
  var cold = flip(0.2);
  var sneeze = cold || allergies;
  var fever = cold;
  return { sneeze: sneeze, fever: fever };
};

var ANSWER = Infer({ method: 'forward', samples: 1000 }, model);
groundtruth output
{"sneeze":false,"fever":false}  0.5360
{"sneeze":true,"fever":false}   0.2470
{"sneeze":true,"fever":true}    0.2170
raw JSON
{
  "__kind": "distribution",
  "probs": [
    0.247,
    0.536,
    0.217
  ],
  "support": [
    {
      "sneeze": true,
      "fever": false
    },
    {
      "sneeze": false,
      "fever": false
    },
    {
      "sneeze": true,
      "fever": true
    }
  ]
}
feedback
not signed
probmods2-generative-models/ex4.c distribution 8 runs TV=0.00 TV=0
prompt
system base instructions used by every run
system WebPPL primer appended to system on `*-primer-*` runs only
user message
Given a multi-patient symptoms model attempted via per-person functions:
var allergies = function(person) { return flip(.3) }
var cold = function(person) { return flip(.2) }
var sneeze = function(person) { return cold(person) || allergies(person) }
[sneeze('bob'), sneeze('alice')]
Add fever, then end with Infer({method: 'forward', samples: 1000}, ...) returning the joint distribution over Bob's symptoms {sneeze, fever}. The raw program double-counts: sneeze('bob') flips a fresh cold, and so does fever('bob'). Fix this by using mem on cold and allergies so the same person has consistent disease state within a sample.
groundtruth code
var ANSWER = (Infer({method: "forward", samples: 1000}, function() {
  var allergies = mem(function(person) { return flip(.3); });
  var cold = mem(function(person) { return flip(.2); });

  var sneeze = function(person) { return cold(person) || allergies(person); };
  var fever = function(person) { return cold(person); };

  return {"sneeze": sneeze('bob'), "fever": fever('bob')};
}));
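A side calculation not in the original record (plain JavaScript, hypothetical names) makes the double-counting concrete: under `mem`, `fever = cold` and `sneeze = cold || allergies` share one cold flip, so `{sneeze: false, fever: true}` is impossible; without `mem`, the two fresh cold flips make it possible.

```javascript
const pCold = 0.2, pAllergies = 0.3;

// Memoized (correct) model: one cold flip shared by both symptoms.
const memoized = {
  'sneeze=T,fever=T': pCold,                          // cold
  'sneeze=T,fever=F': (1 - pCold) * pAllergies,       // no cold, allergies
  'sneeze=F,fever=F': (1 - pCold) * (1 - pAllergies), // neither
};

// Raw (double-counting) model: sneeze and fever each flip their own cold,
// so the two symptoms are independent.
const pSneeze = 1 - (1 - pCold) * (1 - pAllergies);
const raw = {
  'sneeze=T,fever=T': pSneeze * pCold,
  'sneeze=T,fever=F': pSneeze * (1 - pCold),
  'sneeze=F,fever=T': (1 - pSneeze) * pCold, // impossible under mem
  'sneeze=F,fever=F': (1 - pSneeze) * (1 - pCold),
};
```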
generated code
sonnet-46-primer-v3 TV=0 haiku-45-noprimer-v3 TV<.5 haiku-45-primer-v3 TV<.5 haiku-45-think-noprimer-v3 TV<.5 haiku-45-think-primer-v3 TV<.5 sonnet-46-noprimer-v3 TV=0 sonnet-46-think-noprimer-v3 TV=0 sonnet-46-think-primer-v3 TV=0
var allergies = mem(function(person) { return flip(0.3); });
var cold = mem(function(person) { return flip(0.2); });
var sneeze = function(person) { return cold(person) || allergies(person); };
var fever = function(person) { return cold(person); };

var model = function() {
  return { sneeze: sneeze('bob'), fever: fever('bob') };
};

var ANSWER = Infer({method: 'forward', samples: 1000}, model);
var allergies = mem(function(person) { return flip(.3) })
var cold = mem(function(person) { return flip(.2) })
var fever = function(person) { return cold(person) && flip(.6) }
var sneeze = function(person) { return cold(person) || allergies(person) }

var model = function() {
  return {
    sneeze: sneeze('bob'),
    fever: fever('bob')
  }
}

var ANSWER = Infer({method: 'forward', samples: 1000}, model)
var allergies = mem(function(person) { return flip(.3); });
var cold = mem(function(person) { return flip(.2); });
var sneeze = function(person) { return cold(person) || allergies(person); };
var fever = function(person) { return cold(person) && flip(.6); };

var ANSWER = Infer({method: 'forward', samples: 1000}, function() {
  return {
    sneeze: sneeze('bob'),
    fever: fever('bob')
  };
});
var allergies = mem(function(person) { return flip(.3) });
var cold = mem(function(person) { return flip(.2) });
var sneeze = function(person) { return cold(person) || allergies(person) };
var fever = function(person) { return cold(person) ? flip(.8) : flip(.1) };

var model = function() {
  return {
    sneeze: sneeze('bob'),
    fever: fever('bob')
  };
};

var ANSWER = Infer({method: 'forward', samples: 1000}, model);
var allergies = mem(function(person) { return flip(.3) })
var cold = mem(function(person) { return flip(.2) })
var sneeze = function(person) { return cold(person) || allergies(person) }
var fever = function(person) { return cold(person) && flip(.5) }

var model = function() {
  return {
    sneeze: sneeze('bob'),
    fever: fever('bob')
  }
}

var ANSWER = Infer({method: 'forward', samples: 1000}, model);
var allergies = mem(function(person) { return flip(0.3); });
var cold = mem(function(person) { return flip(0.2); });
var sneeze = function(person) { return cold(person) || allergies(person); };
var fever = function(person) { return cold(person); };

var ANSWER = Infer({method: 'forward', samples: 1000}, function() {
  return {sneeze: sneeze('bob'), fever: fever('bob')};
});
var allergies = mem(function(person) { return flip(0.3); });
var cold      = mem(function(person) { return flip(0.2); });

var sneeze = function(person) {
  return cold(person) || allergies(person);
};

var fever = function(person) {
  return cold(person); // fever only from cold, consistent via mem
};

var model = function() {
  return {
    sneeze: sneeze('bob'),
    fever:  fever('bob')
  };
};

var ANSWER = Infer({ method: 'forward', samples: 1000 }, model);
var allergies = mem(function(person) { return flip(0.3); });
var cold      = mem(function(person) { return flip(0.2); });

var sneeze = function(person) {
  return cold(person) || allergies(person);
};

var fever = function(person) {
  return cold(person);
};

var model = function() {
  return {
    sneeze: sneeze('bob'),
    fever:  fever('bob')
  };
};

var ANSWER = Infer({method: 'forward', samples: 1000}, model);
groundtruth output
{"sneeze":false,"fever":false}  0.5490
{"sneeze":true,"fever":false}   0.2470
{"sneeze":true,"fever":true}    0.2040
raw JSON
{
  "__kind": "distribution",
  "probs": [
    0.204,
    0.549,
    0.247
  ],
  "support": [
    {
      "sneeze": true,
      "fever": true
    },
    {
      "sneeze": false,
      "fever": false
    },
    {
      "sneeze": true,
      "fever": false
    }
  ]
}
feedback
not signed
probmods2-generative-models/ex5.b distribution 8 runs TV=0.00 TV=0
prompt
system base instructions used by every run
system WebPPL primer appended to system on `*-primer-*` runs only
user message
Given:
var makeCoin = function(weight) {
  return function() { return flip(weight) ? 'h' : 't' }
}
var bend = function(coin) {
  return function() {
    return coin() == 'h' ? makeCoin(.7)() : makeCoin(.1)()
  }
}
var fairCoin = makeCoin(.5)
var bentCoin = bend(fairCoin)
End with Infer({method: 'forward', samples: 10000}, bentCoin) returning the bent coin's distribution.
groundtruth code
var makeCoin = function(weight) {
  return function() {
    return flip(weight) ? 'h' : 't';
  };
};
var bend = function(coin) {
  return function() {
    return coin() == 'h' ? makeCoin(.7)() : makeCoin(.1)();
  };
};

var fairCoin = makeCoin(.5);
var bentCoin = bend(fairCoin);
var ANSWER = (Infer({method: 'forward', samples: 10000}, bentCoin));
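A hypothetical analytic check (plain JavaScript, not part of the exercise): the bent coin's exact head probability is a two-way mixture over the fair coin's outcome, which the 10000 forward samples in this record estimate.

```javascript
// P(bent = h) = P(fair = h) * 0.7 + P(fair = t) * 0.1
const pFairHeads = 0.5;
const pBentHeads = pFairHeads * 0.7 + (1 - pFairHeads) * 0.1; // ~ 0.4
const pBentTails = 1 - pBentHeads;                            // ~ 0.6
```

The sampled estimates (h 0.3998, t 0.6002) sit within Monte Carlo error of these values.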
generated code
sonnet-46-primer-v3 TV=0 haiku-45-noprimer-v3 TV=0 haiku-45-primer-v3 TV=0 haiku-45-think-noprimer-v3 TV=0 haiku-45-think-primer-v3 TV=0 sonnet-46-noprimer-v3 TV=0 sonnet-46-think-noprimer-v3 TV=0 sonnet-46-think-primer-v3 TV=0
var makeCoin = function(weight) {
  return function() { return flip(weight) ? 'h' : 't' }
}
var bend = function(coin) {
  return function() {
    return coin() == 'h' ? makeCoin(.7)() : makeCoin(.1)()
  }
}
var fairCoin = makeCoin(.5)
var bentCoin = bend(fairCoin)

var ANSWER = Infer({method: 'forward', samples: 10000}, bentCoin);
var makeCoin = function(weight) {
  return function() { return flip(weight) ? 'h' : 't' }
}
var bend = function(coin) {
  return function() {
    return coin() == 'h' ? makeCoin(.7)() : makeCoin(.1)()
  }
}
var fairCoin = makeCoin(.5)
var bentCoin = bend(fairCoin)

var ANSWER = Infer({method: 'forward', samples: 10000}, bentCoin);
var makeCoin = function(weight) {
  return function() { return flip(weight) ? 'h' : 't' }
}
var bend = function(coin) {
  return function() {
    return coin() == 'h' ? makeCoin(.7)() : makeCoin(.1)()
  }
}
var fairCoin = makeCoin(.5)
var bentCoin = bend(fairCoin)

var ANSWER = Infer({method: 'forward', samples: 10000}, bentCoin);
var makeCoin = function(weight) {
  return function() { return flip(weight) ? 'h' : 't' }
}
var bend = function(coin) {
  return function() {
    return coin() == 'h' ? makeCoin(.7)() : makeCoin(.1)()
  }
}
var fairCoin = makeCoin(.5)
var bentCoin = bend(fairCoin)

var ANSWER = Infer({method: 'forward', samples: 10000}, bentCoin);
var makeCoin = function(weight) {
  return function() { return flip(weight) ? 'h' : 't' }
}

var bend = function(coin) {
  return function() {
    return coin() == 'h' ? makeCoin(.7)() : makeCoin(.1)()
  }
}

var fairCoin = makeCoin(.5)
var bentCoin = bend(fairCoin)

var ANSWER = Infer({method: 'forward', samples: 10000}, bentCoin);
var makeCoin = function(weight) {
  return function() { return flip(weight) ? 'h' : 't' }
}
var bend = function(coin) {
  return function() {
    return coin() == 'h' ? makeCoin(.7)() : makeCoin(.1)()
  }
}
var fairCoin = makeCoin(.5)
var bentCoin = bend(fairCoin)

var ANSWER = Infer({method: 'forward', samples: 10000}, bentCoin);
var makeCoin = function(weight) {
  return function() { return flip(weight) ? 'h' : 't' }
}
var bend = function(coin) {
  return function() {
    return coin() == 'h' ? makeCoin(.7)() : makeCoin(.1)()
  }
}
var fairCoin = makeCoin(.5)
var bentCoin = bend(fairCoin)

var ANSWER = Infer({method: 'forward', samples: 10000}, bentCoin);
var makeCoin = function(weight) {
  return function() { return flip(weight) ? 'h' : 't'; }
};

var bend = function(coin) {
  return function() {
    return coin() == 'h' ? makeCoin(.7)() : makeCoin(.1)();
  };
};

var fairCoin = makeCoin(.5);
var bentCoin = bend(fairCoin);

var ANSWER = Infer({method: 'forward', samples: 10000}, bentCoin);
groundtruth output
t  0.6002
h  0.3998
raw JSON
{
  "__kind": "distribution",
  "probs": [
    0.6002,
    0.3998
  ],
  "support": [
    "t",
    "h"
  ]
}
feedback
not signed
probmods2-generative-models/ex6.b distribution 8 runs TV=0.00 TV=0
prompt
system base instructions used by every run
system WebPPL primer appended to system on `*-primer-*` runs only
user message
The recursive geometric distribution is defined as:
var geometric = function() {
  return flip() ? 0 : 1 + geometric()
}
End with Infer({method: 'forward', samples: 10000}, geometric) returning its distribution.
groundtruth code
var geometric = function() {
  return flip() ? 0 : 1 + geometric();
};
var ANSWER = (Infer({method: "forward", samples:10000}, geometric));
generated code
sonnet-46-primer-v3 TV=0 haiku-45-noprimer-v3 TV=0 haiku-45-primer-v3 TV=0 haiku-45-think-noprimer-v3 TV=0 haiku-45-think-primer-v3 TV=0 sonnet-46-noprimer-v3 TV=0 sonnet-46-think-noprimer-v3 TV=0 sonnet-46-think-primer-v3 TV=0
var geometric = function() {
  return flip() ? 0 : 1 + geometric();
};

var ANSWER = Infer({method: 'forward', samples: 10000}, geometric);
var geometric = function() {
  return flip() ? 0 : 1 + geometric()
}

var ANSWER = Infer({method: 'forward', samples: 10000}, geometric);
var geometric = function() {
  return flip() ? 0 : 1 + geometric()
};

var ANSWER = Infer({method: 'forward', samples: 10000}, geometric);
var geometric = function() {
  return flip() ? 0 : 1 + geometric()
}

var ANSWER = Infer({method: 'forward', samples: 10000}, geometric);
var geometric = function() {
  return flip() ? 0 : 1 + geometric();
};

var ANSWER = Infer({method: 'forward', samples: 10000}, geometric);
var geometric = function() {
  return flip() ? 0 : 1 + geometric();
};

var ANSWER = Infer({method: 'forward', samples: 10000}, geometric);
var geometric = function() {
  return flip() ? 0 : 1 + geometric();
};

var ANSWER = Infer({method: 'forward', samples: 10000}, geometric);
var geometric = function() {
  return flip() ? 0 : 1 + geometric();
};

var ANSWER = Infer({method: 'forward', samples: 10000}, geometric);
groundtruth output
0   0.4968
1   0.2482
2   0.1294
3   0.0635
4   0.0347
5   0.0133
6   0.0073
7   0.0039
8   0.0017
9   0.0008
10  0.0002
11  0.0001
… 1 more
raw JSON
{
  "__kind": "distribution",
  "probs": [
    0.4968,
    0.2482,
    0.12940000000000002,
    0.0635,
    0.0347,
    0.0133,
    0.007299999999999999,
    0.0038999999999999994,
    0.0017000000000000001,
    0.0008000000000000003,
    0.00019999999999999985,
    0.00010000000000000009,
    0.00010000000000000009
  ],
  "support": [
    0,
    1,
    2,
    3,
    4,
    5,
    6,
    7,
    8,
    9,
    10,
    11,
    13
  ]
}
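The forward estimates above can be checked against the closed form: the recursive program returns k exactly when the first k flips come up false and the (k+1)-th comes up true, so P(k) = 0.5^(k+1). A quick check of the arithmetic in plain JavaScript (ordinary math, not a WebPPL program):

```js
// Exact geometric probabilities P(k) = 0.5^(k+1) for a fair flip().
var exact = [];
for (var k = 0; k <= 5; k++) {
  exact.push(Math.pow(0.5, k + 1));
}
console.log(exact); // [0.5, 0.25, 0.125, 0.0625, 0.03125, 0.015625]
// Forward estimates recorded above: 0.4968, 0.2482, 0.1294, 0.0635, 0.0347, 0.0133
```

The 10000-sample estimates agree with the exact values to within roughly 1/sqrt(10000) = 0.01, as expected for forward sampling.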
feedback
not signed
probmods2-generative-models/ex7.a samples 8 runs TV=1.00 TV<1
prompt
system base instructions used by every run (identical text to the first atom above)
system WebPPL primer appended to system on `*-primer-*` runs only (identical text to the first atom above)
user message
Convert this joint probability table into a compact WebPPL program:

| A | B | P(A,B) |
|---|---|--------|
| F | F | 0.14   |
| F | T | 0.06   |
| T | F | 0.4    |
| T | T | 0.4    |

Requirement: fix P(A) first, then define the probability of B as a function of A. Use flip(...) for both. End with the expression [a, b] - the harness will rerun your program multiple times. Use a ; to terminate each top-level statement.
groundtruth code
var a = flip(0.8);
var b = flip(a ? 0.5 : 0.3);
var ANSWER = ([a, b]);
generated code
sonnet-46-primer-v3 TV<1 haiku-45-noprimer-v3 TV<1 haiku-45-primer-v3 TV=1 haiku-45-think-noprimer-v3 TV=1 haiku-45-think-primer-v3 TV=1 sonnet-46-noprimer-v3 TV<.5 sonnet-46-think-noprimer-v3 TV<.5 sonnet-46-think-primer-v3 TV=1
var a = flip(0.8);
var b = a ? flip(0.5) : flip(0.3);
var ANSWER = [a, b];
var a = flip(0.8);
var b = flip(a ? 0.5 : 0.3);
var ANSWER = [a, b];
var a = flip(0.8);
var b = flip(a ? 0.5 : 0.3);
var ANSWER = [a, b];
var a = flip(0.8);
var b = flip(a ? 0.5 : 0.3);
var ANSWER = [a, b];
var a = flip(0.8);
var b = flip(a ? 0.5 : 0.3);
var ANSWER = [a, b];
var model = function() {
  var a = flip(0.8);
  var b = a ? flip(0.5) : flip(0.3);
  [a, b]
};

var ANSWER = Infer({method: 'enumerate'}, model);
var model = function() {
  var a = flip(0.8);
  var b = a ? flip(0.5) : flip(0.3);
  return [a, b];
};

var ANSWER = Infer({method: 'enumerate'}, model);
var a = flip(0.8);
var b = a ? flip(0.5) : flip(0.3);
var ANSWER = [a, b];
groundtruth output
[
  [
    true,
    false
  ],
  [
    true,
    true
  ],
  [
    false,
    false
  ],
  [
    true,
    true
  ],
  [
    true,
    true
  ],
  [
    true,
    true
  ],
  [
    true,
    true
  ],
  [
    true,
    true
  ],
  [
    true,
    true
  ],
  [
    true,
    false
  ],
  [
    true,
    true
  ],
  [
    true,
    true
  ],
  [
    false,
    false
  ],
  [
    true,
    false
  ],
  [
    true,
    false
  ],
  [
    true,
    true
  ],
  [
    true,
    true
  ],
  [
    true,
    true
  ],
  [
    false,
    false
  ],
  [
    true,
    false
  ],
  [
    true,
    false
  ],
  [
    true,
    false
  ],
  [
    true,
    true
  ],
  [
    true,
    false
  ],
  [
    true,
    false
  ],
  [
    true,
    true
  ],
  [
    true,
    true
  ],
  [
    true,
    false
  ],
  [
    true,
    true
  ],
  [
    true,
    false
  ],
  [
    true,
    false
  ],
  [
    true,
    false
  ],
  [
    true,
    false
  ],
  [
    true,
    true
  ],
  [
    true,
    false
  ],
  [
    false,
    false
  ],
  [
    true,
    true
  ],
  [
    true,
    true
  ],
  [
    true,
    true
  ],
  [
    false,
    false
  ],
  [
    true,
    true
  ],
  [
    true,
    true
  ],
  [
    false,
    true
  ],
  [
    true,
    false
  ],
  [
    true,
    true
  ],
  [
    true,
    false
  ],
  [
    true,
    true
  ],
  [
    true,
    true
  ],
  [
    true,
    false
  ],
  [
    true,
    true
  ],
  [
    true,
    false
  ],
  [
    true,
    true
  ],
  [
    true,
    true
  ],
  [
    true,
    true
  ],
  [
    true,
    false
  ],
  [
    true,
    false
  ],
  [
    true,
    true
  ],
  [
    true,
    false
  ],
  [
    true,
    true
  ],
  [
    true,
    true
  ],
  [
    true,
    true
  ],
  [
    false,
    false
  ],
  [
    true,
    false
  ],
  [
    true,
    true
  ],
  [
    true,
    true
  ],
  [
    false,
    false
  ],
  [
    true,
    false
  ],
  [
    false,
    true
  ],
  [
    false,
    false
  ],
  [
    false,
    false
  ],
  [
    true,
    false
  ],
  [
    false,
    false
  ],
  [
    true,
    false
  ],
  [
    true,
    true
  ],
  [
    false,
    false
  ],
  [
    true,
    true
  ],
  [
    true,
    false
  ],
  [
    true,
    true
  ],
  [
    false,
    false
  ],
  [
    true,
    false
  ],
  [
    false,
    false
  ],
  [
    true,
    false
  ],
  [
    true,
    true
  ],
  [
    false,
    false
  ],
  [
    true,
    true
  ],
  [
    true,
    true
  ],
  [
    true,
    true
  ],
  [
    true,
    false
  ],
  [
    true,
    true
  ],
  [
    true,
    false
  ],
  [
    true,
    false
  ],
  [
    true,
    true
  ],
  [
    true,
    false
  ],
  [
    false,
    false
  ],
  [
    true,
    false
  ],
  [
    true,
    true
  ],
  [
    true,
    false
  ],
  [
    false,
    false
  ],
  [
    true,
    true
  ],
  [
    true,
    false
  ]
]
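The constants in the groundtruth program follow from the table by marginalizing and dividing: P(A=T) = 0.4 + 0.4 = 0.8, P(B=T|A=T) = 0.4/0.8 = 0.5, and P(B=T|A=F) = 0.06/0.2 = 0.3. The arithmetic can be checked in plain JavaScript (not a WebPPL program):

```js
// Recover the flip() parameters from the joint table P(A,B).
var pA = 0.4 + 0.4;             // P(A=T) = P(T,F) + P(T,T)
var pNotA = 0.14 + 0.06;        // P(A=F) = P(F,F) + P(F,T)
var pBgivenA = 0.4 / pA;        // P(B=T | A=T)
var pBgivenNotA = 0.06 / pNotA; // P(B=T | A=F)
console.log(pA, pBgivenA, pBgivenNotA); // 0.8 0.5 0.3 (up to float rounding)
```

These are exactly the constants in `flip(0.8)` and `flip(a ? 0.5 : 0.3)`.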
feedback
not signed
probmods2-generative-models/ex7.b distribution 8 runs TV=0.00 TV=0
prompt
system base instructions used by every run (identical text to the first atom above)
system WebPPL primer appended to system on `*-primer-*` runs only (identical text to the first atom above)
user message
Wrap the joint probability program for (A, B) - where P(A=T)=0.8 and P(B=T|A) is 0.5 if A=T else 0.3 - in Infer({method: 'forward', samples: 10000}, ...) to verify the joint distribution. End with the Infer(...) returning [a, b].
groundtruth code
var ANSWER = (Infer({method: "forward", samples: 10000}, function() {
  var a = flip(0.8);
  var b = flip(a ? 0.5 : 0.3);
  return [a, b];
}));
generated code
sonnet-46-primer-v3 TV=0 haiku-45-noprimer-v3 TV=0 haiku-45-primer-v3 TV=0 haiku-45-think-noprimer-v3 TV=0 haiku-45-think-primer-v3 TV=0 sonnet-46-noprimer-v3 TV=0 sonnet-46-think-noprimer-v3 TV=0 sonnet-46-think-primer-v3 TV=0
var model = function() {
  var a = flip(0.8);
  var b = flip(a ? 0.5 : 0.3);
  return [a, b];
};

var ANSWER = Infer({method: 'forward', samples: 10000}, model);
var model = function() {
  var a = flip(0.8);
  var b = flip(a ? 0.5 : 0.3);
  return [a, b];
};

var ANSWER = Infer({method: 'forward', samples: 10000}, model);
var ANSWER = Infer({method: 'forward', samples: 10000}, function() {
  var a = flip(0.8);
  var b = flip(a ? 0.5 : 0.3);
  return [a, b];
});
var ANSWER = Infer({method: 'forward', samples: 10000}, function() {
  var a = flip(0.8);
  var b = flip(a ? 0.5 : 0.3);
  return [a, b];
});
var ANSWER = Infer({method: 'forward', samples: 10000}, function() {
  var a = flip(0.8);
  var b = flip(a ? 0.5 : 0.3);
  return [a, b];
});
var ANSWER = Infer({method: 'forward', samples: 10000}, function() {
  var a = flip(0.8);
  var b = a ? flip(0.5) : flip(0.3);
  return [a, b];
});
var ANSWER = Infer({method: 'forward', samples: 10000}, function() {
  var a = flip(0.8);
  var b = a ? flip(0.5) : flip(0.3);
  return [a, b];
});
var model = function() {
  var a = flip(0.8);
  var b = a ? flip(0.5) : flip(0.3);
  return [a, b];
};

var ANSWER = Infer({method: 'forward', samples: 10000}, model);
groundtruth output
[true,true]    0.4056
[true,false]   0.3958
[false,false]  0.1360
[false,true]   0.0626
raw JSON
{
  "__kind": "distribution",
  "probs": [
    0.3958,
    0.4056,
    0.06259999999999999,
    0.136
  ],
  "support": [
    [
      true,
      false
    ],
    [
      true,
      true
    ],
    [
      false,
      true
    ],
    [
      false,
      false
    ]
  ]
}
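The forward run can also be compared with the exact joint implied by the table (TT 0.4, TF 0.4, FT 0.06, FF 0.14). A plain-JavaScript check (not WebPPL) of the estimates recorded above:

```js
// Exact joint vs. the 10000-sample forward estimates from the record above.
var exact = {'t,t': 0.4, 't,f': 0.4, 'f,t': 0.06, 'f,f': 0.14};
var est = {'t,t': 0.4056, 't,f': 0.3958, 'f,t': 0.0626, 'f,f': 0.136};
var maxErr = Object.keys(exact).reduce(function(m, k) {
  return Math.max(m, Math.abs(exact[k] - est[k]));
}, 0);
console.log(maxErr); // ~0.0056, within Monte Carlo error on the order of 1/sqrt(10000) = 0.01
```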
feedback
not signed
hierarchical-models 7 atoms 4✓ · 0✗
probmods2-hierarchical-models/ex1 record(observed, usealpha) 8 runs TV=0.00 TV=0
prompt
system base instructions used by every run (identical text to the first atom above)
system WebPPL primer appended to system on `*-primer-*` runs only (identical text to the first atom above)
user message
Show that setting Dirichlet alpha = [2, 3, 1, 1, 1] is equivalent to setting alpha = [1, 1, 1, 1, 1] then observing the first category once and the second twice. Build the second model (with alpha = [2, 3, 1, 1, 1]) given the first.
var colors = ['black', 'blue', 'green', 'orange', 'red'];
var observedData = [{bag: 'bag1', draw: 'blue'},
                    {bag: 'bag1', draw: 'blue'},
                    {bag: 'bag1', draw: 'black'}];

var observed = Infer({method: 'MCMC', samples: 20000}, function() {
  var makeBag = mem(function(bag) {
    var colorProbs = dirichlet(ones([colors.length, 1]));
    return Categorical({vs: colors, ps: colorProbs});
  })
  var obsFn = function(datum) { observe(makeBag(datum.bag), datum.draw); }
  mapData({data: observedData}, obsFn);
  return {bag1: sample(makeBag('bag1'))};
})

var usealpha = Infer({method: 'MCMC', samples: 20000}, function () {
  // ...fill in: alpha = [2, 3, 1, 1, 1] without observation
})
Return an object literal {observed, usealpha} with both posteriors.
groundtruth code
var colors = ['black', 'blue', 'green', 'orange', 'red'];
var observedData = [{bag: 'bag1', draw: 'blue'},
                    {bag: 'bag1', draw: 'blue'},
                    {bag: 'bag1', draw: 'black'}];

var observed = Infer({method: 'MCMC', samples: 20000}, function() {
  var makeBag = mem(function(bag) {
    var colorProbs = dirichlet(ones([colors.length, 1]));
    return Categorical({vs: colors, ps: colorProbs});
  });
  var obsFn = function(datum) { observe(makeBag(datum.bag), datum.draw); };
  mapData({data: observedData}, obsFn);
  return {bag1: sample(makeBag('bag1'))};
});

var usealpha = Infer({method: 'MCMC', samples: 20000}, function () {
  var makeBag = mem(function(bag) {
    var colorProbs = dirichlet(Vector([2, 3, 1, 1, 1]));
    return Categorical({vs: colors, ps: colorProbs});
  });
  return {bag1: sample(makeBag('bag1'))};
});
var ANSWER = (({observed: observed, usealpha: usealpha}));
generated code
sonnet-46-primer-v3 TV=0 haiku-45-noprimer-v3 fail haiku-45-primer-v3 TV=0 haiku-45-think-noprimer-v3 fail haiku-45-think-primer-v3 TV=0 sonnet-46-noprimer-v3 TV=0 sonnet-46-think-noprimer-v3 TV=0 sonnet-46-think-primer-v3 TV=0
var colors = ['black', 'blue', 'green', 'orange', 'red'];
var observedData = [{bag: 'bag1', draw: 'blue'},
                    {bag: 'bag1', draw: 'blue'},
                    {bag: 'bag1', draw: 'black'}];

var observed = Infer({method: 'MCMC', samples: 20000}, function() {
  var makeBag = mem(function(bag) {
    var colorProbs = dirichlet(ones([colors.length, 1]));
    return Categorical({vs: colors, ps: colorProbs});
  });
  var obsFn = function(datum) { observe(makeBag(datum.bag), datum.draw); };
  mapData({data: observedData}, obsFn);
  return {bag1: sample(makeBag('bag1'))};
});

var usealpha = Infer({method: 'MCMC', samples: 20000}, function() {
  var makeBag = mem(function(bag) {
    // alpha = [2, 3, 1, 1, 1] corresponds to black=2, blue=3, green=1, orange=1, red=1
    // which is equivalent to uniform prior + 1 black observation + 2 blue observations
    var colorProbs = dirichlet(Vector([2, 3, 1, 1, 1]));
    return Categorical({vs: colors, ps: colorProbs});
  });
  return {bag1: sample(makeBag('bag1'))};
});

var ANSWER = {observed: observed, usealpha: usealpha};
Error: Parameter "alpha" should be of type "vector (0, Infinity)".
var colors = ['black', 'blue', 'green', 'orange', 'red'];
var observedData = [{bag: 'bag1', draw: 'blue'},
                    {bag: 'bag1', draw: 'blue'},
                    {bag: 'bag1', draw: 'black'}];

var observed = Infer({method: 'MCMC', samples: 20000}, function() {
  var makeBag = mem(function(bag) {
    var colorProbs = dirichlet(ones([colors.length, 1]));
    return Categorical({vs: colors, ps: colorProbs});
  })
  var obsFn = function(datum) { observe(makeBag(datum.bag), datum.draw); }
  mapData({data: observedData}, obsFn);
  return {bag1: sample(makeBag('bag1'))};
})

var usealpha = Infer({method: 'MCMC', samples: 20000}, function () {
  var makeBag = mem(function(bag) {
    var alpha = [2, 3, 1, 1, 1];
    var colorProbs = dirichlet(alpha);
    return Categorical({vs: colors, ps: colorProbs});
  })
  return {bag1: sample(makeBag('bag1'))};
})

var ANSWER = {observed: observed, usealpha: usealpha};
var colors = ['black', 'blue', 'green', 'orange', 'red'];
var observedData = [{bag: 'bag1', draw: 'blue'},
                    {bag: 'bag1', draw: 'blue'},
                    {bag: 'bag1', draw: 'black'}];

var observed = Infer({method: 'MCMC', samples: 20000}, function() {
  var makeBag = mem(function(bag) {
    var colorProbs = dirichlet(ones([colors.length, 1]));
    return Categorical({vs: colors, ps: colorProbs});
  })
  var obsFn = function(datum) { observe(makeBag(datum.bag), datum.draw); }
  mapData({data: observedData}, obsFn);
  return {bag1: sample(makeBag('bag1'))};
})

var usealpha = Infer({method: 'MCMC', samples: 20000}, function () {
  var makeBag = mem(function(bag) {
    var alpha = Vector([2, 3, 1, 1, 1]);
    var colorProbs = dirichlet(alpha);
    return Categorical({vs: colors, ps: colorProbs});
  })
  return {bag1: sample(makeBag('bag1'))};
})

var ANSWER = {observed: observed, usealpha: usealpha};
Error: Parameter "alpha" should be of type "vector (0, Infinity)".
var colors = ['black', 'blue', 'green', 'orange', 'red'];
var observedData = [{bag: 'bag1', draw: 'blue'},
                    {bag: 'bag1', draw: 'blue'},
                    {bag: 'bag1', draw: 'black'}];

var observed = Infer({method: 'MCMC', samples: 20000}, function() {
  var makeBag = mem(function(bag) {
    var colorProbs = dirichlet(ones([colors.length, 1]));
    return Categorical({vs: colors, ps: colorProbs});
  })
  var obsFn = function(datum) { observe(makeBag(datum.bag), datum.draw); }
  mapData({data: observedData}, obsFn);
  return {bag1: sample(makeBag('bag1'))};
})

var usealpha = Infer({method: 'MCMC', samples: 20000}, function () {
  var makeBag = mem(function(bag) {
    var colorProbs = dirichlet([2, 3, 1, 1, 1]);
    return Categorical({vs: colors, ps: colorProbs});
  })
  return {bag1: sample(makeBag('bag1'))};
})

var ANSWER = {observed, usealpha};
var colors = ['black', 'blue', 'green', 'orange', 'red'];
var observedData = [{bag: 'bag1', draw: 'blue'},
                    {bag: 'bag1', draw: 'blue'},
                    {bag: 'bag1', draw: 'black'}];

var observed = Infer({method: 'MCMC', samples: 20000}, function() {
  var makeBag = mem(function(bag) {
    var colorProbs = dirichlet(ones([colors.length, 1]));
    return Categorical({vs: colors, ps: colorProbs});
  })
  var obsFn = function(datum) { observe(makeBag(datum.bag), datum.draw); }
  mapData({data: observedData}, obsFn);
  return {bag1: sample(makeBag('bag1'))};
})

var usealpha = Infer({method: 'MCMC', samples: 20000}, function () {
  var colorProbs = dirichlet(Vector([2, 3, 1, 1, 1]));
  var sampleColor = sample(Categorical({vs: colors, ps: colorProbs}));
  return {bag1: sampleColor};
})

var ANSWER = {observed: observed, usealpha: usealpha};
var colors = ['black', 'blue', 'green', 'orange', 'red'];
var observedData = [{bag: 'bag1', draw: 'blue'},
                    {bag: 'bag1', draw: 'blue'},
                    {bag: 'bag1', draw: 'black'}];

var observed = Infer({method: 'MCMC', samples: 20000}, function() {
  var makeBag = mem(function(bag) {
    var colorProbs = dirichlet(ones([colors.length, 1]));
    return Categorical({vs: colors, ps: colorProbs});
  });
  var obsFn = function(datum) { observe(makeBag(datum.bag), datum.draw); };
  mapData({data: observedData}, obsFn);
  return {bag1: sample(makeBag('bag1'))};
});

var usealpha = Infer({method: 'MCMC', samples: 20000}, function() {
  // alpha = [2, 3, 1, 1, 1] corresponds to:
  // black=2 (1 prior + 1 observed black),
  // blue=3 (1 prior + 2 observed blue),
  // green=1, orange=1, red=1
  var colorProbs = dirichlet(Vector([2, 3, 1, 1, 1]));
  var bag1 = Categorical({vs: colors, ps: colorProbs});
  return {bag1: sample(bag1)};
});

var ANSWER = {observed: observed, usealpha: usealpha};
var colors = ['black', 'blue', 'green', 'orange', 'red'];
var observedData = [{bag: 'bag1', draw: 'blue'},
                    {bag: 'bag1', draw: 'blue'},
                    {bag: 'bag1', draw: 'black'}];

var observed = Infer({method: 'MCMC', samples: 20000}, function() {
  var makeBag = mem(function(bag) {
    var colorProbs = dirichlet(ones([colors.length, 1]));
    return Categorical({vs: colors, ps: colorProbs});
  });
  var obsFn = function(datum) { observe(makeBag(datum.bag), datum.draw); };
  mapData({data: observedData}, obsFn);
  return {bag1: sample(makeBag('bag1'))};
});

var usealpha = Infer({method: 'MCMC', samples: 20000}, function() {
  // alpha = [1,1,1,1,1] + 1 black + 2 blue = [2, 3, 1, 1, 1]
  // colors order: black, blue, green, orange, red
  var alpha = Vector([2, 3, 1, 1, 1]);
  var makeBag = mem(function(bag) {
    var colorProbs = dirichlet(alpha);
    return Categorical({vs: colors, ps: colorProbs});
  });
  return {bag1: sample(makeBag('bag1'))};
});

var ANSWER = {observed: observed, usealpha: usealpha};
var colors = ['black', 'blue', 'green', 'orange', 'red'];
var observedData = [{bag: 'bag1', draw: 'blue'},
                    {bag: 'bag1', draw: 'blue'},
                    {bag: 'bag1', draw: 'black'}];

var observed = Infer({method: 'MCMC', samples: 20000}, function() {
  var makeBag = mem(function(bag) {
    var colorProbs = dirichlet(ones([colors.length, 1]));
    return Categorical({vs: colors, ps: colorProbs});
  });
  var obsFn = function(datum) { observe(makeBag(datum.bag), datum.draw); };
  mapData({data: observedData}, obsFn);
  return {bag1: sample(makeBag('bag1'))};
});

var usealpha = Infer({method: 'MCMC', samples: 20000}, function() {
  // alpha = [2,3,1,1,1]: black(+1 obs) = 2, blue(+2 obs) = 3, rest = 1
  var colorProbs = dirichlet(Vector([2, 3, 1, 1, 1]));
  var bag1 = Categorical({vs: colors, ps: colorProbs});
  return {bag1: sample(bag1)};
});

var ANSWER = {observed: observed, usealpha: usealpha};
groundtruth output
{
  "observed": {
    "__kind": "distribution",
    "probs": [
      0.12054999999999999,
      0.3799,
      0.13419999999999999,
      0.11934999999999997,
      0.24599999999999997
    ],
    "support": [
      {
        "bag1": "red"
      },
      {
        "bag1": "blue"
      },
      {
        "bag1": "orange"
      },
      {
        "bag1": "green"
      },
      {
        "bag1": "black"
      }
    ]
  },
  "usealpha": {
    "__kind": "distribution",
    "probs": [
      0.24599999999999997,
      0.12660000000000002,
      0.3772,
      0.12575,
      0.12445000000000003
    ],
    "support": [
      {
        "bag1": "black"
      },
      {
        "bag1": "orange"
      },
      {
        "bag1": "blue"
      },
      {
        "bag1": "green"
      },
      {
        "bag1": "red"
      }
    ]
  }
}
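The `usealpha` shortcut can be sanity-checked in closed form: with a Dirichlet(α) prior over the color weights, the posterior predictive probability of the next draw being color c is α_c / Σα. A quick sketch in plain Node-style JavaScript (not WebPPL):

```js
// Closed-form check of the usealpha model: prior Dirichlet(1,1,1,1,1) plus
// observations {blue: 2, black: 1} gives posterior Dirichlet(2,3,1,1,1), and
// the predictive probability of each color is alpha[c] / sum(alpha).
const colors = ['black', 'blue', 'green', 'orange', 'red'];
const alpha = [2, 3, 1, 1, 1];
const total = alpha.reduce((a, b) => a + b, 0); // 8
const predictive = Object.fromEntries(colors.map((c, i) => [c, alpha[i] / total]));
console.log(predictive);
// black 0.25, blue 0.375, green/orange/red 0.125 each — consistent with the
// MCMC estimates above (~0.246, ~0.377, ~0.125) up to sampling error.
```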
feedback
not signed
probmods2-hierarchical-models/ex2.1 distribution 8 runs TV=0.10 TV<.5
prompt
system base instructions used by every run
You are a WebPPL code generator. Given an exercise, produce a single WebPPL program that binds the answer to a top-level variable named `ANSWER`.

Answer format (strict): emit exactly one fenced code block.

```js
<your WebPPL program ending with: var ANSWER = <expression>;>
```

The last statement of your program MUST be `var ANSWER = <expression>;` where `<expression>` is the answer the prompt asks for - typically an `Infer({...}, model)` for a distribution, a numeric/array value, or an object literal `{key: value, ...}` for a record of multiple sub-answers.

Do not write prose, explanations, or multiple code blocks. Do not use `return` at the top level - WebPPL doesn't allow it (return is only for function bodies).
system WebPPL primer appended to system on `*-primer-*` runs only
WebPPL is a probabilistic programming language with JavaScript-like syntax. A few quirks to remember:

Control flow: WebPPL is a functional subset of JavaScript. There are no `for` or `while` loops. Iterate with `map(fn, list)`, `mapData({data: list}, fn)`, `repeat(n, fn)`, or recursion. `_.range(start, end)` produces integer ranges.

Functions: define with `var f = function(args) { ...; return value; }`. The last expression of a function is auto-returned only if it is a bare expression statement; otherwise use explicit `return`. Memoize with `mem(fn)` so repeated calls with the same arguments return the same value within an inference run.

Random primitives - lowercase samples directly, uppercase constructs a Distribution object. They are NOT interchangeable:

  flip(p)            -> boolean             (sample, no `sample()` needed)
  uniform(a, b)      -> number              (sample)
  gaussian(mu, sg)   -> number              (sample)
  beta(a, b)         -> number              (sample)
  dirichlet(alpha)   -> tensor              (sample; alpha must be a Vector)
  randomInteger(n)   -> int 0..n-1          (sample)
  uniformDrift({a, b, width})        -> sample  (drift kernel; do NOT wrap in sample())
  dirichletDrift({alpha, conc.})     -> sample  (drift kernel; do NOT wrap in sample())

Distribution *constructors* (used with `sample(D)`, `observe(D, val)`, or as return value of inference):
  Bernoulli({p})              Beta({a, b})              Gaussian({mu, sigma})
  Uniform({a, b})             Categorical({vs, ps})     Binomial({p, n})
  Dirichlet({alpha})          Multinomial({ps, n})      Poisson({mu})

Common gotchas:
- `Dirichlet({alpha: ...})` requires a Vector, not a JS array. Use `ones([n, 1])` or `Vector([1, 1, ...])`.
- WebPPL only supports `var`, not `let` / `const`.
- WebPPL is *single-assignment*: `var X = ...;` only. You can't declare `var X;` and assign later, and you can't reassign `X = ...` after the declaration. Use ternaries or recursion to express conditional bindings.
- Array methods like `.fill`, `.indexOf`, `.map`, `.forEach`, `.concat` may fail. Prefer `repeat(n, fn)`, `_.indexOf(arr, x)`, `map(fn, arr)`, `mapData({data: arr}, fn)`, and `arr1.concat(arr2)` only at the top of a returned expression.
- Always end top-level statements with `;`. WebPPL inherits JS ASI rules, so `var x = f()
[a, b]` parses as `var x = f()[a, b]` (subscript), not two statements.

Inference: `Infer({method: ..., samples: N, ...}, modelFn)` runs `modelFn` under the chosen method and returns a Distribution over its return value. Methods: `'enumerate'`, `'rejection'`, `'forward'`, `'MCMC'`, `'SMC'`. For MCMC, optional kernels: `kernel: {HMC: {steps, stepSize}}`; the drift kernels above can replace `uniform`/`dirichlet` calls inside the model.

Conditioning: `condition(bool)` zeros out worlds where bool is false. `observe(dist, value)` factors in `dist.score(value)`. `factor(score)` adds `score` to the log-probability directly.

Tensors / utilities: `Vector([a, b, ...])`, `T.get(vec, i)`, `ones([rows, cols])`, `_.range`, `_.flatten`, `_.zipObject`, `_.fromPairs`, `_.includes`, `_.parseInt`, `_.uniq`, `_.merge`. Arrays use `Array.isArray`, `Object.keys` works on plain objects.

Display: there's no `viz`, `print`, `display`, etc. that affects the answer - those are browser-only. The answer is the value of your program's last expression.
user message
Apples in a barrel: each apple is rotten with probability p, where p ~ Beta(.1, .2) (mass at endpoints). Implement makeBarrel(barrelName) returning a function barrel(n) that returns an array of n booleans (whether each apple is rotten). End with the Infer(...) over Math.sum(barrel(10)).
groundtruth code
var makeBarrel = mem(function(barrelName) {
  var pRotten = beta({a: .1, b: .2});
  var barrel = function(n) {
    return repeat(n, function() { flip(pRotten) });
  };
  return barrel;
});
var ANSWER = (Infer({method: 'forward'}, function() {
  var barrel = makeBarrel('barrel');
  return Math.sum(barrel(10));
}));
generated code
sonnet-46-primer-v3 TV<.5 · haiku-45-noprimer-v3 TV<1 · haiku-45-primer-v3 TV<1 · haiku-45-think-noprimer-v3 fail · haiku-45-think-primer-v3 TV<.5 · sonnet-46-noprimer-v3 TV<.5 · sonnet-46-think-noprimer-v3 TV<.5 · sonnet-46-think-primer-v3 TV<.5

var makeBarrel = function(barrelName) {
  var p = mem(function(name) { return beta(0.1, 0.2); });
  var barrel = function(n) {
    return repeat(n, function() { return flip(p(barrelName)); });
  };
  return barrel;
};

var model = function() {
  var barrel = makeBarrel("myBarrel");
  return Math.sum(barrel(10));
};

var ANSWER = Infer({method: 'forward', samples: 10000}, model);
var makeBarrel = function(barrelName) {
  var p = sample(Beta({a: 0.1, b: 0.2}));
  return function barrel(n) {
    return mapN(function(i) {
      return flip(p);
    }, n);
  };
};

var myBarrel = makeBarrel("barrel1");
var model = function() {
  var applesRotten = myBarrel(10);
  return Math.sum(applesRotten);
};

var ANSWER = Infer({method: "enumerate"}, model);
var makeBarrel = function(barrelName) {
  var p = sample(Beta({a: 0.1, b: 0.2}));
  var barrel = function(n) {
    return repeat(n, function() {
      return flip(p);
    });
  };
  return barrel;
};

var barrel = makeBarrel("myBarrel");

var ANSWER = Infer({method: 'MCMC', samples: 5000}, function() {
  var apples = barrel(10);
  var rottenCount = sum(apples);
  return rottenCount;
});
ReferenceError: range is not defined
var makeBarrel = function(barrelName) {
  var p = beta({a: 0.1, b: 0.2});
  return function barrel(n) {
    return map(function() {
      return flip(p);
    }, range(n));
  };
};

var ANSWER = Infer({method: 'rejection', samples: 1000}, function() {
  var barrel = makeBarrel("myBarrel");
  return Math.sum(barrel(10));
});
var makeBarrel = function(barrelName) {
  var barrel = function(n) {
    var p = sample(Beta({a: 0.1, b: 0.2}));
    return repeat(n, function() { return flip(p); });
  };
  return barrel;
};

var barrel = makeBarrel("mybarrel");
var ANSWER = Infer({method: 'rejection', samples: 1000}, function() {
  var apples = barrel(10);
  return sum(map(function(isRotten) { return isRotten ? 1 : 0; }, apples));
});
var makeBarrel = function(barrelName) {
  var p = sample(Beta({a: 0.1, b: 0.2}));
  var barrel = function(n) {
    return repeat(n, function() {
      return sample(Bernoulli({p: p}));
    });
  };
  return barrel;
};

var model = function() {
  var barrel = makeBarrel("myBarrel");
  var apples = barrel(10);
  return Math.sum(apples);
};

var ANSWER = Infer({method: 'MCMC', samples: 10000}, model);
var makeBarrel = function(barrelName) {
  var p = sample(Beta({a: 0.1, b: 0.2}));
  var barrel = function(n) {
    return repeat(n, function() {
      return sample(Bernoulli({p: p}));
    });
  };
  return barrel;
};

var model = function() {
  var barrel = makeBarrel("myBarrel");
  var apples = barrel(10);
  return Math.sum(apples);
};

var ANSWER = Infer({method: 'forward', samples: 50000}, model);
var ANSWER = Infer({method: 'forward', samples: 20000}, function() {

  var makeBarrel = mem(function(barrelName) {
    var p = beta(0.1, 0.2);
    var barrel = function(n) {
      return repeat(n, function() { return flip(p); });
    };
    return barrel;
  });

  var barrel = makeBarrel("myBarrel");
  return sum(map(function(x) { return x ? 1 : 0; }, barrel(10)));

});
groundtruth output
value  prob
0      0.4600
10     0.1800
9      0.0900
1      0.0800
3      0.0500
5      0.0400
2      0.0300
8      0.0300
6      0.0200
4      0.0100
7      0.0100
raw JSON
{
  "__kind": "distribution",
  "probs": [
    0.46,
    0.07999999999999999,
    0.029999999999999995,
    0.05,
    0.010000000000000005,
    0.04000000000000001,
    0.020000000000000004,
    0.010000000000000005,
    0.029999999999999995,
    0.09000000000000002,
    0.17999999999999997
  ],
  "support": [
    0,
    1,
    2,
    3,
    4,
    5,
    6,
    7,
    8,
    9,
    10
  ]
}
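Because the Beta prior is conjugate to the per-apple Bernoulli draws, the rotten count in a barrel of n has a closed beta-binomial distribution, computable with rising factorials and no gamma function. A sketch in plain Node-style JavaScript (not WebPPL; helper names are mine):

```js
// Exact Beta-Binomial pmf via rising factorials:
// P(K = k) = C(n, k) * rf(a, k) * rf(b, n - k) / rf(a + b, n),
// where rf(x, m) = x * (x + 1) * ... * (x + m - 1).
const rf = (x, m) => { let p = 1; for (let i = 0; i < m; i++) p *= x + i; return p; };
const choose = (n, k) => { let c = 1; for (let i = 1; i <= k; i++) c *= (n - k + i) / i; return c; };
const betaBinomialPmf = (n, a, b) =>
  Array.from({length: n + 1}, (_, k) => choose(n, k) * rf(a, k) * rf(b, n - k) / rf(a + b, n));

const pmf = betaBinomialPmf(10, 0.1, 0.2);
console.log(pmf[0], pmf[10]);
// U-shaped: most mass at 0 (~0.52) and 10 (~0.20), matching the forward-sampled
// groundtruth above (0.46 and 0.18) within sampling error.
```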
feedback
not signed
probmods2-hierarchical-models/ex2.2 record(sameStore, differentStore) 8 runs TV=0.00 TV=0
prompt
system base instructions used by every run
system WebPPL primer appended to system on `*-primer-*` runs only
user message
Apples in a barrel: each apple is rotten with probability pRotten, and pRotten is itself drawn from a Beta distribution. Different *stores* tend to be either mostly-rotten or mostly-fresh, captured by drawing the Beta hyperparameters from a discrete prior. Implement makeStore(storeName) that returns a makeBarrel(barrelName) function. Each store's Beta hyperparameters come from a 50/50 mix: {a: .1, b: .3} (mostly fresh) vs {a: .3, b: .1} (mostly rotten). Within a store, all makeBarrel(...) calls share that store's Beta. Memoize at both store and barrel level. The barrel function takes n and returns an array of n booleans (rotten or not). End your program with an object literal {sameStore, differentStore}, each a forward-sample distribution over the absolute difference in rotten counts between two barrels of size 10:
- sameStore: both barrels from the same store
- differentStore: barrels from two different stores
Use Infer({method: 'forward', samples: 10000}, ...) for each.
groundtruth code
var makeStore = mem(function(storeName) {
  var storePrior = flip() ? {a: .1, b: .3} : {a: .3, b: .1};
  var makeBarrel = mem(function(barrelName) {
    var pRotten = beta(storePrior);
    var barrel = function(n) {
      return repeat(n, function() { flip(pRotten) });
    };
    return barrel;
  });
  return makeBarrel;
});
var ANSWER = (({
  sameStore: Infer({method: 'forward', samples: 10000}, function() {
    var S = makeStore('S');
    var B1 = S('B1');
    var B2 = S('B2');
    return Math.abs(Math.sum(B1(10)) - Math.sum(B2(10)));
  }),
  differentStore: Infer({method: 'forward', samples: 10000}, function() {
    var S1 = makeStore('S1');
    var S2 = makeStore('S2');
    var B1 = S1('B1');
    var B2 = S2('B2');
    return Math.abs(Math.sum(B1(10)) - Math.sum(B2(10)));
  })
}));
generated code
sonnet-46-primer-v3 TV=0 · haiku-45-noprimer-v3 TV<.5 · haiku-45-primer-v3 TV<.5 · haiku-45-think-noprimer-v3 fail · haiku-45-think-primer-v3 TV<1 · sonnet-46-noprimer-v3 TV=0 · sonnet-46-think-noprimer-v3 TV=0 · sonnet-46-think-primer-v3 TV=0
var makeStore = mem(function(storeName) {
  var hyperparams = flip(0.5) ? {a: 0.1, b: 0.3} : {a: 0.3, b: 0.1};
  
  var makeBarrel = mem(function(barrelName) {
    var pRotten = beta(hyperparams.a, hyperparams.b);
    
    var barrel = function(n) {
      return repeat(n, function() { return flip(pRotten); });
    };
    
    return barrel;
  });
  
  return makeBarrel;
});

var countRotten = function(arr) {
  return reduce(function(x, acc) { return acc + (x ? 1 : 0); }, 0, arr);
};

var sameStore = Infer({method: 'forward', samples: 10000}, function() {
  var makeBarrel = makeStore('store1');
  var barrel1 = makeBarrel('barrel1');
  var barrel2 = makeBarrel('barrel2');
  var count1 = countRotten(barrel1(10));
  var count2 = countRotten(barrel2(10));
  return Math.abs(count1 - count2);
});

var differentStore = Infer({method: 'forward', samples: 10000}, function() {
  var makeBarrel1 = makeStore('storeA');
  var makeBarrel2 = makeStore('storeB');
  var barrel1 = makeBarrel1('barrel1');
  var barrel2 = makeBarrel2('barrel2');
  var count1 = countRotten(barrel1(10));
  var count2 = countRotten(barrel2(10));
  return Math.abs(count1 - count2);
});

var ANSWER = {sameStore: sameStore, differentStore: differentStore};
var makeStore = mem(function(storeName) {
  var hyperparams = flip(0.5) ? {a: 0.1, b: 0.3} : {a: 0.3, b: 0.1};
  var pRotten = beta(hyperparams.a, hyperparams.b);
  
  var makeBarrel = mem(function(barrelName) {
    return function(n) {
      return repeat(n, function() {
        return flip(pRotten);
      });
    };
  });
  
  return makeBarrel;
});

var sameStore = Infer({method: 'forward', samples: 10000}, function() {
  var store = makeStore('store1');
  var barrel1 = store('barrel1')(10);
  var barrel2 = store('barrel2')(10);
  var count1 = sum(barrel1);
  var count2 = sum(barrel2);
  return Math.abs(count1 - count2);
});

var differentStore = Infer({method: 'forward', samples: 10000}, function() {
  var store1 = makeStore('store1');
  var store2 = makeStore('store2');
  var barrel1 = store1('barrel1')(10);
  var barrel2 = store2('barrel1')(10);
  var count1 = sum(barrel1);
  var count2 = sum(barrel2);
  return Math.abs(count1 - count2);
});

var ANSWER = {sameStore: sameStore, differentStore: differentStore};
var makeStore = mem(function(storeName) {
  // Sample Beta hyperparameters for this store
  var hyperparams = flip(0.5) ? {a: 0.1, b: 0.3} : {a: 0.3, b: 0.1};
  
  // Sample pRotten from Beta distribution with these hyperparameters
  var pRotten = beta(hyperparams.a, hyperparams.b);
  
  // Return a function that creates barrels in this store
  return mem(function(barrelName) {
    // Each barrel is an array of n booleans indicating rotten apples
    return function(n) {
      return repeat(n, function() {
        return flip(pRotten);
      });
    };
  });
});

// Distribution for same store comparison
var sameStore = Infer({method: 'forward', samples: 10000}, function() {
  var store = makeStore('store1');
  var barrel1 = store('barrel1')(10);
  var barrel2 = store('barrel2')(10);
  
  var rottenCount1 = sum(barrel1);
  var rottenCount2 = sum(barrel2);
  
  return Math.abs(rottenCount1 - rottenCount2);
});

// Distribution for different store comparison
var differentStore = Infer({method: 'forward', samples: 10000}, function() {
  var store1 = makeStore('store1');
  var store2 = makeStore('store2');
  
  var barrel1 = store1('barrel1')(10);
  var barrel2 = store2('barrel1')(10);
  
  var rottenCount1 = sum(barrel1);
  var rottenCount2 = sum(barrel2);
  
  return Math.abs(rottenCount1 - rottenCount2);
});

var ANSWER = {sameStore: sameStore, differentStore: differentStore};
ReferenceError: range is not defined
// Memoized store factory with hyperparameter selection
var makeStore = mem(function(storeName) {
  // Each store draws hyperparameters from discrete prior
  var hyperparams = flip(0.5) ? 
    {a: 0.1, b: 0.3} :  // mostly fresh
    {a: 0.3, b: 0.1};   // mostly rotten
  
  // Return a memoized barrel-making function for this store
  var makeBarrel = mem(function(barrelName) {
    // Draw pRotten from this store's Beta distribution
    var pRotten = beta(hyperparams.a, hyperparams.b);
    
    // Return function that generates n apples
    return function(n) {
      return map(function() {
        return flip(pRotten);
      }, range(n));
    };
  });
  
  return makeBarrel;
});

// Sample distribution for same store
var sameStore = Infer({method: 'forward', samples: 10000}, function() {
  var store = makeStore('store1');
  var barrel1 = store('barrel1')(10);
  var barrel2 = store('barrel2')(10);
  
  var rottenCount1 = sum(barrel1);
  var rottenCount2 = sum(barrel2);
  
  return Math.abs(rottenCount1 - rottenCount2);
});

// Sample distribution for different stores
var differentStore = Infer({method: 'forward', samples: 10000}, function() {
  var store1 = makeStore('store1');
  var store2 = makeStore('store2');
  
  var barrel1 = store1('barrel1')(10);
  var barrel2 = store2('barrel1')(10);
  
  var rottenCount1 = sum(barrel1);
  var rottenCount2 = sum(barrel2);
  
  return Math.abs(rottenCount1 - rottenCount2);
});

var ANSWER = {sameStore: sameStore, differentStore: differentStore};
var makeStore = mem(function(storeName) {
  // Draw Beta hyperparameters from discrete prior (50/50)
  var hyperparams = flip(0.5) ? {a: 0.1, b: 0.3} : {a: 0.3, b: 0.1};
  
  // Return a memoized barrel function for this store
  var makeBarrel = mem(function(barrelName) {
    // Draw pRotten from the store's Beta distribution
    var pRotten = beta(hyperparams.a, hyperparams.b);
    
    // Return function that generates n apples
    return function(n) {
      return map(function() {
        return flip(pRotten);
      }, _.range(0, n));
    };
  });
  
  return makeBarrel;
});

// Inference: two barrels from the same store
var sameStore = Infer({method: 'forward', samples: 10000}, function() {
  var store = makeStore("store1");
  var barrel1 = store("barrel1")(10);
  var barrel2 = store("barrel2")(10);
  
  var count1 = _.filter(barrel1, function(x) { return x; }).length;
  var count2 = _.filter(barrel2, function(x) { return x; }).length;
  
  return Math.abs(count1 - count2);
});

// Inference: two barrels from different stores
var differentStore = Infer({method: 'forward', samples: 10000}, function() {
  var store1 = makeStore("store_A");
  var store2 = makeStore("store_B");
  
  var barrel1 = store1("barrel1")(10);
  var barrel2 = store2("barrel1")(10);
  
  var count1 = _.filter(barrel1, function(x) { return x; }).length;
  var count2 = _.filter(barrel2, function(x) { return x; }).length;
  
  return Math.abs(count1 - count2);
});

var ANSWER = {sameStore: sameStore, differentStore: differentStore};
var makeStore = mem(function(storeName) {
  var hyperparams = flip(0.5) ? {a: 0.1, b: 0.3} : {a: 0.3, b: 0.1};
  
  var makeBarrel = mem(function(barrelName) {
    var pRotten = beta(hyperparams.a, hyperparams.b);
    
    return function(n) {
      return repeat(n, function() { return flip(pRotten); });
    };
  });
  
  return makeBarrel;
});

var sameStoreDist = Infer({method: 'forward', samples: 10000}, function() {
  var makeBarrel = makeStore('store1');
  var barrel1 = makeBarrel('barrel1');
  var barrel2 = makeBarrel('barrel2');
  var apples1 = barrel1(10);
  var apples2 = barrel2(10);
  var count1 = sum(map(function(x) { return x ? 1 : 0; }, apples1));
  var count2 = sum(map(function(x) { return x ? 1 : 0; }, apples2));
  return Math.abs(count1 - count2);
});

var differentStoreDist = Infer({method: 'forward', samples: 10000}, function() {
  var makeBarrelA = makeStore('storeA');
  var makeBarrelB = makeStore('storeB');
  var barrel1 = makeBarrelA('barrel1');
  var barrel2 = makeBarrelB('barrel2');
  var apples1 = barrel1(10);
  var apples2 = barrel2(10);
  var count1 = sum(map(function(x) { return x ? 1 : 0; }, apples1));
  var count2 = sum(map(function(x) { return x ? 1 : 0; }, apples2));
  return Math.abs(count1 - count2);
});

var ANSWER = {sameStore: sameStoreDist, differentStore: differentStoreDist};
var sameStoreDist = Infer({method: 'forward', samples: 10000}, function() {
  var makeStore = mem(function(storeName) {
    var hyperparams = flip(0.5) ? {a: 0.1, b: 0.3} : {a: 0.3, b: 0.1};

    var makeBarrel = mem(function(barrelName) {
      var pRotten = sample(Beta({a: hyperparams.a, b: hyperparams.b}));
      return function(n) {
        return repeat(n, function() { return sample(Bernoulli({p: pRotten})); });
      };
    });

    return makeBarrel;
  });

  var store      = makeStore('store1');
  var barrel1    = store('barrel1');
  var barrel2    = store('barrel2');
  var apples1    = barrel1(10);
  var apples2    = barrel2(10);
  var count1     = sum(map(function(x) { return x ? 1 : 0; }, apples1));
  var count2     = sum(map(function(x) { return x ? 1 : 0; }, apples2));
  return Math.abs(count1 - count2);
});

var differentStoreDist = Infer({method: 'forward', samples: 10000}, function() {
  var makeStore = mem(function(storeName) {
    var hyperparams = flip(0.5) ? {a: 0.1, b: 0.3} : {a: 0.3, b: 0.1};

    var makeBarrel = mem(function(barrelName) {
      var pRotten = sample(Beta({a: hyperparams.a, b: hyperparams.b}));
      return function(n) {
        return repeat(n, function() { return sample(Bernoulli({p: pRotten})); });
      };
    });

    return makeBarrel;
  });

  var store1     = makeStore('store1');
  var store2     = makeStore('store2');
  var barrel1    = store1('barrel1');
  var barrel2    = store2('barrel2');
  var apples1    = barrel1(10);
  var apples2    = barrel2(10);
  var count1     = sum(map(function(x) { return x ? 1 : 0; }, apples1));
  var count2     = sum(map(function(x) { return x ? 1 : 0; }, apples2));
  return Math.abs(count1 - count2);
});

var ANSWER = {sameStore: sameStoreDist, differentStore: differentStoreDist};
var makeStore = mem(function(storeName) {
  var hyperparams = flip(0.5) ? {a: 0.1, b: 0.3} : {a: 0.3, b: 0.1};

  var makeBarrel = mem(function(barrelName) {
    var pRotten = beta(hyperparams.a, hyperparams.b);
    return function(n) {
      return repeat(n, function() { return flip(pRotten); });
    };
  });

  return makeBarrel;
});

var rottenCount = function(apples) {
  return sum(map(function(x) { return x ? 1 : 0; }, apples));
};

var sameStore = Infer({method: 'forward', samples: 10000}, function() {
  var store = makeStore('storeA');
  var barrel1 = store('barrelA');
  var barrel2 = store('barrelB');
  var count1 = rottenCount(barrel1(10));
  var count2 = rottenCount(barrel2(10));
  return Math.abs(count1 - count2);
});

var differentStore = Infer({method: 'forward', samples: 10000}, function() {
  var store1 = makeStore('store1');
  var store2 = makeStore('store2');
  var barrel1 = store1('barrelA');
  var barrel2 = store2('barrelA');
  var count1 = rottenCount(barrel1(10));
  var count2 = rottenCount(barrel2(10));
  return Math.abs(count1 - count2);
});

var ANSWER = {sameStore: sameStore, differentStore: differentStore};
groundtruth output
{
  "sameStore": {
    "__kind": "distribution",
    "probs": [
      0.3684,
      0.09709999999999999,
      0.0683,
      0.0516,
      0.04490000000000001,
      0.04390000000000001,
      0.0397,
      0.0407,
      0.0464,
      0.05919999999999999,
      0.1398
    ],
    "support": [
      0,
      1,
      2,
      3,
      4,
      5,
      6,
      7,
      8,
      9,
      10
    ]
  },
  "differentStore": {
    "__kind": "distribution",
    "probs": [
      0.2641,
      0.08629999999999999,
      0.056100000000000004,
      0.04590000000000001,
      0.0447,
      0.038000000000000006,
      0.03989999999999999,
      0.045000000000000005,
      0.05330000000000001,
      0.07149999999999998,
      0.2552
    ],
    "support": [
      0,
      1,
      2,
      3,
      4,
      5,
      6,
      7,
      8,
      9,
      10
    ]
  }
}
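The gap between the two groundtruth distributions (sameStore piles more mass on |diff| = 0, differentStore on |diff| = 10) can be reproduced exactly: condition on the store type(s), use the closed-form beta-binomial pmf for each barrel's count, and mix. A sketch in plain Node-style JavaScript (not WebPPL; helper names are mine):

```js
// Exact |rottenCount1 - rottenCount2| distributions for the two-level model,
// using the closed-form Beta-Binomial pmf for each barrel's count.
const rf = (x, m) => { let p = 1; for (let i = 0; i < m; i++) p *= x + i; return p; };
const choose = (n, k) => { let c = 1; for (let i = 1; i <= k; i++) c *= (n - k + i) / i; return c; };
const betaBinomialPmf = (n, a, b) =>
  Array.from({length: n + 1}, (_, k) => choose(n, k) * rf(a, k) * rf(b, n - k) / rf(a + b, n));

const n = 10;
const fresh = betaBinomialPmf(n, 0.1, 0.3);   // 'mostly fresh' store
const rotten = betaBinomialPmf(n, 0.3, 0.1);  // 'mostly rotten' store

// pmf of |K1 - K2| for independent counts K1 ~ p1, K2 ~ p2.
const absDiff = (p1, p2) => {
  const d = new Array(n + 1).fill(0);
  for (let j = 0; j <= n; j++)
    for (let k = 0; k <= n; k++) d[Math.abs(j - k)] += p1[j] * p2[k];
  return d;
};

// sameStore: one store type (50/50), both barrels draw their pRotten from it.
const sameFresh = absDiff(fresh, fresh);
const sameRotten = absDiff(rotten, rotten);
const same = sameFresh.map((p, m) => 0.5 * p + 0.5 * sameRotten[m]);

// differentStore: store types independent, so each count is marginally the
// 50/50 mixture of the two beta-binomials.
const mixture = fresh.map((p, k) => 0.5 * p + 0.5 * rotten[k]);
const diff = absDiff(mixture, mixture);

console.log(same[0], same[10]);   // ~0.37 and ~0.14
console.log(diff[0], diff[10]);   // ~0.26 and ~0.25
```

The shared store type makes the two barrels' counts positively dependent, which is exactly why same[0] > diff[0] and same[10] < diff[10], as in the forward-sampled groundtruth.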
feedback
not signed
probmods2-hierarchical-models/ex2.3 distribution 8 runs TV=0.67 TV<1
prompt
system base instructions used by every run
system WebPPL primer appended to system on `*-primer-*` runs only
WebPPL is a probabilistic programming language with JavaScript-like syntax. A few quirks to remember:

Control flow: WebPPL is a functional subset of JavaScript. There are no `for` or `while` loops. Iterate with `map(fn, list)`, `mapData({data: list}, fn)`, `repeat(n, fn)`, or recursion. `_.range(start, end)` produces integer ranges.

Functions: define with `var f = function(args) { ...; return value; }`. The last expression of a function is auto-returned only if it is a bare expression statement; otherwise use explicit `return`. Memoize with `mem(fn)` so repeated calls with the same arguments return the same value within an inference run.

Random primitives - lowercase samples directly, uppercase constructs a Distribution object. They are NOT interchangeable:

  flip(p)            -> boolean             (sample, no `sample()` needed)
  uniform(a, b)      -> number              (sample)
  gaussian(mu, sg)   -> number              (sample)
  beta(a, b)         -> number              (sample)
  dirichlet(alpha)   -> tensor              (sample; alpha must be a Vector)
  randomInteger(n)   -> int 0..n-1          (sample)
  uniformDrift({a, b, width})        -> sample  (drift kernel; do NOT wrap in sample())
  dirichletDrift({alpha, conc.})     -> sample  (drift kernel; do NOT wrap in sample())

Distribution *constructors* (used with `sample(D)`, `observe(D, val)`, or as return value of inference):
  Bernoulli({p})              Beta({a, b})              Gaussian({mu, sigma})
  Uniform({a, b})             Categorical({vs, ps})     Binomial({p, n})
  Dirichlet({alpha})          Multinomial({ps, n})      Poisson({mu})

Common gotchas:
- `Dirichlet({alpha: ...})` requires a Vector, not a JS array. Use `ones([n, 1])` or `Vector([1, 1, ...])`.
- WebPPL only supports `var`, not `let` / `const`.
- WebPPL is *single-assignment*: `var X = ...;` only. You can't declare `var X;` and assign later, and you can't reassign `X = ...` after the declaration. Use ternaries or recursion to express conditional bindings.
- Array methods like `.fill`, `.indexOf`, `.map`, `.forEach` may fail. Prefer `repeat(n, fn)`, `_.indexOf(arr, x)`, `map(fn, arr)`, or `mapData({data: arr}, fn)`; use `arr1.concat(arr2)` only as the top-level expression of a `return`.
- Always end top-level statements with `;`. WebPPL inherits JS ASI rules, so `var x = f()
[a, b]` parses as `var x = f()[a, b]` (subscript), not two statements.
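The single-assignment rule in particular trips up code ported from plain JavaScript; a sketch of the idiomatic workarounds:

```js
// ternary instead of declare-then-assign
var kind = flip(0.5) ? 'fresh' : 'rotten';

// recursion instead of a loop that mutates an accumulator
var sumTo = function(n) {
  return n == 0 ? 0 : n + sumTo(n - 1);
};
```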

Inference: `Infer({method: ..., samples: N, ...}, modelFn)` runs `modelFn` under the chosen method and returns a Distribution over its return value. Methods: `'enumerate'`, `'rejection'`, `'forward'`, `'MCMC'`, `'SMC'`. For MCMC, optional kernels: `kernel: {HMC: {steps, stepSize}}`; the drift kernels above can replace `uniform`/`dirichlet` calls inside the model.
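A sketch of a complete inference call under these conventions (hypothetical coin-weight model):

```js
var model = function() {
  var p = uniform(0, 1);                // prior over coin weight
  observe(Binomial({p: p, n: 10}), 7);  // condition on seeing 7 heads in 10 flips
  return p;
};
var posterior = Infer({method: 'MCMC', samples: 1000, burn: 100}, model);
```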

Conditioning: `condition(bool)` zeros out worlds where bool is false. `observe(dist, value)` factors in `dist.score(value)`. `factor(score)` adds `score` to the log-probability directly.
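For example (a sketch): a hard constraint and its `factor` form weight the same worlds:

```js
var model = function() {
  var x = gaussian(0, 1);
  condition(x > 0);                         // keep only worlds where x > 0
  // equivalently: factor(x > 0 ? 0 : -Infinity);
  return x;
};
```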

Tensors / utilities: `Vector([a, b, ...])`, `T.get(vec, i)`, `ones([rows, cols])`, `_.range`, `_.flatten`, `_.zipObject`, `_.fromPairs`, `_.includes`, `_.parseInt`, `_.uniq`, `_.merge`. `Array.isArray` works on arrays; `Object.keys` works on plain objects.

Display: there's no `viz`, `print`, `display`, etc. that affects the answer - those are browser-only. The answer is the value of your program's last expression.
user message
Three-level rotten-apple hierarchy: cities → stores → barrels. Each city has a cityPrior = beta({a: .25, b: .25}) controlling the probability that a store in that city is the 'fresh' kind (Beta(.1, .3) for pRotten) vs the 'rotten' kind (Beta(.3, .1)). Within a store, each barrel draws pRotten = beta(storePrior), and barrel(n) returns an array of n booleans (rotten or not). Implement makeCity(cityName) returning a makeStore function returning a makeBarrel function as described. Memoize at all three levels. For city C1, store S1, barrel B1, end your program with the Infer over Math.sum(B1(20)) using forward sampling.
groundtruth code
var makeCity = mem(function(cityName){
  var cityPrior = beta({a: .25, b: .25});
  var makeStore = mem(function(storeName) {
    var storePrior = flip(cityPrior) ? {a: .1, b: .3} : {a: .3, b: .1};
    var makeBarrel = mem(function(barrelName) {
      var pRotten = beta(storePrior);
      var barrel = function(n) {
        return repeat(n, function() { flip(pRotten) });
      };
      return barrel;
    });
    return makeBarrel;
  });
  return makeStore;
});

var C1 = makeCity("C1");
var S1 = C1("S1");
var B1 = S1("B1");
var ANSWER = (Infer({method: 'forward'}, function(){
    return Math.sum(B1(20));
}));
generated code
sonnet-46-primer-v3 TV<1 · haiku-45-noprimer-v3 TV<1 · haiku-45-primer-v3 TV<.05 · haiku-45-think-noprimer-v3 TV<.5 · haiku-45-think-primer-v3 TV<.05 · sonnet-46-noprimer-v3 TV<.05 · sonnet-46-think-noprimer-v3 TV<1 · sonnet-46-think-primer-v3 TV<.05
var makeCity = mem(function(cityName) {
  var cityPrior = beta({a: 0.25, b: 0.25});
  var makeStore = mem(function(storeName) {
    var storePrior = flip(cityPrior)
      ? {a: 0.1, b: 0.3}
      : {a: 0.3, b: 0.1};
    var makeBarrel = mem(function(barrelName) {
      var pRotten = beta(storePrior);
      var barrel = function(n) {
        return repeat(n, function() { return flip(pRotten); });
      };
      return barrel;
    });
    return makeBarrel;
  });
  return makeStore;
});

var ANSWER = Infer({method: 'forward', samples: 10000}, function() {
  var makeStore = makeCity('C1');
  var makeBarrel = makeStore('S1');
  var B1 = makeBarrel('B1');
  return Math.sum(B1(20));
});
var makeCity = mem(function(cityName) {
  var cityPrior = beta({a: 0.25, b: 0.25});
  
  var makeStore = mem(function(storeName) {
    var storePrior = cityPrior < 0.5 ? 
      {a: 0.1, b: 0.3} :  // fresh kind
      {a: 0.3, b: 0.1};   // rotten kind
    
    var makeBarrel = mem(function(barrelName) {
      return function(n) {
        var pRotten = beta(storePrior);
        return mapN(function(i) {
          return flip(pRotten);
        }, n);
      };
    });
    
    return makeBarrel;
  });
  
  return makeStore;
});

var C1 = makeCity("C1");
var S1 = C1("S1");
var B1 = S1("B1");

var ANSWER = Infer({method: "forward", samples: 10000}, function() {
  return Math.sum(B1(20));
});
var makeCity = mem(function(cityName) {
  var cityPrior = beta({a: 0.25, b: 0.25});
  
  var makeStore = mem(function(storeName) {
    var isFresh = flip(cityPrior);
    var storePrior = isFresh ? 
      {a: 0.1, b: 0.3} : 
      {a: 0.3, b: 0.1};
    
    var makeBarrel = mem(function(barrelName) {
      var pRotten = beta(storePrior);
      
      var barrel = function(n) {
        return repeat(n, function() {
          return flip(pRotten);
        });
      };
      
      return barrel;
    });
    
    return makeBarrel;
  });
  
  return makeStore;
});

var C1 = makeCity("C1");
var S1 = C1("S1");
var B1 = S1("B1");

var ANSWER = Infer({method: 'forward', samples: 10000}, function() {
  var barrelContents = B1(20);
  return _.sum(map(function(x) { return x ? 1 : 0; }, barrelContents));
});
var makeCity = mem(function(cityName) {
  var cityPrior = beta({a: 0.25, b: 0.25}); // P(store is fresh)
  
  return mem(function(storeName) {
    var isFresh = flip(cityPrior);
    var storePrior = isFresh ? 
      {a: 0.1, b: 0.3} :  // fresh store: low rotten rate
      {a: 0.3, b: 0.1};   // rotten store: high rotten rate
    
    return mem(function(barrelName) {
      return function(n) {
        var pRotten = beta(storePrior);
        return mapN(function(i) {
          return flip(pRotten);
        }, n);
      };
    });
  });
});

var C1 = makeCity('C1');
var S1 = C1('S1');
var B1 = S1('B1');

var countRotten = function(arr) {
  return sum(map(function(x) { return x ? 1 : 0; }, arr));
};

var ANSWER = Infer({method: 'forward', samples: 10000}, function() {
  var barrelSample = B1(20);
  return countRotten(barrelSample);
});
var makeCity = mem(function(cityName) {
  var cityPrior = beta({a: 0.25, b: 0.25});
  
  var makeStore = mem(function(storeName) {
    var isFresh = flip(cityPrior);
    var storePrior = isFresh ? 
      {a: 0.1, b: 0.3} :
      {a: 0.3, b: 0.1};
    
    var makeBarrel = mem(function(barrelId) {
      var pRotten = beta(storePrior);
      return function(n) {
        return map(function() { return flip(pRotten); }, _.range(0, n));
      };
    });
    
    return makeBarrel;
  });
  
  return makeStore;
});

var C1 = makeCity("C1");
var S1 = C1("S1");
var B1 = S1("B1");

var ANSWER = Infer({method: 'forward', samples: 10000}, function() {
  var barrel = B1(20);
  return _.sum(map(function(x) { return x ? 1 : 0; }, barrel));
});
var makeCity = mem(function(cityName) {
  var cityPrior = beta({a: 0.25, b: 0.25});
  
  var makeStore = mem(function(storeName) {
    // cityPrior controls probability that store is 'fresh' kind
    // fresh kind: pRotten ~ Beta(.1, .3)
    // rotten kind: pRotten ~ Beta(.3, .1)
    var isFresh = flip(cityPrior);
    var storePrior = isFresh ? {a: 0.1, b: 0.3} : {a: 0.3, b: 0.1};
    
    var makeBarrel = mem(function(barrelName) {
      var pRotten = beta(storePrior);
      
      var barrel = function(n) {
        return repeat(n, function() { return flip(pRotten); });
      };
      
      return barrel;
    });
    
    return makeBarrel;
  });
  
  return makeStore;
});

var C1 = makeCity("C1");
var S1 = C1("S1");
var B1 = S1("B1");

var ANSWER = Infer({method: 'forward', samples: 10000}, function() {
  return Math.sum(B1(20));
});
var ANSWER = Infer({ method: 'forward', samples: 10000 }, function() {

  var makeCity = mem(function(cityName) {
    // cityPrior: probability that a store in this city is the 'fresh' kind
    var cityPrior = beta({ a: 0.25, b: 0.25 });

    var makeStore = mem(function(storeName) {
      // Use cityPrior to pick store kind
      var isFresh = flip(cityPrior);
      // fresh store: Beta(.1, .3) → low pRotten; rotten store: Beta(.3, .1) → high pRotten
      var storePrior = isFresh ? { a: 0.1, b: 0.3 } : { a: 0.3, b: 0.1 };

      var makeBarrel = mem(function(barrelName) {
        // Each barrel draws its own pRotten from the store's distribution
        var pRotten = beta(storePrior);

        // barrel(n) returns n booleans: true = rotten
        var barrel = function(n) {
          return repeat(n, function() { return flip(pRotten); });
        };

        return barrel;
      });

      return makeBarrel;
    });

    return makeStore;
  });

  // Instantiate city C1, store S1, barrel B1
  var C1 = makeCity('C1');
  var S1 = C1('S1');
  var B1 = S1('B1');

  // Sum of 20 booleans = number of rotten apples in barrel B1
  return Math.sum(B1(20));
});
var makeCity = mem(function(cityName) {
  var cityPrior = beta({a: 0.25, b: 0.25});

  var makeStore = mem(function(storeName) {
    var isFresh = flip(cityPrior);
    var storePrior = isFresh ? {a: 0.1, b: 0.3} : {a: 0.3, b: 0.1};

    var makeBarrel = mem(function(barrelName) {
      var pRotten = beta(storePrior);

      var barrel = mem(function(n) {
        return repeat(n, function() { return flip(pRotten); });
      });

      return barrel;
    });

    return makeBarrel;
  });

  return makeStore;
});

var C1 = makeCity('C1');
var S1 = C1('S1');
var B1 = S1('B1');

var ANSWER = Infer({method: 'forward', samples: 10000}, function() {
  return Math.sum(B1(20));
});
groundtruth output
value  prob
20     0.9900
19     0.0100
raw JSON
{
  "__kind": "distribution",
  "probs": [
    0.010000000000000005,
    0.99
  ],
  "support": [
    19,
    20
  ]
}
feedback
not signed
probmods2-hierarchical-models/ex2.4 distribution 8 runs TV=0.17 TV<.5
prompt
system base instructions used by every run
system WebPPL primer appended to system on `*-primer-*` runs only
user message
Three-level rotten-apple hierarchy (cities → stores → barrels), as in the previous exercise: city has cityPrior = beta({a: .25, b: .25}); stores within a city pick Beta(.1, .3) (fresh) with prob cityPrior, else Beta(.3, .1) (rotten); barrels in a store sample pRotten = beta(storePrior), and barrel(n) returns n booleans. Memoized at all levels. You visit a store in a city and observe a barrel of 10 apples, 7 of which are rotten. You then visit a *different* store in the *same* city. End your program with Infer({method: 'MCMC', samples: 5000, lag: 100}, ...) returning the posterior over the number of rotten apples in a 10-apple barrel from this second store.
groundtruth code
var makeCity = mem(function(cityName){
    var cityPrior = beta({a: .25, b: .25});

    var makeStore = mem(function(storeName) {
        var storePrior = flip(cityPrior) ? {a: .1, b: .3} : {a: .3, b: .1};

        var makeBarrel = mem(function(barrelName) {
            var pRotten = beta(storePrior);
            var barrel = function(n) {
                return repeat(n, function() { flip(pRotten) });
            };
            return barrel;
        });

        return makeBarrel;
    });

    return makeStore;
});
var ANSWER = (Infer({method: 'MCMC', samples:5000, lag: 100}, function(){
    var C = makeCity("C");
    var S1 = C("S1");
    var B1 = S1("B1");
    var S2 = C("S2");
    var B2 = S2("B2");

    condition(Math.sum(B1(10)) == 7);

    return Math.sum(B2(10));
}));
generated code
sonnet-46-primer-v3 TV<.5 · haiku-45-noprimer-v3 fail · haiku-45-primer-v3 TV<.05 · haiku-45-think-noprimer-v3 fail · haiku-45-think-primer-v3 fail · sonnet-46-noprimer-v3 TV<.5 · sonnet-46-think-noprimer-v3 TV<.05 · sonnet-46-think-primer-v3 TV<.5
var model = function() {
  // City-level prior
  var cityPrior = beta({a: 0.25, b: 0.25});
  
  // Store-level: given cityPrior, pick Beta params
  var makeStore = function() {
    var isFresh = flip(cityPrior);
    var storePrior = isFresh ? beta({a: 0.1, b: 0.3}) : beta({a: 0.3, b: 0.1});
    return storePrior;
  };
  
  // Barrel: given store's rotten rate, sample n booleans
  var makeBarrel = function(storePrior, n) {
    var pRotten = beta({a: storePrior * 10, b: (1 - storePrior) * 10});
    return repeat(n, function() { return flip(pRotten); });
  };
  
  // Store 1: observe 7 rotten out of 10
  var store1Prior = makeStore();
  var pRotten1 = beta({a: store1Prior * 10, b: (1 - store1Prior) * 10});
  // Observe 10 apples, 7 rotten
  observe(Binomial({p: pRotten1, n: 10}), 7);
  
  // Store 2: different store, same city (same cityPrior)
  var store2Prior = makeStore();
  var pRotten2 = beta({a: store2Prior * 10, b: (1 - store2Prior) * 10});
  
  // Return number of rotten apples in a 10-apple barrel from store 2
  return sample(Binomial({p: pRotten2, n: 10}));
};

var ANSWER = Infer({method: 'MCMC', samples: 5000, lag: 100}, model);
Error: Parameter "b" should be of type "real (0, Infinity)".
// Memoized city-level prior
var cityPrior = mem(function(cityName) {
  return beta({a: 0.25, b: 0.25});
});

// Memoized store-level prior
var storePrior = mem(function(cityName, storeName) {
  var cityRottenness = cityPrior(cityName);
  var isFresh = flip(cityRottenness);
  if (isFresh) {
    return beta({a: 0.1, b: 0.3});
  } else {
    return beta({a: 0.3, b: 0.1});
  }
});

// Memoized barrel sampling
var barrel = mem(function(cityName, storeName, barrelName, n) {
  var pRotten = beta(storePrior(cityName, storeName));
  return repeat(n, function() { return flip(pRotten); });
});

var model = function() {
  // Visit store 1 in city A, observe 7 rotten out of 10
  var barrel1 = barrel("cityA", "store1", "barrel1", 10);
  var rottenCount1 = sum(barrel1);
  observe(Categorical({ps: [1], vs: [rottenCount1]}), 7);
  
  // Visit store 2 in the same city A
  var barrel2 = barrel("cityA", "store2", "barrel2", 10);
  var rottenCount2 = sum(barrel2);
  
  return rottenCount2;
};

var ANSWER = Infer({method: 'MCMC', samples: 5000, lag: 100}, model);
// Memoized city-level prior
var cityPrior = mem(function(cityId) {
  return beta({a: 0.25, b: 0.25});
});

// Memoized store-level prior
var storePrior = mem(function(cityId, storeId) {
  var cityRottennessProb = cityPrior(cityId);
  var isFresh = flip(cityRottennessProb);
  if (isFresh) {
    return beta({a: 0.1, b: 0.3});
  } else {
    return beta({a: 0.3, b: 0.1});
  }
});

// Generate barrel observations
var barrel = function(storeId, pRotten, n) {
  return repeat(n, function() {
    return flip(pRotten);
  });
};

// Model
var model = function() {
  var cityId = "city1";
  var store1Id = "store1";
  var store2Id = "store2";
  
  // Get store priors
  var store1Prior = storePrior(cityId, store1Id);
  var store2Prior = storePrior(cityId, store2Id);
  
  // Observe barrel from store 1: 10 apples, 7 rotten
  var barrel1 = barrel(store1Id, store1Prior, 10);
  var rottenCount1 = sum(barrel1);
  observe(Binomial({p: store1Prior, n: 10}), 7);
  
  // Sample barrel from store 2
  var barrel2 = barrel(store2Id, store2Prior, 10);
  var rottenCount2 = sum(barrel2);
  
  return rottenCount2;
};

var ANSWER = Infer({method: 'MCMC', samples: 5000, lag: 100}, model);
Error: sample() expected a distribution but received "false".
var city = mem(function(cityName) {
  return beta({a: 0.25, b: 0.25});
});

var store = mem(function(cityName, storeName) {
  var cityPrior = city(cityName);
  if (flip(cityPrior)) {
    // Fresh store
    return beta({a: 0.1, b: 0.3});
  } else {
    // Rotten store
    return beta({a: 0.3, b: 0.1});
  }
});

var barrel = mem(function(cityName, storeName, barrelName) {
  var pRotten = store(cityName, storeName);
  var apples = repeat(10, function() { return flip(pRotten); });
  return apples;
});

var model = function() {
  // Observe barrel from store 1 in city 1: 7 rotten out of 10
  var barrel1 = barrel("city1", "store1", "barrel1");
  var numRotten1 = sum(barrel1);
  observe(numRotten1 === 7);
  
  // Query barrel from store 2 in the same city
  var barrel2 = barrel("city1", "store2", "barrel2");
  var numRotten2 = sum(barrel2);
  return numRotten2;
};

var ANSWER = Infer({method: 'MCMC', samples: 5000, lag: 100}, model);
Error: Parameter "b" should be of type "real (0, Infinity)".
var cityPrior = mem(function(cityID) {
  return beta({a: .25, b: .25});
});

var storePrior = mem(function(cityID, storeID) {
  var city_p = cityPrior(cityID);
  var isFresh = flip(city_p);
  if (isFresh) {
    return beta({a: .1, b: .3});
  } else {
    return beta({a: .3, b: .1});
  }
});

var barrel = mem(function(cityID, storeID, barrelID, n) {
  var p_rotten = beta(storePrior(cityID, storeID));
  return repeat(n, function() {
    return flip(p_rotten);
  });
});

var model = function() {
  // Observe barrel from first store in the city
  var barrel1 = barrel("city1", "store1", "barrel1", 10);
  var rotten1 = sum(map(function(x) { return x ? 1 : 0; }, barrel1));
  condition(rotten1 === 7);
  
  // Predict barrel from second (different) store in the same city
  var barrel2 = barrel("city1", "store2", "barrel1", 10);
  var rotten2 = sum(map(function(x) { return x ? 1 : 0; }, barrel2));
  
  return rotten2;
};

var ANSWER = Infer({method: 'MCMC', samples: 5000, lag: 100}, model);
var cityPrior = function() {
  return beta({a: 0.25, b: 0.25});
};

var model = function() {
  // Memoized city-level parameter
  var getCityParam = mem(function(cityId) {
    return cityPrior();
  });

  // Memoized store-level prior
  var getStorePrior = mem(function(cityId, storeId) {
    var cp = getCityParam(cityId);
    var isFresh = flip(cp);
    if (isFresh) {
      return beta({a: 0.1, b: 0.3});
    } else {
      return beta({a: 0.3, b: 0.1});
    }
  });

  // Memoized barrel pRotten
  var getBarrelPRotten = mem(function(cityId, storeId, barrelId) {
    var sp = getStorePrior(cityId, storeId);
    return beta({a: sp * 10, b: (1 - sp) * 10});
  });

  // barrel(n) returns n booleans for a given barrel
  var barrel = function(cityId, storeId, barrelId, n) {
    var pRotten = getBarrelPRotten(cityId, storeId, barrelId);
    return repeat(n, function() { return flip(pRotten); });
  };

  // Observe barrel from store 1 in city 1: 7 rotten out of 10
  var obs = barrel('city1', 'store1', 'barrel1', 10);
  var numRotten1 = sum(map(function(x) { return x ? 1 : 0; }, obs));
  observe(Gaussian({mu: numRotten1, sigma: 0.01}), 7);

  // Second store in same city
  var obs2 = barrel('city1', 'store2', 'barrel1', 10);
  var numRotten2 = sum(map(function(x) { return x ? 1 : 0; }, obs2));

  return numRotten2;
};

var ANSWER = Infer({method: 'MCMC', samples: 5000, lag: 100}, model);
var model = function() {
  // City-level parameter (shared across all stores in this city)
  var cityPrior = beta({a: 0.25, b: 0.25});

  // Store-level: memoized per store ID
  // With prob cityPrior pick "fresh" params, else "rotten" params
  var storeParams = mem(function(storeId) {
    var isFresh = flip(cityPrior);
    return isFresh ? {a: 0.1, b: 0.3} : {a: 0.3, b: 0.1};
  });

  // Barrel-level: memoized per (store, barrel) pair
  // Sample pRotten ~ Beta(storeParams)
  var barrelPRotten = mem(function(storeId, barrelId) {
    return beta(storeParams(storeId));
  });

  // Helper: observe or sample n apples from a given barrel
  var observeBarrel = function(storeId, barrelId, n, nRotten) {
    var p = barrelPRotten(storeId, barrelId);
    observe(Binomial({n: n, p: p}), nRotten);
  };

  var sampleBarrel = function(storeId, barrelId, n) {
    var p = barrelPRotten(storeId, barrelId);
    return binomial({n: n, p: p});
  };

  // Observation: store 1, barrel 0 → 7 of 10 apples are rotten
  observeBarrel('store1', 0, 10, 7);

  // Query: different store (store 2) in the same city, fresh barrel
  return sampleBarrel('store2', 0, 10);
};

var ANSWER = Infer({method: 'MCMC', samples: 5000, lag: 100}, model);
var model = function() {
  // City-level: one city, one cityPrior
  var cityPrior = beta({a: 0.25, b: 0.25});

  // Store-level: memoized by storeId; each store picks a Beta distribution
  var storeDist = mem(function(storeId) {
    return flip(cityPrior)
      ? Beta({a: 0.1, b: 0.3})   // fresh store distribution
      : Beta({a: 0.3, b: 0.1});  // rotten store distribution
  });

  // Barrel-level: pRotten memoized by (storeId, barrelId)
  var barrelPRotten = mem(function(storeId, barrelId) {
    return sample(storeDist(storeId));
  });

  // barrel(storeId, barrelId, n): returns n booleans (true = rotten)
  var barrel = function(storeId, barrelId, n) {
    var p = barrelPRotten(storeId, barrelId);
    return repeat(n, function() { return flip(p); });
  };

  // Observe: store 1, barrel 1 => 7 rotten out of 10
  var p1 = barrelPRotten(1, 1);
  observe(Binomial({p: p1, n: 10}), 7);

  // Predict: different store (store 2) in the same city, sample a 10-apple barrel
  var store2Apples = barrel(2, 1, 10);
  var nRotten = sum(map(function(x) { return x ? 1 : 0; }, store2Apples));
  return nRotten;
};

var ANSWER = Infer({method: 'MCMC', samples: 5000, lag: 100}, model);
groundtruth output
value  prob
10     0.4176
0      0.2898
9      0.0564
1      0.0412
8      0.0344
3      0.0304
2      0.0284
5      0.0268
6      0.0256
4      0.0248
7      0.0246
raw JSON
{
  "__kind": "distribution",
  "probs": [
    0.2898,
    0.041199999999999994,
    0.028400000000000005,
    0.030400000000000007,
    0.024799999999999996,
    0.026799999999999997,
    0.025599999999999998,
    0.024600000000000004,
    0.03439999999999999,
    0.05639999999999999,
    0.41759999999999997
  ],
  "support": [
    0,
    1,
    2,
    3,
    4,
    5,
    6,
    7,
    8,
    9,
    10
  ]
}
feedback
not signed
probmods2-hierarchical-models/ex3.1 distribution 8 runs TV=0.00 TV=0
prompt
system base instructions used by every run
system WebPPL primer appended to system on `*-primer-*` runs only
user message
Hierarchical Bayesian data analysis on word reading times. Each data point has {group: "vowel"|"consonant", word: str, id: int, rt: number}. Data:
var data = [{group: "vowel", word: "abacus", id: 1, rt: 210},
            {group: "vowel", word: "abacus", id: 2, rt: 212},
            {group: "vowel", word: "abacus", id: 3, rt: 209},
            {group: "vowel", word: "aardvark", id: 1, rt: 200},
            {group: "vowel", word: "aardvark", id: 2, rt: 201},
            {group: "vowel", word: "aardvark", id: 3, rt: 198},
            {group: "vowel", word: "ellipse", id: 1, rt: 220},
            {group: "vowel", word: "ellipse", id: 2, rt: 222},
            {group: "vowel", word: "ellipse", id: 3, rt: 219},
            {group: "consonant", word: "proton", id: 1, rt: 190},
            {group: "consonant", word: "proton", id: 2, rt: 191},
            {group: "consonant", word: "proton", id: 3, rt: 189},
            {group: "consonant", word: "folder", id: 1, rt: 180},
            {group: "consonant", word: "folder", id: 2, rt: 182},
            {group: "consonant", word: "folder", id: 3, rt: 178},
            {group: "consonant", word: "fedora", id: 1, rt: 230},
            {group: "consonant", word: "fedora", id: 2, rt: 231},
            {group: "consonant", word: "fedora", id: 3, rt: 228},
            {group: "consonant", word: "fedora", id: 1, rt: 231},
            {group: "consonant", word: "fedora", id: 2, rt: 233},
            {group: "consonant", word: "fedora", id: 3, rt: 230},
            {group: "consonant", word: "fedora", id: 1, rt: 230},
            {group: "consonant", word: "fedora", id: 2, rt: 232},
            {group: "consonant", word: "fedora", id: 3, rt: 228}];
Simple BDA model (without per-word random effects):
var opts = {method: "MCMC", burn: 10000, lag: 5, samples: 5000};
Infer(opts, function() {
  var groupMeans = {vowel: gaussian(200, 100),
                    consonant: gaussian(200, 100)};
  var obsFn = function(d) {
    observe(Gaussian({mu: groupMeans[d.group], sigma: 10}), d.rt);
  };
  mapData({data: data}, obsFn);
  return groupMeans['vowel'] - groupMeans['consonant'];
});
Adjust the model so each *word* has its own mean reading time drawn from its group's mean: use mem(function(word, group) { gaussian(groupMeans[group], 20) }) for the per-word effect, and observe each rt as Gaussian({mu: wordMean(d.word, d.group), sigma: 10}). End your program with the Infer(...) returning the posterior over groupMeans['vowel'] - groupMeans['consonant'].
groundtruth code
var data = [{group: "vowel", word: "abacus", id: 1, rt: 210},
            {group: "vowel", word: "abacus", id: 2, rt: 212},
            {group: "vowel", word: "abacus", id: 3, rt: 209},
            {group: "vowel", word: "aardvark", id: 1, rt: 200},
            {group: "vowel", word: "aardvark", id: 2, rt: 201},
            {group: "vowel", word: "aardvark", id: 3, rt: 198},
            {group: "vowel", word: "ellipse", id: 1, rt: 220},
            {group: "vowel", word: "ellipse", id: 2, rt: 222},
            {group: "vowel", word: "ellipse", id: 3, rt: 219},
            {group: "consonant", word: "proton", id: 1, rt: 190},
            {group: "consonant", word: "proton", id: 2, rt: 191},
            {group: "consonant", word: "proton", id: 3, rt: 189},
            {group: "consonant", word: "folder", id: 1, rt: 180},
            {group: "consonant", word: "folder", id: 2, rt: 182},
            {group: "consonant", word: "folder", id: 3, rt: 178},
            {group: "consonant", word: "fedora", id: 1, rt: 230},
            {group: "consonant", word: "fedora", id: 2, rt: 231},
            {group: "consonant", word: "fedora", id: 3, rt: 228},
            {group: "consonant", word: "fedora", id: 1, rt: 231},
            {group: "consonant", word: "fedora", id: 2, rt: 233},
            {group: "consonant", word: "fedora", id: 3, rt: 230},
            {group: "consonant", word: "fedora", id: 1, rt: 230},
            {group: "consonant", word: "fedora", id: 2, rt: 232},
            {group: "consonant", word: "fedora", id: 3, rt: 228}];

var opts = {method: "MCMC", burn: 10000, lag: 5, samples: 5000};
var ANSWER = (Infer(opts, function() {
  var groupMeans = {vowel: gaussian(200, 100),
                    consonant: gaussian(200, 100)};

  var wordMean = mem(function(word, group) {
    return gaussian(groupMeans[group], 20);
  });

  var obsFn = function(d) {
    observe(Gaussian({mu: wordMean(d.word, d.group),
                      sigma: 10}), d.rt);
  };

  mapData({data: data}, obsFn);

  return groupMeans['vowel'] - groupMeans['consonant'];
}));
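The `mem` wrapper is what makes this model hierarchical: per the primer, repeated calls to a memoized function with the same arguments return the same value within an inference run, so all three reaction times for a given word constrain a single latent word mean, which in turn is shrunk toward its group mean. A minimal contrast (WebPPL, illustrative names):

```js
// Without mem, every observation would draw a fresh word-level mean,
// and the per-word structure would be lost:
var wordMeanFresh = function(word, group) {
  return gaussian(groupMeans[group], 20);
};

// With mem, all data points for e.g. "abacus" share one latent mean:
var wordMeanShared = mem(function(word, group) {
  return gaussian(groupMeans[group], 20);
});
```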
generated code
sonnet-46-primer-v3 TV=0
haiku-45-noprimer-v3 TV=0
haiku-45-primer-v3 TV=0
haiku-45-think-noprimer-v3 fail
haiku-45-think-primer-v3 TV=0
sonnet-46-noprimer-v3 TV=0
sonnet-46-think-noprimer-v3 TV=0
sonnet-46-think-primer-v3 TV=0
var data = [{group: "vowel", word: "abacus", id: 1, rt: 210},
            {group: "vowel", word: "abacus", id: 2, rt: 212},
            {group: "vowel", word: "abacus", id: 3, rt: 209},
            {group: "vowel", word: "aardvark", id: 1, rt: 200},
            {group: "vowel", word: "aardvark", id: 2, rt: 201},
            {group: "vowel", word: "aardvark", id: 3, rt: 198},
            {group: "vowel", word: "ellipse", id: 1, rt: 220},
            {group: "vowel", word: "ellipse", id: 2, rt: 222},
            {group: "vowel", word: "ellipse", id: 3, rt: 219},
            {group: "consonant", word: "proton", id: 1, rt: 190},
            {group: "consonant", word: "proton", id: 2, rt: 191},
            {group: "consonant", word: "proton", id: 3, rt: 189},
            {group: "consonant", word: "folder", id: 1, rt: 180},
            {group: "consonant", word: "folder", id: 2, rt: 182},
            {group: "consonant", word: "folder", id: 3, rt: 178},
            {group: "consonant", word: "fedora", id: 1, rt: 230},
            {group: "consonant", word: "fedora", id: 2, rt: 231},
            {group: "consonant", word: "fedora", id: 3, rt: 228},
            {group: "consonant", word: "fedora", id: 1, rt: 231},
            {group: "consonant", word: "fedora", id: 2, rt: 233},
            {group: "consonant", word: "fedora", id: 3, rt: 230},
            {group: "consonant", word: "fedora", id: 1, rt: 230},
            {group: "consonant", word: "fedora", id: 2, rt: 232},
            {group: "consonant", word: "fedora", id: 3, rt: 228}];

var opts = {method: "MCMC", burn: 10000, lag: 5, samples: 5000};

var ANSWER = Infer(opts, function() {
  var groupMeans = {vowel: gaussian(200, 100),
                    consonant: gaussian(200, 100)};
  var wordMean = mem(function(word, group) {
    return gaussian(groupMeans[group], 20);
  });
  var obsFn = function(d) {
    observe(Gaussian({mu: wordMean(d.word, d.group), sigma: 10}), d.rt);
  };
  mapData({data: data}, obsFn);
  return groupMeans['vowel'] - groupMeans['consonant'];
});
var data = [{group: "vowel", word: "abacus", id: 1, rt: 210},
            {group: "vowel", word: "abacus", id: 2, rt: 212},
            {group: "vowel", word: "abacus", id: 3, rt: 209},
            {group: "vowel", word: "aardvark", id: 1, rt: 200},
            {group: "vowel", word: "aardvark", id: 2, rt: 201},
            {group: "vowel", word: "aardvark", id: 3, rt: 198},
            {group: "vowel", word: "ellipse", id: 1, rt: 220},
            {group: "vowel", word: "ellipse", id: 2, rt: 222},
            {group: "vowel", word: "ellipse", id: 3, rt: 219},
            {group: "consonant", word: "proton", id: 1, rt: 190},
            {group: "consonant", word: "proton", id: 2, rt: 191},
            {group: "consonant", word: "proton", id: 3, rt: 189},
            {group: "consonant", word: "folder", id: 1, rt: 180},
            {group: "consonant", word: "folder", id: 2, rt: 182},
            {group: "consonant", word: "folder", id: 3, rt: 178},
            {group: "consonant", word: "fedora", id: 1, rt: 230},
            {group: "consonant", word: "fedora", id: 2, rt: 231},
            {group: "consonant", word: "fedora", id: 3, rt: 228},
            {group: "consonant", word: "fedora", id: 1, rt: 231},
            {group: "consonant", word: "fedora", id: 2, rt: 233},
            {group: "consonant", word: "fedora", id: 3, rt: 230},
            {group: "consonant", word: "fedora", id: 1, rt: 230},
            {group: "consonant", word: "fedora", id: 2, rt: 232},
            {group: "consonant", word: "fedora", id: 3, rt: 228}];

var opts = {method: "MCMC", burn: 10000, lag: 5, samples: 5000};

var ANSWER = Infer(opts, function() {
  var groupMeans = {vowel: gaussian(200, 100),
                    consonant: gaussian(200, 100)};
  
  var wordMean = mem(function(word, group) {
    return gaussian(groupMeans[group], 20);
  });
  
  var obsFn = function(d) {
    observe(Gaussian({mu: wordMean(d.word, d.group), sigma: 10}), d.rt);
  };
  
  mapData({data: data}, obsFn);
  
  return groupMeans['vowel'] - groupMeans['consonant'];
});
var data = [{group: "vowel", word: "abacus", id: 1, rt: 210},
            {group: "vowel", word: "abacus", id: 2, rt: 212},
            {group: "vowel", word: "abacus", id: 3, rt: 209},
            {group: "vowel", word: "aardvark", id: 1, rt: 200},
            {group: "vowel", word: "aardvark", id: 2, rt: 201},
            {group: "vowel", word: "aardvark", id: 3, rt: 198},
            {group: "vowel", word: "ellipse", id: 1, rt: 220},
            {group: "vowel", word: "ellipse", id: 2, rt: 222},
            {group: "vowel", word: "ellipse", id: 3, rt: 219},
            {group: "consonant", word: "proton", id: 1, rt: 190},
            {group: "consonant", word: "proton", id: 2, rt: 191},
            {group: "consonant", word: "proton", id: 3, rt: 189},
            {group: "consonant", word: "folder", id: 1, rt: 180},
            {group: "consonant", word: "folder", id: 2, rt: 182},
            {group: "consonant", word: "folder", id: 3, rt: 178},
            {group: "consonant", word: "fedora", id: 1, rt: 230},
            {group: "consonant", word: "fedora", id: 2, rt: 231},
            {group: "consonant", word: "fedora", id: 3, rt: 228},
            {group: "consonant", word: "fedora", id: 1, rt: 231},
            {group: "consonant", word: "fedora", id: 2, rt: 233},
            {group: "consonant", word: "fedora", id: 3, rt: 230},
            {group: "consonant", word: "fedora", id: 1, rt: 230},
            {group: "consonant", word: "fedora", id: 2, rt: 232},
            {group: "consonant", word: "fedora", id: 3, rt: 228}];

var opts = {method: "MCMC", burn: 10000, lag: 5, samples: 5000};

var ANSWER = Infer(opts, function() {
  var groupMeans = {vowel: gaussian(200, 100),
                    consonant: gaussian(200, 100)};
  
  var wordMean = mem(function(word, group) {
    return gaussian(groupMeans[group], 20);
  });
  
  var obsFn = function(d) {
    observe(Gaussian({mu: wordMean(d.word, d.group), sigma: 10}), d.rt);
  };
  
  mapData({data: data}, obsFn);
  
  return groupMeans['vowel'] - groupMeans['consonant'];
});
Error: Parameter "mu" should be of type "real".
var data = [{group: "vowel", word: "abacus", id: 1, rt: 210},
            {group: "vowel", word: "abacus", id: 2, rt: 212},
            {group: "vowel", word: "abacus", id: 3, rt: 209},
            {group: "vowel", word: "aardvark", id: 1, rt: 200},
            {group: "vowel", word: "aardvark", id: 2, rt: 201},
            {group: "vowel", word: "aardvark", id: 3, rt: 198},
            {group: "vowel", word: "ellipse", id: 1, rt: 220},
            {group: "vowel", word: "ellipse", id: 2, rt: 222},
            {group: "vowel", word: "ellipse", id: 3, rt: 219},
            {group: "consonant", word: "proton", id: 1, rt: 190},
            {group: "consonant", word: "proton", id: 2, rt: 191},
            {group: "consonant", word: "proton", id: 3, rt: 189},
            {group: "consonant", word: "folder", id: 1, rt: 180},
            {group: "consonant", word: "folder", id: 2, rt: 182},
            {group: "consonant", word: "folder", id: 3, rt: 178},
            {group: "consonant", word: "fedora", id: 1, rt: 230},
            {group: "consonant", word: "fedora", id: 2, rt: 231},
            {group: "consonant", word: "fedora", id: 3, rt: 228},
            {group: "consonant", word: "fedora", id: 1, rt: 231},
            {group: "consonant", word: "fedora", id: 2, rt: 233},
            {group: "consonant", word: "fedora", id: 3, rt: 230},
            {group: "consonant", word: "fedora", id: 1, rt: 230},
            {group: "consonant", word: "fedora", id: 2, rt: 232},
            {group: "consonant", word: "fedora", id: 3, rt: 228}];

var opts = {method: "MCMC", burn: 10000, lag: 5, samples: 5000};

var ANSWER = Infer(opts, function() {
  var groupMeans = {
    vowel: Gaussian({mu: 200, sigma: 100}),
    consonant: Gaussian({mu: 200, sigma: 100})
  };
  
  var wordMean = mem(function(word, group) {
    return Gaussian({mu: groupMeans[group], sigma: 20});
  });
  
  var obsFn = function(d) {
    observe(Gaussian({mu: wordMean(d.word, d.group), sigma: 10}), d.rt);
  };
  
  mapData({data: data}, obsFn);
  
  return groupMeans['vowel'] - groupMeans['consonant'];
});
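The failure above is the sample-vs-constructor confusion the primer warns about: lowercase `gaussian(mu, sigma)` samples a number, while uppercase `Gaussian({mu, sigma})` constructs a Distribution object. This run bound `groupMeans.vowel` and `wordMean(...)` to Distribution objects and then passed them where a real number was required, producing `Error: Parameter "mu" should be of type "real".` A minimal contrast (WebPPL):

```js
// Broken: badMean is a Distribution object, not a number, so using it
// as `mu` fails with: Parameter "mu" should be of type "real".
var badMean = Gaussian({mu: 200, sigma: 100});

// Fixed: lowercase gaussian() samples a real number directly...
var goodMean = gaussian(200, 100);
// ...or, equivalently, sample() the Distribution explicitly:
var alsoGood = sample(Gaussian({mu: 200, sigma: 100}));
```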
var data = [{group: "vowel", word: "abacus", id: 1, rt: 210},
            {group: "vowel", word: "abacus", id: 2, rt: 212},
            {group: "vowel", word: "abacus", id: 3, rt: 209},
            {group: "vowel", word: "aardvark", id: 1, rt: 200},
            {group: "vowel", word: "aardvark", id: 2, rt: 201},
            {group: "vowel", word: "aardvark", id: 3, rt: 198},
            {group: "vowel", word: "ellipse", id: 1, rt: 220},
            {group: "vowel", word: "ellipse", id: 2, rt: 222},
            {group: "vowel", word: "ellipse", id: 3, rt: 219},
            {group: "consonant", word: "proton", id: 1, rt: 190},
            {group: "consonant", word: "proton", id: 2, rt: 191},
            {group: "consonant", word: "proton", id: 3, rt: 189},
            {group: "consonant", word: "folder", id: 1, rt: 180},
            {group: "consonant", word: "folder", id: 2, rt: 182},
            {group: "consonant", word: "folder", id: 3, rt: 178},
            {group: "consonant", word: "fedora", id: 1, rt: 230},
            {group: "consonant", word: "fedora", id: 2, rt: 231},
            {group: "consonant", word: "fedora", id: 3, rt: 228},
            {group: "consonant", word: "fedora", id: 1, rt: 231},
            {group: "consonant", word: "fedora", id: 2, rt: 233},
            {group: "consonant", word: "fedora", id: 3, rt: 230},
            {group: "consonant", word: "fedora", id: 1, rt: 230},
            {group: "consonant", word: "fedora", id: 2, rt: 232},
            {group: "consonant", word: "fedora", id: 3, rt: 228}];

var opts = {method: "MCMC", burn: 10000, lag: 5, samples: 5000};

var ANSWER = Infer(opts, function() {
  var groupMeans = {vowel: gaussian(200, 100),
                    consonant: gaussian(200, 100)};
  
  var wordMean = mem(function(word, group) {
    return gaussian(groupMeans[group], 20);
  });
  
  var obsFn = function(d) {
    observe(Gaussian({mu: wordMean(d.word, d.group), sigma: 10}), d.rt);
  };
  mapData({data: data}, obsFn);
  return groupMeans['vowel'] - groupMeans['consonant'];
});
var data = [{group: "vowel", word: "abacus", id: 1, rt: 210},
            {group: "vowel", word: "abacus", id: 2, rt: 212},
            {group: "vowel", word: "abacus", id: 3, rt: 209},
            {group: "vowel", word: "aardvark", id: 1, rt: 200},
            {group: "vowel", word: "aardvark", id: 2, rt: 201},
            {group: "vowel", word: "aardvark", id: 3, rt: 198},
            {group: "vowel", word: "ellipse", id: 1, rt: 220},
            {group: "vowel", word: "ellipse", id: 2, rt: 222},
            {group: "vowel", word: "ellipse", id: 3, rt: 219},
            {group: "consonant", word: "proton", id: 1, rt: 190},
            {group: "consonant", word: "proton", id: 2, rt: 191},
            {group: "consonant", word: "proton", id: 3, rt: 189},
            {group: "consonant", word: "folder", id: 1, rt: 180},
            {group: "consonant", word: "folder", id: 2, rt: 182},
            {group: "consonant", word: "folder", id: 3, rt: 178},
            {group: "consonant", word: "fedora", id: 1, rt: 230},
            {group: "consonant", word: "fedora", id: 2, rt: 231},
            {group: "consonant", word: "fedora", id: 3, rt: 228},
            {group: "consonant", word: "fedora", id: 1, rt: 231},
            {group: "consonant", word: "fedora", id: 2, rt: 233},
            {group: "consonant", word: "fedora", id: 3, rt: 230},
            {group: "consonant", word: "fedora", id: 1, rt: 230},
            {group: "consonant", word: "fedora", id: 2, rt: 232},
            {group: "consonant", word: "fedora", id: 3, rt: 228}];

var opts = {method: "MCMC", burn: 10000, lag: 5, samples: 5000};

var ANSWER = Infer(opts, function() {
  var groupMeans = {vowel: gaussian(200, 100),
                    consonant: gaussian(200, 100)};

  var wordMean = mem(function(word, group) {
    return gaussian(groupMeans[group], 20);
  });

  var obsFn = function(d) {
    observe(Gaussian({mu: wordMean(d.word, d.group), sigma: 10}), d.rt);
  };

  mapData({data: data}, obsFn);

  return groupMeans['vowel'] - groupMeans['consonant'];
});
var data = [{group: "vowel", word: "abacus", id: 1, rt: 210},
            {group: "vowel", word: "abacus", id: 2, rt: 212},
            {group: "vowel", word: "abacus", id: 3, rt: 209},
            {group: "vowel", word: "aardvark", id: 1, rt: 200},
            {group: "vowel", word: "aardvark", id: 2, rt: 201},
            {group: "vowel", word: "aardvark", id: 3, rt: 198},
            {group: "vowel", word: "ellipse", id: 1, rt: 220},
            {group: "vowel", word: "ellipse", id: 2, rt: 222},
            {group: "vowel", word: "ellipse", id: 3, rt: 219},
            {group: "consonant", word: "proton", id: 1, rt: 190},
            {group: "consonant", word: "proton", id: 2, rt: 191},
            {group: "consonant", word: "proton", id: 3, rt: 189},
            {group: "consonant", word: "folder", id: 1, rt: 180},
            {group: "consonant", word: "folder", id: 2, rt: 182},
            {group: "consonant", word: "folder", id: 3, rt: 178},
            {group: "consonant", word: "fedora", id: 1, rt: 230},
            {group: "consonant", word: "fedora", id: 2, rt: 231},
            {group: "consonant", word: "fedora", id: 3, rt: 228},
            {group: "consonant", word: "fedora", id: 1, rt: 231},
            {group: "consonant", word: "fedora", id: 2, rt: 233},
            {group: "consonant", word: "fedora", id: 3, rt: 230},
            {group: "consonant", word: "fedora", id: 1, rt: 230},
            {group: "consonant", word: "fedora", id: 2, rt: 232},
            {group: "consonant", word: "fedora", id: 3, rt: 228}];

var opts = {method: "MCMC", burn: 10000, lag: 5, samples: 5000};

var ANSWER = Infer(opts, function() {
  var groupMeans = {vowel: gaussian(200, 100),
                    consonant: gaussian(200, 100)};

  var wordMean = mem(function(word, group) {
    return gaussian(groupMeans[group], 20);
  });

  var obsFn = function(d) {
    observe(Gaussian({mu: wordMean(d.word, d.group), sigma: 10}), d.rt);
  };

  mapData({data: data}, obsFn);

  return groupMeans['vowel'] - groupMeans['consonant'];
});
var data = [{group: "vowel", word: "abacus", id: 1, rt: 210},
            {group: "vowel", word: "abacus", id: 2, rt: 212},
            {group: "vowel", word: "abacus", id: 3, rt: 209},
            {group: "vowel", word: "aardvark", id: 1, rt: 200},
            {group: "vowel", word: "aardvark", id: 2, rt: 201},
            {group: "vowel", word: "aardvark", id: 3, rt: 198},
            {group: "vowel", word: "ellipse", id: 1, rt: 220},
            {group: "vowel", word: "ellipse", id: 2, rt: 222},
            {group: "vowel", word: "ellipse", id: 3, rt: 219},
            {group: "consonant", word: "proton", id: 1, rt: 190},
            {group: "consonant", word: "proton", id: 2, rt: 191},
            {group: "consonant", word: "proton", id: 3, rt: 189},
            {group: "consonant", word: "folder", id: 1, rt: 180},
            {group: "consonant", word: "folder", id: 2, rt: 182},
            {group: "consonant", word: "folder", id: 3, rt: 178},
            {group: "consonant", word: "fedora", id: 1, rt: 230},
            {group: "consonant", word: "fedora", id: 2, rt: 231},
            {group: "consonant", word: "fedora", id: 3, rt: 228},
            {group: "consonant", word: "fedora", id: 1, rt: 231},
            {group: "consonant", word: "fedora", id: 2, rt: 233},
            {group: "consonant", word: "fedora", id: 3, rt: 230},
            {group: "consonant", word: "fedora", id: 1, rt: 230},
            {group: "consonant", word: "fedora", id: 2, rt: 232},
            {group: "consonant", word: "fedora", id: 3, rt: 228}];

var opts = {method: "MCMC", burn: 10000, lag: 5, samples: 5000};

var ANSWER = Infer(opts, function() {
  var groupMeans = {vowel: gaussian(200, 100),
                    consonant: gaussian(200, 100)};

  var wordMean = mem(function(word, group) {
    return gaussian(groupMeans[group], 20);
  });

  var obsFn = function(d) {
    observe(Gaussian({mu: wordMean(d.word, d.group), sigma: 10}), d.rt);
  };

  mapData({data: data}, obsFn);

  return groupMeans['vowel'] - groupMeans['consonant'];
});
groundtruth output
value                 prob
13.736748680578842    0.0064
0.06775185167160203   0.0058
4.878156187149017     0.0052
23.568278284940476    0.0052
31.009240225928465    0.0050
0.12473006829893052   0.0050
19.026473289276453    0.0048
-2.5032710221097147   0.0048
8.530037358510697     0.0046
27.031521199612456    0.0042
-9.045537871818823    0.0042
6.012285829723822     0.0042
… 935 more
raw JSON
{
  "__kind": "distribution",
  "probs": [
    0.0014000000000000002,
    0.0011999999999999995,
    0.00019999999999999985,
    0.00019999999999999985,
    0.0014000000000000002,
    0.0016000000000000007,
    0.00019999999999999985,
    0.0008000000000000003,
    0.00019999999999999985,
    0.0022000000000000006,
    0.00019999999999999985,
    0.0008000000000000003,
    0.0010000000000000002,
    0.0011999999999999995,
    0.0008000000000000003,
    0.0014000000000000002,
    0.00019999999999999985,
    0.0004000000000000001,
    0.0014000000000000002,
    0.005000000000000002,
    0.0008000000000000003,
    0.0004000000000000001,
    0.00019999999999999985,
    0.0010000000000000002,
    0.00019999999999999985,
    0.0004000000000000001,
    0.0010000000000000002,
    0.0010000000000000002,
    0.0010000000000000002,
    0.002599999999999999,
    0.0034000000000000007,
    0.00019999999999999985,
    0.002599999999999999,
    0.0005999999999999997,
    0.0010000000000000002,
    0.0010000000000000002,
    0.0022000000000000006,
    0.0037999999999999996,
    0.0011999999999999995,
    0.0004000000000000001,
    0.00019999999999999985,
    0.0016000000000000007,
    0.00019999999999999985,
    0.0005999999999999997,
    0.0011999999999999995,
    0.00019999999999999985,
    0.0017999999999999997,
    0.00019999999999999985,
    0.0037999999999999996,
    0.004199999999999999,
    0.0004000000000000001,
    0.0008000000000000003,
    0.0011999999999999995,
    0.0004000000000000001,
    0.0020000000000000005,
    0.004199999999999999,
    0.0023999999999999994,
    0.0004000000000000001,
    0.0004000000000000001,
    0.0005999999999999997,
    0.006399999999999998,
    0.0028000000000000013,
    0.0014000000000000002,
    0.0008000000000000003,
    0.0005999999999999997,
    0.0004000000000000001,
    0.00019999999999999985,
    0.0004000000000000001,
    0.0010000000000000002,
    0.0016000000000000007,
    0.0005999999999999997,
    0.0008000000000000003,
    0.00019999999999999985,
    0.0004000000000000001,
    0.0022000000000000006,
    0.0004000000000000001,
    0.0008000000000000003,
    0.0016000000000000007,
    0.0016000000000000007,
    0.0004000000000000001,
    0.00019999999999999985,
    0.0008000000000000003,
    0.0005999999999999997,
    0.00019999999999999985,
    0.0004000000000000001,
    0.0005999999999999997,
    0.00019999999999999985,
    0.0004000000000000001,
    0.0016000000000000007,
    0.0008000000000000003,
    0.0014000000000000002,
    0.0005999999999999997,
    0.0028000000000000013,
    0.00019999999999999985,
    0.0023999999999999994,
    0.0011999999999999995,
    0.00019999999999999985,
    0.0010000000000000002,
    0.00019999999999999985,
    0.0014000000000000002,
    0.0032000000000000015,
    0.0010000000000000002,
    0.0008000000000000003,
    0.0004000000000000001,
    0.00019999999999999985,
    0.0008000000000000003,
    0.00019999999999999985,
    0.0004000000000000001,
    0.0014000000000000002,
    0.00019999999999999985,
    0.00019999999999999985,
    0.0017999999999999997,
    0.00019999999999999985,
    0.00019999999999999985,
    0.0023999999999999994,
    0.0005999999999999997,
    0.0022000000000000006,
    0.0017999999999999997,
    0.0014000000000000002,
    0.0005999999999999997,
    0.00019999999999999985,
    0.0020000000000000005,
    0.0022000000000000006,
    0.0005999999999999997,
    0.0008000000000000003,
    0.0010000000000000002,
    0.0004000000000000001,
    0.0011999999999999995,
    0.0028000000000000013,
    0.0010000000000000002,
    0.0016000000000000007,
    0.0004000000000000001,
    0.0010000000000000002,
    0.0005999999999999997,
    0.0020000000000000005,
    0.0004000000000000001,
    0.0004000000000000001,
    0.0008000000000000003,
    0.0023999999999999994,
    0.0004000000000000001,
    0.0008000000000000003,
    0.0004000000000000001,
    0.00019999999999999985,
    0.0028000000000000013,
    0.0005999999999999997,
    0.0004000000000

... (44179 more chars truncated)
feedback
not signed
probmods2-hierarchical-models/ex3.2 distribution 8 runs TV=0.00 TV=0
prompt
system base instructions used by every run
You are a WebPPL code generator. Given an exercise, produce a single WebPPL program that binds the answer to a top-level variable named `ANSWER`.

Answer format (strict): emit exactly one fenced code block.

```js
<your WebPPL program ending with: var ANSWER = <expression>;>
```

The last statement of your program MUST be `var ANSWER = <expression>;` where `<expression>` is the answer the prompt asks for - typically an `Infer({...}, model)` for a distribution, a numeric/array value, or an object literal `{key: value, ...}` for a record of multiple sub-answers.

Do not write prose, explanations, or multiple code blocks. Do not use `return` at the top level - WebPPL doesn't allow it (return is only for function bodies).
system WebPPL primer appended to system on `*-primer-*` runs only
WebPPL is a probabilistic programming language with JavaScript-like syntax. A few quirks to remember:

Control flow: WebPPL is a functional subset of JavaScript. There are no `for` or `while` loops. Iterate with `map(fn, list)`, `mapData({data: list}, fn)`, `repeat(n, fn)`, or recursion. `_.range(start, end)` produces integer ranges.

Functions: define with `var f = function(args) { ...; return value; }`. The last expression of a function is auto-returned only if it is a bare expression statement; otherwise use explicit `return`. Memoize with `mem(fn)` so repeated calls with the same arguments return the same value within an inference run.

Random primitives - lowercase samples directly, uppercase constructs a Distribution object. They are NOT interchangeable:

  flip(p)            -> boolean             (sample, no `sample()` needed)
  uniform(a, b)      -> number              (sample)
  gaussian(mu, sg)   -> number              (sample)
  beta(a, b)         -> number              (sample)
  dirichlet(alpha)   -> tensor              (sample; alpha must be a Vector)
  randomInteger(n)   -> int 0..n-1          (sample)
  uniformDrift({a, b, width})        -> sample  (drift kernel; do NOT wrap in sample())
  dirichletDrift({alpha, conc.})     -> sample  (drift kernel; do NOT wrap in sample())

Distribution *constructors* (used with `sample(D)`, `observe(D, val)`, or as return value of inference):
  Bernoulli({p})              Beta({a, b})              Gaussian({mu, sigma})
  Uniform({a, b})             Categorical({vs, ps})     Binomial({p, n})
  Dirichlet({alpha})          Multinomial({ps, n})      Poisson({mu})

Common gotchas:
- `Dirichlet({alpha: ...})` requires a Vector, not a JS array. Use `ones([n, 1])` or `Vector([1, 1, ...])`.
- WebPPL only supports `var`, not `let` / `const`.
- WebPPL is *single-assignment*: `var X = ...;` only. You can't declare `var X;` and assign later, and you can't reassign `X = ...` after the declaration. Use ternaries or recursion to express conditional bindings.
- Array methods like `.fill`, `.indexOf`, `.map`, `.forEach`, `.concat` may fail. Prefer `repeat(n, fn)`, `_.indexOf(arr, x)`, `map(fn, arr)`, `mapData({data: arr}, fn)`, and `arr1.concat(arr2)` only at the top of a returned expression.
- Always end top-level statements with `;`. WebPPL inherits JS ASI rules, so `var x = f()
[a, b]` parses as `var x = f()[a, b]` (subscript), not two statements.

Inference: `Infer({method: ..., samples: N, ...}, modelFn)` runs `modelFn` under the chosen method and returns a Distribution over its return value. Methods: `'enumerate'`, `'rejection'`, `'forward'`, `'MCMC'`, `'SMC'`. For MCMC, optional kernels: `kernel: {HMC: {steps, stepSize}}`; the drift kernels above can replace `uniform`/`dirichlet` calls inside the model.

Conditioning: `condition(bool)` zeros out worlds where bool is false. `observe(dist, value)` factors in `dist.score(value)`. `factor(score)` adds `score` to the log-probability directly.

Tensors / utilities: `Vector([a, b, ...])`, `T.get(vec, i)`, `ones([rows, cols])`, `_.range`, `_.flatten`, `_.zipObject`, `_.fromPairs`, `_.includes`, `_.parseInt`, `_.uniq`, `_.merge`. Arrays use `Array.isArray`, `Object.keys` works on plain objects.

Display: there's no `viz`, `print`, `display`, etc. that affects the answer - those are browser-only. The answer is the value of your program's last expression.
user message
Hierarchical Bayesian data analysis on word reading times. Each data point is {group: 'vowel'|'consonant', word: str, id: int, rt: number}. Group means are drawn from Gaussian(200, 100); each word has its own mean mem(function(word, group) { gaussian(groupMeans[group], 20) }); individual reading times are Gaussian({mu: wordMean(d.word, d.group), sigma: 10}). Extend this model with a per-participant random effect: a Gaussian(0, 2) additive offset for each participant id. Each rt is now Gaussian({mu: wordMean(d.word, d.group) + participantMean(d.id), sigma: 10}). Data:
var data = [{group: "vowel", word: "abacus", id: 1, rt: 210},
            {group: "vowel", word: "abacus", id: 2, rt: 212},
            {group: "vowel", word: "abacus", id: 3, rt: 209},
            {group: "vowel", word: "aardvark", id: 1, rt: 200},
            {group: "vowel", word: "aardvark", id: 2, rt: 201},
            {group: "vowel", word: "aardvark", id: 3, rt: 198},
            {group: "vowel", word: "ellipse", id: 1, rt: 220},
            {group: "vowel", word: "ellipse", id: 2, rt: 222},
            {group: "vowel", word: "ellipse", id: 3, rt: 219},
            {group: "consonant", word: "proton", id: 1, rt: 190},
            {group: "consonant", word: "proton", id: 2, rt: 191},
            {group: "consonant", word: "proton", id: 3, rt: 189},
            {group: "consonant", word: "folder", id: 1, rt: 180},
            {group: "consonant", word: "folder", id: 2, rt: 182},
            {group: "consonant", word: "folder", id: 3, rt: 178},
            {group: "consonant", word: "fedora", id: 1, rt: 230},
            {group: "consonant", word: "fedora", id: 2, rt: 231},
            {group: "consonant", word: "fedora", id: 3, rt: 228},
            {group: "consonant", word: "fedora", id: 1, rt: 231},
            {group: "consonant", word: "fedora", id: 2, rt: 233},
            {group: "consonant", word: "fedora", id: 3, rt: 230},
            {group: "consonant", word: "fedora", id: 1, rt: 230},
            {group: "consonant", word: "fedora", id: 2, rt: 232},
            {group: "consonant", word: "fedora", id: 3, rt: 228}];
Use Infer({method: 'MCMC', burn: 10000, lag: 5, samples: 5000}, ...). End your program with the joint posterior {diff, p1, p2, p3} where diff = groupMeans['vowel'] - groupMeans['consonant'] and p1, p2, p3 are participantMean(1), participantMean(2), participantMean(3).
groundtruth code
var data = [{group: "vowel", word: "abacus", id: 1, rt: 210},
            {group: "vowel", word: "abacus", id: 2, rt: 212},
            {group: "vowel", word: "abacus", id: 3, rt: 209},
            {group: "vowel", word: "aardvark", id: 1, rt: 200},
            {group: "vowel", word: "aardvark", id: 2, rt: 201},
            {group: "vowel", word: "aardvark", id: 3, rt: 198},
            {group: "vowel", word: "ellipse", id: 1, rt: 220},
            {group: "vowel", word: "ellipse", id: 2, rt: 222},
            {group: "vowel", word: "ellipse", id: 3, rt: 219},
            {group: "consonant", word: "proton", id: 1, rt: 190},
            {group: "consonant", word: "proton", id: 2, rt: 191},
            {group: "consonant", word: "proton", id: 3, rt: 189},
            {group: "consonant", word: "folder", id: 1, rt: 180},
            {group: "consonant", word: "folder", id: 2, rt: 182},
            {group: "consonant", word: "folder", id: 3, rt: 178},
            {group: "consonant", word: "fedora", id: 1, rt: 230},
            {group: "consonant", word: "fedora", id: 2, rt: 231},
            {group: "consonant", word: "fedora", id: 3, rt: 228},
            {group: "consonant", word: "fedora", id: 1, rt: 231},
            {group: "consonant", word: "fedora", id: 2, rt: 233},
            {group: "consonant", word: "fedora", id: 3, rt: 230},
            {group: "consonant", word: "fedora", id: 1, rt: 230},
            {group: "consonant", word: "fedora", id: 2, rt: 232},
            {group: "consonant", word: "fedora", id: 3, rt: 228}];

var opts = {method: "MCMC", burn: 10000, lag: 5, samples: 5000};
var ANSWER = (Infer(opts, function() {
  var groupMeans = {vowel: gaussian(200, 100),
                    consonant: gaussian(200, 100)};

  var participantMean = mem(function(pid) {
    return gaussian(0, 2);
  });

  var wordMean = mem(function(word, group) {
    return gaussian(groupMeans[group], 20);
  });

  var obsFn = function(d) {
    observe(Gaussian({mu: wordMean(d.word, d.group) + participantMean(d.id),
                      sigma: 10}), d.rt);
  };

  mapData({data: data}, obsFn);

  return {diff: groupMeans['vowel'] - groupMeans['consonant'],
          p1: participantMean(1),
          p2: participantMean(2),
          p3: participantMean(3)};
}));
generated code
sonnet-46-primer-v3 TV=0 haiku-45-noprimer-v3 TV=0 haiku-45-primer-v3 TV=0 haiku-45-think-noprimer-v3 TV=0 haiku-45-think-primer-v3 TV=0 sonnet-46-noprimer-v3 TV=0 sonnet-46-think-noprimer-v3 TV=0 sonnet-46-think-primer-v3 TV=0
var data = [{group: "vowel", word: "abacus", id: 1, rt: 210},
            {group: "vowel", word: "abacus", id: 2, rt: 212},
            {group: "vowel", word: "abacus", id: 3, rt: 209},
            {group: "vowel", word: "aardvark", id: 1, rt: 200},
            {group: "vowel", word: "aardvark", id: 2, rt: 201},
            {group: "vowel", word: "aardvark", id: 3, rt: 198},
            {group: "vowel", word: "ellipse", id: 1, rt: 220},
            {group: "vowel", word: "ellipse", id: 2, rt: 222},
            {group: "vowel", word: "ellipse", id: 3, rt: 219},
            {group: "consonant", word: "proton", id: 1, rt: 190},
            {group: "consonant", word: "proton", id: 2, rt: 191},
            {group: "consonant", word: "proton", id: 3, rt: 189},
            {group: "consonant", word: "folder", id: 1, rt: 180},
            {group: "consonant", word: "folder", id: 2, rt: 182},
            {group: "consonant", word: "folder", id: 3, rt: 178},
            {group: "consonant", word: "fedora", id: 1, rt: 230},
            {group: "consonant", word: "fedora", id: 2, rt: 231},
            {group: "consonant", word: "fedora", id: 3, rt: 228},
            {group: "consonant", word: "fedora", id: 1, rt: 231},
            {group: "consonant", word: "fedora", id: 2, rt: 233},
            {group: "consonant", word: "fedora", id: 3, rt: 230},
            {group: "consonant", word: "fedora", id: 1, rt: 230},
            {group: "consonant", word: "fedora", id: 2, rt: 232},
            {group: "consonant", word: "fedora", id: 3, rt: 228}];

var model = function() {
  var groupMeans = {
    vowel: gaussian(200, 100),
    consonant: gaussian(200, 100)
  };

  var wordMean = mem(function(word, group) {
    return gaussian(groupMeans[group], 20);
  });

  var participantMean = mem(function(id) {
    return gaussian(0, 2);
  });

  mapData({data: data}, function(d) {
    observe(Gaussian({mu: wordMean(d.word, d.group) + participantMean(d.id), sigma: 10}), d.rt);
  });

  return {
    diff: groupMeans['vowel'] - groupMeans['consonant'],
    p1: participantMean(1),
    p2: participantMean(2),
    p3: participantMean(3)
  };
};

var ANSWER = Infer({method: 'MCMC', burn: 10000, lag: 5, samples: 5000}, model);
var data = [{group: "vowel", word: "abacus", id: 1, rt: 210},
            {group: "vowel", word: "abacus", id: 2, rt: 212},
            {group: "vowel", word: "abacus", id: 3, rt: 209},
            {group: "vowel", word: "aardvark", id: 1, rt: 200},
            {group: "vowel", word: "aardvark", id: 2, rt: 201},
            {group: "vowel", word: "aardvark", id: 3, rt: 198},
            {group: "vowel", word: "ellipse", id: 1, rt: 220},
            {group: "vowel", word: "ellipse", id: 2, rt: 222},
            {group: "vowel", word: "ellipse", id: 3, rt: 219},
            {group: "consonant", word: "proton", id: 1, rt: 190},
            {group: "consonant", word: "proton", id: 2, rt: 191},
            {group: "consonant", word: "proton", id: 3, rt: 189},
            {group: "consonant", word: "folder", id: 1, rt: 180},
            {group: "consonant", word: "folder", id: 2, rt: 182},
            {group: "consonant", word: "folder", id: 3, rt: 178},
            {group: "consonant", word: "fedora", id: 1, rt: 230},
            {group: "consonant", word: "fedora", id: 2, rt: 231},
            {group: "consonant", word: "fedora", id: 3, rt: 228},
            {group: "consonant", word: "fedora", id: 1, rt: 231},
            {group: "consonant", word: "fedora", id: 2, rt: 233},
            {group: "consonant", word: "fedora", id: 3, rt: 230},
            {group: "consonant", word: "fedora", id: 1, rt: 230},
            {group: "consonant", word: "fedora", id: 2, rt: 232},
            {group: "consonant", word: "fedora", id: 3, rt: 228}];

var model = function() {
  // Group means
  var groupMeans = {
    vowel: gaussian(200, 100),
    consonant: gaussian(200, 100)
  };
  
  // Word means (memoized)
  var wordMean = mem(function(word, group) {
    return gaussian(groupMeans[group], 20);
  });
  
  // Participant random effects (memoized)
  var participantMean = mem(function(id) {
    return gaussian(0, 2);
  });
  
  // Likelihood
  mapData({data: data}, function(d) {
    var mu = wordMean(d.word, d.group) + participantMean(d.id);
    observe(Gaussian({mu: mu, sigma: 10}), d.rt);
  });
  
  return {
    diff: groupMeans.vowel - groupMeans.consonant,
    p1: participantMean(1),
    p2: participantMean(2),
    p3: participantMean(3)
  };
};

var ANSWER = Infer({method: 'MCMC', burn: 10000, lag: 5, samples: 5000}, model);
var data = [{group: "vowel", word: "abacus", id: 1, rt: 210},
            {group: "vowel", word: "abacus", id: 2, rt: 212},
            {group: "vowel", word: "abacus", id: 3, rt: 209},
            {group: "vowel", word: "aardvark", id: 1, rt: 200},
            {group: "vowel", word: "aardvark", id: 2, rt: 201},
            {group: "vowel", word: "aardvark", id: 3, rt: 198},
            {group: "vowel", word: "ellipse", id: 1, rt: 220},
            {group: "vowel", word: "ellipse", id: 2, rt: 222},
            {group: "vowel", word: "ellipse", id: 3, rt: 219},
            {group: "consonant", word: "proton", id: 1, rt: 190},
            {group: "consonant", word: "proton", id: 2, rt: 191},
            {group: "consonant", word: "proton", id: 3, rt: 189},
            {group: "consonant", word: "folder", id: 1, rt: 180},
            {group: "consonant", word: "folder", id: 2, rt: 182},
            {group: "consonant", word: "folder", id: 3, rt: 178},
            {group: "consonant", word: "fedora", id: 1, rt: 230},
            {group: "consonant", word: "fedora", id: 2, rt: 231},
            {group: "consonant", word: "fedora", id: 3, rt: 228},
            {group: "consonant", word: "fedora", id: 1, rt: 231},
            {group: "consonant", word: "fedora", id: 2, rt: 233},
            {group: "consonant", word: "fedora", id: 3, rt: 230},
            {group: "consonant", word: "fedora", id: 1, rt: 230},
            {group: "consonant", word: "fedora", id: 2, rt: 232},
            {group: "consonant", word: "fedora", id: 3, rt: 228}];

var model = function() {
  var groupMeans = {
    vowel: gaussian(200, 100),
    consonant: gaussian(200, 100)
  };
  
  var wordMean = mem(function(word, group) {
    return gaussian(groupMeans[group], 20);
  });
  
  var participantMean = mem(function(id) {
    return gaussian(0, 2);
  });
  
  mapData({data: data}, function(d) {
    var mu = wordMean(d.word, d.group) + participantMean(d.id);
    observe(Gaussian({mu: mu, sigma: 10}), d.rt);
  });
  
  return {
    diff: groupMeans.vowel - groupMeans.consonant,
    p1: participantMean(1),
    p2: participantMean(2),
    p3: participantMean(3)
  };
};

var ANSWER = Infer({method: 'MCMC', burn: 10000, lag: 5, samples: 5000}, model);
var data = [{group: "vowel", word: "abacus", id: 1, rt: 210},
            {group: "vowel", word: "abacus", id: 2, rt: 212},
            {group: "vowel", word: "abacus", id: 3, rt: 209},
            {group: "vowel", word: "aardvark", id: 1, rt: 200},
            {group: "vowel", word: "aardvark", id: 2, rt: 201},
            {group: "vowel", word: "aardvark", id: 3, rt: 198},
            {group: "vowel", word: "ellipse", id: 1, rt: 220},
            {group: "vowel", word: "ellipse", id: 2, rt: 222},
            {group: "vowel", word: "ellipse", id: 3, rt: 219},
            {group: "consonant", word: "proton", id: 1, rt: 190},
            {group: "consonant", word: "proton", id: 2, rt: 191},
            {group: "consonant", word: "proton", id: 3, rt: 189},
            {group: "consonant", word: "folder", id: 1, rt: 180},
            {group: "consonant", word: "folder", id: 2, rt: 182},
            {group: "consonant", word: "folder", id: 3, rt: 178},
            {group: "consonant", word: "fedora", id: 1, rt: 230},
            {group: "consonant", word: "fedora", id: 2, rt: 231},
            {group: "consonant", word: "fedora", id: 3, rt: 228},
            {group: "consonant", word: "fedora", id: 1, rt: 231},
            {group: "consonant", word: "fedora", id: 2, rt: 233},
            {group: "consonant", word: "fedora", id: 3, rt: 230},
            {group: "consonant", word: "fedora", id: 1, rt: 230},
            {group: "consonant", word: "fedora", id: 2, rt: 232},
            {group: "consonant", word: "fedora", id: 3, rt: 228}];

var model = function() {
  // Group-level means
  var groupMeans = {
    vowel: gaussian(200, 100),
    consonant: gaussian(200, 100)
  };
  
  // Word-level means (memoized by word and group)
  var wordMean = mem(function(word, group) {
    return gaussian(groupMeans[group], 20);
  });
  
  // Participant-level random effects
  var participantMean = mem(function(id) {
    return gaussian(0, 2);
  });
  
  // Likelihood: observe reading times
  map(function(d) {
    var mu = wordMean(d.word, d.group) + participantMean(d.id);
    observe(Gaussian({mu: mu, sigma: 10}), d.rt);
  }, data);
  
  // Return posterior samples
  return {
    diff: groupMeans['vowel'] - groupMeans['consonant'],
    p1: participantMean(1),
    p2: participantMean(2),
    p3: participantMean(3)
  };
};

var ANSWER = Infer({method: 'MCMC', burn: 10000, lag: 5, samples: 5000}, model);
var data = [{group: "vowel", word: "abacus", id: 1, rt: 210},
            {group: "vowel", word: "abacus", id: 2, rt: 212},
            {group: "vowel", word: "abacus", id: 3, rt: 209},
            {group: "vowel", word: "aardvark", id: 1, rt: 200},
            {group: "vowel", word: "aardvark", id: 2, rt: 201},
            {group: "vowel", word: "aardvark", id: 3, rt: 198},
            {group: "vowel", word: "ellipse", id: 1, rt: 220},
            {group: "vowel", word: "ellipse", id: 2, rt: 222},
            {group: "vowel", word: "ellipse", id: 3, rt: 219},
            {group: "consonant", word: "proton", id: 1, rt: 190},
            {group: "consonant", word: "proton", id: 2, rt: 191},
            {group: "consonant", word: "proton", id: 3, rt: 189},
            {group: "consonant", word: "folder", id: 1, rt: 180},
            {group: "consonant", word: "folder", id: 2, rt: 182},
            {group: "consonant", word: "folder", id: 3, rt: 178},
            {group: "consonant", word: "fedora", id: 1, rt: 230},
            {group: "consonant", word: "fedora", id: 2, rt: 231},
            {group: "consonant", word: "fedora", id: 3, rt: 228},
            {group: "consonant", word: "fedora", id: 1, rt: 231},
            {group: "consonant", word: "fedora", id: 2, rt: 233},
            {group: "consonant", word: "fedora", id: 3, rt: 230},
            {group: "consonant", word: "fedora", id: 1, rt: 230},
            {group: "consonant", word: "fedora", id: 2, rt: 232},
            {group: "consonant", word: "fedora", id: 3, rt: 228}];

var model = function() {
  // Group means drawn from Gaussian(200, 100)
  var groupMeans = {
    vowel: gaussian(200, 100),
    consonant: gaussian(200, 100)
  };
  
  // Word means: Gaussian(groupMean, 20) per word-group pair
  var wordMean = mem(function(word, group) {
    return gaussian(groupMeans[group], 20);
  });
  
  // Participant random effects: Gaussian(0, 2) per participant id
  var participantMean = mem(function(id) {
    return gaussian(0, 2);
  });
  
  // Likelihood: observe reading times
  mapData({data: data}, function(d) {
    var mu = wordMean(d.word, d.group) + participantMean(d.id);
    observe(Gaussian({mu: mu, sigma: 10}), d.rt);
  });
  
  // Return joint posterior of interest
  return {
    diff: groupMeans.vowel - groupMeans.consonant,
    p1: participantMean(1),
    p2: participantMean(2),
    p3: participantMean(3)
  };
};

var ANSWER = Infer({method: 'MCMC', burn: 10000, lag: 5, samples: 5000}, model);
var data = [{group: "vowel", word: "abacus", id: 1, rt: 210},
            {group: "vowel", word: "abacus", id: 2, rt: 212},
            {group: "vowel", word: "abacus", id: 3, rt: 209},
            {group: "vowel", word: "aardvark", id: 1, rt: 200},
            {group: "vowel", word: "aardvark", id: 2, rt: 201},
            {group: "vowel", word: "aardvark", id: 3, rt: 198},
            {group: "vowel", word: "ellipse", id: 1, rt: 220},
            {group: "vowel", word: "ellipse", id: 2, rt: 222},
            {group: "vowel", word: "ellipse", id: 3, rt: 219},
            {group: "consonant", word: "proton", id: 1, rt: 190},
            {group: "consonant", word: "proton", id: 2, rt: 191},
            {group: "consonant", word: "proton", id: 3, rt: 189},
            {group: "consonant", word: "folder", id: 1, rt: 180},
            {group: "consonant", word: "folder", id: 2, rt: 182},
            {group: "consonant", word: "folder", id: 3, rt: 178},
            {group: "consonant", word: "fedora", id: 1, rt: 230},
            {group: "consonant", word: "fedora", id: 2, rt: 231},
            {group: "consonant", word: "fedora", id: 3, rt: 228},
            {group: "consonant", word: "fedora", id: 1, rt: 231},
            {group: "consonant", word: "fedora", id: 2, rt: 233},
            {group: "consonant", word: "fedora", id: 3, rt: 230},
            {group: "consonant", word: "fedora", id: 1, rt: 230},
            {group: "consonant", word: "fedora", id: 2, rt: 232},
            {group: "consonant", word: "fedora", id: 3, rt: 228}];

var model = function() {
  var groupMeans = {
    vowel: gaussian(200, 100),
    consonant: gaussian(200, 100)
  };

  var wordMean = mem(function(word, group) {
    return gaussian(groupMeans[group], 20);
  });

  var participantMean = mem(function(id) {
    return gaussian(0, 2);
  });

  mapData({data: data}, function(d) {
    observe(Gaussian({mu: wordMean(d.word, d.group) + participantMean(d.id), sigma: 10}), d.rt);
  });

  return {
    diff: groupMeans['vowel'] - groupMeans['consonant'],
    p1: participantMean(1),
    p2: participantMean(2),
    p3: participantMean(3)
  };
};

var ANSWER = Infer({method: 'MCMC', burn: 10000, lag: 5, samples: 5000}, model);
var data = [{group: "vowel", word: "abacus", id: 1, rt: 210},
            {group: "vowel", word: "abacus", id: 2, rt: 212},
            {group: "vowel", word: "abacus", id: 3, rt: 209},
            {group: "vowel", word: "aardvark", id: 1, rt: 200},
            {group: "vowel", word: "aardvark", id: 2, rt: 201},
            {group: "vowel", word: "aardvark", id: 3, rt: 198},
            {group: "vowel", word: "ellipse", id: 1, rt: 220},
            {group: "vowel", word: "ellipse", id: 2, rt: 222},
            {group: "vowel", word: "ellipse", id: 3, rt: 219},
            {group: "consonant", word: "proton", id: 1, rt: 190},
            {group: "consonant", word: "proton", id: 2, rt: 191},
            {group: "consonant", word: "proton", id: 3, rt: 189},
            {group: "consonant", word: "folder", id: 1, rt: 180},
            {group: "consonant", word: "folder", id: 2, rt: 182},
            {group: "consonant", word: "folder", id: 3, rt: 178},
            {group: "consonant", word: "fedora", id: 1, rt: 230},
            {group: "consonant", word: "fedora", id: 2, rt: 231},
            {group: "consonant", word: "fedora", id: 3, rt: 228},
            {group: "consonant", word: "fedora", id: 1, rt: 231},
            {group: "consonant", word: "fedora", id: 2, rt: 233},
            {group: "consonant", word: "fedora", id: 3, rt: 230},
            {group: "consonant", word: "fedora", id: 1, rt: 230},
            {group: "consonant", word: "fedora", id: 2, rt: 232},
            {group: "consonant", word: "fedora", id: 3, rt: 228}];

var model = function() {
  var groupMeans = {
    vowel: gaussian(200, 100),
    consonant: gaussian(200, 100)
  };

  var wordMean = mem(function(word, group) {
    return gaussian(groupMeans[group], 20);
  });

  var participantMean = mem(function(id) {
    return gaussian(0, 2);
  });

  mapData({data: data}, function(d) {
    var mu = wordMean(d.word, d.group) + participantMean(d.id);
    observe(Gaussian({mu: mu, sigma: 10}), d.rt);
  });

  var diff = groupMeans['vowel'] - groupMeans['consonant'];
  var p1 = participantMean(1);
  var p2 = participantMean(2);
  var p3 = participantMean(3);

  return {diff: diff, p1: p1, p2: p2, p3: p3};
};

var ANSWER = Infer({method: 'MCMC', burn: 10000, lag: 5, samples: 5000}, model);
var data = [{group: "vowel", word: "abacus", id: 1, rt: 210},
            {group: "vowel", word: "abacus", id: 2, rt: 212},
            {group: "vowel", word: "abacus", id: 3, rt: 209},
            {group: "vowel", word: "aardvark", id: 1, rt: 200},
            {group: "vowel", word: "aardvark", id: 2, rt: 201},
            {group: "vowel", word: "aardvark", id: 3, rt: 198},
            {group: "vowel", word: "ellipse", id: 1, rt: 220},
            {group: "vowel", word: "ellipse", id: 2, rt: 222},
            {group: "vowel", word: "ellipse", id: 3, rt: 219},
            {group: "consonant", word: "proton", id: 1, rt: 190},
            {group: "consonant", word: "proton", id: 2, rt: 191},
            {group: "consonant", word: "proton", id: 3, rt: 189},
            {group: "consonant", word: "folder", id: 1, rt: 180},
            {group: "consonant", word: "folder", id: 2, rt: 182},
            {group: "consonant", word: "folder", id: 3, rt: 178},
            {group: "consonant", word: "fedora", id: 1, rt: 230},
            {group: "consonant", word: "fedora", id: 2, rt: 231},
            {group: "consonant", word: "fedora", id: 3, rt: 228},
            {group: "consonant", word: "fedora", id: 1, rt: 231},
            {group: "consonant", word: "fedora", id: 2, rt: 233},
            {group: "consonant", word: "fedora", id: 3, rt: 230},
            {group: "consonant", word: "fedora", id: 1, rt: 230},
            {group: "consonant", word: "fedora", id: 2, rt: 232},
            {group: "consonant", word: "fedora", id: 3, rt: 228}];

var model = function() {
  var groupMeans = {
    vowel: gaussian(200, 100),
    consonant: gaussian(200, 100)
  };

  var wordMean = mem(function(word, group) {
    return gaussian(groupMeans[group], 20);
  });

  var participantMean = mem(function(id) {
    return gaussian(0, 2);
  });

  mapData({data: data}, function(d) {
    var mu = wordMean(d.word, d.group) + participantMean(d.id);
    observe(Gaussian({mu: mu, sigma: 10}), d.rt);
  });

  var diff = groupMeans['vowel'] - groupMeans['consonant'];
  var p1 = participantMean(1);
  var p2 = participantMean(2);
  var p3 = participantMean(3);

  return {diff: diff, p1: p1, p2: p2, p3: p3};
};

var ANSWER = Infer({method: 'MCMC', burn: 10000, lag: 5, samples: 5000}, model);
groundtruth output
{"diff":15.340727250765042,"p1":-0.50…0.0012
{"diff":3.3380186513486194,"p1":1.563…0.0010
{"diff":28.940411038944063,"p1":2.005…0.0010
{"diff":1.2840063823981893,"p1":-2.85…0.0010
{"diff":31.156777816328287,"p1":-3.97…0.0008
{"diff":5.635044295637783,"p1":0.1417…0.0008
{"diff":6.838982167405447,"p1":1.2591…0.0008
{"diff":10.727286310740908,"p1":0.626…0.0008
{"diff":15.692308518093483,"p1":-1.02…0.0008
{"diff":17.322251229749014,"p1":0.440…0.0008
{"diff":-3.480298661863344,"p1":-6.03…0.0008
{"diff":7.636474448478197,"p1":3.5417…0.0008
… 4086 more
raw JSON
{
  "__kind": "distribution",
  "probs": [
    0.00019999999999999985,
    0.0004000000000000001,
    0.0005999999999999997,
    0.0008000000000000003,
    …

... (695622 more chars truncated)
feedback
not signed
inference-algorithms 9 atoms 8✓ · 0✗
probmods2-inference-algorithms/ex1.1 distribution 8 runs TV=0.00 TV=0
prompt
system base instructions used by every run
You are a WebPPL code generator. Given an exercise, produce a single WebPPL program that binds the answer to a top-level variable named `ANSWER`.

Answer format (strict): emit exactly one fenced code block.

```js
<your WebPPL program ending with: var ANSWER = <expression>;>
```

The last statement of your program MUST be `var ANSWER = <expression>;` where `<expression>` is the answer the prompt asks for - typically an `Infer({...}, model)` for a distribution, a numeric/array value, or an object literal `{key: value, ...}` for a record of multiple sub-answers.

Do not write prose, explanations, or multiple code blocks. Do not use `return` at the top level - WebPPL doesn't allow it (return is only for function bodies).
system WebPPL primer appended to system on `*-primer-*` runs only
WebPPL is a probabilistic programming language with JavaScript-like syntax. A few quirks to remember:

Control flow: WebPPL is a functional subset of JavaScript. There are no `for` or `while` loops. Iterate with `map(fn, list)`, `mapData({data: list}, fn)`, `repeat(n, fn)`, or recursion. `_.range(start, end)` produces integer ranges.

Functions: define with `var f = function(args) { ...; return value; }`. The last expression of a function is auto-returned only if it is a bare expression statement; otherwise use explicit `return`. Memoize with `mem(fn)` so repeated calls with the same arguments return the same value within an inference run.
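The `mem` behaviour described above can be sketched in plain JavaScript (an illustrative analogy, not WebPPL's actual implementation; all names below are made up):

```js
// Sketch of mem(fn): cache results keyed by the arguments, so repeated
// calls with the same arguments return the first computed value.
function mem(fn) {
  const cache = new Map();
  return (...args) => {
    const key = JSON.stringify(args);
    if (!cache.has(key)) cache.set(key, fn(...args));
    return cache.get(key);
  };
}

// A random per-participant effect: without mem each call would draw a
// fresh value; with mem, participant 1 keeps one value for the whole run.
const participantMean = mem(id => Math.random());
const stable = participantMean(1) === participantMean(1); // true
```

This is why the hierarchical models below wrap `participantMean` and `wordMean` in `mem`: every observation for the same participant (or word) shares a single latent draw within one inference run.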

Random primitives - lowercase samples directly, uppercase constructs a Distribution object. They are NOT interchangeable:

  flip(p)            -> boolean             (sample, no `sample()` needed)
  uniform(a, b)      -> number              (sample)
  gaussian(mu, sg)   -> number              (sample)
  beta(a, b)         -> number              (sample)
  dirichlet(alpha)   -> tensor              (sample; alpha must be a Vector)
  randomInteger(n)   -> int 0..n-1          (sample)
  uniformDrift({a, b, width})        -> sample  (drift kernel; do NOT wrap in sample())
  dirichletDrift({alpha, conc.})     -> sample  (drift kernel; do NOT wrap in sample())

Distribution *constructors* (used with `sample(D)`, `observe(D, val)`, or as return value of inference):
  Bernoulli({p})              Beta({a, b})              Gaussian({mu, sigma})
  Uniform({a, b})             Categorical({vs, ps})     Binomial({p, n})
  Dirichlet({alpha})          Multinomial({ps, n})      Poisson({mu})

Common gotchas:
- `Dirichlet({alpha: ...})` requires a Vector, not a JS array. Use `ones([n, 1])` or `Vector([1, 1, ...])`.
- WebPPL only supports `var`, not `let` / `const`.
- WebPPL is *single-assignment*: `var X = ...;` only. You can't declare `var X;` and assign later, and you can't reassign `X = ...` after the declaration. Use ternaries or recursion to express conditional bindings.
- Array methods like `.fill`, `.indexOf`, `.map`, `.forEach`, `.concat` may fail. Prefer `repeat(n, fn)`, `_.indexOf(arr, x)`, `map(fn, arr)`, `mapData({data: arr}, fn)`, and `arr1.concat(arr2)` only at the top of a returned expression.
- Always end top-level statements with `;`. WebPPL inherits JS ASI rules, so `var x = f()
[a, b]` parses as `var x = f()[a, b]` (subscript), not two statements.

Inference: `Infer({method: ..., samples: N, ...}, modelFn)` runs `modelFn` under the chosen method and returns a Distribution over its return value. Methods: `'enumerate'`, `'rejection'`, `'forward'`, `'MCMC'`, `'SMC'`. For MCMC, optional kernels: `kernel: {HMC: {steps, stepSize}}`; the drift kernels above can replace `uniform`/`dirichlet` calls inside the model.
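As a rough sketch of what the `'MCMC'` method does, here is a random-walk Metropolis-Hastings chain in plain JavaScript (an analogy, not WebPPL internals; the target and all names are chosen for illustration, with a standard normal log-density):

```js
// Target: standard normal, up to a constant.
function logTarget(x) {
  return -0.5 * x * x;
}

// Random-walk Metropolis-Hastings: propose a local move, accept with
// probability min(1, target(proposal) / target(current)).
function metropolisHastings(logp, steps, stepSize) {
  let x = 0;
  const chain = [];
  for (let i = 0; i < steps; i++) {
    const proposal = x + stepSize * (Math.random() * 2 - 1);
    const logAccept = logp(proposal) - logp(x);
    if (Math.log(Math.random()) < logAccept) x = proposal;
    chain.push(x); // on rejection, the current state is repeated
  }
  return chain;
}

const chain = metropolisHastings(logTarget, 20000, 1.0);
const mean = chain.reduce((s, v) => s + v, 0) / chain.length;
const variance = chain.reduce((s, v) => s + (v - mean) * (v - mean), 0) / chain.length;
```

The `burn` and `lag` options in `Infer` correspond to discarding the first chain states and keeping only every k-th state, which reduces the correlation between retained samples.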

Conditioning: `condition(bool)` zeros out worlds where bool is false. `observe(dist, value)` factors in `dist.score(value)`. `factor(score)` adds `score` to the log-probability directly.
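Operationally, `condition(bool)` behaves like rejection sampling: discard every execution of the model where the condition failed. A plain-JavaScript sketch (illustrative names, not WebPPL internals):

```js
// Rejection sampling: rerun the model until n runs survive the condition.
// A model signals a failed condition by returning null.
function rejectionSample(model, n) {
  const samples = [];
  while (samples.length < n) {
    const result = model();
    if (result !== null) samples.push(result);
  }
  return samples;
}

// Model: two fair coin flips, conditioned on at least one heads.
function twoFlipsAtLeastOneHeads() {
  const a = Math.random() < 0.5;
  const b = Math.random() < 0.5;
  if (!(a || b)) return null; // condition(a || b)
  return { a: a, b: b };
}

const draws = rejectionSample(twoFlipsAtLeastOneHeads, 10000);
// P(a | a or b) = 2/3, so this estimate should be near 0.667.
const pA = draws.filter(d => d.a).length / draws.length;
```

`observe(dist, value)` and `factor(score)` generalize this from hard accept/reject to soft reweighting of executions by a log-probability.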

Tensors / utilities: `Vector([a, b, ...])`, `T.get(vec, i)`, `ones([rows, cols])`, `_.range`, `_.flatten`, `_.zipObject`, `_.fromPairs`, `_.includes`, `_.parseInt`, `_.uniq`, `_.merge`. Arrays use `Array.isArray`, `Object.keys` works on plain objects.

Display: there's no `viz`, `print`, `display`, etc. that affects the answer - those are browser-only. The answer is the value of your program's last expression.
user message
Heart-shaped implicit curve: a point (x, y) is *on the curve* if x^2 + (y - x^(2/3))^2 - 1 is within 0.01 of 0. The reference model draws x and y from independent Gaussians around the bounding box center, and conditions on onCurve(x, y). Helpers (include them or equivalent in your program):
var onCurve = function(x, y) {
  var x2 = x*x;
  var term1 = y - Math.pow(x2, 1/3);
  var crossSection = x2 + term1*term1 - 1;
  return Math.abs(crossSection) < 0.01;
};
var xbounds = [-1, 1];
var ybounds = [-1, 1.6];
var xmu = 0.5 * (xbounds[0] + xbounds[1]);
var ymu = 0.5 * (ybounds[0] + ybounds[1]);
var xsigma = 0.5 * (xbounds[1] - xbounds[0]);
var ysigma = 0.5 * (ybounds[1] - ybounds[0]);
Use Metropolis-Hastings MCMC instead of rejection sampling on the independent-Gaussians model var x = gaussian(xmu, xsigma); var y = gaussian(ymu, ysigma); condition(onCurve(x, y)). End your program with Infer({method: 'MCMC', samples: 10000, lag: 10}, model) returning the posterior over {x, y}.
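A quick sanity check of the `onCurve` helper (the test points below are hand-picked for illustration, not taken from the exercise): `(0, 1)`, `(1, 1)`, and `(-1, 1)` satisfy x^2 + (y - x^(2/3))^2 - 1 = 0 exactly, while the bounding-box center `(0, 0)` gives -1 and is rejected.

```js
var onCurve = function(x, y) {
  var x2 = x * x;
  var term1 = y - Math.pow(x2, 1 / 3);
  var crossSection = x2 + term1 * term1 - 1;
  return Math.abs(crossSection) < 0.01;
};

// Exact on-curve points and one clearly off-curve point.
var checks = [onCurve(0, 1), onCurve(1, 1), onCurve(-1, 1), onCurve(0, 0)];
// → [true, true, true, false]
```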
groundtruth code
var onCurve = function(x, y) {
  var x2 = x*x;
  var term1 = y - Math.pow(x2, 1/3);
  var crossSection = x2 + term1*term1 - 1;
  return Math.abs(crossSection) < 0.01;
};
var xbounds = [-1, 1];
var ybounds = [-1, 1.6];

var xmu = 0.5 * (xbounds[0] + xbounds[1]);
var ymu = 0.5 * (ybounds[0] + ybounds[1]);
var xsigma = 0.5 * (xbounds[1] - xbounds[0]);
var ysigma = 0.5 * (ybounds[1] - ybounds[0]);

var model = function() {
  var x = gaussian(xmu, xsigma);
  var y = gaussian(ymu, ysigma);
  condition(onCurve(x, y));
  return {x: x, y: y};
};
var ANSWER = (Infer({method: 'MCMC',
       samples: 10000,
       lag: 10}, model));
generated code
sonnet-46-primer-v3 TV=0 haiku-45-noprimer-v3 TV=0 haiku-45-primer-v3 TV=0 haiku-45-think-noprimer-v3 TV=0 haiku-45-think-primer-v3 TV=0 sonnet-46-noprimer-v3 TV=0 sonnet-46-think-noprimer-v3 TV=0 sonnet-46-think-primer-v3 TV=0
var onCurve = function(x, y) {
  var x2 = x*x;
  var term1 = y - Math.pow(x2, 1/3);
  var crossSection = x2 + term1*term1 - 1;
  return Math.abs(crossSection) < 0.01;
};

var xbounds = [-1, 1];
var ybounds = [-1, 1.6];
var xmu = 0.5 * (xbounds[0] + xbounds[1]);
var ymu = 0.5 * (ybounds[0] + ybounds[1]);
var xsigma = 0.5 * (xbounds[1] - xbounds[0]);
var ysigma = 0.5 * (ybounds[1] - ybounds[0]);

var model = function() {
  var x = gaussian(xmu, xsigma);
  var y = gaussian(ymu, ysigma);
  condition(onCurve(x, y));
  return {x: x, y: y};
};

var ANSWER = Infer({method: 'MCMC', samples: 10000, lag: 10}, model);
(The other seven runs emitted the same program as above; the last differs only in whitespace.)
groundtruth output
{"x":-0.5810586310091981,"y":-0.12246…0.0087
{"x":0.677634785848371,"y":0.03832479…0.0083
{"x":0.6450011069846323,"y":-0.012080…0.0081
{"x":0.6985404268355182,"y":0.0756960…0.0071
{"x":0.5745934653314195,"y":-0.126008…0.0068
{"x":-0.7418105022980765,"y":0.147168…0.0067
{"x":-0.7187565836760806,"y":0.100789…0.0066
{"x":-0.7373007189257397,"y":0.136047…0.0063
{"x":0.654702850748931,"y":-0.0022909…0.0059
{"x":-0.43262464197025463,"y":-0.3266…0.0059
{"x":-0.8155000442272854,"y":0.290561…0.0059
{"x":0.7354877878056911,"y":0.1370670…0.0055
… 1898 more
raw JSON
{"__kind": "distribution", "probs": [...]} (full probability array truncated in source)
feedback
not signed
probmods2-inference-algorithms/ex1.2 distribution 8 runs TV=0.00 TV=0
prompt
system base instructions used by every run
You are a WebPPL code generator. Given an exercise, produce a single WebPPL program that binds the answer to a top-level variable named `ANSWER`.

Answer format (strict): emit exactly one fenced code block.

```js
<your WebPPL program ending with: var ANSWER = <expression>;>
```

The last statement of your program MUST be `var ANSWER = <expression>;` where `<expression>` is the answer the prompt asks for - typically an `Infer({...}, model)` for a distribution, a numeric/array value, or an object literal `{key: value, ...}` for a record of multiple sub-answers.

Do not write prose, explanations, or multiple code blocks. Do not use `return` at the top level - WebPPL doesn't allow it (return is only for function bodies).
system WebPPL primer appended to system on `*-primer-*` runs only
WebPPL is a probabilistic programming language with JavaScript-like syntax. A few quirks to remember:

Control flow: WebPPL is a functional subset of JavaScript. There are no `for` or `while` loops. Iterate with `map(fn, list)`, `mapData({data: list}, fn)`, `repeat(n, fn)`, or recursion. `_.range(start, end)` produces integer ranges.

Functions: define with `var f = function(args) { ...; return value; }`. The last expression of a function is auto-returned only if it is a bare expression statement; otherwise use explicit `return`. Memoize with `mem(fn)` so repeated calls with the same arguments return the same value within an inference run.

Random primitives - lowercase samples directly, uppercase constructs a Distribution object. They are NOT interchangeable:

  flip(p)            -> boolean             (sample, no `sample()` needed)
  uniform(a, b)      -> number              (sample)
  gaussian(mu, sg)   -> number              (sample)
  beta(a, b)         -> number              (sample)
  dirichlet(alpha)   -> tensor              (sample; alpha must be a Vector)
  randomInteger(n)   -> int 0..n-1          (sample)
  uniformDrift({a, b, width})        -> sample  (drift kernel; do NOT wrap in sample())
  dirichletDrift({alpha, conc.})     -> sample  (drift kernel; do NOT wrap in sample())

Distribution *constructors* (used with `sample(D)`, `observe(D, val)`, or as return value of inference):
  Bernoulli({p})              Beta({a, b})              Gaussian({mu, sigma})
  Uniform({a, b})             Categorical({vs, ps})     Binomial({p, n})
  Dirichlet({alpha})          Multinomial({ps, n})      Poisson({mu})

Common gotchas:
- `Dirichlet({alpha: ...})` requires a Vector, not a JS array. Use `ones([n, 1])` or `Vector([1, 1, ...])`.
- WebPPL only supports `var`, not `let` / `const`.
- WebPPL is *single-assignment*: `var X = ...;` only. You can't declare `var X;` and assign later, and you can't reassign `X = ...` after the declaration. Use ternaries or recursion to express conditional bindings.
- Array methods like `.fill`, `.indexOf`, `.map`, `.forEach`, `.concat` may fail. Prefer `repeat(n, fn)`, `_.indexOf(arr, x)`, `map(fn, arr)`, `mapData({data: arr}, fn)`, and `arr1.concat(arr2)` only at the top of a returned expression.
- Always end top-level statements with `;`. WebPPL inherits JS ASI rules, so `var x = f()
[a, b]` parses as `var x = f()[a, b]` (subscript), not two statements.

Inference: `Infer({method: ..., samples: N, ...}, modelFn)` runs `modelFn` under the chosen method and returns a Distribution over its return value. Methods: `'enumerate'`, `'rejection'`, `'forward'`, `'MCMC'`, `'SMC'`. For MCMC, optional kernels: `kernel: {HMC: {steps, stepSize}}`; the drift kernels above can replace `uniform`/`dirichlet` calls inside the model.

Conditioning: `condition(bool)` zeros out worlds where bool is false. `observe(dist, value)` factors in `dist.score(value)`. `factor(score)` adds `score` to the log-probability directly.

Tensors / utilities: `Vector([a, b, ...])`, `T.get(vec, i)`, `ones([rows, cols])`, `_.range`, `_.flatten`, `_.zipObject`, `_.fromPairs`, `_.includes`, `_.parseInt`, `_.uniq`, `_.merge`. Arrays use `Array.isArray`, `Object.keys` works on plain objects.

Display: there's no `viz`, `print`, `display`, etc. that affects the answer - those are browser-only. The answer is the value of your program's last expression.
user message
Heart-shaped implicit curve: a point (x, y) is *on the curve* if x^2 + (y - x^(2/3))^2 - 1 is within 0.01 of 0. The reference model draws x and y from independent Gaussians around the bounding box center, and conditions on onCurve(x, y). Helpers (include them or equivalent in your program):
var onCurve = function(x, y) {
  var x2 = x*x;
  var term1 = y - Math.pow(x2, 1/3);
  var crossSection = x2 + term1*term1 - 1;
  return Math.abs(crossSection) < 0.01;
};
var xbounds = [-1, 1];
var ybounds = [-1, 1.6];
var xmu = 0.5 * (xbounds[0] + xbounds[1]);
var ymu = 0.5 * (ybounds[0] + ybounds[1]);
var xsigma = 0.5 * (xbounds[1] - xbounds[0]);
var ysigma = 0.5 * (ybounds[1] - ybounds[0]);
Change the model to draw x and y *jointly* via a diagCovGaussian centered at (xmu, ymu) with diagonal covariance (xsigma, ysigma), so MH MCMC successfully traces the curve. Use T.get to extract the x and y components of the sample. End your program with Infer({method: 'MCMC', samples: 1000, lag: 100}, model) returning the posterior over {x, y}.
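As a plain-JavaScript sanity check (outside WebPPL) that the onCurve predicate really picks out the heart-shaped level set, a few hand-checked points:

```javascript
// Same predicate as in the exercise, in plain JavaScript.
var onCurve = function(x, y) {
  var x2 = x * x;
  var term1 = y - Math.pow(x2, 1 / 3);
  var crossSection = x2 + term1 * term1 - 1;
  return Math.abs(crossSection) < 0.01;
};

// (0, 1): x2 = 0, term1 = 1, crossSection = 0 -> on the curve.
console.log(onCurve(0, 1));  // true
// (1, 1): x2 = 1, term1 = 0, crossSection = 0 -> on the curve.
console.log(onCurve(1, 1));  // true
// (0, 0): crossSection = -1 -> far from the curve.
console.log(onCurve(0, 0));  // false
```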
groundtruth code
var onCurve = function(x, y) {
  var x2 = x*x;
  var term1 = y - Math.pow(x2, 1/3);
  var crossSection = x2 + term1*term1 - 1;
  return Math.abs(crossSection) < 0.01;
};
var xbounds = [-1, 1];
var ybounds = [-1, 1.6];

var xmu = 0.5 * (xbounds[0] + xbounds[1]);
var ymu = 0.5 * (ybounds[0] + ybounds[1]);
var xsigma = 0.5 * (xbounds[1] - xbounds[0]);
var ysigma = 0.5 * (ybounds[1] - ybounds[0]);

var model = function() {
  var xy = diagCovGaussian({mu: Vector([xmu, ymu]),
                            sigma: Vector([xsigma, ysigma])});
  var x = T.get(xy, 0);
  var y = T.get(xy, 1);
  condition(onCurve(x, y));
  return {x: x, y: y};
};
var ANSWER = (Infer({method: 'MCMC',
       samples: 1000,
       lag: 100}, model));
generated code
sonnet-46-primer-v3 TV=0 haiku-45-noprimer-v3 fail haiku-45-primer-v3 TV<1 haiku-45-think-noprimer-v3 fail haiku-45-think-primer-v3 TV=0 sonnet-46-noprimer-v3 TV=0 sonnet-46-think-noprimer-v3 TV=0 sonnet-46-think-primer-v3 TV=0
var onCurve = function(x, y) {
  var x2 = x*x;
  var term1 = y - Math.pow(x2, 1/3);
  var crossSection = x2 + term1*term1 - 1;
  return Math.abs(crossSection) < 0.01;
};

var xbounds = [-1, 1];
var ybounds = [-1, 1.6];
var xmu = 0.5 * (xbounds[0] + xbounds[1]);
var ymu = 0.5 * (ybounds[0] + ybounds[1]);
var xsigma = 0.5 * (xbounds[1] - xbounds[0]);
var ysigma = 0.5 * (ybounds[1] - ybounds[0]);

var model = function() {
  var mu = Vector([xmu, ymu]);
  var sigma = Vector([xsigma, ysigma]);
  var xy = sample(DiagCovGaussian({mu: mu, sigma: sigma}));
  var x = T.get(xy, 0);
  var y = T.get(xy, 1);
  condition(onCurve(x, y));
  return {x: x, y: y};
};

var ANSWER = Infer({method: 'MCMC', samples: 1000, lag: 100}, model);
Error: Parameter "mu" should be of type "tensor".
var onCurve = function(x, y) {
  var x2 = x*x;
  var term1 = y - Math.pow(x2, 1/3);
  var crossSection = x2 + term1*term1 - 1;
  return Math.abs(crossSection) < 0.01;
};

var xbounds = [-1, 1];
var ybounds = [-1, 1.6];
var xmu = 0.5 * (xbounds[0] + xbounds[1]);
var ymu = 0.5 * (ybounds[0] + ybounds[1]);
var xsigma = 0.5 * (xbounds[1] - xbounds[0]);
var ysigma = 0.5 * (ybounds[1] - ybounds[0]);

var model = function() {
  var sample = diagCovGaussian({mu: [xmu, ymu], sigma: [xsigma, ysigma]});
  var x = T.get(sample, 0);
  var y = T.get(sample, 1);
  condition(onCurve(x, y));
  return {x: x, y: y};
};

var ANSWER = Infer({method: 'MCMC', samples: 1000, lag: 100}, model);
var onCurve = function(x, y) {
  var x2 = x*x;
  var term1 = y - Math.pow(x2, 1/3);
  var crossSection = x2 + term1*term1 - 1;
  return Math.abs(crossSection) < 0.01;
};

var xbounds = [-1, 1];
var ybounds = [-1, 1.6];
var xmu = 0.5 * (xbounds[0] + xbounds[1]);
var ymu = 0.5 * (ybounds[0] + ybounds[1]);
var xsigma = 0.5 * (xbounds[1] - xbounds[0]);
var ysigma = 0.5 * (ybounds[1] - ybounds[0]);

var diagCovGaussian = function(mu, sigma) {
  // mu and sigma are 2-element arrays [mu_x, mu_y] and [sigma_x, sigma_y]
  var x = gaussian(mu[0], sigma[0]);
  var y = gaussian(mu[1], sigma[1]);
  return Vector([x, y]);
};

var model = function() {
  var sample = diagCovGaussian([xmu, ymu], [xsigma, ysigma]);
  var x = T.get(sample, 0);
  var y = T.get(sample, 1);
  condition(onCurve(x, y));
  return {x: x, y: y};
};

var ANSWER = Infer({method: 'MCMC', samples: 1000, lag: 100}, model);
Error: Parameter "mu" should be of type "tensor".
var onCurve = function(x, y) {
  var x2 = x*x;
  var term1 = y - Math.pow(x2, 1/3);
  var crossSection = x2 + term1*term1 - 1;
  return Math.abs(crossSection) < 0.01;
};

var xbounds = [-1, 1];
var ybounds = [-1, 1.6];
var xmu = 0.5 * (xbounds[0] + xbounds[1]);
var ymu = 0.5 * (ybounds[0] + ybounds[1]);
var xsigma = 0.5 * (xbounds[1] - xbounds[0]);
var ysigma = 0.5 * (ybounds[1] - ybounds[0]);

var model = function() {
  var point = diagCovGaussian({mu: [xmu, ymu], sigma: [xsigma, ysigma]});
  var x = T.get(point, 0);
  var y = T.get(point, 1);
  
  condition(onCurve(x, y));
  
  return {x: x, y: y};
};

var ANSWER = Infer({method: 'MCMC', samples: 1000, lag: 100}, model);
var onCurve = function(x, y) {
  var x2 = x*x;
  var term1 = y - Math.pow(x2, 1/3);
  var crossSection = x2 + term1*term1 - 1;
  return Math.abs(crossSection) < 0.01;
};

var xbounds = [-1, 1];
var ybounds = [-1, 1.6];
var xmu = 0.5 * (xbounds[0] + xbounds[1]);
var ymu = 0.5 * (ybounds[0] + ybounds[1]);
var xsigma = 0.5 * (xbounds[1] - xbounds[0]);
var ysigma = 0.5 * (ybounds[1] - ybounds[0]);

var model = function() {
  var sample = diagCovGaussian({
    mu: Vector([xmu, ymu]),
    sigma: Vector([xsigma, ysigma])
  });
  
  var x = T.get(sample, 0);
  var y = T.get(sample, 1);
  
  condition(onCurve(x, y));
  
  return {x: x, y: y};
};

var ANSWER = Infer({method: 'MCMC', samples: 1000, lag: 100}, model);
var onCurve = function(x, y) {
  var x2 = x*x;
  var term1 = y - Math.pow(x2, 1/3);
  var crossSection = x2 + term1*term1 - 1;
  return Math.abs(crossSection) < 0.01;
};
var xbounds = [-1, 1];
var ybounds = [-1, 1.6];
var xmu = 0.5 * (xbounds[0] + xbounds[1]);
var ymu = 0.5 * (ybounds[0] + ybounds[1]);
var xsigma = 0.5 * (xbounds[1] - xbounds[0]);
var ysigma = 0.5 * (ybounds[1] - ybounds[0]);

var model = function() {
  var mu = Vector([xmu, ymu]);
  var sigma = Vector([xsigma, ysigma]);
  var sample = sample(DiagCovGaussian({mu: mu, sigma: sigma}));
  var x = T.get(sample, 0);
  var y = T.get(sample, 1);
  condition(onCurve(x, y));
  return {x: x, y: y};
};

var ANSWER = Infer({method: 'MCMC', samples: 1000, lag: 100}, model);
var onCurve = function(x, y) {
  var x2 = x * x;
  var term1 = y - Math.pow(x2, 1/3);
  var crossSection = x2 + term1 * term1 - 1;
  return Math.abs(crossSection) < 0.01;
};

var xbounds = [-1, 1];
var ybounds = [-1, 1.6];
var xmu = 0.5 * (xbounds[0] + xbounds[1]);
var ymu = 0.5 * (ybounds[0] + ybounds[1]);
var xsigma = 0.5 * (xbounds[1] - xbounds[0]);
var ysigma = 0.5 * (ybounds[1] - ybounds[0]);

var model = function() {
  var mu = Vector([xmu, ymu]);
  var sigma = Vector([xsigma, ysigma]);
  var sample = sample(DiagCovGaussian({mu: mu, sigma: sigma}));
  var x = T.get(sample, 0);
  var y = T.get(sample, 1);
  condition(onCurve(x, y));
  return {x: x, y: y};
};

var ANSWER = Infer({method: 'MCMC', samples: 1000, lag: 100}, model);
var onCurve = function(x, y) {
  var x2 = x * x;
  var term1 = y - Math.pow(x2, 1/3);
  var crossSection = x2 + term1 * term1 - 1;
  return Math.abs(crossSection) < 0.01;
};

var xbounds = [-1, 1];
var ybounds = [-1, 1.6];
var xmu = 0.5 * (xbounds[0] + xbounds[1]);
var ymu = 0.5 * (ybounds[0] + ybounds[1]);
var xsigma = 0.5 * (xbounds[1] - xbounds[0]);
var ysigma = 0.5 * (ybounds[1] - ybounds[0]);

var model = function() {
  var mu = Vector([xmu, ymu]);
  var sigma = Vector([xsigma, ysigma]);
  var xy = sample(DiagCovGaussian({mu: mu, sigma: sigma}));
  var x = T.get(xy, 0);
  var y = T.get(xy, 1);
  condition(onCurve(x, y));
  return {x: x, y: y};
};

var ANSWER = Infer({method: 'MCMC', samples: 1000, lag: 100}, model);
groundtruth output
{"x":-0.3459221389446751,"y":1.431399…0.0110
{"x":-0.9824182899254787,"y":0.800142…0.0090
{"x":-1.0005872894491943,"y":0.940362…0.0090
{"x":-0.8471059233387676,"y":1.433703…0.0090
{"x":-0.12323521471768298,"y":-0.7477…0.0080
{"x":-0.7812140378472965,"y":0.230533…0.0080
{"x":0.06194358124675301,"y":-0.83830…0.0080
{"x":-0.8390600676647956,"y":0.345258…0.0080
{"x":-0.102321222065728,"y":1.2093495…0.0070
{"x":-0.9997017087622113,"y":1.075232…0.0070
{"x":-0.9949849967270745,"y":1.042097…0.0070
{"x":0.963374899295324,"y":0.69366849…0.0070
… 393 more
raw JSON
{"__kind": "distribution", "probs": [...]} (full probability array truncated in source)
feedback
not signed
probmods2-inference-algorithms/ex1.3 distribution 8 runs TV=0.00 TV=0
prompt
system base instructions used by every run
(identical to the base instructions shown for the previous exercise)
system WebPPL primer appended to system on `*-primer-*` runs only
(identical to the WebPPL primer shown for the previous exercise)
user message
Heart-shaped implicit curve: a point (x, y) is *on the curve* if x^2 + (y - x^(2/3))^2 - 1 is within 0.01 of 0. The reference model draws x and y from independent Gaussians around the bounding box center, and conditions on onCurve(x, y). Helpers (include them or equivalent in your program):
var onCurve = function(x, y) {
  var x2 = x*x;
  var term1 = y - Math.pow(x2, 1/3);
  var crossSection = x2 + term1*term1 - 1;
  return Math.abs(crossSection) < 0.01;
};
var xbounds = [-1, 1];
var ybounds = [-1, 1.6];
var xmu = 0.5 * (xbounds[0] + xbounds[1]);
var ymu = 0.5 * (ybounds[0] + ybounds[1]);
var xsigma = 0.5 * (xbounds[1] - xbounds[0]);
var ysigma = 0.5 * (ybounds[1] - ybounds[0]);
Using the original independent-Gaussians model, use HMC instead of MH. End your program with Infer({method: 'MCMC', samples: 10000, kernel: {HMC: {steps: 10, stepSize: .5}}}, model) returning the posterior over {x, y}.
groundtruth code
var onCurve = function(x, y) {
  var x2 = x*x;
  var term1 = y - Math.pow(x2, 1/3);
  var crossSection = x2 + term1*term1 - 1;
  return Math.abs(crossSection) < 0.01;
};
var xbounds = [-1, 1];
var ybounds = [-1, 1.6];

var xmu = 0.5 * (xbounds[0] + xbounds[1]);
var ymu = 0.5 * (ybounds[0] + ybounds[1]);
var xsigma = 0.5 * (xbounds[1] - xbounds[0]);
var ysigma = 0.5 * (ybounds[1] - ybounds[0]);

var model = function() {
  var x = gaussian(xmu, xsigma);
  var y = gaussian(ymu, ysigma);
  condition(onCurve(x, y));
  return {x: x, y: y};
};
var ANSWER = (Infer({method: 'MCMC',
       samples: 10000,
       kernel: {HMC : { steps: 10, stepSize: .5 }} }, model));
generated code
sonnet-46-primer-v3 TV=0 haiku-45-noprimer-v3 TV=0 haiku-45-primer-v3 TV=0 haiku-45-think-noprimer-v3 TV=0 haiku-45-think-primer-v3 TV=0 sonnet-46-noprimer-v3 TV=0 sonnet-46-think-noprimer-v3 TV=0 sonnet-46-think-primer-v3 TV=0
var onCurve = function(x, y) {
  var x2 = x*x;
  var term1 = y - Math.pow(x2, 1/3);
  var crossSection = x2 + term1*term1 - 1;
  return Math.abs(crossSection) < 0.01;
};
var xbounds = [-1, 1];
var ybounds = [-1, 1.6];
var xmu = 0.5 * (xbounds[0] + xbounds[1]);
var ymu = 0.5 * (ybounds[0] + ybounds[1]);
var xsigma = 0.5 * (xbounds[1] - xbounds[0]);
var ysigma = 0.5 * (ybounds[1] - ybounds[0]);

var model = function() {
  var x = gaussian(xmu, xsigma);
  var y = gaussian(ymu, ysigma);
  condition(onCurve(x, y));
  return {x: x, y: y};
};

var ANSWER = Infer({method: 'MCMC', samples: 10000, kernel: {HMC: {steps: 10, stepSize: .5}}}, model);
(Four more runs emitted the same program, two of them writing the step size as 0.5 rather than .5.)
var onCurve = function(x, y) {
  var x2 = x*x;
  var term1 = y - Math.pow(x2, 1/3);
  var crossSection = x2 + term1*term1 - 1;
  return Math.abs(crossSection) < 0.01;
};
var xbounds = [-1, 1];
var ybounds = [-1, 1.6];
var xmu = 0.5 * (xbounds[0] + xbounds[1]);
var ymu = 0.5 * (ybounds[0] + ybounds[1]);
var xsigma = 0.5 * (xbounds[1] - xbounds[0]);
var ysigma = 0.5 * (ybounds[1] - ybounds[0]);

var model = function() {
  var x = gaussian(xmu, xsigma);
  var y = gaussian(ymu, ysigma);
  condition(onCurve(x, y));
  return {x: x, y: y};
};

var ANSWER = Infer({method: 'MCMC', samples: 10000, kernel: {HMC: {steps: 10, stepSize: .5}}}, model);
var onCurve = function(x, y) {
  var x2 = x*x;
  var term1 = y - Math.pow(x2, 1/3);
  var crossSection = x2 + term1*term1 - 1;
  return Math.abs(crossSection) < 0.01;
};

var xbounds = [-1, 1];
var ybounds = [-1, 1.6];
var xmu = 0.5 * (xbounds[0] + xbounds[1]);
var ymu = 0.5 * (ybounds[0] + ybounds[1]);
var xsigma = 0.5 * (xbounds[1] - xbounds[0]);
var ysigma = 0.5 * (ybounds[1] - ybounds[0]);

var model = function() {
  var x = gaussian(xmu, xsigma);
  var y = gaussian(ymu, ysigma);
  condition(onCurve(x, y));
  return {x: x, y: y};
};

var ANSWER = Infer({method: 'MCMC', samples: 10000, kernel: {HMC: {steps: 10, stepSize: .5}}}, model);
var onCurve = function(x, y) {
  var x2 = x*x;
  var term1 = y - Math.pow(x2, 1/3);
  var crossSection = x2 + term1*term1 - 1;
  return Math.abs(crossSection) < 0.01;
};

var xbounds = [-1, 1];
var ybounds = [-1, 1.6];
var xmu = 0.5 * (xbounds[0] + xbounds[1]);
var ymu = 0.5 * (ybounds[0] + ybounds[1]);
var xsigma = 0.5 * (xbounds[1] - xbounds[0]);
var ysigma = 0.5 * (ybounds[1] - ybounds[0]);

var model = function() {
  var x = gaussian(xmu, xsigma);
  var y = gaussian(ymu, ysigma);
  condition(onCurve(x, y));
  return {x: x, y: y};
};

var ANSWER = Infer({method: 'MCMC', samples: 10000, kernel: {HMC: {steps: 10, stepSize: .5}}}, model);
groundtruth output
{"x":-0.9341766753389376,"y":1.318423…0.0789
{"x":-0.11641543440502738,"y":1.23436…0.0544
{"x":-0.01461927707274957,"y":1.05829…0.0479
{"x":-0.46789574224724206,"y":-0.2851…0.0417
{"x":0.61502140545566,"y":-0.06142812…0.0404
{"x":0.9052867005160372,"y":0.5110251…0.0378
{"x":0.6844386712270067,"y":0.0408111…0.0354
{"x":-0.9152179672132673,"y":0.549999…0.0346
{"x":-0.23549781617292087,"y":-0.5868…0.0337
{"x":0.995704610756105,"y":0.99776104…0.0335
{"x":0.3308823423736242,"y":-0.462900…0.0320
{"x":-0.9452414883519986,"y":0.645468…0.0307
… 45 more
raw JSON
{
  "__kind": "distribution",
  "probs": [
    0.009099999999999997,
    0.015200000000000002,
    0.03779999999999999,
    0.03350000000000001,
    0.03540000000000001,
    0.0159,
    0.017699999999999994,
    0.005199999999999999,
    0.012799999999999997,
    0.0789,
    0.033699999999999994,
    0.004699999999999999,
    0.047899999999999984,
    0.004699999999999999,
    0.0005999999999999997,
    0.0020000000000000005,
    0.0417,
    0.015500000000000003,
    0.0003000000000000001,
    0.0071,
    0.006299999999999999,
    0.004699999999999999,
    0.023000000000000007,
    0.026499999999999996,
    0.0023,
    0.008899999999999998,
    0.0276,
    0.0028000000000000013,
    0.014100000000000003,
    0.0269,
    0.015099999999999999,
    0.0186,
    0.0032999999999999987,
    0.030700000000000005,
    0.021199999999999997,
    0.014499999999999997,
    0.004199999999999999,
    0.0025000000000000005,
    0.00010000000000000009,
    0.027799999999999995,
    0.029600000000000005,
    0.03200000000000001,
    0.0037999999999999996,
    0.0186,
    0.010100000000000003,
    0.0010000000000000002,
    0.006499999999999998,
    0.002999999999999999,
    0.0346,
    0.0404,
    0.006599999999999998,
    0.024799999999999996,
    0.004900000000000002,
    0.012299999999999995,
    0.0104,
    0.006200000000000001,
    0.054400000000000004
  ],
  "support": [
    {
      "x": -0.5386686298110793,
      "y": -0.18290368835938287
    },
    {
      "x": -0.8865107000090767,
      "y": 0.4541219387088812
    },
    {
      "x": 0.9052867005160372,
      "y": 0.5110251608400044
    },
    {
      "x": 0.995704610756105,
      "y": 0.9977610480655436
    },
    {
      "x": 0.6844386712270067,
      "y": 0.04081118345106344
    },
    {
      "x": -0.7491448100405761,
      "y": 1.49044543606978
    },
    {
      "x": -0.164137674933637,
      "y": -0.6889009504724839
    },
    {
      "x": -0.168419071742342,
      "y": 1.2945454000976286
    },
    {
      "x": -0.5036787891662713,
      "y": -0.23192607148368072
    },
    {
      "x": -0.9341766753389376,
      "y": 1.3184236011577293
    },
    {
      "x": -0.23549781617292087,
      "y": -0.5868926125446496
    },
    {
      "x": 0.9978838972337464,
      "y": 1.0279998674710635
    },
    {
      "x": -0.01461927707274957,
      "y": 1.0582985474543665
    },
    {
      "x": 0.5612898880777135,
      "y": 1.5050044431593748
    },
    {
      "x": 0.3945473887494416,
      "y": -0.3828286714874139
    },
    {
      "x": -0.6745960693294226,
      "y": 0.028509710001000443
    },
    {
      "x": -0.46789574224724206,
      "y": -0.28517754175634813
    },
    {
      "x": -0.9665372516149988,
      "y": 0.7369013669074493
    },
    {
      "x": 0.40392316623418223,
      "y": -0.36626849066172373
    },
    {
      "x": 0.373395508499323,
      "y": 1.4512145889711618
    },
    {
      "x": 0.9700426010907026,
      "y": 0.7357226947036823
    },
    {
      "x": 0.88672628844336,
      "y": 1.375000627933844
    },
    {
      "x": -0.7193871863907841,
      "y": 0.11254128487213477
    },
    {
      "x": 0.8087036067546571,
      "y": 0.2733927498772759
    },
    {
      "x": 0.49734907302781817,
      "y": 1.4909752700153733
    },
    {
      "x": -0.9226187401062613,
      "y": 0.5697136740983553
    },
    {
      "x": -0.7395257861496647,
      "y": 0.1486250343698654
    },
    {
      "x": 0.8110649490079591,
      "y": 0.28713630631992404
    },
    {
      "x": 0.5895863406719727,
      "y": -0.10241563187990538
    },
    {
      "x": -0.9303159081126877,
      "y": 0.5950334680713287
    },
    {
      "x": 0.22620565304228918,
      "y": -0.60023529094916
    },
    {
      "x": -0.6556463064156958,
      "y": 1.5043714010996867
    },
    {
      "x": -0.987571009569439,
      "y": 0.8625860215204055
    },
    {
      "x": -0.9452414883519986,
      "y": 0.6454687154321894
    },
    {
      "x": -0.36382445784055695,
      "y": 1.4454802391440977
 

... (1662 more chars truncated)
feedback
not signed
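All eight runs above recover the heart-shaped implicit curve by conditioning Gaussian proposals on `onCurve` and running MCMC with an HMC kernel. As a sanity check outside WebPPL, the same conditional can be approximated by plain rejection sampling in ordinary JavaScript — a sketch only: uniform proposals over the exercise's bounding box stand in for the Gaussian priors, and the 0.01 tolerance is copied from `onCurve`.

```javascript
// Plain-Node sketch (not WebPPL): rejection sampling as a stand-in for the
// MCMC+HMC inference in the runs above. Propose (x, y) uniformly over the
// bounding box and keep points whose implicit-curve residual is within tolerance.
const onCurve = (x, y) => {
  const x2 = x * x;
  const t = y - Math.pow(x2, 1 / 3);
  return Math.abs(x2 + t * t - 1) < 0.01;   // same test as the WebPPL onCurve
};

const sampleOnCurve = () => {
  for (;;) {
    const x = -1 + 2 * Math.random();       // xbounds = [-1, 1]
    const y = -1 + 2.6 * Math.random();     // ybounds = [-1, 1.6]
    if (onCurve(x, y)) return { x: x, y: y };
  }
};

const pts = Array.from({ length: 100 }, sampleOnCurve);
```

Acceptance is low (the curve band covers on the order of 1% of the box area), so rejection is only practical for small sample counts — the HMC kernel in the runs above exists precisely to avoid that waste.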
probmods2-inference-algorithms/ex2.1 record(point2, interpolationWeight) 8 runs TV=0.00 TV=0
prompt
system base instructions used by every run
You are a WebPPL code generator. Given an exercise, produce a single WebPPL program that binds the answer to a top-level variable named `ANSWER`.

Answer format (strict): emit exactly one fenced code block.

```js
<your WebPPL program ending with: var ANSWER = <expression>;>
```

The last statement of your program MUST be `var ANSWER = <expression>;` where `<expression>` is the answer the prompt asks for - typically an `Infer({...}, model)` for a distribution, a numeric/array value, or an object literal `{key: value, ...}` for a record of multiple sub-answers.

Do not write prose, explanations, or multiple code blocks. Do not use `return` at the top level - WebPPL doesn't allow it (return is only for function bodies).
system WebPPL primer appended to system on `*-primer-*` runs only
WebPPL is a probabilistic programming language with JavaScript-like syntax. A few quirks to remember:

Control flow: WebPPL is a functional subset of JavaScript. There are no `for` or `while` loops. Iterate with `map(fn, list)`, `mapData({data: list}, fn)`, `repeat(n, fn)`, or recursion. `_.range(start, end)` produces integer ranges.

Functions: define with `var f = function(args) { ...; return value; }`. The last expression of a function is auto-returned only if it is a bare expression statement; otherwise use explicit `return`. Memoize with `mem(fn)` so repeated calls with the same arguments return the same value within an inference run.

Random primitives - lowercase samples directly, uppercase constructs a Distribution object. They are NOT interchangeable:

  flip(p)            -> boolean             (sample, no `sample()` needed)
  uniform(a, b)      -> number              (sample)
  gaussian(mu, sg)   -> number              (sample)
  beta(a, b)         -> number              (sample)
  dirichlet(alpha)   -> tensor              (sample; alpha must be a Vector)
  randomInteger(n)   -> int 0..n-1          (sample)
  uniformDrift({a, b, width})        -> sample  (drift kernel; do NOT wrap in sample())
  dirichletDrift({alpha, conc.})     -> sample  (drift kernel; do NOT wrap in sample())

Distribution *constructors* (used with `sample(D)`, `observe(D, val)`, or as return value of inference):
  Bernoulli({p})              Beta({a, b})              Gaussian({mu, sigma})
  Uniform({a, b})             Categorical({vs, ps})     Binomial({p, n})
  Dirichlet({alpha})          Multinomial({ps, n})      Poisson({mu})

Common gotchas:
- `Dirichlet({alpha: ...})` requires a Vector, not a JS array. Use `ones([n, 1])` or `Vector([1, 1, ...])`.
- WebPPL only supports `var`, not `let` / `const`.
- WebPPL is *single-assignment*: `var X = ...;` only. You can't declare `var X;` and assign later, and you can't reassign `X = ...` after the declaration. Use ternaries or recursion to express conditional bindings.
- Array methods like `.fill`, `.indexOf`, `.map`, `.forEach`, `.concat` may fail. Prefer `repeat(n, fn)`, `_.indexOf(arr, x)`, `map(fn, arr)`, `mapData({data: arr}, fn)`, and `arr1.concat(arr2)` only at the top of a returned expression.
- Always end top-level statements with `;`. WebPPL inherits JS ASI rules, so `var x = f()
[a, b]` parses as `var x = f()[a, b]` (subscript), not two statements.

Inference: `Infer({method: ..., samples: N, ...}, modelFn)` runs `modelFn` under the chosen method and returns a Distribution over its return value. Methods: `'enumerate'`, `'rejection'`, `'forward'`, `'MCMC'`, `'SMC'`. For MCMC, optional kernels: `kernel: {HMC: {steps, stepSize}}`; the drift kernels above can replace `uniform`/`dirichlet` calls inside the model.

Conditioning: `condition(bool)` zeros out worlds where bool is false. `observe(dist, value)` factors in `dist.score(value)`. `factor(score)` adds `score` to the log-probability directly.

Tensors / utilities: `Vector([a, b, ...])`, `T.get(vec, i)`, `ones([rows, cols])`, `_.range`, `_.flatten`, `_.zipObject`, `_.fromPairs`, `_.includes`, `_.parseInt`, `_.uniq`, `_.merge`. Arrays use `Array.isArray`, `Object.keys` works on plain objects.

Display: there's no `viz`, `print`, `display`, etc. that affects the answer - those are browser-only. The answer is the value of your program's last expression.
user message
Two-endpoint interpolation: point1 = -10 is fixed; point2 is uniform on [-100, 100]; interpolationWeight is uniform on [0, 1]; pointInMiddle = point1 * interpolationWeight + point2 * (1 - interpolationWeight). We observe(Gaussian({mu: 0, sigma: 0.1}), pointInMiddle). Helper:
var interpolate = function(point1, point2, interpolationWeight) {
  return point1 * interpolationWeight + point2 * (1 - interpolationWeight);
};
Run MCMC with samples: 5000, lag: 100. End your program with an object literal {point2, interpolationWeight} where each value is the marginal distribution of the corresponding latent (use marginalize).
groundtruth code
var interpolate = function(point1, point2, interpolationWeight) {
  return (point1 * interpolationWeight +
          point2 * (1 - interpolationWeight));
};

var model = function(){
  var point1 = -10;
  var point2 = uniform(-100, 100);
  var interpolationWeight = uniform(0, 1);
  var pointInMiddle = interpolate(point1, point2, interpolationWeight);
  observe(Gaussian({mu: 0, sigma:0.1}), pointInMiddle);
  return {point2, interpolationWeight, pointInMiddle};
};

var posterior = Infer({method: 'MCMC', samples: 5000, lag: 100}, model);
var ANSWER = (({
  point2: marginalize(posterior, function(x) {return x.point2}),
  interpolationWeight: marginalize(posterior, function(x) {return x.interpolationWeight})
}));
generated code
sonnet-46-primer-v3 TV=0 · haiku-45-noprimer-v3 TV=0 · haiku-45-primer-v3 TV=0 · haiku-45-think-noprimer-v3 TV=0 · haiku-45-think-primer-v3 TV=0 · sonnet-46-noprimer-v3 TV=0 · sonnet-46-think-noprimer-v3 TV=0 · sonnet-46-think-primer-v3 TV=0
var interpolate = function(point1, point2, interpolationWeight) {
  return point1 * interpolationWeight + point2 * (1 - interpolationWeight);
};

var model = function() {
  var point1 = -10;
  var point2 = uniform(-100, 100);
  var interpolationWeight = uniform(0, 1);
  var pointInMiddle = interpolate(point1, point2, interpolationWeight);
  observe(Gaussian({mu: 0, sigma: 0.1}), pointInMiddle);
  return {point2: point2, interpolationWeight: interpolationWeight};
};

var posterior = Infer({method: 'MCMC', samples: 5000, lag: 100}, model);

var ANSWER = {
  point2: marginalize(posterior, function(x) { return x.point2; }),
  interpolationWeight: marginalize(posterior, function(x) { return x.interpolationWeight; })
};
var interpolate = function(point1, point2, interpolationWeight) {
  return point1 * interpolationWeight + point2 * (1 - interpolationWeight);
};

var model = function() {
  var point1 = -10;
  var point2 = uniform(-100, 100);
  var interpolationWeight = uniform(0, 1);
  var pointInMiddle = interpolate(point1, point2, interpolationWeight);
  
  observe(Gaussian({mu: 0, sigma: 0.1}), pointInMiddle);
  
  return {point2: point2, interpolationWeight: interpolationWeight};
};

var posterior = Infer({method: 'MCMC', samples: 5000, lag: 100}, model);

var ANSWER = {
  point2: marginalize(posterior, 'point2'),
  interpolationWeight: marginalize(posterior, 'interpolationWeight')
};
var interpolate = function(point1, point2, interpolationWeight) {
  return point1 * interpolationWeight + point2 * (1 - interpolationWeight);
};

var model = function() {
  var point1 = -10;
  var point2 = uniform(-100, 100);
  var interpolationWeight = uniform(0, 1);
  var pointInMiddle = interpolate(point1, point2, interpolationWeight);
  
  observe(Gaussian({mu: 0, sigma: 0.1}), pointInMiddle);
  
  return {point2: point2, interpolationWeight: interpolationWeight};
};

var posterior = Infer({method: 'MCMC', samples: 5000, lag: 100}, model);

var ANSWER = {
  point2: marginalize(posterior, function(x) { return x.point2; }),
  interpolationWeight: marginalize(posterior, function(x) { return x.interpolationWeight; })
};
var interpolate = function(point1, point2, interpolationWeight) {
  return point1 * interpolationWeight + point2 * (1 - interpolationWeight);
};

var model = function() {
  var point1 = -10;
  var point2 = uniform(-100, 100);
  var interpolationWeight = uniform(0, 1);
  var pointInMiddle = interpolate(point1, point2, interpolationWeight);
  
  observe(Gaussian({mu: 0, sigma: 0.1}), pointInMiddle);
  
  return {point2: point2, interpolationWeight: interpolationWeight};
};

var posterior = Infer({method: 'MCMC', samples: 5000, lag: 100}, model);

var ANSWER = {
  point2: marginalize(posterior, 'point2'),
  interpolationWeight: marginalize(posterior, 'interpolationWeight')
};
var interpolate = function(point1, point2, interpolationWeight) {
  return point1 * interpolationWeight + point2 * (1 - interpolationWeight);
};

var model = function() {
  var point1 = -10;
  var point2 = uniform(-100, 100);
  var interpolationWeight = uniform(0, 1);
  var pointInMiddle = interpolate(point1, point2, interpolationWeight);
  
  observe(Gaussian({mu: 0, sigma: 0.1}), pointInMiddle);
  
  return {point2: point2, interpolationWeight: interpolationWeight};
};

var posterior = Infer(
  {method: 'MCMC', samples: 5000, lag: 100},
  model
);

var ANSWER = {
  point2: marginalize(posterior, function(x) { return x.point2; }),
  interpolationWeight: marginalize(posterior, function(x) { return x.interpolationWeight; })
};
var interpolate = function(point1, point2, interpolationWeight) {
  return point1 * interpolationWeight + point2 * (1 - interpolationWeight);
};

var model = function() {
  var point1 = -10;
  var point2 = uniform(-100, 100);
  var interpolationWeight = uniform(0, 1);
  var pointInMiddle = interpolate(point1, point2, interpolationWeight);
  observe(Gaussian({mu: 0, sigma: 0.1}), pointInMiddle);
  return {point2: point2, interpolationWeight: interpolationWeight};
};

var posterior = Infer({method: 'MCMC', samples: 5000, lag: 100}, model);

var ANSWER = {
  point2: marginalize(posterior, function(x) { return x.point2; }),
  interpolationWeight: marginalize(posterior, function(x) { return x.interpolationWeight; })
};
var interpolate = function(point1, point2, interpolationWeight) {
  return point1 * interpolationWeight + point2 * (1 - interpolationWeight);
};

var model = function() {
  var point1 = -10;
  var point2 = uniform(-100, 100);
  var interpolationWeight = uniform(0, 1);
  var pointInMiddle = interpolate(point1, point2, interpolationWeight);
  observe(Gaussian({mu: 0, sigma: 0.1}), pointInMiddle);
  return {point2: point2, interpolationWeight: interpolationWeight};
};

var posterior = Infer({method: 'MCMC', samples: 5000, lag: 100}, model);

var ANSWER = {
  point2: marginalize(posterior, function(x) { return x.point2; }),
  interpolationWeight: marginalize(posterior, function(x) { return x.interpolationWeight; })
};
var interpolate = function(point1, point2, interpolationWeight) {
  return point1 * interpolationWeight + point2 * (1 - interpolationWeight);
};

var model = function() {
  var point1 = -10;
  var point2 = uniform(-100, 100);
  var interpolationWeight = uniform(0, 1);
  var pointInMiddle = interpolate(point1, point2, interpolationWeight);
  observe(Gaussian({mu: 0, sigma: 0.1}), pointInMiddle);
  return {point2: point2, interpolationWeight: interpolationWeight};
};

var joint = Infer({method: 'MCMC', samples: 5000, lag: 100}, model);

var ANSWER = {
  point2: marginalize(joint, function(x) { return x.point2; }),
  interpolationWeight: marginalize(joint, function(x) { return x.interpolationWeight; })
};
groundtruth output
{
  "point2": {
    "__kind": "distribution",
    "probs": [
      0.0022000000000000045,
      0.0002000000000000002,
      0.002600000000000004,
      0.0012000000000000016,
      0.0002000000000000002,
      0.0008000000000000009,
      0.0008000000000000017,
      0.0014000000000000015,
      0.0004000000000000008,
      0.0008000000000000009,
      0.0008000000000000017,
      0.0006000000000000008,
      0.0002000000000000002,
      0.0002000000000000002,
      0.0008000000000000017,
      0.0002000000000000002,
      0.0002000000000000002,
      0.0002000000000000002,
      0.0004000000000000008,
      0.0006000000000000008,
      0.0002000000000000002,
      0.001000000000000002,
      0.0002000000000000002,
      0.002600000000000006,
      0.0004000000000000008,
      0.0002000000000000002,
      0.0006000000000000008,
      0.0002000000000000002,
      0.0002000000000000002,
      0.0002000000000000002,
      0.001000000000000002,
      0.0034000000000000067,
      0.0002000000000000002,
      0.0002000000000000002,
      0.0002000000000000002,
      0.0002000000000000002,
      0.001000000000000002,
      0.0002000000000000002,
      0.0004000000000000008,
      0.0004000000000000008,
      0.0008000000000000017,
      0.0006000000000000008,
      0.0002000000000000002,
      0.001000000000000001,
      0.001000000000000001,
      0.0002000000000000002,
      0.0002000000000000002,
      0.0006000000000000008,
      0.0004000000000000008,
      0.0004000000000000008,
      0.00040000000000000045,
      0.0004000000000000008,
      0.0008000000000000009,
      0.001800000000000003,
      0.0014000000000000015,
      0.00040000000000000045,
      0.0002000000000000002,
      0.0002000000000000002,
      0.0004000000000000008,
      0.0006000000000000008,
      0.0002000000000000002,
      0.0002000000000000002,
      0.0012000000000000016,
      0.0002000000000000002,
      0.0008000000000000017,
      0.0002000000000000002,
      0.0020000000000000026,
      0.0002000000000000002,
      0.0004000000000000008,
      0.0004000000000000008,
      0.0008000000000000017,
      0.0006000000000000008,
      0.00040000000000000045,
      0.001800000000000003,
      0.0002000000000000002,
      0.0008000000000000017,
      0.0002000000000000002,
      0.0008000000000000017,
      0.0002000000000000002,
      0.0002000000000000002,
      0.001000000000000002,
      0.0002000000000000002,
      0.0012000000000000016,
      0.0002000000000000002,
      0.0002000000000000002,
      0.001000000000000001,
      0.00040000000000000045,
      0.00040000000000000045,
      0.0002000000000000002,
      0.0002000000000000002,
      0.0002000000000000002,
      0.0002000000000000002,
      0.001000000000000002,
      0.00040000000000000045,
      0.0004000000000000008,
      0.0004000000000000008,
      0.0006000000000000008,
      0.0002000000000000002,
      0.0004000000000000008,
      0.0004000000000000008,
      0.0006000000000000008,
      0.00040000000000000045,
      0.001800000000000003,
      0.0004000000000000008,
      0.001000000000000001,
      0.0012000000000000027,
      0.0002000000000000002,
      0.0004000000000000008,
      0.0008000000000000009,
      0.0014000000000000028,
      0.0006000000000000008,
      0.001800000000000003,
      0.0008000000000000017,
      0.001000000000000002,
      0.0004000000000000008,
      0.0034000000000000067,
      0.0002000000000000002,
      0.0002000000000000002,
      0.0004000000000000008,
      0.0004000000000000008,
      0.0002000000000000002,
      0.0006000000000000008,
      0.0004000000000000008,
      0.0002000000000000002,
      0.0014000000000000015,
      0.0002000000000000002,
      0.0002000000000000002,
      0.00040000000000000045,
      0.0002000000000000002,
      0.001000000000000002,
      0.0006000000000000008,
      0.0002000000000000002,
      0.0002000000000000002,
      0.0002000000000000002,
      0.0004000000000000008,
      0.0002000000000000002,
    

... (176185 more chars truncated)
feedback
not signed
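For intuition about what `Infer({method: 'MCMC', samples: 5000, lag: 100}, model)` is doing in this exercise, the same posterior can be approximated with a hand-rolled random-walk Metropolis sampler in plain JavaScript — a sketch, not WebPPL: the proposal widths and chain length are arbitrary choices, and the log-likelihood drops the Gaussian normalizing constant since Metropolis only needs ratios.

```javascript
// Plain-Node sketch (not WebPPL): random-walk Metropolis for the
// two-endpoint interpolation posterior. point1 = -10 is fixed;
// point2 ~ Uniform(-100, 100); w ~ Uniform(0, 1);
// observe Gaussian(0, 0.1) at the interpolated midpoint.
const logLik = (point2, w) => {
  const mid = -10 * w + point2 * (1 - w);   // interpolate with point1 = -10
  return -0.5 * Math.pow(mid / 0.1, 2);     // Gaussian(0, 0.1) log-density, up to a constant
};

const inSupport = (point2, w) =>
  point2 >= -100 && point2 <= 100 && w >= 0 && w <= 1;

const mhSamples = (n, lag) => {
  let p2 = 0, w = 0.5, ll = logLik(p2, w);
  const out = [];
  for (let i = 0; i < n * lag; i++) {
    const p2n = p2 + (Math.random() - 0.5) * 2;    // proposal widths are arbitrary
    const wn = w + (Math.random() - 0.5) * 0.1;
    if (inSupport(p2n, wn)) {                      // out-of-support proposals are rejected
      const lln = logLik(p2n, wn);
      if (Math.log(Math.random()) < lln - ll) { p2 = p2n; w = wn; ll = lln; }
    }
    if (i % lag === lag - 1) out.push({ point2: p2, interpolationWeight: w });
  }
  return out;
};

const samples = mhSamples(2000, 10);
```

The chain concentrates near the ridge -10·w + point2·(1 - w) ≈ 0, i.e. point2 ≈ 10w/(1 - w): the joint is tightly constrained even though each marginal alone looks diffuse, which matches the spread-out `point2` marginal in the groundtruth output above.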
probmods2-inference-algorithms/ex2.2 distribution 8 runs TV=0.00 TV=0
prompt
system base instructions used by every run
system WebPPL primer appended to system on `*-primer-*` runs only
user message
Two-endpoint interpolation: point1 = -10 is fixed; point2 is uniform on [-100, 100]; interpolationWeight is uniform on [0, 1]; pointInMiddle = point1 * interpolationWeight + point2 * (1 - interpolationWeight). We observe(Gaussian({mu: 0, sigma: 0.1}), pointInMiddle). Helper:
var interpolate = function(point1, point2, interpolationWeight) {
  return point1 * interpolationWeight + point2 * (1 - interpolationWeight);
};
Run MCMC with samples: 5000, lag: 100. End your program with the joint marginal distribution over (point2, interpolationWeight) as a Distribution (use marginalize returning an object {point2, inter} for each sample).
groundtruth code
var interpolate = function(point1, point2, interpolationWeight) {
  return (point1 * interpolationWeight +
          point2 * (1 - interpolationWeight));
};

var model = function(){
  var point1 = -10;
  var point2 = uniform(-100, 100);
  var interpolationWeight = uniform(0, 1);
  var pointInMiddle = interpolate(point1, point2, interpolationWeight);
  observe(Gaussian({mu: 0, sigma:0.1}), pointInMiddle);
  return {point2, interpolationWeight, pointInMiddle};
};

var posterior = Infer({method: 'MCMC', samples: 5000, lag: 100}, model);
var ANSWER = (marginalize(posterior, function(x) {
  return {'point2': x.point2, 'inter': x.interpolationWeight};
}));
generated code
sonnet-46-primer-v3 TV=0 · haiku-45-noprimer-v3 TV&lt;.05 · haiku-45-primer-v3 TV&lt;.05 · haiku-45-think-noprimer-v3 TV&lt;.05 · haiku-45-think-primer-v3 TV&lt;.05 · sonnet-46-noprimer-v3 TV=0 · sonnet-46-think-noprimer-v3 TV=0 · sonnet-46-think-primer-v3 TV=0
var interpolate = function(point1, point2, interpolationWeight) {
  return point1 * interpolationWeight + point2 * (1 - interpolationWeight);
};

var model = function() {
  var point1 = -10;
  var point2 = uniform(-100, 100);
  var interpolationWeight = uniform(0, 1);
  var pointInMiddle = interpolate(point1, point2, interpolationWeight);
  observe(Gaussian({mu: 0, sigma: 0.1}), pointInMiddle);
  return {point2: point2, inter: interpolationWeight};
};

var dist = Infer({method: 'MCMC', samples: 5000, lag: 100}, model);

var ANSWER = marginalize(dist, function(x) { return {point2: x.point2, inter: x.inter}; });
var interpolate = function(point1, point2, interpolationWeight) {
  return point1 * interpolationWeight + point2 * (1 - interpolationWeight);
};

var model = function() {
  var point1 = -10;
  var point2 = uniform(-100, 100);
  var interpolationWeight = uniform(0, 1);
  var pointInMiddle = interpolate(point1, point2, interpolationWeight);
  
  observe(Gaussian({mu: 0, sigma: 0.1}), pointInMiddle);
  
  return {point2: point2, inter: interpolationWeight};
};

var posterior = Infer(
  {method: 'MCMC', samples: 5000, lag: 100},
  model
);

var ANSWER = posterior;
var interpolate = function(point1, point2, interpolationWeight) {
  return point1 * interpolationWeight + point2 * (1 - interpolationWeight);
};

var model = function() {
  var point1 = -10;
  var point2 = uniform(-100, 100);
  var interpolationWeight = uniform(0, 1);
  var pointInMiddle = interpolate(point1, point2, interpolationWeight);
  
  observe(Gaussian({mu: 0, sigma: 0.1}), pointInMiddle);
  
  return {point2: point2, inter: interpolationWeight};
};

var ANSWER = Infer({method: 'MCMC', samples: 5000, lag: 100}, model);
var interpolate = function(point1, point2, interpolationWeight) {
  return point1 * interpolationWeight + point2 * (1 - interpolationWeight);
};

var model = function() {
  var point1 = -10;
  var point2 = uniform(-100, 100);
  var interpolationWeight = uniform(0, 1);
  var pointInMiddle = interpolate(point1, point2, interpolationWeight);
  
  observe(Gaussian({mu: 0, sigma: 0.1}), pointInMiddle);
  
  return {point2: point2, inter: interpolationWeight};
};

var ANSWER = Infer({method: 'MCMC', samples: 5000, lag: 100}, model);
var interpolate = function(point1, point2, interpolationWeight) {
  return point1 * interpolationWeight + point2 * (1 - interpolationWeight);
};

var model = function() {
  var point1 = -10;
  var point2 = uniform(-100, 100);
  var interpolationWeight = uniform(0, 1);
  var pointInMiddle = interpolate(point1, point2, interpolationWeight);
  
  observe(Gaussian({mu: 0, sigma: 0.1}), pointInMiddle);
  
  return {point2: point2, inter: interpolationWeight};
};

var ANSWER = Infer({method: 'MCMC', samples: 5000, lag: 100}, model);
var interpolate = function(point1, point2, interpolationWeight) {
  return point1 * interpolationWeight + point2 * (1 - interpolationWeight);
};

var model = function() {
  var point1 = -10;
  var point2 = uniform(-100, 100);
  var interpolationWeight = uniform(0, 1);
  var pointInMiddle = interpolate(point1, point2, interpolationWeight);
  observe(Gaussian({mu: 0, sigma: 0.1}), pointInMiddle);
  return {point2: point2, inter: interpolationWeight};
};

var posterior = Infer({method: 'MCMC', samples: 5000, lag: 100}, model);

var ANSWER = marginalize(posterior, function(x) { return {point2: x.point2, inter: x.inter}; });
var interpolate = function(point1, point2, interpolationWeight) {
  return point1 * interpolationWeight + point2 * (1 - interpolationWeight);
};

var model = function() {
  var point1 = -10;
  var point2 = uniform(-100, 100);
  var interpolationWeight = uniform(0, 1);
  var pointInMiddle = interpolate(point1, point2, interpolationWeight);
  observe(Gaussian({mu: 0, sigma: 0.1}), pointInMiddle);
  return {point2: point2, inter: interpolationWeight};
};

var posterior = Infer({method: 'MCMC', samples: 5000, lag: 100}, model);

var ANSWER = marginalize(posterior, function(x) { return {point2: x.point2, inter: x.inter}; });
var interpolate = function(point1, point2, interpolationWeight) {
  return point1 * interpolationWeight + point2 * (1 - interpolationWeight);
};

var model = function() {
  var point1 = -10;
  var point2 = uniform(-100, 100);
  var interpolationWeight = uniform(0, 1);
  var pointInMiddle = interpolate(point1, point2, interpolationWeight);
  observe(Gaussian({mu: 0, sigma: 0.1}), pointInMiddle);
  return {point2: point2, inter: interpolationWeight};
};

var posterior = Infer({method: 'MCMC', samples: 5000, lag: 100}, model);

var ANSWER = marginalize(posterior, function(x) { return {point2: x.point2, inter: x.inter}; });
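Note on the `marginalize(dist, ...)` wrappers several runs use above: projecting onto the same fields the model already returns is a no-op; marginalization only changes the distribution when the projection merges support points. A small Python sketch of the operation (hypothetical helper, not WebPPL's implementation; numbers made up for illustration):

```python
from collections import defaultdict

def marginalize(dist, project):
    """Collapse a discrete distribution {outcome: prob} through a projection."""
    out = defaultdict(float)
    for value, prob in dist.items():
        out[project(value)] += prob
    return dict(out)

# Joint over (point2_bucket, inter_bucket) -- made-up probabilities.
joint = {("hi", "lo"): 0.25, ("hi", "hi"): 0.25, ("lo", "lo"): 0.5}

# Projecting onto one coordinate merges states...
assert marginalize(joint, lambda v: v[0]) == {"hi": 0.5, "lo": 0.5}
# ...while an identity projection leaves the joint unchanged,
# which is why the wrapped and unwrapped runs above score identically.
assert marginalize(joint, lambda v: v) == joint
```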
groundtruth output
{"point2":99.70626800036933,"inter":0…0.0024
{"point2":81.85141876567201,"inter":0…0.0020
{"point2":99.8310266518356,"inter":0.…0.0020
{"point2":45.8064167274317,"inter":0.…0.0018
{"point2":47.388737152716885,"inter":…0.0018
{"point2":64.20858538749977,"inter":0…0.0018
{"point2":72.23738707719014,"inter":0…0.0018
{"point2":99.39840537549048,"inter":0…0.0018
{"point2":47.94372803570422,"inter":0…0.0016
{"point2":45.02058863762417,"inter":0…0.0016
{"point2":48.508802015754334,"inter":…0.0016
{"point2":62.63398910039734,"inter":0…0.0016
… 2870 more
raw JSON
{
  "__kind": "distribution",
  "probs": [0.0010, 0.0012, 0.0002, 0.0008, 0.0012, 0.0006, …]
}

... (probs list truncated here; the original dump continues for 309614 more chars of per-support-point empirical probabilities on the order of 0.0002–0.0024)
feedback
not signed
probmods2-inference-algorithms/ex2.3 value 8 runs val+
prompt
system base instructions used by every run
You are a WebPPL code generator. Given an exercise, produce a single WebPPL program that binds the answer to a top-level variable named `ANSWER`.

Answer format (strict): emit exactly one fenced code block.

```js
<your WebPPL program ending with: var ANSWER = <expression>;>
```

The last statement of your program MUST be `var ANSWER = <expression>;` where `<expression>` is the answer the prompt asks for - typically an `Infer({...}, model)` for a distribution, a numeric/array value, or an object literal `{key: value, ...}` for a record of multiple sub-answers.

Do not write prose, explanations, or multiple code blocks. Do not use `return` at the top level - WebPPL doesn't allow it (return is only for function bodies).
system WebPPL primer appended to system on `*-primer-*` runs only
WebPPL is a probabilistic programming language with JavaScript-like syntax. A few quirks to remember:

Control flow: WebPPL is a functional subset of JavaScript. There are no `for` or `while` loops. Iterate with `map(fn, list)`, `mapData({data: list}, fn)`, `repeat(n, fn)`, or recursion. `_.range(start, end)` produces integer ranges.

Functions: define with `var f = function(args) { ...; return value; }`. The last expression of a function is auto-returned only if it is a bare expression statement; otherwise use explicit `return`. Memoize with `mem(fn)` so repeated calls with the same arguments return the same value within an inference run.

Random primitives - lowercase samples directly, uppercase constructs a Distribution object. They are NOT interchangeable:

  flip(p)            -> boolean             (sample, no `sample()` needed)
  uniform(a, b)      -> number              (sample)
  gaussian(mu, sg)   -> number              (sample)
  beta(a, b)         -> number              (sample)
  dirichlet(alpha)   -> tensor              (sample; alpha must be a Vector)
  randomInteger(n)   -> int 0..n-1          (sample)
  uniformDrift({a, b, width})        -> sample  (drift kernel; do NOT wrap in sample())
  dirichletDrift({alpha, conc.})     -> sample  (drift kernel; do NOT wrap in sample())

Distribution *constructors* (used with `sample(D)`, `observe(D, val)`, or as return value of inference):
  Bernoulli({p})              Beta({a, b})              Gaussian({mu, sigma})
  Uniform({a, b})             Categorical({vs, ps})     Binomial({p, n})
  Dirichlet({alpha})          Multinomial({ps, n})      Poisson({mu})

Common gotchas:
- `Dirichlet({alpha: ...})` requires a Vector, not a JS array. Use `ones([n, 1])` or `Vector([1, 1, ...])`.
- WebPPL only supports `var`, not `let` / `const`.
- WebPPL is *single-assignment*: `var X = ...;` only. You can't declare `var X;` and assign later, and you can't reassign `X = ...` after the declaration. Use ternaries or recursion to express conditional bindings.
- Array methods like `.fill`, `.indexOf`, `.map`, `.forEach`, `.concat` may fail. Prefer `repeat(n, fn)`, `_.indexOf(arr, x)`, `map(fn, arr)`, `mapData({data: arr}, fn)`, and `arr1.concat(arr2)` only at the top of a returned expression.
- Always end top-level statements with `;`. WebPPL inherits JS ASI rules, so `var x = f()
[a, b]` parses as `var x = f()[a, b]` (subscript), not two statements.

Inference: `Infer({method: ..., samples: N, ...}, modelFn)` runs `modelFn` under the chosen method and returns a Distribution over its return value. Methods: `'enumerate'`, `'rejection'`, `'forward'`, `'MCMC'`, `'SMC'`. For MCMC, optional kernels: `kernel: {HMC: {steps, stepSize}}`; the drift kernels above can replace `uniform`/`dirichlet` calls inside the model.

Conditioning: `condition(bool)` zeros out worlds where bool is false. `observe(dist, value)` factors in `dist.score(value)`. `factor(score)` adds `score` to the log-probability directly.

Tensors / utilities: `Vector([a, b, ...])`, `T.get(vec, i)`, `ones([rows, cols])`, `_.range`, `_.flatten`, `_.zipObject`, `_.fromPairs`, `_.includes`, `_.parseInt`, `_.uniq`, `_.merge`. Arrays use `Array.isArray`, `Object.keys` works on plain objects.

Display: there's no `viz`, `print`, `display`, etc. that affects the answer - those are browser-only. The answer is the value of your program's last expression.
user message
Two-endpoint interpolation: point1 = -10 is fixed; point2 is uniform on [-100, 100]; interpolationWeight is uniform on [0, 1]; pointInMiddle = point1 * interpolationWeight + point2 * (1 - interpolationWeight). We observe(Gaussian({mu: 0, sigma: 0.1}), pointInMiddle). Helper:
var interpolate = function(point1, point2, interpolationWeight) {
  return point1 * interpolationWeight + point2 * (1 - interpolationWeight);
};
Run MCMC with samples: 100, lag: 0. From posterior.samples, extract the array of pointInMiddle values in order. End your program with that array of length 100.
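Context for the lag-0 trace this exercise asks for: with `lag: 0` every Metropolis step is recorded, so a rejected proposal re-records the previous value, which is why the groundtruth trace contains long runs of identical numbers. A plain-Python sketch of that dynamic (prior-proposal Metropolis with illustrative names; this is not the harness's actual MCMC kernel):

```python
import math
import random

def interpolate(point1, point2, w):
    return point1 * w + point2 * (1 - w)

def log_gauss(x, mu=0.0, sigma=0.1):
    # log-density of Gaussian(mu, sigma) at x
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def mh_trace(n, seed=0):
    rng = random.Random(seed)
    # current state: (point2, interpolationWeight); proposals drawn from the prior
    state = (rng.uniform(-100, 100), rng.uniform(0, 1))
    cur_ll = log_gauss(interpolate(-10, *state))
    trace = []
    for _ in range(n):
        prop = (rng.uniform(-100, 100), rng.uniform(0, 1))
        prop_ll = log_gauss(interpolate(-10, *prop))
        # accept with probability min(1, exp(prop_ll - cur_ll));
        # a rejection keeps (and re-records) the current state
        if rng.random() < math.exp(min(0.0, prop_ll - cur_ll)):
            state, cur_ll = prop, prop_ll
        trace.append(interpolate(-10, *state))
    return trace

trace = mh_trace(100)
```

Because the likelihood is sharply peaked (sigma = 0.1) while proposals come from the broad prior, most proposals are rejected and consecutive trace entries repeat.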
groundtruth code
var interpolate = function(point1, point2, interpolationWeight) {
  return (point1 * interpolationWeight +
          point2 * (1 - interpolationWeight));
};

var model = function(){
  var point1 = -10;
  var point2 = uniform(-100, 100);
  var interpolationWeight = uniform(0, 1);
  var pointInMiddle = interpolate(point1, point2, interpolationWeight);
  observe(Gaussian({mu: 0, sigma:0.1}), pointInMiddle);
  return {point2, interpolationWeight, pointInMiddle};
};

var posterior = Infer({method: 'MCMC', samples: 100, lag: 0}, model);
var ANSWER = (map(function(d) { d["value"]["pointInMiddle"] }, posterior.samples));
generated code
sonnet-46-primer-v3 val+ haiku-45-noprimer-v3 val- haiku-45-primer-v3 val- haiku-45-think-noprimer-v3 val- haiku-45-think-primer-v3 val- sonnet-46-noprimer-v3 val+ sonnet-46-think-noprimer-v3 val+ sonnet-46-think-primer-v3 val+
var interpolate = function(point1, point2, interpolationWeight) {
  return point1 * interpolationWeight + point2 * (1 - interpolationWeight);
};

var point1 = -10;

var model = function() {
  var point2 = uniform(-100, 100);
  var interpolationWeight = uniform(0, 1);
  var pointInMiddle = interpolate(point1, point2, interpolationWeight);
  observe(Gaussian({mu: 0, sigma: 0.1}), pointInMiddle);
  return pointInMiddle;
};

var posterior = Infer({method: 'MCMC', samples: 100, lag: 0}, model);

var ANSWER = map(function(s) { return s.value; }, posterior.samples);
var interpolate = function(point1, point2, interpolationWeight) {
  return point1 * interpolationWeight + point2 * (1 - interpolationWeight);
};

var model = function() {
  var point1 = -10;
  var point2 = uniform(-100, 100);
  var interpolationWeight = uniform(0, 1);
  var pointInMiddle = interpolate(point1, point2, interpolationWeight);
  
  observe(Gaussian({mu: 0, sigma: 0.1}), pointInMiddle);
  
  return {
    point2: point2,
    interpolationWeight: interpolationWeight,
    pointInMiddle: pointInMiddle
  };
};

var posterior = Infer({
  method: 'MCMC',
  samples: 100,
  lag: 0
}, model);

var pointInMiddleValues = map(function(sample) {
  return sample.pointInMiddle;
}, posterior.samples);

var ANSWER = pointInMiddleValues;
var interpolate = function(point1, point2, interpolationWeight) {
  return point1 * interpolationWeight + point2 * (1 - interpolationWeight);
};

var model = function() {
  var point1 = -10;
  var point2 = uniform(-100, 100);
  var interpolationWeight = uniform(0, 1);
  var pointInMiddle = interpolate(point1, point2, interpolationWeight);
  
  observe(Gaussian({mu: 0, sigma: 0.1}), pointInMiddle);
  
  return {point2: point2, interpolationWeight: interpolationWeight, pointInMiddle: pointInMiddle};
};

var posterior = Infer({method: 'MCMC', samples: 100, lag: 0}, model);

var pointInMiddleValues = map(function(sample) {
  return sample.pointInMiddle;
}, posterior.samples);

var ANSWER = pointInMiddleValues;
var interpolate = function(point1, point2, interpolationWeight) {
  return point1 * interpolationWeight + point2 * (1 - interpolationWeight);
};

var model = function() {
  var point1 = -10;
  var point2 = uniform(-100, 100);
  var interpolationWeight = uniform(0, 1);
  var pointInMiddle = interpolate(point1, point2, interpolationWeight);
  
  observe(Gaussian({mu: 0, sigma: 0.1}), pointInMiddle);
  
  return {pointInMiddle: pointInMiddle};
};

var posterior = Infer({
  method: 'MCMC',
  samples: 100,
  lag: 0
}, model);

var pointInMiddleArray = map(function(sample) { return sample.pointInMiddle; }, posterior.samples);

var ANSWER = pointInMiddleArray;
var interpolate = function(point1, point2, interpolationWeight) {
  return point1 * interpolationWeight + point2 * (1 - interpolationWeight);
};

var posterior = Infer({method: 'MCMC', samples: 100, lag: 0}, function() {
  var point1 = -10;
  var point2 = uniform(-100, 100);
  var interpolationWeight = uniform(0, 1);
  var pointInMiddle = interpolate(point1, point2, interpolationWeight);
  
  observe(Gaussian({mu: 0, sigma: 0.1}), pointInMiddle);
  
  return {point2: point2, interpolationWeight: interpolationWeight, pointInMiddle: pointInMiddle};
});

var pointInMiddleValues = map(function(sample) {
  return sample.pointInMiddle;
}, posterior.samples);

var ANSWER = pointInMiddleValues;
var interpolate = function(point1, point2, interpolationWeight) {
  return point1 * interpolationWeight + point2 * (1 - interpolationWeight);
};

var model = function() {
  var point1 = -10;
  var point2 = uniform(-100, 100);
  var interpolationWeight = uniform(0, 1);
  var pointInMiddle = interpolate(point1, point2, interpolationWeight);
  observe(Gaussian({mu: 0, sigma: 0.1}), pointInMiddle);
  return pointInMiddle;
};

var posterior = Infer({method: 'MCMC', samples: 100, lag: 0}, model);

var ANSWER = map(function(s) { return s.value; }, posterior.samples);
var interpolate = function(point1, point2, interpolationWeight) {
  return point1 * interpolationWeight + point2 * (1 - interpolationWeight);
};

var model = function() {
  var point1 = -10;
  var point2 = uniform(-100, 100);
  var interpolationWeight = uniform(0, 1);
  var pointInMiddle = interpolate(point1, point2, interpolationWeight);
  observe(Gaussian({mu: 0, sigma: 0.1}), pointInMiddle);
  return { pointInMiddle: pointInMiddle };
};

var posterior = Infer({method: 'MCMC', samples: 100, lag: 0}, model);

var ANSWER = map(function(s) { return s.value.pointInMiddle; }, posterior.samples);
var interpolate = function(point1, point2, interpolationWeight) {
  return point1 * interpolationWeight + point2 * (1 - interpolationWeight);
};

var model = function() {
  var point1 = -10;
  var point2 = uniform(-100, 100);
  var interpolationWeight = uniform(0, 1);
  var pointInMiddle = interpolate(point1, point2, interpolationWeight);
  observe(Gaussian({mu: 0, sigma: 0.1}), pointInMiddle);
  return {pointInMiddle: pointInMiddle};
};

var posterior = Infer({method: 'MCMC', samples: 100, lag: 0}, model);

var ANSWER = map(function(s) { return s.value.pointInMiddle; }, posterior.samples);
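The val+/val- split above comes down to the shape of `posterior.samples` entries: each is a `{value, score}` record wrapping the model's return value (as the val+ runs and the groundtruth's `d["value"]["pointInMiddle"]` both assume), so `sample.pointInMiddle` reads a key the wrapper does not have. A minimal Python analogue with hypothetical data:

```python
# Each MCMC sample is a record wrapping the model's return value,
# not the return value itself (structure taken from the val+ runs above).
samples = [
    {"value": {"pointInMiddle": -6.11}, "score": 0.0},
    {"value": {"pointInMiddle": -1.79}, "score": 0.0},
]

# val+ style: go through the "value" wrapper first.
good = [s["value"]["pointInMiddle"] for s in samples]

# val- style: the wrapper itself has no "pointInMiddle" key.
bad = [s.get("pointInMiddle") for s in samples]

assert good == [-6.11, -1.79]
assert bad == [None, None]
```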
groundtruth output
[
  -15.698035390578585,
  -13.422023333841864,
  -6.114422351819618,
  -6.114422351819618,
  -6.114422351819618,
  -6.114422351819618,
  -6.114422351819618,
  -6.114422351819618,
  -6.114422351819618,
  -6.114422351819618,
  -6.114422351819618,
  -6.114422351819618,
  -6.114422351819618,
  -6.114422351819618,
  -6.114422351819618,
  -6.114422351819618,
  -6.114422351819618,
  -1.7865395148405865,
  -1.7865395148405865,
  -1.7865395148405865,
  -1.7865395148405865,
  -1.7865395148405865,
  -1.7865395148405865,
  -1.7865395148405865,
  -1.7865395148405865,
  -1.7865395148405865,
  -1.7865395148405865,
  -1.7865395148405865,
  -1.7865395148405865,
  -1.7865395148405865,
  -1.7865395148405865,
  -1.7865395148405865,
  -1.7865395148405865,
  -1.7865395148405865,
  -1.7865395148405865,
  -1.7865395148405865,
  -1.7865395148405865,
  -1.7865395148405865,
  -1.7865395148405865,
  -1.2616027448795748,
  -1.2616027448795748,
  -1.2616027448795748,
  -1.2616027448795748,
  -1.2616027448795748,
  -1.2616027448795748,
  -1.2616027448795748,
  -1.2616027448795748,
  -1.2616027448795748,
  -1.2616027448795748,
  -1.2616027448795748,
  -1.2616027448795748,
  -1.2616027448795748,
  -1.2616027448795748,
  -1.2616027448795748,
  -1.2616027448795748,
  -1.2616027448795748,
  -1.2616027448795748,
  -1.2616027448795748,
  -1.2616027448795748,
  -1.2616027448795748,
  -1.2616027448795748,
  -1.2616027448795748,
  -1.2616027448795748,
  -1.2616027448795748,
  -1.2616027448795748,
  -1.2616027448795748,
  -1.2616027448795748,
  -1.2616027448795748,
  -1.2616027448795748,
  -1.2616027448795748,
  -1.2616027448795748,
  -1.2616027448795748,
  -1.154888352957311,
  -1.154888352957311,
  -1.154888352957311,
  -1.154888352957311,
  -1.154888352957311,
  -1.154888352957311,
  -1.154888352957311,
  -1.154888352957311,
  -1.154888352957311,
  -1.154888352957311,
  -1.154888352957311,
  -1.154888352957311,
  -1.154888352957311,
  -1.154888352957311,
  -1.154888352957311,
  -1.154888352957311,
  -1.154888352957311,
  -1.154888352957311,
  -1.154888352957311,
  -1.154888352957311,
  -1.154888352957311,
  -1.154888352957311,
  -1.154888352957311,
  -1.154888352957311,
  -1.154888352957311,
  -1.154888352957311,
  -1.154888352957311,
  -1.154888352957311
]
feedback
not signed
probmods2-inference-algorithms/ex2.4 distribution 8 runs TV=0.00 TV=0
prompt
system base instructions used by every run — identical to the base instructions quoted for ex2.3 above
system WebPPL primer appended to system on `*-primer-*` runs only — identical to the primer quoted for ex2.3 above
user message
Two-endpoint interpolation: point1 = -10 is fixed; point2 is uniform on [-100, 100]; interpolationWeight is uniform on [0, 1]; pointInMiddle = point1 * interpolationWeight + point2 * (1 - interpolationWeight). We observe(Gaussian({mu: 0, sigma: 0.1}), pointInMiddle). Helper:
var interpolate = function(point1, point2, interpolationWeight) {
  return point1 * interpolationWeight + point2 * (1 - interpolationWeight);
};
Rewrite this as rejection sampling. Convert the observe into a condition(Math.abs(pointInMiddle) < 0.01). End your program with Infer({method: 'rejection', samples: 1000}, model) returning the posterior over {point2, interpolationWeight, pointInMiddle}.
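The observe-to-condition rewrite this exercise asks for can be sketched in plain Python (helper names mirror the WebPPL ones; the loop is illustrative, not WebPPL's rejection implementation): sample from the priors, keep only draws where the interpolated point lands inside the tolerance window.

```python
import random

def interpolate(point1, point2, w):
    return point1 * w + point2 * (1 - w)

def rejection_posterior(n, tol=0.01, seed=0):
    rng = random.Random(seed)
    accepted = []
    while len(accepted) < n:
        point2 = rng.uniform(-100, 100)
        weight = rng.uniform(0, 1)
        mid = interpolate(-10, point2, weight)
        # condition(Math.abs(pointInMiddle) < 0.01): discard worlds
        # where the interpolated point misses the tolerance window
        if abs(mid) < tol:
            accepted.append({"point2": point2,
                             "interpolationWeight": weight,
                             "pointInMiddle": mid})
    return accepted

posterior = rejection_posterior(50)
```

Each accepted draw carries equal weight, which matches the uniform ~0.0010 probabilities in the groundtruth distribution; the acceptance rate is low (on the order of 1 in 4000), so the loop runs many iterations per kept sample.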
groundtruth code
var interpolate = function(point1, point2, interpolationWeight) {
  return (point1 * interpolationWeight +
          point2 * (1 - interpolationWeight));
};

var model = function(){
  var point1 = -10;
  var point2 = uniform(-100, 100);
  var interpolationWeight = uniform(0, 1);
  var pointInMiddle = interpolate(point1, point2, interpolationWeight);
  condition(Math.abs(pointInMiddle) < 0.01);
  return {point2, interpolationWeight, pointInMiddle};
};
var ANSWER = (Infer({method: 'rejection', samples: 1000}, model));
generated code
sonnet-46-primer-v3 TV=0 haiku-45-noprimer-v3 TV=0 haiku-45-primer-v3 TV=0 haiku-45-think-noprimer-v3 TV=0 haiku-45-think-primer-v3 TV=0 sonnet-46-noprimer-v3 TV=0 sonnet-46-think-noprimer-v3 TV=0 sonnet-46-think-primer-v3 TV=0
var interpolate = function(point1, point2, interpolationWeight) {
  return point1 * interpolationWeight + point2 * (1 - interpolationWeight);
};

var model = function() {
  var point1 = -10;
  var point2 = uniform(-100, 100);
  var interpolationWeight = uniform(0, 1);
  var pointInMiddle = interpolate(point1, point2, interpolationWeight);
  condition(Math.abs(pointInMiddle) < 0.01);
  return { point2: point2, interpolationWeight: interpolationWeight, pointInMiddle: pointInMiddle };
};

var ANSWER = Infer({method: 'rejection', samples: 1000}, model);
var interpolate = function(point1, point2, interpolationWeight) {
  return point1 * interpolationWeight + point2 * (1 - interpolationWeight);
};

var model = function() {
  var point1 = -10;
  var point2 = uniform(-100, 100);
  var interpolationWeight = uniform(0, 1);
  var pointInMiddle = interpolate(point1, point2, interpolationWeight);
  
  condition(Math.abs(pointInMiddle) < 0.01);
  
  return {
    point2: point2,
    interpolationWeight: interpolationWeight,
    pointInMiddle: pointInMiddle
  };
};

var ANSWER = Infer({method: 'rejection', samples: 1000}, model);
var interpolate = function(point1, point2, interpolationWeight) {
  return point1 * interpolationWeight + point2 * (1 - interpolationWeight);
};

var model = function() {
  var point1 = -10;
  var point2 = uniform(-100, 100);
  var interpolationWeight = uniform(0, 1);
  var pointInMiddle = interpolate(point1, point2, interpolationWeight);
  
  condition(Math.abs(pointInMiddle) < 0.01);
  
  return {
    point2: point2,
    interpolationWeight: interpolationWeight,
    pointInMiddle: pointInMiddle
  };
};

var ANSWER = Infer({method: 'rejection', samples: 1000}, model);
var interpolate = function(point1, point2, interpolationWeight) {
  return point1 * interpolationWeight + point2 * (1 - interpolationWeight);
};

var model = function() {
  var point1 = -10;
  var point2 = uniform(-100, 100);
  var interpolationWeight = uniform(0, 1);
  var pointInMiddle = interpolate(point1, point2, interpolationWeight);
  
  condition(Math.abs(pointInMiddle) < 0.01);
  
  return {
    point2: point2,
    interpolationWeight: interpolationWeight,
    pointInMiddle: pointInMiddle
  };
};

var ANSWER = Infer({method: 'rejection', samples: 1000}, model);
var interpolate = function(point1, point2, interpolationWeight) {
  return point1 * interpolationWeight + point2 * (1 - interpolationWeight);
};

var model = function() {
  var point1 = -10;
  var point2 = uniform(-100, 100);
  var interpolationWeight = uniform(0, 1);
  var pointInMiddle = interpolate(point1, point2, interpolationWeight);
  
  condition(Math.abs(pointInMiddle) < 0.01);
  
  return {point2: point2, interpolationWeight: interpolationWeight, pointInMiddle: pointInMiddle};
};

var ANSWER = Infer({method: 'rejection', samples: 1000}, model);
var interpolate = function(point1, point2, interpolationWeight) {
  return point1 * interpolationWeight + point2 * (1 - interpolationWeight);
};

var model = function() {
  var point1 = -10;
  var point2 = uniform(-100, 100);
  var interpolationWeight = uniform(0, 1);
  var pointInMiddle = interpolate(point1, point2, interpolationWeight);
  condition(Math.abs(pointInMiddle) < 0.01);
  return { point2: point2, interpolationWeight: interpolationWeight, pointInMiddle: pointInMiddle };
};

var ANSWER = Infer({method: 'rejection', samples: 1000}, model);
var interpolate = function(point1, point2, interpolationWeight) {
  return point1 * interpolationWeight + point2 * (1 - interpolationWeight);
};

var model = function() {
  var point1 = -10;
  var point2 = uniform(-100, 100);
  var interpolationWeight = uniform(0, 1);
  var pointInMiddle = interpolate(point1, point2, interpolationWeight);

  condition(Math.abs(pointInMiddle) < 0.01);

  return { point2: point2, interpolationWeight: interpolationWeight, pointInMiddle: pointInMiddle };
};

var ANSWER = Infer({method: 'rejection', samples: 1000}, model);
var interpolate = function(point1, point2, interpolationWeight) {
  return point1 * interpolationWeight + point2 * (1 - interpolationWeight);
};

var model = function() {
  var point1 = -10;
  var point2 = uniform(-100, 100);
  var interpolationWeight = uniform(0, 1);
  var pointInMiddle = interpolate(point1, point2, interpolationWeight);

  condition(Math.abs(pointInMiddle) < 0.01);

  return { point2: point2, interpolationWeight: interpolationWeight, pointInMiddle: pointInMiddle };
};

var ANSWER = Infer({method: 'rejection', samples: 1000}, model);
groundtruth output
{"point2":51.07985169866842,"interpol…0.0010
{"point2":65.34802671366538,"interpol…0.0010
{"point2":17.771508698598964,"interpo…0.0010
{"point2":3.8200919430500164,"interpo…0.0010
{"point2":71.63150186885701,"interpol…0.0010
{"point2":8.83910678860353,"interpola…0.0010
{"point2":34.53431146877349,"interpol…0.0010
{"point2":20.16318804860458,"interpol…0.0010
{"point2":27.390703851439298,"interpo…0.0010
{"point2":4.654901291897829,"interpol…0.0010
{"point2":64.47853036464946,"interpol…0.0010
{"point2":18.51705076195411,"interpol…0.0010
… 988 more
raw JSON
{
  "__kind": "distribution",
  "probs": [
    0.0010000000000000002
    /* this value repeats for all 1000 entries: each accepted rejection
       sample carries probability 1/1000 */

... (165306 more chars truncated)
feedback
not signed
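The TV scores that bucket these runs are presumably total variation distances between a run's output distribution and the groundtruth distribution; assuming that standard definition, the metric itself is short (plain JavaScript):

```javascript
// Total variation distance between two discrete distributions represented
// as value -> probability maps: TV(P, Q) = (1/2) * sum_v |P(v) - Q(v)|.
function tvDistance(p, q) {
  const support = new Set([...Object.keys(p), ...Object.keys(q)]);
  let total = 0;
  for (const v of support) {
    total += Math.abs((p[v] || 0) - (q[v] || 0));
  }
  return total / 2;
}

tvDistance({ a: 0.5, b: 0.5 }, { a: 0.5, b: 0.5 }); // 0  (the TV=0 bucket)
tvDistance({ a: 1.0 }, { b: 1.0 });                 // 1  (disjoint support)
```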
probmods2-inference-algorithms/ex2.5 distribution 8 runs TV=0.00 TV=0
prompt
system base instructions used by every run
You are a WebPPL code generator. Given an exercise, produce a single WebPPL program that binds the answer to a top-level variable named `ANSWER`.

Answer format (strict): emit exactly one fenced code block.

```js
<your WebPPL program ending with: var ANSWER = <expression>;>
```

The last statement of your program MUST be `var ANSWER = <expression>;` where `<expression>` is the answer the prompt asks for - typically an `Infer({...}, model)` for a distribution, a numeric/array value, or an object literal `{key: value, ...}` for a record of multiple sub-answers.

Do not write prose, explanations, or multiple code blocks. Do not use `return` at the top level - WebPPL doesn't allow it (return is only for function bodies).
system WebPPL primer appended to system on `*-primer-*` runs only
WebPPL is a probabilistic programming language with JavaScript-like syntax. A few quirks to remember:

Control flow: WebPPL is a functional subset of JavaScript. There are no `for` or `while` loops. Iterate with `map(fn, list)`, `mapData({data: list}, fn)`, `repeat(n, fn)`, or recursion. `_.range(start, end)` produces integer ranges.

Functions: define with `var f = function(args) { ...; return value; }`. The last expression of a function is auto-returned only if it is a bare expression statement; otherwise use explicit `return`. Memoize with `mem(fn)` so repeated calls with the same arguments return the same value within an inference run.

Random primitives - lowercase samples directly, uppercase constructs a Distribution object. They are NOT interchangeable:

  flip(p)            -> boolean             (sample, no `sample()` needed)
  uniform(a, b)      -> number              (sample)
  gaussian(mu, sg)   -> number              (sample)
  beta(a, b)         -> number              (sample)
  dirichlet(alpha)   -> tensor              (sample; alpha must be a Vector)
  randomInteger(n)   -> int 0..n-1          (sample)
  uniformDrift({a, b, width})        -> sample  (drift kernel; do NOT wrap in sample())
  dirichletDrift({alpha, conc.})     -> sample  (drift kernel; do NOT wrap in sample())

Distribution *constructors* (used with `sample(D)`, `observe(D, val)`, or as return value of inference):
  Bernoulli({p})              Beta({a, b})              Gaussian({mu, sigma})
  Uniform({a, b})             Categorical({vs, ps})     Binomial({p, n})
  Dirichlet({alpha})          Multinomial({ps, n})      Poisson({mu})

Common gotchas:
- `Dirichlet({alpha: ...})` requires a Vector, not a JS array. Use `ones([n, 1])` or `Vector([1, 1, ...])`.
- WebPPL only supports `var`, not `let` / `const`.
- WebPPL is *single-assignment*: `var X = ...;` only. You can't declare `var X;` and assign later, and you can't reassign `X = ...` after the declaration. Use ternaries or recursion to express conditional bindings.
- Array methods like `.fill`, `.indexOf`, `.map`, `.forEach`, `.concat` may fail. Prefer `repeat(n, fn)`, `_.indexOf(arr, x)`, `map(fn, arr)`, `mapData({data: arr}, fn)`, and `arr1.concat(arr2)` only at the top of a returned expression.
- Always end top-level statements with `;`. WebPPL inherits JS ASI rules, so `var x = f()
[a, b]` parses as `var x = f()[a, b]` (subscript), not two statements.

Inference: `Infer({method: ..., samples: N, ...}, modelFn)` runs `modelFn` under the chosen method and returns a Distribution over its return value. Methods: `'enumerate'`, `'rejection'`, `'forward'`, `'MCMC'`, `'SMC'`. For MCMC, optional kernels: `kernel: {HMC: {steps, stepSize}}`; the drift kernels above can replace `uniform`/`dirichlet` calls inside the model.

Conditioning: `condition(bool)` zeros out worlds where bool is false. `observe(dist, value)` factors in `dist.score(value)`. `factor(score)` adds `score` to the log-probability directly.

Tensors / utilities: `Vector([a, b, ...])`, `T.get(vec, i)`, `ones([rows, cols])`, `_.range`, `_.flatten`, `_.zipObject`, `_.fromPairs`, `_.includes`, `_.parseInt`, `_.uniq`, `_.merge`. Arrays use `Array.isArray`, `Object.keys` works on plain objects.

Display: there's no `viz`, `print`, `display`, etc. that affects the answer - those are browser-only. The answer is the value of your program's last expression.
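The ASI gotcha in the primer is inherited JavaScript behavior, so it reproduces outside WebPPL; a minimal demonstration (names are illustrative):

```javascript
// A newline before `[` does not terminate the statement: the bracket is
// parsed as a subscript, and the comma inside it is the comma operator.
const f = () => ({ 0: 'zero', 2: 'two' });
const a = 0;
const b = 2;

const x = f()
[a, b]; // parses as f()[(a, b)], i.e. f()[2]

// x is 'two' -- one expression, not two statements.
```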
user message
Two-endpoint interpolation: point1 = -10 is fixed; point2 is uniform on [-100, 100]; interpolationWeight is uniform on [0, 1]; pointInMiddle = point1 * interpolationWeight + point2 * (1 - interpolationWeight). We observe(Gaussian({mu: 0, sigma: 0.1}), pointInMiddle). Helper:
var interpolate = function(point1, point2, interpolationWeight) {
  return point1 * interpolationWeight + point2 * (1 - interpolationWeight);
};
Replace point2's prior with a drift kernel: uniformDrift({a: -100, b: 100, width: 0.1}). End your program with Infer({method: 'MCMC', samples: 500}, model) returning the posterior over {point2, interpolationWeight, pointInMiddle}.
groundtruth code
var interpolate = function(point1, point2, interpolationWeight) {
  return (point1 * interpolationWeight +
          point2 * (1 - interpolationWeight));
};

var model = function(){
  var point1 = -10;
  var point2 = uniformDrift({a: -100, b: 100, width: .1});
  var interpolationWeight = uniform(0, 1);
  var pointInMiddle = interpolate(point1, point2, interpolationWeight);
  observe(Gaussian({mu: 0, sigma:0.1}), pointInMiddle);
  return {point2, interpolationWeight, pointInMiddle};
};
var ANSWER = (Infer({method: 'MCMC', samples: 500}, model));
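What uniformDrift buys (and costs) can be sketched in plain JavaScript as a random-walk Metropolis proposal; the exact WebPPL kernel semantics may differ, so the proposal width here is illustrative. With moves of at most ~0.1 per step on a [-100, 100] range, point2 barely travels in 500 samples, which is consistent with the groundtruth output further down concentrating near a single point2 value.

```javascript
// Plain-JS sketch of MCMC with a narrow random-walk ("drift") proposal on
// point2 and a fresh uniform proposal on interpolationWeight. Uniform priors
// cancel in the acceptance ratio, leaving only the Gaussian observation term.
const interpolate = (p1, p2, w) => p1 * w + p2 * (1 - w);
const logLik = (p2, w) => {
  const mid = interpolate(-10, p2, w);
  return -0.5 * Math.pow(mid / 0.1, 2); // Gaussian(0, 0.1) log-density, up to a constant
};

function mcmc(steps) {
  let p2 = 100 * (2 * Math.random() - 1); // initialize from the prior
  let w = Math.random();
  let ll = logLik(p2, w);
  const chain = [];
  for (let i = 0; i < steps; i++) {
    const p2New = p2 + 0.1 * (2 * Math.random() - 1); // drift, width ~0.1
    const wNew = Math.random();
    if (p2New >= -100 && p2New <= 100) {              // outside support -> reject
      const llNew = logLik(p2New, wNew);
      if (Math.log(Math.random()) < llNew - ll) {
        p2 = p2New; w = wNew; ll = llNew;
      }
    }
    chain.push({ point2: p2, interpolationWeight: w });
  }
  return chain;
}

const chain = mcmc(500);
// The drift proposal moves point2 by at most 0.1 per step, so over 500
// steps the chain stays within ~50 of wherever it was initialized.
```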
generated code
sonnet-46-primer-v3 TV=0 haiku-45-noprimer-v3 TV=0 haiku-45-primer-v3 TV=0 haiku-45-think-noprimer-v3 TV=0 haiku-45-think-primer-v3 TV=0 sonnet-46-noprimer-v3 fail sonnet-46-think-noprimer-v3 fail sonnet-46-think-primer-v3 TV=0
var interpolate = function(point1, point2, interpolationWeight) {
  return point1 * interpolationWeight + point2 * (1 - interpolationWeight);
};

var model = function() {
  var point1 = -10;
  var point2 = uniformDrift({a: -100, b: 100, width: 0.1});
  var interpolationWeight = uniform(0, 1);
  var pointInMiddle = interpolate(point1, point2, interpolationWeight);
  observe(Gaussian({mu: 0, sigma: 0.1}), pointInMiddle);
  return {point2: point2, interpolationWeight: interpolationWeight, pointInMiddle: pointInMiddle};
};

var ANSWER = Infer({method: 'MCMC', samples: 500}, model);
var interpolate = function(point1, point2, interpolationWeight) {
  return point1 * interpolationWeight + point2 * (1 - interpolationWeight);
};

var model = function() {
  var point1 = -10;
  var point2 = uniformDrift({a: -100, b: 100, width: 0.1});
  var interpolationWeight = uniform(0, 1);
  var pointInMiddle = interpolate(point1, point2, interpolationWeight);
  
  observe(Gaussian({mu: 0, sigma: 0.1}), pointInMiddle);
  
  return {
    point2: point2,
    interpolationWeight: interpolationWeight,
    pointInMiddle: pointInMiddle
  };
};

var ANSWER = Infer({method: 'MCMC', samples: 500}, model);
var interpolate = function(point1, point2, interpolationWeight) {
  return point1 * interpolationWeight + point2 * (1 - interpolationWeight);
};

var model = function() {
  var point1 = -10;
  var point2 = uniformDrift({a: -100, b: 100, width: 0.1});
  var interpolationWeight = uniform(0, 1);
  var pointInMiddle = interpolate(point1, point2, interpolationWeight);
  
  observe(Gaussian({mu: 0, sigma: 0.1}), pointInMiddle);
  
  return {
    point2: point2,
    interpolationWeight: interpolationWeight,
    pointInMiddle: pointInMiddle
  };
};

var ANSWER = Infer({method: 'MCMC', samples: 500}, model);
var interpolate = function(point1, point2, interpolationWeight) {
  return point1 * interpolationWeight + point2 * (1 - interpolationWeight);
};

var model = function() {
  var point1 = -10;
  var point2 = uniformDrift({a: -100, b: 100, width: 0.1});
  var interpolationWeight = uniform({a: 0, b: 1});
  var pointInMiddle = interpolate(point1, point2, interpolationWeight);
  
  observe(Gaussian({mu: 0, sigma: 0.1}), pointInMiddle);
  
  return {
    point2: point2,
    interpolationWeight: interpolationWeight,
    pointInMiddle: pointInMiddle
  };
};

var ANSWER = Infer({method: 'MCMC', samples: 500}, model);
var interpolate = function(point1, point2, interpolationWeight) {
  return point1 * interpolationWeight + point2 * (1 - interpolationWeight);
};

var model = function() {
  var point1 = -10;
  var point2 = uniformDrift({a: -100, b: 100, width: 0.1});
  var interpolationWeight = uniform(0, 1);
  var pointInMiddle = interpolate(point1, point2, interpolationWeight);
  
  observe(Gaussian({mu: 0, sigma: 0.1}), pointInMiddle);
  
  return {
    point2: point2,
    interpolationWeight: interpolationWeight,
    pointInMiddle: pointInMiddle
  };
};

var ANSWER = Infer({method: 'MCMC', samples: 500}, model);
ReferenceError: UniformDrift is not defined
var uniformDrift = function(params) {
  var a = params.a;
  var b = params.b;
  var width = params.width;
  var current = sample(Uniform({a: a, b: b}));
  return current;
};

var interpolate = function(point1, point2, interpolationWeight) {
  return point1 * interpolationWeight + point2 * (1 - interpolationWeight);
};

var model = function() {
  var point1 = -10;
  var point2 = sample(UniformDrift({a: -100, b: 100, width: 0.1}));
  var interpolationWeight = sample(UniformDrift({a: 0, b: 1, width: 0.1}));
  var pointInMiddle = interpolate(point1, point2, interpolationWeight);
  observe(Gaussian({mu: 0, sigma: 0.1}), pointInMiddle);
  return {point2: point2, interpolationWeight: interpolationWeight, pointInMiddle: pointInMiddle};
};

var ANSWER = Infer({method: 'MCMC', samples: 500}, model);
Error: sample() expected a distribution but received "-99.67316120640561".
var interpolate = function(point1, point2, interpolationWeight) {
  return point1 * interpolationWeight + point2 * (1 - interpolationWeight);
};

var model = function() {
  var point1 = -10;
  var point2 = sample(uniformDrift({a: -100, b: 100, width: 0.1}));
  var interpolationWeight = sample(Uniform({a: 0, b: 1}));
  var pointInMiddle = interpolate(point1, point2, interpolationWeight);
  observe(Gaussian({mu: 0, sigma: 0.1}), pointInMiddle);
  return {point2: point2, interpolationWeight: interpolationWeight, pointInMiddle: pointInMiddle};
};

var ANSWER = Infer({method: 'MCMC', samples: 500}, model);
var interpolate = function(point1, point2, interpolationWeight) {
  return point1 * interpolationWeight + point2 * (1 - interpolationWeight);
};

var model = function() {
  var point1 = -10;
  var point2 = uniformDrift({a: -100, b: 100, width: 0.1});
  var interpolationWeight = uniform(0, 1);
  var pointInMiddle = interpolate(point1, point2, interpolationWeight);
  observe(Gaussian({mu: 0, sigma: 0.1}), pointInMiddle);
  return {point2: point2, interpolationWeight: interpolationWeight, pointInMiddle: pointInMiddle};
};

var ANSWER = Infer({method: 'MCMC', samples: 500}, model);
groundtruth output
{"point2":-98.98399215093289,"interpo…0.0200
{"point2":-97.87181027253325,"interpo…0.0200
{"point2":-99.58134073447034,"interpo…0.0160
{"point2":-98.06739700899008,"interpo…0.0160
{"point2":-98.52620252417587,"interpo…0.0160
{"point2":-98.39795585955841,"interpo…0.0140
{"point2":-99.46569420687285,"interpo…0.0120
{"point2":-97.82868830271168,"interpo…0.0120
{"point2":-97.82328860510273,"interpo…0.0120
{"point2":-98.17687961667289,"interpo…0.0120
{"point2":-98.38217929955636,"interpo…0.0120
{"point2":-98.68526912369471,"interpo…0.0120
… 204 more
raw JSON
{
  "__kind": "distribution",
  "probs": [
    0.0020000000000000005,
    0.0020000000000000005,
    0.015999999999999997,
    0.005999999999999998,
    0.010000000000000005,
    0.004000000000000002,
    0.011999999999999997,
    0.010000000000000005,
    0.0020000000000000005,
    0.007999999999999997,
    0.005999999999999998,
    0.005999999999999998,
    0.004000000000000002,
    0.0020000000000000005,
    0.005999999999999998,
    0.004000000000000002,
    0.005999999999999998,
    0.007999999999999997,
    0.004000000000000002,
    0.0020000000000000005,
    0.004000000000000002,
    0.020000000000000004,
    0.004000000000000002,
    0.0020000000000000005,
    0.0020000000000000005,
    0.007999999999999997,
    0.005999999999999998,
    0.0020000000000000005,
    0.007999999999999997,
    0.007999999999999997,
    0.0020000000000000005,
    0.0020000000000000005,
    0.004000000000000002,
    0.010000000000000005,
    0.005999999999999998,
    0.0020000000000000005,
    0.007999999999999997,
    0.0020000000000000005,
    0.005999999999999998,
    0.0020000000000000005,
    0.0020000000000000005,
    0.005999999999999998,
    0.0020000000000000005,
    0.004000000000000002,
    0.004000000000000002,
    0.005999999999999998,
    0.0020000000000000005,
    0.0020000000000000005,
    0.007999999999999997,
    0.007999999999999997,
    0.004000000000000002,
    0.0020000000000000005,
    0.007999999999999997,
    0.004000000000000002,
    0.004000000000000002,
    0.005999999999999998,
    0.004000000000000002,
    0.0020000000000000005,
    0.007999999999999997,
    0.010000000000000005,
    0.0020000000000000005,
    0.015999999999999997,
    0.0020000000000000005,
    0.005999999999999998,
    0.007999999999999997,
    0.0020000000000000005,
    0.0020000000000000005,
    0.004000000000000002,
    0.004000000000000002,
    0.0020000000000000005,
    0.011999999999999997,
    0.005999999999999998,
    0.0020000000000000005,
    0.011999999999999997,
    0.0020000000000000005,
    0.020000000000000004,
    0.004000000000000002,
    0.004000000000000002,
    0.0020000000000000005,
    0.004000000000000002,
    0.005999999999999998,
    0.004000000000000002,
    0.0020000000000000005,
    0.004000000000000002,
    0.004000000000000002,
    0.0020000000000000005,
    0.0020000000000000005,
    0.004000000000000002,
    0.005999999999999998,
    0.0020000000000000005,
    0.010000000000000005,
    0.004000000000000002,
    0.0020000000000000005,
    0.011999999999999997,
    0.0020000000000000005,
    0.0020000000000000005,
    0.0020000000000000005,
    0.004000000000000002,
    0.004000000000000002,
    0.013999999999999995,
    0.0020000000000000005,
    0.0020000000000000005,
    0.0020000000000000005,
    0.007999999999999997,
    0.0020000000000000005,
    0.0020000000000000005,
    0.0020000000000000005,
    0.005999999999999998,
    0.004000000000000002,
    0.004000000000000002,
    0.004000000000000002,
    0.0020000000000000005,
    0.0020000000000000005,
    0.0020000000000000005,
    0.004000000000000002,
    0.0020000000000000005,
    0.0020000000000000005,
    0.0020000000000000005,
    0.0020000000000000005,
    0.004000000000000002,
    0.004000000000000002,
    0.0020000000000000005,
    0.005999999999999998,
    0.0020000000000000005,
    0.011999999999999997,
    0.0020000000000000005,
    0.0020000000000000005,
    0.0020000000000000005,
    0.0020000000000000005,
    0.0020000000000000005,
    0.004000000000000002,
    0.0020000000000000005,
    0.005999999999999998,
    0.005999999999999998,
    0.004000000000000002,
    0.004000000000000002,
    0.010000000000000005,
    0.0020000000000000005,
    0.005999999999999998,
    0.0020000000000000005,
    0.0020000000000000005,
    0.005999999999999998,
    0.0020000000000000005,
    0.0020000000000000005,
    0.004000000000000002,
    0.0020000000000000005,
    0.0020000000000000005,
    0.004000000000000002,
    0.005999999999999998,
    0.00200000000000

... (32158 more chars truncated)
feedback
not signed
probmods2-inference-algorithms/ex4.a distribution 8 runs TV=1.00 TV<1
prompt
user message
Implement a topic model with two latent topics over the vocabulary ['DNA', 'evolution', 'parsing', 'phonology']. For each word in each document, a topic is drawn from a per-document topic distribution, then a word is observed under that topic's distribution. Topic distributions over the vocabulary are drawn from Dirichlet(eta = ones); per-document topic distributions are drawn from Dirichlet(alpha = ones).
var vocabulary = ['DNA', 'evolution', 'parsing', 'phonology'];
var eta = ones([vocabulary.length, 1])
var numTopics = 2
var alpha = ones([numTopics, 1])

var corpus = /* 6 short documents */;

Infer({method: 'MCMC', samples: 200, lag: 50}, function() {
  var topics = repeat(numTopics, function() {
    return T.toScalars(dirichlet({alpha: eta}))
  })
  mapData({data: corpus}, function(doc) {
    // your code: per-doc topic dist, then per-word topic + observe
  })
  return topics
});
End with the Infer(...) returning the joint posterior over topic distributions.
groundtruth code
var vocabulary = ['DNA', 'evolution', 'parsing', 'phonology'];
var eta = ones([vocabulary.length, 1]);

var numTopics = 2;
var alpha = ones([numTopics, 1]);

var corpus = [
  'DNA evolution DNA evolution DNA evolution DNA evolution DNA evolution'.split(' '),
  'DNA evolution DNA evolution DNA evolution DNA evolution DNA evolution'.split(' '),
  'DNA evolution DNA evolution DNA evolution DNA evolution DNA evolution'.split(' '),
  'parsing phonology parsing phonology parsing phonology parsing phonology parsing phonology'.split(' '),
  'parsing phonology parsing phonology parsing phonology parsing phonology parsing phonology'.split(' '),
  'parsing phonology parsing phonology parsing phonology parsing phonology parsing phonology'.split(' ')
];
var ANSWER = (Infer({method: 'MCMC', samples: 200, lag: 50}, function() {
  var topics = repeat(numTopics, function() {
    return T.toScalars(dirichlet({alpha: eta}));
  });

  mapData({data: corpus}, function(doc) {
    var docTopicDist = dirichlet({alpha: alpha});
    mapData({data: doc}, function(word) {
      var z = discrete(docTopicDist);
      var topic = topics[z];
      observe(Categorical({vs: vocabulary, ps: topic}), word);
    });
  });
  return topics;
}));
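The generative story the groundtruth code conditions on can be run forward in plain JavaScript (illustrative helpers, not WebPPL; sampling Dirichlet(ones) via normalized exponentials is valid only because every concentration parameter is 1):

```javascript
// Forward-sample the topic model: two topic-word distributions, then per
// document a topic mixture, then per word a topic draw and a word draw.
const vocabulary = ['DNA', 'evolution', 'parsing', 'phonology'];
const numTopics = 2;

// Dirichlet with an all-ones alpha == normalized Exponential(1) draws.
const sampleDirichletOnes = (k) => {
  const g = Array.from({ length: k }, () => -Math.log(1 - Math.random()));
  const z = g.reduce((acc, x) => acc + x, 0);
  return g.map((x) => x / z);
};

const sampleCategorical = (ps) => {
  let u = Math.random();
  for (let i = 0; i < ps.length; i++) {
    u -= ps[i];
    if (u <= 0) return i;
  }
  return ps.length - 1;
};

const topics = Array.from({ length: numTopics }, () =>
  sampleDirichletOnes(vocabulary.length));

function generateDoc(nWords) {
  const docTopicDist = sampleDirichletOnes(numTopics); // per-document mixture
  return Array.from({ length: nWords }, () => {
    const z = sampleCategorical(docTopicDist);         // per-word topic
    return vocabulary[sampleCategorical(topics[z])];   // word under that topic
  });
}

const doc = generateDoc(10);
```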
generated code
sonnet-46-primer-v3 TV<1 haiku-45-noprimer-v3 fail haiku-45-primer-v3 TV<1 haiku-45-think-noprimer-v3 fail haiku-45-think-primer-v3 fail sonnet-46-noprimer-v3 TV<1 sonnet-46-think-noprimer-v3 TV<1 sonnet-46-think-primer-v3 TV<1
var vocabulary = ['DNA', 'evolution', 'parsing', 'phonology'];
var eta = ones([vocabulary.length, 1]);
var numTopics = 2;
var alpha = ones([numTopics, 1]);

var corpus = [
  ['DNA', 'evolution', 'DNA', 'evolution'],
  ['DNA', 'DNA', 'evolution', 'evolution'],
  ['evolution', 'DNA', 'evolution', 'DNA'],
  ['parsing', 'phonology', 'parsing', 'phonology'],
  ['phonology', 'parsing', 'phonology', 'parsing'],
  ['parsing', 'parsing', 'phonology', 'phonology']
];

var ANSWER = Infer({method: 'MCMC', samples: 200, lag: 50}, function() {
  var topics = repeat(numTopics, function() {
    return T.toScalars(dirichlet({alpha: eta}));
  });

  mapData({data: corpus}, function(doc) {
    var topicDist = T.toScalars(dirichlet({alpha: alpha}));
    mapData({data: doc}, function(word) {
      var topicIdx = sample(Categorical({vs: _.range(numTopics), ps: topicDist}));
      var wordDist = topics[topicIdx];
      observe(Categorical({vs: vocabulary, ps: wordDist}), word);
    });
  });

  return topics;
});
Error: Parameter "vs" missing from Categorical distribution.
var vocabulary = ['DNA', 'evolution', 'parsing', 'phonology'];
var eta = ones([vocabulary.length, 1]);
var numTopics = 2;
var alpha = ones([numTopics, 1]);

var corpus = [
  ['DNA', 'evolution'],
  ['DNA', 'evolution', 'DNA'],
  ['parsing', 'phonology'],
  ['parsing', 'phonology', 'parsing'],
  ['DNA', 'parsing'],
  ['evolution', 'phonology']
];

var ANSWER = Infer({method: 'MCMC', samples: 200, lag: 50}, function() {
  var topics = repeat(numTopics, function() {
    return T.toScalars(dirichlet({alpha: eta}));
  });
  
  mapData({data: corpus}, function(doc) {
    var docTopicDist = T.toScalars(dirichlet({alpha: alpha}));
    mapData({data: doc}, function(word) {
      var topic = categorical({ps: docTopicDist});
      var wordDist = topics[topic];
      var wordIndex = indexOf(word, vocabulary);
      observe(categorical({ps: wordDist}), wordIndex);
    });
  });
  
  return topics;
});
var vocabulary = ['DNA', 'evolution', 'parsing', 'phonology'];
var eta = ones([vocabulary.length, 1]);
var numTopics = 2;
var alpha = ones([numTopics, 1]);

var corpus = [
  ['DNA', 'evolution'],
  ['DNA', 'evolution', 'DNA'],
  ['parsing', 'phonology'],
  ['parsing', 'phonology', 'parsing'],
  ['DNA', 'parsing'],
  ['evolution', 'phonology']
];

var ANSWER = Infer({method: 'MCMC', samples: 200, lag: 50}, function() {
  var topics = repeat(numTopics, function() {
    return T.toScalars(dirichlet({alpha: eta}));
  });
  
  mapData({data: corpus}, function(doc) {
    var docTopicDist = T.toScalars(dirichlet({alpha: alpha}));
    mapData({data: doc}, function(word) {
      var wordIdx = _.indexOf(vocabulary, word);
      var topic = categorical({vs: _.range(0, numTopics), ps: docTopicDist});
      observe(Categorical({vs: _.range(0, vocabulary.length), ps: topics[topic]}), wordIdx);
    });
  });
  
  return topics;
});
Error: Parameter "vs" missing from Categorical distribution.
var vocabulary = ['DNA', 'evolution', 'parsing', 'phonology'];
var eta = ones([vocabulary.length, 1])
var numTopics = 2
var alpha = ones([numTopics, 1])

var corpus = [
  ['DNA', 'evolution'],
  ['DNA', 'evolution', 'DNA'],
  ['parsing', 'phonology'],
  ['parsing', 'phonology', 'parsing'],
  ['DNA', 'evolution', 'parsing'],
  ['phonology', 'evolution']
];

var ANSWER = Infer({method: 'MCMC', samples: 200, lag: 50}, function() {
  var topics = repeat(numTopics, function() {
    return T.toScalars(dirichlet({alpha: eta}))
  })
  mapData({data: corpus}, function(doc) {
    var docTopicDist = T.toScalars(dirichlet({alpha: alpha}))
    mapData({data: doc}, function(word) {
      var topic = categorical({ps: docTopicDist})
      var wordProbs = topics[topic]
      var wordIndex = vocabulary.indexOf(word)
      observe(categorical({ps: wordProbs}), wordIndex)
    })
  })
  return topics
});
Error: Parameter "vs" missing from Categorical distribution.
var vocabulary = ['DNA', 'evolution', 'parsing', 'phonology'];
var eta = ones([vocabulary.length, 1]);
var numTopics = 2;
var alpha = ones([numTopics, 1]);

var corpus = [
  ['DNA', 'evolution', 'DNA'],
  ['parsing', 'phonology', 'parsing'],
  ['DNA', 'evolution'],
  ['phonology', 'parsing', 'phonology'],
  ['DNA', 'DNA', 'evolution'],
  ['parsing', 'phonology']
];

var ANSWER = Infer({method: 'MCMC', samples: 200, lag: 50}, function() {
  var topics = repeat(numTopics, function() {
    return T.toScalars(dirichlet({alpha: eta}));
  });
  
  mapData({data: corpus}, function(doc) {
    var docTopicDist = T.toScalars(dirichlet({alpha: alpha}));
    mapData({data: doc}, function(word) {
      var topic = categorical({ps: docTopicDist});
      var wordDist = topics[topic];
      observe(Categorical({vs: vocabulary, ps: wordDist}), word);
    });
  });
  
  return topics;
});
var vocabulary = ['DNA', 'evolution', 'parsing', 'phonology'];
var eta = ones([vocabulary.length, 1]);
var numTopics = 2;
var alpha = ones([numTopics, 1]);

var corpus = [
  ['DNA', 'evolution', 'DNA', 'evolution'],
  ['DNA', 'DNA', 'evolution', 'evolution'],
  ['parsing', 'phonology', 'parsing', 'phonology'],
  ['parsing', 'parsing', 'phonology', 'phonology'],
  ['DNA', 'evolution', 'parsing', 'phonology'],
  ['DNA', 'parsing', 'evolution', 'phonology']
];

var ANSWER = Infer({method: 'MCMC', samples: 200, lag: 50}, function() {
  var topics = repeat(numTopics, function() {
    return T.toScalars(dirichlet({alpha: eta}));
  });

  mapData({data: corpus}, function(doc) {
    var topicDist = T.toScalars(dirichlet({alpha: alpha}));

    mapData({data: doc}, function(word) {
      var topicIdx = sample(Discrete({ps: topicDist}));
      var wordDist = topics[topicIdx];
      observe(Discrete({ps: wordDist}), vocabulary.indexOf(word));
    });
  });

  return topics;
});
var vocabulary = ['DNA', 'evolution', 'parsing', 'phonology'];
var eta = ones([vocabulary.length, 1]);
var numTopics = 2;
var alpha = ones([numTopics, 1]);

var corpus = [
  ['DNA', 'evolution', 'DNA', 'evolution'],
  ['DNA', 'DNA', 'evolution', 'evolution'],
  ['parsing', 'phonology', 'parsing', 'phonology'],
  ['parsing', 'parsing', 'phonology', 'phonology'],
  ['DNA', 'evolution', 'parsing', 'phonology'],
  ['phonology', 'phonology', 'evolution', 'DNA']
];

var ANSWER = Infer({method: 'MCMC', samples: 200, lag: 50}, function() {
  // Draw topic distributions over vocabulary
  var topics = repeat(numTopics, function() {
    return T.toScalars(dirichlet({alpha: eta}));
  });

  mapData({data: corpus}, function(doc) {
    // Per-document topic distribution
    var topicDist = T.toScalars(dirichlet({alpha: alpha}));

    mapData({data: doc}, function(word) {
      // Draw a topic for this word
      var topicIndex = sample(Discrete({ps: topicDist}));
      // Observe the word under the chosen topic's distribution
      var wordDist = topics[topicIndex];
      observe(Discrete({ps: wordDist}), vocabulary.indexOf(word));
    });
  });

  return topics;
});
var vocabulary = ['DNA', 'evolution', 'parsing', 'phonology'];
var eta = ones([vocabulary.length, 1]);
var numTopics = 2;
var alpha = ones([numTopics, 1]);

var corpus = [
  ['DNA', 'evolution', 'DNA', 'evolution'],
  ['DNA', 'DNA', 'evolution', 'evolution'],
  ['evolution', 'DNA', 'evolution', 'DNA'],
  ['parsing', 'phonology', 'parsing', 'phonology'],
  ['phonology', 'parsing', 'phonology', 'parsing'],
  ['parsing', 'parsing', 'phonology', 'phonology']
];

var ANSWER = Infer({method: 'MCMC', samples: 200, lag: 50}, function() {
  var topics = repeat(numTopics, function() {
    return T.toScalars(dirichlet({alpha: eta}));
  });

  mapData({data: corpus}, function(doc) {
    var topicDist = T.toScalars(dirichlet({alpha: alpha}));

    mapData({data: doc}, function(word) {
      var topicIdx = sample(Categorical({vs: _.range(numTopics), ps: topicDist}));
      var wordDist = topics[topicIdx];
      observe(Categorical({vs: vocabulary, ps: wordDist}), word);
    });
  });

  return topics;
});
groundtruth output
[[0.529040361125981,0.409199013329874…0.3300
[[0.529040361125981,0.409199013329874…0.3200
[[0.529040361125981,0.409199013329874…0.0900
[[0.38065299779822026,0.3299247163099…0.0750
[[0.529040361125981,0.409199013329874…0.0600
[[0.38065299779822026,0.3299247163099…0.0500
[[0.15295165262946586,0.1996916665470…0.0300
[[0.18333950267772844,0.1472573705984…0.0100
[[0.40737603013885243,0.2519686090192…0.0100
[[0.22164646964637275,0.5439470369448…0.0100
[[0.22164646964637275,0.5439470369448…0.0100
[[0.49041832661309387,0.2159043028496…0.0050
raw JSON
{
  "__kind": "distribution",
  "probs": [
    0.029999999999999995,
    0.010000000000000005,
    0.005000000000000002,
    0.010000000000000005,
    0.075,
    0.05,
    0.010000000000000005,
    0.010000000000000005,
    0.09000000000000002,
    0.32,
    0.33,
    0.06
  ],
  "support": [
    [
      [
        0.15295165262946586,
        0.1996916665470667,
        0.23072951452448973,
        0.4166271662989778
      ],
      [
        0.17788532690099823,
        0.33010599244279615,
        0.2608030402593208,
        0.2312056403968847
      ]
    ],
    [
      [
        0.18333950267772844,
        0.1472573705984025,
        0.2225101642134672,
        0.44689296251040195
      ],
      [
        0.17788532690099823,
        0.33010599244279615,
        0.2608030402593208,
        0.2312056403968847
      ]
    ],
    [
      [
        0.49041832661309387,
        0.2159043028496365,
        0.11907189681507446,
        0.1746054737221953
      ],
      [
        0.01345978513204631,
        0.10670211604040278,
        0.401535543997136,
        0.4783025548304149
      ]
    ],
    [
      [
        0.40737603013885243,
        0.251968609019243,
        0.16273733413183714,
        0.17791802671006754
      ],
      [
        0.01345978513204631,
        0.10670211604040278,
        0.401535543997136,
        0.4783025548304149
      ]
    ],
    [
      [
        0.38065299779822026,
        0.3299247163099222,
        0.15867752137875826,
        0.13074476451309933
      ],
      [
        0.01345978513204631,
        0.10670211604040278,
        0.401535543997136,
        0.4783025548304149
      ]
    ],
    [
      [
        0.38065299779822026,
        0.3299247163099222,
        0.15867752137875826,
        0.13074476451309933
      ],
      [
        0.050516006079644994,
        0.1871657361366196,
        0.3153632802687446,
        0.44695497751499075
      ]
    ],
    [
      [
        0.22164646964637275,
        0.5439470369448495,
        0.12435282908483955,
        0.11005366432393827
      ],
      [
        0.050516006079644994,
        0.1871657361366196,
        0.3153632802687446,
        0.44695497751499075
      ]
    ],
    [
      [
        0.22164646964637275,
        0.5439470369448495,
        0.12435282908483955,
        0.11005366432393827
      ],
      [
        0.12005996814605394,
        0.04857170560438741,
        0.364115972951276,
        0.46725235329828263
      ]
    ],
    [
      [
        0.529040361125981,
        0.40919901332987424,
        0.015925983115092242,
        0.0458346424290524
      ],
      [
        0.12005996814605394,
        0.04857170560438741,
        0.364115972951276,
        0.46725235329828263
      ]
    ],
    [
      [
        0.529040361125981,
        0.40919901332987424,
        0.015925983115092242,
        0.0458346424290524
      ],
      [
        0.04286695153882787,
        0.06330509482722826,
        0.37980715722634195,
        0.5140207964076019
      ]
    ],
    [
      [
        0.529040361125981,
        0.40919901332987424,
        0.015925983115092242,
        0.0458346424290524
      ],
      [
        0.03560363946649995,
        0.08741968262940499,
        0.4712660154274694,
        0.4057106624766257
      ]
    ],
    [
      [
        0.529040361125981,
        0.40919901332987424,
        0.015925983115092242,
        0.0458346424290524
      ],
      [
        0.04745342084000766,
        0.029594511457219735,
        0.5064967766446136,
        0.416455291058159
      ]
    ]
  ]
}
feedback
not signed
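All of the runs above encode the same LDA generative story: per-topic word distributions drawn from Dirichlet(eta), a per-document topic distribution drawn from Dirichlet(alpha), and one topic draw per word. As a language-neutral sanity check, here is a minimal plain-Python sketch of that forward process (helper names are ours; it simulates the prior only and does no inference):

```python
import random

def sample_dirichlet(alpha):
    # Dirichlet draw via normalized Gamma variates (stdlib only)
    draws = [random.gammavariate(a, 1.0) for a in alpha]
    total = sum(draws)
    return [d / total for d in draws]

def generate_corpus(vocabulary, num_topics, doc_lengths, seed=0):
    random.seed(seed)
    # topic -> distribution over words (eta = all-ones prior)
    topics = [sample_dirichlet([1.0] * len(vocabulary)) for _ in range(num_topics)]
    corpus = []
    for n in doc_lengths:
        # per-document distribution over topics (alpha = all-ones prior)
        topic_dist = sample_dirichlet([1.0] * num_topics)
        doc = []
        for _ in range(n):
            z = random.choices(range(num_topics), weights=topic_dist)[0]
            doc.append(random.choices(vocabulary, weights=topics[z])[0])
        corpus.append(doc)
    return corpus

vocab = ['DNA', 'evolution', 'parsing', 'phonology']
corpus = generate_corpus(vocab, num_topics=2, doc_lengths=[4] * 6)
```

The WebPPL programs invert exactly this process: `observe` scores each corpus word under the drawn topic, and MCMC returns the posterior over the `topics` matrices, which is what the groundtruth support above contains.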
learning-as-conditional-inference 3 atoms 2✓ · 0✗
probmods2-learning-as-conditional-inference/ex1.1 value 8 runs val+
prompt
system base instructions used by every run
You are a WebPPL code generator. Given an exercise, produce a single WebPPL program that binds the answer to a top-level variable named `ANSWER`.

Answer format (strict): emit exactly one fenced code block.

```js
<your WebPPL program ending with: var ANSWER = <expression>;>
```

The last statement of your program MUST be `var ANSWER = <expression>;` where `<expression>` is the answer the prompt asks for - typically an `Infer({...}, model)` for a distribution, a numeric/array value, or an object literal `{key: value, ...}` for a record of multiple sub-answers.

Do not write prose, explanations, or multiple code blocks. Do not use `return` at the top level - WebPPL doesn't allow it (return is only for function bodies).
system WebPPL primer appended to system on `*-primer-*` runs only
WebPPL is a probabilistic programming language with JavaScript-like syntax. A few quirks to remember:

Control flow: WebPPL is a functional subset of JavaScript. There are no `for` or `while` loops. Iterate with `map(fn, list)`, `mapData({data: list}, fn)`, `repeat(n, fn)`, or recursion. `_.range(start, end)` produces integer ranges.

Functions: define with `var f = function(args) { ...; return value; }`. The last expression of a function is auto-returned only if it is a bare expression statement; otherwise use explicit `return`. Memoize with `mem(fn)` so repeated calls with the same arguments return the same value within an inference run.

Random primitives - lowercase samples directly, uppercase constructs a Distribution object. They are NOT interchangeable:

  flip(p)            -> boolean             (sample, no `sample()` needed)
  uniform(a, b)      -> number              (sample)
  gaussian(mu, sg)   -> number              (sample)
  beta(a, b)         -> number              (sample)
  dirichlet(alpha)   -> tensor              (sample; alpha must be a Vector)
  randomInteger(n)   -> int 0..n-1          (sample)
  uniformDrift({a, b, width})        -> sample  (drift kernel; do NOT wrap in sample())
  dirichletDrift({alpha, conc.})     -> sample  (drift kernel; do NOT wrap in sample())

Distribution *constructors* (used with `sample(D)`, `observe(D, val)`, or as return value of inference):
  Bernoulli({p})              Beta({a, b})              Gaussian({mu, sigma})
  Uniform({a, b})             Categorical({vs, ps})     Binomial({p, n})
  Dirichlet({alpha})          Multinomial({ps, n})      Poisson({mu})

Common gotchas:
- `Dirichlet({alpha: ...})` requires a Vector, not a JS array. Use `ones([n, 1])` or `Vector([1, 1, ...])`.
- WebPPL only supports `var`, not `let` / `const`.
- WebPPL is *single-assignment*: `var X = ...;` only. You can't declare `var X;` and assign later, and you can't reassign `X = ...` after the declaration. Use ternaries or recursion to express conditional bindings.
- Array methods like `.fill`, `.indexOf`, `.map`, `.forEach`, `.concat` may fail. Prefer `repeat(n, fn)`, `_.indexOf(arr, x)`, `map(fn, arr)`, `mapData({data: arr}, fn)`, and `arr1.concat(arr2)` only at the top of a returned expression.
- Always end top-level statements with `;`. WebPPL inherits JS ASI rules, so `var x = f()
[a, b]` parses as `var x = f()[a, b]` (subscript), not two statements.

Inference: `Infer({method: ..., samples: N, ...}, modelFn)` runs `modelFn` under the chosen method and returns a Distribution over its return value. Methods: `'enumerate'`, `'rejection'`, `'forward'`, `'MCMC'`, `'SMC'`. For MCMC, optional kernels: `kernel: {HMC: {steps, stepSize}}`; the drift kernels above can replace `uniform`/`dirichlet` calls inside the model.

Conditioning: `condition(bool)` zeros out worlds where bool is false. `observe(dist, value)` factors in `dist.score(value)`. `factor(score)` adds `score` to the log-probability directly.

Tensors / utilities: `Vector([a, b, ...])`, `T.get(vec, i)`, `ones([rows, cols])`, `_.range`, `_.flatten`, `_.zipObject`, `_.fromPairs`, `_.includes`, `_.parseInt`, `_.uniq`, `_.merge`. Arrays use `Array.isArray`, `Object.keys` works on plain objects.

Display: there's no `viz`, `print`, `display`, etc. that affects the answer - those are browser-only. The answer is the value of your program's last expression.
user message
Given this 'fair-vs-uniform' coin model:
var weightPosterior = function(observedData){
  return Infer({method: 'MCMC', burn:1000, samples: 10000}, function() {
    var isFair = flip(0.9);
    var realWeight = isFair ? 0.5 : uniform({a:0, b:1});
    var coin = Bernoulli({p: realWeight});
    var obsFn = function(datum){ observe(coin, datum=='h') };
    mapData({data: observedData}, obsFn);
    return realWeight;
  })
}
This implies a two-faced coin and any other biased coin are equally likely. Adjust the model so that within the biased class (probability 0.1 in the prior), the coin is two-faced with probability 0.7 and otherwise uniform on (0, 1). End your program with var fullDataSet = repeat(50, function() { 'h' }); var observedDataSizes = [0,1,2,4,6,8,10,12,15,20,25,30,40,50]; map(function(N) { expectation(weightPosterior(fullDataSet.slice(0, N))) }, observedDataSizes) (an array of expected coin weights at each data size).
groundtruth code
var weightPosterior = function(observedData) {
  return Infer({method: 'MCMC', burn:1000, samples: 10000}, function() {
    var isFair = flip(0.9);
    var isTwoFaced = flip(0.7);
    var realWeight = isFair ? 0.5 : (isTwoFaced ? 1 : uniform({a:0, b:1}));
    var coin = Bernoulli({p: realWeight});
    var obsFn = function(datum) { observe(coin, datum=='h') };
    mapData({data: observedData}, obsFn);
    return realWeight;
  })
};

var fullDataSet = repeat(50, function() { 'h' });
var observedDataSizes = [0,1,2,4,6,8,10,12,15,20,25,30,40,50];
var ANSWER = (map(function(N) { expectation(weightPosterior(fullDataSet.slice(0, N))) }, observedDataSizes));
generated code
sonnet-46-primer-v3 val+ · haiku-45-noprimer-v3 val+ · haiku-45-primer-v3 val- · haiku-45-think-noprimer-v3 fail · haiku-45-think-primer-v3 val- · sonnet-46-noprimer-v3 val+ · sonnet-46-think-noprimer-v3 fail · sonnet-46-think-primer-v3 val-
var weightPosterior = function(observedData){
  return Infer({method: 'MCMC', burn:1000, samples: 10000}, function() {
    var isFair = flip(0.9);
    var realWeight = isFair ? 0.5 : (flip(0.7) ? 1.0 : uniform({a:0, b:1}));
    var coin = Bernoulli({p: realWeight});
    var obsFn = function(datum){ observe(coin, datum=='h') };
    mapData({data: observedData}, obsFn);
    return realWeight;
  })
}

var fullDataSet = repeat(50, function() { return 'h'; });
var observedDataSizes = [0,1,2,4,6,8,10,12,15,20,25,30,40,50];
var ANSWER = map(function(N) { expectation(weightPosterior(fullDataSet.slice(0, N))) }, observedDataSizes);
var weightPosterior = function(observedData){
  return Infer({method: 'MCMC', burn:1000, samples: 10000}, function() {
    var isFair = flip(0.9);
    var realWeight = isFair ? 0.5 : (flip(0.7) ? 1.0 : uniform({a:0, b:1}));
    var coin = Bernoulli({p: realWeight});
    var obsFn = function(datum){ observe(coin, datum=='h') };
    mapData({data: observedData}, obsFn);
    return realWeight;
  })
}

var fullDataSet = repeat(50, function() { 'h' });
var observedDataSizes = [0,1,2,4,6,8,10,12,15,20,25,30,40,50];
var ANSWER = map(function(N) { expectation(weightPosterior(fullDataSet.slice(0, N))) }, observedDataSizes);
var weightPosterior = function(observedData){
  return Infer({method: 'MCMC', burn:1000, samples: 10000}, function() {
    var isFair = flip(0.9);
    var realWeight = isFair ? 0.5 : (flip(0.7) ? 0.5 : uniform({a:0, b:1}));
    var coin = Bernoulli({p: realWeight});
    var obsFn = function(datum){ observe(coin, datum=='h') };
    mapData({data: observedData}, obsFn);
    return realWeight;
  })
}

var fullDataSet = repeat(50, function() { 'h' });
var observedDataSizes = [0,1,2,4,6,8,10,12,15,20,25,30,40,50];
var ANSWER = map(function(N) { expectation(weightPosterior(fullDataSet.slice(0, N))) }, observedDataSizes);
Error: Line 32: Did you mean var realWeight = ?
var weightPosterior = function(observedData){
  return Infer({method: 'MCMC', burn:1000, samples: 10000}, function() {
    var isFair = flip(0.9);
    var realWeight;
    if (isFair) {
      realWeight = 0.5;
    } else {
      var isTwoFaced = flip(0.7);
      realWeight = isTwoFaced ? 1 : uniform({a:0, b:1});
    }
    var coin = Bernoulli({p: realWeight});
    var obsFn = function(datum){ observe(coin, datum=='h') };
    mapData({data: observedData}, obsFn);
    return realWeight;
  })
}

var fullDataSet = repeat(50, function() { 'h' }); 
var observedDataSizes = [0,1,2,4,6,8,10,12,15,20,25,30,40,50]; 
var ANSWER = map(function(N) { expectation(weightPosterior(fullDataSet.slice(0, N))) }, observedDataSizes);
var weightPosterior = function(observedData){
  return Infer({method: 'MCMC', burn:1000, samples: 10000}, function() {
    var isFair = flip(0.9);
    var realWeight = isFair ? 0.5 : 
      (flip(0.7) ? (flip(0.5) ? 0 : 1) : uniform({a:0, b:1}));
    var coin = Bernoulli({p: realWeight});
    var obsFn = function(datum){ observe(coin, datum=='h') };
    mapData({data: observedData}, obsFn);
    return realWeight;
  })
}

var fullDataSet = repeat(50, function() { 'h' });
var observedDataSizes = [0,1,2,4,6,8,10,12,15,20,25,30,40,50];
var ANSWER = map(function(N) { expectation(weightPosterior(fullDataSet.slice(0, N))) }, observedDataSizes);
var weightPosterior = function(observedData){
  return Infer({method: 'MCMC', burn:1000, samples: 10000}, function() {
    var isFair = flip(0.9);
    var realWeight = isFair ? 0.5 : (flip(0.7) ? 1.0 : uniform({a:0, b:1}));
    var coin = Bernoulli({p: realWeight});
    var obsFn = function(datum){ observe(coin, datum=='h') };
    mapData({data: observedData}, obsFn);
    return realWeight;
  })
}

var fullDataSet = repeat(50, function() { return 'h' });
var observedDataSizes = [0,1,2,4,6,8,10,12,15,20,25,30,40,50];
var ANSWER = map(function(N) { expectation(weightPosterior(fullDataSet.slice(0, N))) }, observedDataSizes);
Error: Line 32: Did you mean var realWeight = ?
var weightPosterior = function(observedData){
  return Infer({method: 'MCMC', burn:1000, samples: 10000}, function() {
    var isFair = flip(0.9);
    var realWeight;
    if (isFair) {
      realWeight = 0.5;
    } else {
      // Within biased class: two-faced with prob 0.7, uniform otherwise
      var isTwoFaced = flip(0.7);
      if (isTwoFaced) {
        // Two-faced: always heads or always tails
        realWeight = flip(0.5) ? 1 : 0;
      } else {
        realWeight = uniform({a: 0, b: 1});
      }
    }
    var coin = Bernoulli({p: realWeight});
    var obsFn = function(datum){ observe(coin, datum=='h') };
    mapData({data: observedData}, obsFn);
    return realWeight;
  })
}

var fullDataSet = repeat(50, function() { 'h' }); var observedDataSizes = [0,1,2,4,6,8,10,12,15,20,25,30,40,50]; var ANSWER = map(function(N) { expectation(weightPosterior(fullDataSet.slice(0, N))) }, observedDataSizes);
var weightPosterior = function(observedData){
  return Infer({method: 'MCMC', burn:1000, samples: 10000}, function() {
    var isFair = flip(0.9);
    var realWeight = isFair
      ? 0.5
      : (function() {
          var isTwoFaced = flip(0.7);
          return isTwoFaced
            ? (flip(0.5) ? 1 : 0)   // two-faced: always heads or always tails
            : uniform({a: 0, b: 1}); // otherwise uniform on (0,1)
        })();
    var coin = Bernoulli({p: realWeight});
    var obsFn = function(datum){ observe(coin, datum == 'h'); };
    mapData({data: observedData}, obsFn);
    return realWeight;
  });
};

var fullDataSet = repeat(50, function() { return 'h'; });
var observedDataSizes = [0,1,2,4,6,8,10,12,15,20,25,30,40,50];
var ANSWER = map(function(N) { expectation(weightPosterior(fullDataSet.slice(0, N))); }, observedDataSizes);
groundtruth output
[
  0.5354900129390903,
  0.5659150723240003,
  0.6160401051245443,
  0.7918115028260628,
  0.9310861493740097,
  0.964256490939227,
  0.9920111771459954,
  0.9960491544707906,
  0.9982308358122187,
  0.9989501605036541,
  0.9992324658351402,
  0.9996499015683925,
  0.99977119063664,
  0.9997853611754232
]
feedback
not signed
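The groundtruth expectations above can be checked analytically. Under the groundtruth prior (mass 0.9 on w = 0.5, 0.9 x 0.7 = 0.07 on the two-faced w = 1, 0.03 spread uniformly on (0, 1)) and an all-heads data set, each mixture component's marginal likelihood has closed form, so the posterior mean needs no MCMC. A sketch (function name is ours):

```python
def expected_weight(n_heads):
    """Exact posterior E[w] after n_heads consecutive heads under the
    mixture prior: 0.9 at w=0.5, 0.07 at w=1, 0.03 uniform on (0,1)."""
    # Prior mass times each component's likelihood of n heads
    z_fair = 0.9 * 0.5 ** n_heads
    z_two  = 0.07                      # 1**n_heads == 1
    z_unif = 0.03 / (n_heads + 1)      # integral of w^n dw on (0,1)
    # Component contributions to E[w * likelihood]
    m_fair = z_fair * 0.5
    m_two  = z_two * 1.0
    m_unif = 0.03 / (n_heads + 2)      # integral of w^(n+1) dw
    return (m_fair + m_two + m_unif) / (z_fair + z_two + z_unif)
```

For N = 0 this gives exactly 0.535 and for N = 50 about 0.9998, matching the MCMC groundtruth (0.5355 and 0.99979) to within sampling noise.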
probmods2-learning-as-conditional-inference/ex2.1 record(prior, post) 8 runs TV=1.00 TV<1
prompt
user message
Given this Beta(10,10)-prior coin model and a data set alternating heads/tails 50 times each:
var pseudoCounts = {a: 10, b: 10};
var weightPosterior = function(observedData){
  return Infer({method: 'MCMC', burn:1000, samples: 1000}, function() {
    var coinWeight = sample(Beta(pseudoCounts));
    var coinDist = Bernoulli({p: coinWeight});
    var obsFn = function(datum){ observe(coinDist, datum=='h') };
    mapData({data: observedData}, obsFn);
    return coinWeight;
  })
}
var fullDataSet = repeat(50, function() { ['h', 't'] }).flat();
Compute the prior distribution and the posterior after observing the full data set. Return an object literal with two distributions: - prior: the Beta(10,10) prior - post: the posterior after all observations
groundtruth code
var pseudoCounts = {a: 10, b: 10};

var weightPosterior = function(observedData){
  return Infer({method: 'MCMC', burn:1000, samples: 1000}, function() {
    var coinWeight = sample(Beta(pseudoCounts));
    var coinDist = Bernoulli({p: coinWeight});
    var obsFn = function(datum){ observe(coinDist, datum=='h') };
    mapData({data: observedData}, obsFn);
    return coinWeight;
  })
};

var fullDataSet = repeat(50, function() { ['h', 't'] }).flat();
var ANSWER = (({
  prior: Beta(pseudoCounts),
  post: weightPosterior(fullDataSet)
}));
generated code
sonnet-46-primer-v3 TV<1 · haiku-45-noprimer-v3 TV=0 · haiku-45-primer-v3 TV=0 · haiku-45-think-noprimer-v3 TV<1 · haiku-45-think-primer-v3 TV=0 · sonnet-46-noprimer-v3 TV<1 · sonnet-46-think-noprimer-v3 TV<1 · sonnet-46-think-primer-v3 TV<1
var pseudoCounts = {a: 10, b: 10};

var weightPosterior = function(observedData){
  return Infer({method: 'MCMC', burn:1000, samples: 5000}, function() {
    var coinWeight = sample(Beta(pseudoCounts));
    var coinDist = Bernoulli({p: coinWeight});
    var obsFn = function(datum){ observe(coinDist, datum=='h'); };
    mapData({data: observedData}, obsFn);
    return coinWeight;
  });
};

var fullDataSet = _.flatten(repeat(50, function() { return ['h', 't']; }));

var prior = Infer({method: 'forward', samples: 5000}, function() {
  return sample(Beta(pseudoCounts));
});

var post = weightPosterior(fullDataSet);

var ANSWER = {prior: prior, post: post};
var pseudoCounts = {a: 10, b: 10};
var weightPosterior = function(observedData){
  return Infer({method: 'MCMC', burn:1000, samples: 1000}, function() {
    var coinWeight = sample(Beta(pseudoCounts));
    var coinDist = Bernoulli({p: coinWeight});
    var obsFn = function(datum){ observe(coinDist, datum=='h') };
    mapData({data: observedData}, obsFn);
    return coinWeight;
  })
}
var fullDataSet = repeat(50, function() { ['h', 't'] }).flat();

var prior = Beta(pseudoCounts);
var post = weightPosterior(fullDataSet);

var ANSWER = {
  prior: prior,
  post: post
};
var pseudoCounts = {a: 10, b: 10};

var weightPosterior = function(observedData){
  return Infer({method: 'MCMC', burn:1000, samples: 1000}, function() {
    var coinWeight = sample(Beta(pseudoCounts));
    var coinDist = Bernoulli({p: coinWeight});
    var obsFn = function(datum){ observe(coinDist, datum=='h') };
    mapData({data: observedData}, obsFn);
    return coinWeight;
  })
};

var fullDataSet = repeat(50, function() { return ['h', 't']; }).flat();

var prior = Beta(pseudoCounts);

var post = weightPosterior(fullDataSet);

var ANSWER = {
  prior: prior,
  post: post
};
var pseudoCounts = {a: 10, b: 10};

var weightPosterior = function(observedData){
  return Infer({method: 'MCMC', burn:1000, samples: 1000}, function() {
    var coinWeight = sample(Beta(pseudoCounts));
    var coinDist = Bernoulli({p: coinWeight});
    var obsFn = function(datum){ observe(coinDist, datum=='h') };
    mapData({data: observedData}, obsFn);
    return coinWeight;
  })
}

var fullDataSet = repeat(50, function() { ['h', 't'] }).flat();

var prior = Infer({method: 'MCMC', burn:1000, samples: 1000}, function() {
  var coinWeight = sample(Beta(pseudoCounts));
  return coinWeight;
});

var post = weightPosterior(fullDataSet);

var ANSWER = {prior: prior, post: post};
var pseudoCounts = {a: 10, b: 10};

var weightPosterior = function(observedData){
  return Infer({method: 'MCMC', burn:1000, samples: 1000}, function() {
    var coinWeight = sample(Beta(pseudoCounts));
    var coinDist = Bernoulli({p: coinWeight});
    var obsFn = function(datum){ observe(coinDist, datum=='h') };
    mapData({data: observedData}, obsFn);
    return coinWeight;
  })
};

var fullDataSet = repeat(50, function() { ['h', 't'] }).flat();

var prior = Beta(pseudoCounts);
var post = weightPosterior(fullDataSet);

var ANSWER = {prior: prior, post: post};
var pseudoCounts = {a: 10, b: 10};

var weightPosterior = function(observedData){
  return Infer({method: 'MCMC', burn:1000, samples: 5000}, function() {
    var coinWeight = sample(Beta(pseudoCounts));
    var coinDist = Bernoulli({p: coinWeight});
    var obsFn = function(datum){ observe(coinDist, datum=='h') };
    mapData({data: observedData}, obsFn);
    return coinWeight;
  })
};

var fullDataSet = repeat(50, function() { return ['h', 't']; }).flat();

var prior = Infer({method: 'forward', samples: 5000}, function() {
  return sample(Beta(pseudoCounts));
});

var post = weightPosterior(fullDataSet);

var ANSWER = {prior: prior, post: post};
var pseudoCounts = {a: 10, b: 10};

var weightPosterior = function(observedData){
  return Infer({method: 'MCMC', burn:1000, samples: 1000}, function() {
    var coinWeight = sample(Beta(pseudoCounts));
    var coinDist = Bernoulli({p: coinWeight});
    var obsFn = function(datum){ observe(coinDist, datum=='h') };
    mapData({data: observedData}, obsFn);
    return coinWeight;
  })
}

var fullDataSet = repeat(50, function() { return ['h', 't']; }).flat();

var prior = Infer({method: 'forward', samples: 1000}, function() {
  return sample(Beta(pseudoCounts));
});

var post = weightPosterior(fullDataSet);

var ANSWER = {prior: prior, post: post};
var pseudoCounts = {a: 10, b: 10};

var weightPosterior = function(observedData) {
  return Infer({method: 'MCMC', burn: 1000, samples: 1000}, function() {
    var coinWeight = sample(Beta(pseudoCounts));
    var coinDist = Bernoulli({p: coinWeight});
    var obsFn = function(datum) { observe(coinDist, datum == 'h'); };
    mapData({data: observedData}, obsFn);
    return coinWeight;
  });
};

var fullDataSet = _.flatten(repeat(50, function() { return ['h', 't']; }));

var prior = Infer({method: 'forward', samples: 1000}, function() {
  return sample(Beta(pseudoCounts));
});

var post = weightPosterior(fullDataSet);

var ANSWER = {prior: prior, post: post};
groundtruth output
{
  "prior": {
    "__kind": "distribution_continuous",
    "repr": "Beta({ a: 10, b: 10 })"
  },
  "post": {
    "__kind": "distribution",
    "probs": [
      0.0020000000000000005,
      0.0010000000000000002,
      0.0010000000000000002,
      0.005000000000000002,
      0.002999999999999999,
      0.007000000000000003,
      0.0010000000000000002,
      0.0020000000000000005,
      0.0010000000000000002,
      0.0010000000000000002,
      0.0010000000000000002,
      0.004000000000000002,
      0.0020000000000000005,
      0.0010000000000000002,
      0.0010000000000000002,
      0.0010000000000000002,
      0.0010000000000000002,
      0.0010000000000000002,
      0.0020000000000000005,
      ... (remainder of the sampled distribution's probability list omitted; 23572 more chars truncated)
feedback
not signed
probmods2-learning-as-conditional-inference/ex2.2 value 8 runs val+
prompt
system base instructions used by every run
You are a WebPPL code generator. Given an exercise, produce a single WebPPL program that binds the answer to a top-level variable named `ANSWER`.

Answer format (strict): emit exactly one fenced code block.

```js
<your WebPPL program ending with: var ANSWER = <expression>;>
```

The last statement of your program MUST be `var ANSWER = <expression>;` where `<expression>` is the answer the prompt asks for - typically an `Infer({...}, model)` for a distribution, a numeric/array value, or an object literal `{key: value, ...}` for a record of multiple sub-answers.

Do not write prose, explanations, or multiple code blocks. Do not use `return` at the top level - WebPPL doesn't allow it (return is only for function bodies).
system WebPPL primer appended to system on `*-primer-*` runs only
WebPPL is a probabilistic programming language with JavaScript-like syntax. A few quirks to remember:

Control flow: WebPPL is a functional subset of JavaScript. There are no `for` or `while` loops. Iterate with `map(fn, list)`, `mapData({data: list}, fn)`, `repeat(n, fn)`, or recursion. `_.range(start, end)` produces integer ranges.

Functions: define with `var f = function(args) { ...; return value; }`. The last expression of a function is auto-returned only if it is a bare expression statement; otherwise use explicit `return`. Memoize with `mem(fn)` so repeated calls with the same arguments return the same value within an inference run.

Random primitives - lowercase samples directly, uppercase constructs a Distribution object. They are NOT interchangeable:

  flip(p)            -> boolean             (sample, no `sample()` needed)
  uniform(a, b)      -> number              (sample)
  gaussian(mu, sg)   -> number              (sample)
  beta(a, b)         -> number              (sample)
  dirichlet(alpha)   -> tensor              (sample; alpha must be a Vector)
  randomInteger(n)   -> int 0..n-1          (sample)
  uniformDrift({a, b, width})        -> sample  (drift kernel; do NOT wrap in sample())
  dirichletDrift({alpha, conc.})     -> sample  (drift kernel; do NOT wrap in sample())

Distribution *constructors* (used with `sample(D)`, `observe(D, val)`, or as return value of inference):
  Bernoulli({p})              Beta({a, b})              Gaussian({mu, sigma})
  Uniform({a, b})             Categorical({vs, ps})     Binomial({p, n})
  Dirichlet({alpha})          Multinomial({ps, n})      Poisson({mu})

Common gotchas:
- `Dirichlet({alpha: ...})` requires a Vector, not a JS array. Use `ones([n, 1])` or `Vector([1, 1, ...])`.
- WebPPL only supports `var`, not `let` / `const`.
- WebPPL is *single-assignment*: `var X = ...;` only. You can't declare `var X;` and assign later, and you can't reassign `X = ...` after the declaration. Use ternaries or recursion to express conditional bindings.
- Array methods like `.fill`, `.indexOf`, `.map`, `.forEach`, `.concat` may fail. Prefer `repeat(n, fn)`, `_.indexOf(arr, x)`, `map(fn, arr)`, `mapData({data: arr}, fn)`, and `arr1.concat(arr2)` only at the top of a returned expression.
- Always end top-level statements with `;`. WebPPL inherits JS ASI rules, so `var x = f()
[a, b]` parses as `var x = f()[a, b]` (subscript), not two statements.

Inference: `Infer({method: ..., samples: N, ...}, modelFn)` runs `modelFn` under the chosen method and returns a Distribution over its return value. Methods: `'enumerate'`, `'rejection'`, `'forward'`, `'MCMC'`, `'SMC'`. For MCMC, optional kernels: `kernel: {HMC: {steps, stepSize}}`; the drift kernels above can replace `uniform`/`dirichlet` calls inside the model.

Conditioning: `condition(bool)` zeros out worlds where bool is false. `observe(dist, value)` factors in `dist.score(value)`. `factor(score)` adds `score` to the log-probability directly.

Tensors / utilities: `Vector([a, b, ...])`, `T.get(vec, i)`, `ones([rows, cols])`, `_.range`, `_.flatten`, `_.zipObject`, `_.fromPairs`, `_.includes`, `_.parseInt`, `_.uniq`, `_.merge`. Arrays use `Array.isArray`, `Object.keys` works on plain objects.

Display: there's no `viz`, `print`, `display`, etc. that affects the answer - those are browser-only. The answer is the value of your program's last expression.
user message
Given this Beta(10,10)-prior coin model:
var pseudoCounts = {a: 10, b: 10};
var weightPosterior = function(observedData){
  return Infer({method: 'MCMC', burn:1000, samples: 1000}, function() {
    var coinWeight = sample(Beta(pseudoCounts));
    var coinDist = Bernoulli({p: coinWeight});
    var obsFn = function(datum){ observe(coinDist, datum=='h') };
    mapData({data: observedData}, obsFn);
    return coinWeight;
  })
}
Compute the *variance* of the posterior at each of these data sizes: [0,2,4,8,16,32,64,128,256,512], using the data set repeat(256, function(){['h','t']}).flat() (alternating). Variance is expectation(posterior, function(x) { Math.pow(x - mean, 2) }) where mean = expectation(posterior). End with the array of variances.
groundtruth code
var pseudoCounts = {a: 10, b: 10};

var weightPosterior = function(observedData){
  return Infer({method: 'MCMC', burn:1000, samples: 1000}, function() {
    var coinWeight = sample(Beta(pseudoCounts));
    var coinDist = Bernoulli({p: coinWeight});
    var obsFn = function(datum){ observe(coinDist, datum=='h') };
    mapData({data: observedData}, obsFn);
    return coinWeight;
  })
};

var fullDataSet = repeat(256, function(){['h', 't']}).flat();
var observedDataSizes = [0,2,4,8,16,32,64,128,256,512];
var ANSWER = (map(function(N) {
  var posterior = weightPosterior(fullDataSet.slice(0,N));
  var mean = expectation(posterior);
  return expectation(posterior, function(x) { Math.pow(x - mean, 2) });
}, observedDataSizes));
generated code
sonnet-46-primer-v3 val+ haiku-45-noprimer-v3 val+ haiku-45-primer-v3 val+ haiku-45-think-noprimer-v3 val+ haiku-45-think-primer-v3 val+ sonnet-46-noprimer-v3 val+ sonnet-46-think-noprimer-v3 val+ sonnet-46-think-primer-v3 val+
var pseudoCounts = {a: 10, b: 10};

var weightPosterior = function(observedData){
  return Infer({method: 'MCMC', burn:1000, samples: 1000}, function() {
    var coinWeight = sample(Beta(pseudoCounts));
    var coinDist = Bernoulli({p: coinWeight});
    var obsFn = function(datum){ observe(coinDist, datum=='h') };
    mapData({data: observedData}, obsFn);
    return coinWeight;
  })
};

// Build the alternating dataset of length 512
var fullData = _.flatten(repeat(256, function(){ return ['h','t']; }));

var dataSizes = [0,2,4,8,16,32,64,128,256,512];

var variances = map(function(n) {
  var data = fullData.slice(0, n);
  var posterior = weightPosterior(data);
  var mean = expectation(posterior, function(x){ return x; });
  var variance = expectation(posterior, function(x){ return Math.pow(x - mean, 2); });
  return variance;
}, dataSizes);

var ANSWER = variances;
var pseudoCounts = {a: 10, b: 10};
var weightPosterior = function(observedData){
  return Infer({method: 'MCMC', burn:1000, samples: 1000}, function() {
    var coinWeight = sample(Beta(pseudoCounts));
    var coinDist = Bernoulli({p: coinWeight});
    var obsFn = function(datum){ observe(coinDist, datum=='h') };
    mapData({data: observedData}, obsFn);
    return coinWeight;
  })
}

var fullData = repeat(256, function(){return ['h','t']}).flat();

var dataSizes = [0,2,4,8,16,32,64,128,256,512];

var variances = map(function(size) {
  var observedData = fullData.slice(0, size);
  var posterior = weightPosterior(observedData);
  var mean = expectation(posterior, function(x) { return x; });
  var variance = expectation(posterior, function(x) { return Math.pow(x - mean, 2); });
  return variance;
}, dataSizes);

var ANSWER = variances;
var pseudoCounts = {a: 10, b: 10};
var weightPosterior = function(observedData){
  return Infer({method: 'MCMC', burn:1000, samples: 1000}, function() {
    var coinWeight = sample(Beta(pseudoCounts));
    var coinDist = Bernoulli({p: coinWeight});
    var obsFn = function(datum){ observe(coinDist, datum=='h') };
    mapData({data: observedData}, obsFn);
    return coinWeight;
  })
}

var fullData = _.flatten(repeat(256, function(){return ['h','t']}));
var dataSizes = [0,2,4,8,16,32,64,128,256,512];

var variances = map(function(size) {
  var observedData = size === 0 ? [] : fullData.slice(0, size);
  var posterior = weightPosterior(observedData);
  var mean = expectation(posterior);
  var variance = expectation(posterior, function(x) { 
    return Math.pow(x - mean, 2); 
  });
  return variance;
}, dataSizes);

var ANSWER = variances;
var pseudoCounts = {a: 10, b: 10};

var weightPosterior = function(observedData){
  return Infer({method: 'MCMC', burn:1000, samples: 1000}, function() {
    var coinWeight = sample(Beta(pseudoCounts));
    var coinDist = Bernoulli({p: coinWeight});
    var obsFn = function(datum){ observe(coinDist, datum=='h') };
    mapData({data: observedData}, obsFn);
    return coinWeight;
  })
}

// Generate alternating h,t data
var fullData = repeat(256, function(){['h','t']}).flat();

// Data sizes to test
var dataSizes = [0,2,4,8,16,32,64,128,256,512];

// Compute variance of posterior for each data size
var variances = map(function(size) {
  var data = fullData.slice(0, size);
  var posterior = weightPosterior(data);
  var mean = expectation(posterior);
  var variance = expectation(posterior, function(x) { 
    return Math.pow(x - mean, 2) 
  });
  return variance;
}, dataSizes);

var ANSWER = variances;
var pseudoCounts = {a: 10, b: 10};

var weightPosterior = function(observedData){
  return Infer({method: 'MCMC', burn:1000, samples: 1000}, function() {
    var coinWeight = sample(Beta(pseudoCounts));
    var coinDist = Bernoulli({p: coinWeight});
    var obsFn = function(datum){ observe(coinDist, datum=='h') };
    mapData({data: observedData}, obsFn);
    return coinWeight;
  })
};

// Create the alternating data: ['h','t','h','t',...] repeated 256 times = 512 elements
var fullData = _.flatten(repeat(256, function(){ return ['h','t']; }));

// Data sizes to test
var dataSizes = [0,2,4,8,16,32,64,128,256,512];

// Compute variances for each data size
var variances = map(function(size) {
  var data = size === 0 ? [] : fullData.slice(0, size);
  var posterior = weightPosterior(data);
  var mean = expectation(posterior);
  var variance = expectation(posterior, function(x) { Math.pow(x - mean, 2) });
  return variance;
}, dataSizes);

var ANSWER = variances;
var pseudoCounts = {a: 10, b: 10};

var weightPosterior = function(observedData){
  return Infer({method: 'MCMC', burn:1000, samples: 1000}, function() {
    var coinWeight = sample(Beta(pseudoCounts));
    var coinDist = Bernoulli({p: coinWeight});
    var obsFn = function(datum){ observe(coinDist, datum=='h') };
    mapData({data: observedData}, obsFn);
    return coinWeight;
  })
};

// Create the full alternating dataset of 512 elements
var fullData = repeat(256, function(){ return ['h','t']; });
var flatData = [].concat.apply([], fullData);

var dataSizes = [0, 2, 4, 8, 16, 32, 64, 128, 256, 512];

var variances = map(function(n) {
  var observedData = flatData.slice(0, n);
  var posterior = weightPosterior(observedData);
  var mean = expectation(posterior, function(x){ return x; });
  var variance = expectation(posterior, function(x){ return Math.pow(x - mean, 2); });
  return variance;
}, dataSizes);

var ANSWER = variances;
var pseudoCounts = {a: 10, b: 10};
var weightPosterior = function(observedData){
  return Infer({method: 'MCMC', burn:1000, samples: 1000}, function() {
    var coinWeight = sample(Beta(pseudoCounts));
    var coinDist = Bernoulli({p: coinWeight});
    var obsFn = function(datum){ observe(coinDist, datum=='h') };
    mapData({data: observedData}, obsFn);
    return coinWeight;
  })
};

var fullData = [].concat.apply([], repeat(256, function(){ return ['h','t']; }));

var dataSizes = [0, 2, 4, 8, 16, 32, 64, 128, 256, 512];

var variances = map(function(n) {
  var data = fullData.slice(0, n);
  var posterior = weightPosterior(data);
  var mean = expectation(posterior, function(x){ return x; });
  var variance = expectation(posterior, function(x){ return Math.pow(x - mean, 2); });
  return variance;
}, dataSizes);

var ANSWER = variances;
var pseudoCounts = {a: 10, b: 10};

var weightPosterior = function(observedData) {
  return Infer({method: 'MCMC', burn: 1000, samples: 1000}, function() {
    var coinWeight = sample(Beta(pseudoCounts));
    var coinDist = Bernoulli({p: coinWeight});
    var obsFn = function(datum) { observe(coinDist, datum == 'h'); };
    mapData({data: observedData}, obsFn);
    return coinWeight;
  });
};

var fullData = _.flatten(repeat(256, function() { return ['h', 't']; }));

var dataSizes = [0, 2, 4, 8, 16, 32, 64, 128, 256, 512];

var variances = map(function(n) {
  var data = fullData.slice(0, n);
  var posterior = weightPosterior(data);
  var mean = expectation(posterior);
  var variance = expectation(posterior, function(x) {
    return Math.pow(x - mean, 2);
  });
  return variance;
}, dataSizes);

var ANSWER = variances;
groundtruth output
[
  0.011891060781771552,
  0.010688273164072504,
  0.010146191714640929,
  0.008783158580133874,
  0.006554418248099274,
  0.004498781299218255,
  0.002883115466926617,
  0.001784902317924087,
  0.0008958940464777299,
  0.0005487575480921067
]
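Editor's cross-check (not part of the run logs): this model is conjugate, so with alternating data the posterior after N observations is exactly Beta(10 + N/2, 10 + N/2), and its variance has a closed form. A minimal plain-JS sketch of the analytic values the MCMC estimates above should approach:

```js
// Conjugate update: Beta(10, 10) prior plus N/2 heads and N/2 tails
// gives a Beta(10 + N/2, 10 + N/2) posterior, whose variance is
// a*b / ((a + b)^2 * (a + b + 1)).
var betaVariance = function(a, b) {
  return (a * b) / (Math.pow(a + b, 2) * (a + b + 1));
};
var analyticVariances = [0, 2, 4, 8, 16, 32, 64, 128, 256, 512].map(function(n) {
  return betaVariance(10 + n / 2, 10 + n / 2);
});
```

The N=0 entry is the Beta(10,10) prior variance 100/8400 ≈ 0.0119, matching the first groundtruth value above to about three decimal places; the remaining MCMC estimates track the analytic curve with a little sampling noise.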
feedback
not signed
mixture-models 3 atoms 1✓ · 0✗
probmods2-mixture-models/ex1.a distribution 8 runs TV=0.00 TV=0
prompt
system base instructions used by every run
(same base instructions as above)
system WebPPL primer appended to system on `*-primer-*` runs only
(same WebPPL primer as above)
user message
You visit an alien planet and observe 10 aliens, each with three binary properties (antennae, green, blarghNoise). Build a mixture model with two latent kinds of aliens, each with its own per-property probability. Use Beta(.5, .5) priors independently on each of the three probabilities for each group, and a 50/50 prior over which group each alien belongs to. Memoize the per-group prototype with mem (so within one inference run, the group's prototype is consistent across aliens). Data:
var data = [
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: false, green: false, blarghNoise: false}
];
End your program with Infer({method: 'MCMC', kernel: {HMC: {steps: 10, stepSize: .01}}, samples: 3000}, ...) returning the joint posterior {group1, group2} where each is the per-property probability object {antennae, green, blarghNoise}.
groundtruth code
var properties = ['antennae', 'green', 'blarghNoise'];
var data = [
  {antennae : false, green: false, blarghNoise: false},
  {antennae : true,  green: true,  blarghNoise: true},
  {antennae : true,  green: true,  blarghNoise: true},
  {antennae : true,  green: true,  blarghNoise: true},
  {antennae : false, green: false, blarghNoise: false},
  {antennae : true,  green: true,  blarghNoise: true},
  {antennae : false, green: false, blarghNoise: false},
  {antennae : true,  green: true,  blarghNoise: true},
  {antennae : false, green: false, blarghNoise: false},
  {antennae : false, green: false, blarghNoise: false}
];

var sampleGroupPrototype = mem(function(groupName) {
  var probs = repeat(3, function(){ beta(.5, .5)});
  return _.zipObject(properties, probs);
});
var ANSWER = (Infer({method: 'MCMC', kernel: {HMC: {steps: 10, stepSize: .01}}, samples: 3000},
      function(){
  mapData({data: data}, function(datum) {
    var group = flip() ? 'group1' : 'group2';
    var prototype = sampleGroupPrototype(group);
    mapData({data: properties}, function(property) {
      observe(Bernoulli({p: prototype[property]}), datum[property]);
    });
  });
  return {group1: sampleGroupPrototype('group1'),
          group2: sampleGroupPrototype('group2')};
}));
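Editor's note (a rough sanity anchor, not from the runs): if the sampler cleanly separates the five all-true aliens from the five all-false ones, each per-property probability has a conjugate Beta(.5 + k, .5 + 5 - k) posterior within its group. A plain-JS sketch of the means the per-group marginals should then concentrate near:

```js
// Conjugate update for one group: Beta(0.5, 0.5) prior plus k successes
// in n = 5 within-group observations gives Beta(0.5 + k, 0.5 + n - k);
// the posterior mean is a / (a + b).
var betaMean = function(a, b) { return a / (a + b); };
// Group of 5 all-true aliens (k = 5): mean = 5.5 / 6
var pAllTrue = betaMean(0.5 + 5, 0.5 + 0);
// Group of 5 all-false aliens (k = 0): mean = 0.5 / 6
var pAllFalse = betaMean(0.5 + 0, 0.5 + 5);
```

Caveat: the two groups are exchangeable (label switching), so a chain that swaps modes will mix the two marginals rather than pinning group1 near 0.92 and group2 near 0.08.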
generated code
sonnet-46-primer-v3 TV=0 haiku-45-noprimer-v3 fail haiku-45-primer-v3 fail haiku-45-think-noprimer-v3 fail haiku-45-think-primer-v3 TV<1 sonnet-46-noprimer-v3 TV=0 sonnet-46-think-noprimer-v3 fail sonnet-46-think-primer-v3 TV<1
var data = [
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: false, green: false, blarghNoise: false}
];

var model = function() {
  var prototype = mem(function(group) {
    return {
      antennae: beta(0.5, 0.5),
      green: beta(0.5, 0.5),
      blarghNoise: beta(0.5, 0.5)
    };
  });

  mapData({data: data}, function(alien) {
    var group = flip(0.5) ? 1 : 2;
    var proto = prototype(group);
    observe(Bernoulli({p: proto.antennae}), alien.antennae);
    observe(Bernoulli({p: proto.green}), alien.green);
    observe(Bernoulli({p: proto.blarghNoise}), alien.blarghNoise);
  });

  return {
    group1: prototype(1),
    group2: prototype(2)
  };
};

var ANSWER = Infer({method: 'MCMC', kernel: {HMC: {steps: 10, stepSize: .01}}, samples: 3000}, model);
Error: Too many arguments. Distributions take at most one argument.
var data = [
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: false, green: false, blarghNoise: false}
];

var model = function() {
  // Memoized function to generate group prototypes
  var getGroupPrototype = mem(function(groupId) {
    return {
      antennae: sample(Beta(0.5, 0.5)),
      green: sample(Beta(0.5, 0.5)),
      blarghNoise: sample(Beta(0.5, 0.5))
    };
  });
  
  // Get the two group prototypes
  var group1 = getGroupPrototype(1);
  var group2 = getGroupPrototype(2);
  
  // Observe each alien
  mapData({data: data}, function(alien) {
    // Each alien belongs to group 1 or 2 with equal probability
    var group = flip(0.5) ? group1 : group2;
    
    // Observe the alien's properties given their group
    observe(Bernoulli(group.antennae), alien.antennae);
    observe(Bernoulli(group.green), alien.green);
    observe(Bernoulli(group.blarghNoise), alien.blarghNoise);
  });
  
  return {group1: group1, group2: group2};
};

var ANSWER = Infer({method: 'MCMC', kernel: {HMC: {steps: 10, stepSize: .01}}, samples: 3000}, model);
Timeout after 60s
var data = [
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: false, green: false, blarghNoise: false}
];

var model = function() {
  // Memoized function to generate group prototype
  var getGroupPrototype = mem(function(groupId) {
    return {
      antennae: sample(Beta({a: 0.5, b: 0.5})),
      green: sample(Beta({a: 0.5, b: 0.5})),
      blarghNoise: sample(Beta({a: 0.5, b: 0.5}))
    };
  });

  var group1 = getGroupPrototype(1);
  var group2 = getGroupPrototype(2);

  // Observe each alien
  mapData({data: data}, function(alien) {
    // Each alien belongs to one of two groups with equal probability
    var groupId = flip(0.5) ? 1 : 2;
    var prototype = groupId === 1 ? group1 : group2;

    // Observe the alien's properties given the group prototype
    observe(Bernoulli({p: prototype.antennae}), alien.antennae ? 1 : 0);
    observe(Bernoulli({p: prototype.green}), alien.green ? 1 : 0);
    observe(Bernoulli({p: prototype.blarghNoise}), alien.blarghNoise ? 1 : 0);
  });

  return {group1: group1, group2: group2};
};

var ANSWER = Infer({method: 'MCMC', kernel: {HMC: {steps: 10, stepSize: 0.01}}, samples: 3000}, model);
Error: Too many arguments. Distributions take at most one argument.
var data = [
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: false, green: false, blarghNoise: false}
];

var model = function() {
  // Memoized per-group prototype generator
  var groupPrototype = mem(function(groupId) {
    return {
      antennae: sample(Beta(.5, .5)),
      green: sample(Beta(.5, .5)),
      blarghNoise: sample(Beta(.5, .5))
    };
  });
  
  // For each alien, sample group membership and condition on observations
  _.each(data, function(alien) {
    // 50/50 prior over group membership
    var group = sample(Categorical({ps: [0.5, 0.5], vs: [1, 2]}));
    var prototype = groupPrototype(group);
    
    // Likelihood of observed properties given group prototype
    var antennaeLik = alien.antennae ? prototype.antennae : (1 - prototype.antennae);
    var greenLik = alien.green ? prototype.green : (1 - prototype.green);
    var blarghLik = alien.blarghNoise ? prototype.blarghNoise : (1 - prototype.blarghNoise);
    
    factor(Math.log(antennaeLik * greenLik * blarghLik));
  });
  
  // Return posterior over both group prototypes
  return {
    group1: groupPrototype(1),
    group2: groupPrototype(2)
  };
};

var ANSWER = Infer({method: 'MCMC', kernel: {HMC: {steps: 10, stepSize: .01}}, samples: 3000}, model);
var data = [
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: false, green: false, blarghNoise: false}
];

var model = function() {
  // Memoized function to generate per-group prototype
  var getGroupPrototype = mem(function(groupId) {
    return {
      antennae: sample(Beta({a: 0.5, b: 0.5})),
      green: sample(Beta({a: 0.5, b: 0.5})),
      blarghNoise: sample(Beta({a: 0.5, b: 0.5}))
    };
  });
  
  var group1 = getGroupPrototype(1);
  var group2 = getGroupPrototype(2);
  
  // For each alien, sample which group it belongs to and observe its properties
  mapData({data: data}, function(alien) {
    var group = flip(0.5) ? 1 : 2;
    var prototype = group === 1 ? group1 : group2;
    
    observe(Bernoulli({p: prototype.antennae}), alien.antennae);
    observe(Bernoulli({p: prototype.green}), alien.green);
    observe(Bernoulli({p: prototype.blarghNoise}), alien.blarghNoise);
  });
  
  return {group1: group1, group2: group2};
};

var ANSWER = Infer({method: 'MCMC', kernel: {HMC: {steps: 10, stepSize: .01}}, samples: 3000}, model);
var data = [
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: false, green: false, blarghNoise: false}
];

var model = function() {
  var prototype = mem(function(group) {
    return {
      antennae: sample(Beta({a: 0.5, b: 0.5})),
      green: sample(Beta({a: 0.5, b: 0.5})),
      blarghNoise: sample(Beta({a: 0.5, b: 0.5}))
    };
  });

  mapData({data: data}, function(alien) {
    var group = sample(Categorical({ps: [0.5, 0.5], vs: [1, 2]}));
    var proto = prototype(group);
    observe(Bernoulli({p: proto.antennae}), alien.antennae);
    observe(Bernoulli({p: proto.green}), alien.green);
    observe(Bernoulli({p: proto.blarghNoise}), alien.blarghNoise);
  });

  return {
    group1: prototype(1),
    group2: prototype(2)
  };
};

var ANSWER = Infer({method: 'MCMC', kernel: {HMC: {steps: 10, stepSize: .01}}, samples: 3000}, model);
Error: Too many arguments. Distributions take at most one argument.
var data = [
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: false, green: false, blarghNoise: false}
];

var model = function() {

  // Memoized per-group prototypes with Beta(.5,.5) priors on each property
  var prototype = mem(function(group) {
    return {
      antennae:   sample(Beta(.5, .5)),
      green:      sample(Beta(.5, .5)),
      blarghNoise: sample(Beta(.5, .5))
    };
  });

  // For each alien, marginalize out the discrete group assignment so that
  // the model is fully continuous and compatible with HMC.
  mapData({data: data}, function(alien) {
    var proto1 = prototype(1);
    var proto2 = prototype(2);

    // Log-likelihood of this alien's data under group 1
    var loglik1 = Bernoulli({p: proto1.antennae}).score(alien.antennae)
                + Bernoulli({p: proto1.green}).score(alien.green)
                + Bernoulli({p: proto1.blarghNoise}).score(alien.blarghNoise);

    // Log-likelihood of this alien's data under group 2
    var loglik2 = Bernoulli({p: proto2.antennae}).score(alien.antennae)
                + Bernoulli({p: proto2.green}).score(alien.green)
                + Bernoulli({p: proto2.blarghNoise}).score(alien.blarghNoise);

    // 50/50 mixture: log( 0.5*p1 + 0.5*p2 ) = log(0.5) + logSumExp(loglik1, loglik2)
    var logMarginal = Math.log(0.5)
                    + Math.log(Math.exp(loglik1) + Math.exp(loglik2));

    factor(logMarginal);
  });

  return {
    group1: prototype(1),
    group2: prototype(2)
  };
};

var ANSWER = Infer(
  {method: 'MCMC', kernel: {HMC: {steps: 10, stepSize: .01}}, samples: 3000},
  model
);
var data = [
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: false, green: false, blarghNoise: false}
];

var logsumexp2 = function(a, b) {
  var m = Math.max(a, b);
  return m + Math.log(Math.exp(a - m) + Math.exp(b - m));
};

var model = function() {

  var getPrototype = mem(function(group) {
    return {
      antennae:   beta(.5, .5),
      green:      beta(.5, .5),
      blarghNoise: beta(.5, .5)
    };
  });

  var alienLogLik = function(alien, group) {
    var proto = getPrototype(group);
    return (
      Bernoulli({p: proto.antennae}).score(alien.antennae) +
      Bernoulli({p: proto.green}).score(alien.green) +
      Bernoulli({p: proto.blarghNoise}).score(alien.blarghNoise)
    );
  };

  mapData({data: data}, function(alien) {
    var ll1 = Math.log(0.5) + alienLogLik(alien, 1);
    var ll2 = Math.log(0.5) + alienLogLik(alien, 2);
    factor(logsumexp2(ll1, ll2));
  });

  return {
    group1: getPrototype(1),
    group2: getPrototype(2)
  };
};

var ANSWER = Infer({method: 'MCMC', kernel: {HMC: {steps: 10, stepSize: .01}}, samples: 3000}, model);
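The two collapsed-mixture programs above marginalize out each alien's discrete group assignment with `factor(logsumexp(...))` so that HMC only ever sees continuous Beta draws; the second of them shifts by the max inside `logsumexp2` for numerical stability. The identity and the stability property can be checked outside WebPPL; a minimal Python sketch (not part of any run above):

```python
import math

def logsumexp2(a, b):
    # Shift by the max so exp() cannot underflow both terms to zero.
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

# Identity used by the models: log(0.5*p1 + 0.5*p2)
#   = logsumexp2(log(0.5) + log(p1), log(0.5) + log(p2))
p1, p2 = 0.3, 0.2
direct = math.log(0.5 * p1 + 0.5 * p2)
stable = logsumexp2(math.log(0.5) + math.log(p1), math.log(0.5) + math.log(p2))
assert abs(direct - stable) < 1e-12

# With strongly negative log-likelihoods the naive form underflows exp() to 0
# (so its log is -inf), while the max-shifted form stays finite and exact:
assert math.exp(-1000.0) == 0.0
assert logsumexp2(-1000.0, -1000.0) == -1000.0 + math.log(2.0)
```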
groundtruth output
{"group1":{"antennae":0.0068432967157…0.1303
{"group1":{"antennae":0.0024218054098…0.1300
{"group1":{"antennae":0.0510886745519…0.0960
{"group1":{"antennae":0.0089178919758…0.0353
{"group1":{"antennae":0.0511091916768…0.0243
{"group1":{"antennae":0.0277726661794…0.0217
{"group1":{"antennae":0.0698073408281…0.0183
{"group1":{"antennae":0.0087640592584…0.0183
{"group1":{"antennae":0.0144299554450…0.0160
{"group1":{"antennae":0.0162166443571…0.0160
{"group1":{"antennae":0.2566061475504…0.0150
{"group1":{"antennae":0.1809004142778…0.0123
… 245 more
raw JSON
{
  "__kind": "distribution",
  "probs": [
    0.0003333333333333336,
    0.0003333333333333336,
    … (remaining sample weights elided: small multiples of 1/3000, dominated by one entry of ~0.1303) …
 

... (81695 more chars truncated)
feedback
not signed
probmods2-mixture-models/ex1.b distribution 8 runs TV=1.00 TV<1
prompt
system base instructions used by every run
You are a WebPPL code generator. Given an exercise, produce a single WebPPL program that binds the answer to a top-level variable named `ANSWER`.

Answer format (strict): emit exactly one fenced code block.

```js
<your WebPPL program ending with: var ANSWER = <expression>;>
```

The last statement of your program MUST be `var ANSWER = <expression>;` where `<expression>` is the answer the prompt asks for - typically an `Infer({...}, model)` for a distribution, a numeric/array value, or an object literal `{key: value, ...}` for a record of multiple sub-answers.

Do not write prose, explanations, or multiple code blocks. Do not use `return` at the top level - WebPPL doesn't allow it (return is only for function bodies).
system WebPPL primer appended to system on `*-primer-*` runs only
WebPPL is a probabilistic programming language with JavaScript-like syntax. A few quirks to remember:

Control flow: WebPPL is a functional subset of JavaScript. There are no `for` or `while` loops. Iterate with `map(fn, list)`, `mapData({data: list}, fn)`, `repeat(n, fn)`, or recursion. `_.range(start, end)` produces integer ranges.

Functions: define with `var f = function(args) { ...; return value; }`. The last expression of a function is auto-returned only if it is a bare expression statement; otherwise use explicit `return`. Memoize with `mem(fn)` so repeated calls with the same arguments return the same value within an inference run.

Random primitives - lowercase samples directly, uppercase constructs a Distribution object. They are NOT interchangeable:

  flip(p)            -> boolean             (sample, no `sample()` needed)
  uniform(a, b)      -> number              (sample)
  gaussian(mu, sg)   -> number              (sample)
  beta(a, b)         -> number              (sample)
  dirichlet(alpha)   -> tensor              (sample; alpha must be a Vector)
  randomInteger(n)   -> int 0..n-1          (sample)
  uniformDrift({a, b, width})        -> sample  (drift kernel; do NOT wrap in sample())
  dirichletDrift({alpha, conc.})     -> sample  (drift kernel; do NOT wrap in sample())

Distribution *constructors* (used with `sample(D)`, `observe(D, val)`, or as return value of inference):
  Bernoulli({p})              Beta({a, b})              Gaussian({mu, sigma})
  Uniform({a, b})             Categorical({vs, ps})     Binomial({p, n})
  Dirichlet({alpha})          Multinomial({ps, n})      Poisson({mu})

Common gotchas:
- `Dirichlet({alpha: ...})` requires a Vector, not a JS array. Use `ones([n, 1])` or `Vector([1, 1, ...])`.
- WebPPL only supports `var`, not `let` / `const`.
- WebPPL is *single-assignment*: `var X = ...;` only. You can't declare `var X;` and assign later, and you can't reassign `X = ...` after the declaration. Use ternaries or recursion to express conditional bindings.
- Array methods like `.fill`, `.indexOf`, `.map`, `.forEach`, `.concat` may fail. Prefer `repeat(n, fn)`, `_.indexOf(arr, x)`, `map(fn, arr)`, `mapData({data: arr}, fn)`, and `arr1.concat(arr2)` only at the top of a returned expression.
- Always end top-level statements with `;`. WebPPL inherits JS ASI rules, so `var x = f()
[a, b]` parses as `var x = f()[a, b]` (subscript), not two statements.

Inference: `Infer({method: ..., samples: N, ...}, modelFn)` runs `modelFn` under the chosen method and returns a Distribution over its return value. Methods: `'enumerate'`, `'rejection'`, `'forward'`, `'MCMC'`, `'SMC'`. For MCMC, optional kernels: `kernel: {HMC: {steps, stepSize}}`; the drift kernels above can replace `uniform`/`dirichlet` calls inside the model.

Conditioning: `condition(bool)` zeros out worlds where bool is false. `observe(dist, value)` factors in `dist.score(value)`. `factor(score)` adds `score` to the log-probability directly.

Tensors / utilities: `Vector([a, b, ...])`, `T.get(vec, i)`, `ones([rows, cols])`, `_.range`, `_.flatten`, `_.zipObject`, `_.fromPairs`, `_.includes`, `_.parseInt`, `_.uniq`, `_.merge`. Arrays use `Array.isArray`, `Object.keys` works on plain objects.

Display: there's no `viz`, `print`, `display`, etc. that affects the answer - those are browser-only. The answer is the value of your program's last expression.
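The `'enumerate'` method described above computes exact posteriors by summing the weight of every execution path and renormalizing after `condition` zeroes out the rejected ones. A minimal Python sketch of that idea for a two-flip model (the helper here is illustrative, not WebPPL's API):

```python
from itertools import product

def enumerate_infer(paths):
    # paths yields (return_value, unnormalized_weight) for every execution
    # path; condition(false) corresponds to a zero weight.
    weights = {}
    for value, w in paths:
        weights[value] = weights.get(value, 0.0) + w
    z = sum(weights.values())
    return {v: w / z for v, w in weights.items()}

# Two fair coin flips, conditioned on at least one heads; query the first flip.
paths = []
for a, b in product([True, False], repeat=2):
    w = 0.25 if (a or b) else 0.0   # condition(a || b)
    paths.append((a, w))
post = enumerate_infer(paths)
assert abs(post[True] - 2/3) < 1e-12   # the classic 2/3 answer
```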
user message
Alien-planet setup with 10 aliens (three binary properties: antennae, green, blarghNoise). Two latent kinds of aliens with Beta(.5, .5) priors per property, 50/50 prior on group membership, mem'd group prototypes, MCMC + HMC inference (steps:10, stepSize:.01, samples:3000). Data:
var data = [
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: false, green: false, blarghNoise: false}
];
Extend the model with a new observation: you hear a 'blargh' sound from a crater but cannot see the alien. Add a latent mysteryGroup (50/50 over the two groups), sample its prototype, and observe blarghNoise=true for that prototype. End your program with the Infer(...) returning {group1, group2, mysteryGroup}.
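The Beta(.5, .5) priors in this setup are conjugate to the Bernoulli observations, which is what makes the per-property posteriors tractable. As a sanity check on the dataset (Python, under the simplifying assumption that all 10 aliens were pooled into a single group, which the actual two-group model does not do):

```python
from math import isclose

# Beta-Bernoulli conjugacy: Beta(a, b) prior on p plus k successes in n
# Bernoulli trials gives a Beta(a + k, b + n - k) posterior.
a, b = 0.5, 0.5
k, n = 5, 10          # the data: 5 all-true aliens, 5 all-false aliens
post_a, post_b = a + k, b + (n - k)
posterior_mean = post_a / (post_a + post_b)
assert isclose(posterior_mean, 0.5)   # pooled posterior mean sits at 5.5/11
```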
groundtruth code
var properties = ['antennae', 'green', 'blarghNoise'];
var data = [
  {antennae : false, green: false, blarghNoise: false},
  {antennae : true,  green: true,  blarghNoise: true},
  {antennae : true,  green: true,  blarghNoise: true},
  {antennae : true,  green: true,  blarghNoise: true},
  {antennae : false, green: false, blarghNoise: false},
  {antennae : true,  green: true,  blarghNoise: true},
  {antennae : false, green: false, blarghNoise: false},
  {antennae : true,  green: true,  blarghNoise: true},
  {antennae : false, green: false, blarghNoise: false},
  {antennae : false, green: false, blarghNoise: false}
];
var sampleGroupPrototype = mem(function(groupName) {
  var probs = repeat(3, function(){ beta(.5, .5)});
  return _.zipObject(properties, probs);
});
var ANSWER = (Infer({method: 'MCMC', kernel: {HMC: {steps: 10, stepSize: .01}}, samples: 3000},
      function(){
  mapData({data: data}, function(datum) {
    var group = flip() ? 'group1' : 'group2';
    var prototype = sampleGroupPrototype(group);
    mapData({data: properties}, function(property) {
      observe(Bernoulli({p: prototype[property]}), datum[property]);
    });
  });
  var mysteryGroup = flip() ? 'group1' : 'group2';
  var mysteryPrototype = sampleGroupPrototype(mysteryGroup);
  observe(Bernoulli({p: mysteryPrototype['blarghNoise']}), true);
  return {group1: sampleGroupPrototype('group1'),
          group2: sampleGroupPrototype('group2'),
          mysteryGroup: mysteryGroup};
}));
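The effect of `observe(Bernoulli({p: mysteryPrototype['blarghNoise']}), true)` on `mysteryGroup` is ordinary Bayesian updating of the 50/50 prior. A Python sketch with hypothetical fixed blargh probabilities for the two groups (the real model infers these jointly with the group assignments):

```python
# Assumed, not inferred, prototype blargh probabilities:
p_blargh = {'group1': 0.9, 'group2': 0.1}
prior = {'group1': 0.5, 'group2': 0.5}

# Bayes' rule: P(group | blargh) ∝ P(group) * P(blargh | group)
joint = {g: prior[g] * p_blargh[g] for g in prior}
z = sum(joint.values())
posterior = {g: joint[g] / z for g in joint}

assert abs(posterior['group1'] - 0.9) < 1e-12   # 0.45 / 0.50
```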
generated code
sonnet-46-primer-v3 TV<1 haiku-45-noprimer-v3 fail haiku-45-primer-v3 fail haiku-45-think-noprimer-v3 fail haiku-45-think-primer-v3 fail sonnet-46-noprimer-v3 TV<1 sonnet-46-think-noprimer-v3 TV<1 sonnet-46-think-primer-v3 TV<1
var data = [
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: false, green: false, blarghNoise: false}
];

var model = function() {
  // Group prototypes: for each group, sample Beta(.5,.5) for each property
  var groupPrototype = mem(function(group) {
    return {
      antennae: beta(0.5, 0.5),
      green: beta(0.5, 0.5),
      blarghNoise: beta(0.5, 0.5)
    };
  });

  // For each alien in data, assign to a group and observe properties
  mapData({data: data}, function(alien) {
    var group = flip(0.5) ? 1 : 2;
    var proto = groupPrototype(group);
    observe(Bernoulli({p: proto.antennae}), alien.antennae);
    observe(Bernoulli({p: proto.green}), alien.green);
    observe(Bernoulli({p: proto.blarghNoise}), alien.blarghNoise);
  });

  // Mystery alien: heard blargh but can't see it
  var mysteryGroup = flip(0.5) ? 1 : 2;
  var mysteryProto = groupPrototype(mysteryGroup);
  observe(Bernoulli({p: mysteryProto.blarghNoise}), true);

  var proto1 = groupPrototype(1);
  var proto2 = groupPrototype(2);

  return {
    group1: {
      antennae: proto1.antennae,
      green: proto1.green,
      blarghNoise: proto1.blarghNoise
    },
    group2: {
      antennae: proto2.antennae,
      green: proto2.green,
      blarghNoise: proto2.blarghNoise
    },
    mysteryGroup: mysteryGroup
  };
};

var ANSWER = Infer({
  method: 'MCMC',
  samples: 3000,
  kernel: {HMC: {steps: 10, stepSize: 0.01}}
}, model);
Error: Too many arguments. Distributions take at most one argument.
var data = [
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: false, green: false, blarghNoise: false}
];

var getGroupPrototype = mem(function(groupId) {
  return {
    antennae: sample(Beta(0.5, 0.5)),
    green: sample(Beta(0.5, 0.5)),
    blarghNoise: sample(Beta(0.5, 0.5))
  };
});

var model = function() {
  var group1 = getGroupPrototype(1);
  var group2 = getGroupPrototype(2);
  
  mapData({data: data}, function(alien) {
    var groupId = categorical({ps: [0.5, 0.5], vs: [1, 2]});
    var prototype = groupId === 1 ? group1 : group2;
    
    observe(Bernoulli(prototype.antennae), alien.antennae);
    observe(Bernoulli(prototype.green), alien.green);
    observe(Bernoulli(prototype.blarghNoise), alien.blarghNoise);
  });
  
  var mysteryGroupId = categorical({ps: [0.5, 0.5], vs: [1, 2]});
  var mysteryGroup = mysteryGroupId === 1 ? group1 : group2;
  observe(Bernoulli(mysteryGroup.blarghNoise), true);
  
  return {
    group1: group1,
    group2: group2,
    mysteryGroup: mysteryGroup
  };
};

var ANSWER = Infer(
  {method: "MCMC", kernel: {HMC: {steps: 10, stepSize: 0.01}}, samples: 3000},
  model
);
Error: Line 54: Did you mean var logProb = ?
var data = [
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: false, green: false, blarghNoise: false}
];

var properties = ['antennae', 'green', 'blarghNoise'];

var groupPrototype = mem(function(groupId) {
  var proto = {};
  mapData({data: properties}, function(prop) {
    proto[prop] = beta({a: 0.5, b: 0.5});
  });
  return proto;
});

var alienLikelihood = function(alien, prototype) {
  var logProb = 0;
  mapData({data: properties}, function(prop) {
    var p = prototype[prop];
    logProb = logProb + (alien[prop] ? Math.log(p) : Math.log(1 - p));
  });
  return logProb;
};

var model = function() {
  var group1 = groupPrototype(0);
  var group2 = groupPrototype(1);
  
  mapData({data: data}, function(alien) {
    var groupId = flip(0.5) ? 0 : 1;
    var prototype = groupId === 0 ? group1 : group2;
    factor(alienLikelihood(alien, prototype));
  });
  
  var mysteryGroup = flip(0.5) ? 0 : 1;
  var mysteryPrototype = mysteryGroup === 0 ? group1 : group2;
  observe(Bernoulli({p: mysteryPrototype.blarghNoise}), true);
  
  return {
    group1: group1,
    group2: group2,
    mysteryGroup: mysteryGroup
  };
};

var ANSWER = Infer(
  {
    method: 'MCMC',
    kernel: {HMC: {steps: 10, stepSize: 0.01}},
    samples: 3000
  },
  model
);
Error: Infer: 'mcmc' is not a valid method. The following methods are available: SMC, MCMC, PMCMC, asyncPF, rejection, enumerate, incrementalMH, forward, optimize, AIS, defaultInfer.
var data = [
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: false, green: false, blarghNoise: false}
];

var model = function() {
  // Memoized group prototypes
  var groupPrototype = mem(function(group) {
    return {
      antennae: beta(0.5, 0.5),
      green: beta(0.5, 0.5),
      blarghNoise: beta(0.5, 0.5)
    };
  });
  
  var group1 = groupPrototype(1);
  var group2 = groupPrototype(2);
  
  // Observe the 10 alien data points
  mapData({data: data}, function(datum) {
    var alienGroup = categorical({ps: [0.5, 0.5], vs: [1, 2]});
    var prototype = groupPrototype(alienGroup);
    
    observe(Bernoulli({p: prototype.antennae}), datum.antennae);
    observe(Bernoulli({p: prototype.green}), datum.green);
    observe(Bernoulli({p: prototype.blarghNoise}), datum.blarghNoise);
  });
  
  // Mystery alien: heard blargh but cannot see
  var mysteryGroup = categorical({ps: [0.5, 0.5], vs: [1, 2]});
  var mysteryPrototype = groupPrototype(mysteryGroup);
  observe(Bernoulli({p: mysteryPrototype.blarghNoise}), true);
  
  return {
    group1: group1,
    group2: group2,
    mysteryGroup: mysteryGroup
  };
};

var ANSWER = Infer(
  {method: 'mcmc', kernel: {HMC: {steps: 10, stepSize: 0.01}}, samples: 3000},
  model
);
Timeout after 60s
var data = [
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: false, green: false, blarghNoise: false}
];

var model = function() {
  // Memoized group prototypes with Beta(0.5, 0.5) priors per property
  var groupPrototype = mem(function(groupId) {
    return {
      antennae: beta(0.5, 0.5),
      green: beta(0.5, 0.5),
      blarghNoise: beta(0.5, 0.5)
    };
  });
  
  var group1 = groupPrototype(0);
  var group2 = groupPrototype(1);
  
  // Observe the 10 aliens with group assignments
  mapData({data: data}, function(alien) {
    var groupId = flip(0.5) ? 0 : 1;
    var prototype = groupId === 0 ? group1 : group2;
    
    observe(Bernoulli({p: prototype.antennae}), alien.antennae ? 1 : 0);
    observe(Bernoulli({p: prototype.green}), alien.green ? 1 : 0);
    observe(Bernoulli({p: prototype.blarghNoise}), alien.blarghNoise ? 1 : 0);
  });
  
  // Mystery alien: hear blargh but can't see it
  var mysteryGroupId = flip(0.5) ? 0 : 1;
  var mysteryPrototype = mysteryGroupId === 0 ? group1 : group2;
  observe(Bernoulli({p: mysteryPrototype.blarghNoise}), 1);
  
  return {
    group1: group1,
    group2: group2,
    mysteryGroup: mysteryGroupId
  };
};

var ANSWER = Infer({
  method: 'MCMC',
  samples: 3000,
  kernel: {HMC: {steps: 10, stepSize: 0.01}}
}, model);
var data = [
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: false, green: false, blarghNoise: false}
];

var model = function() {
  // Group prototypes: for each group, sample Beta(.5,.5) for each property
  var groupPrototype = mem(function(group) {
    return {
      antennae: beta(0.5, 0.5),
      green: beta(0.5, 0.5),
      blarghNoise: beta(0.5, 0.5)
    };
  });

  // Observe each alien in the data
  mapData({data: data}, function(alien) {
    // Sample group membership for this alien
    var group = flip(0.5) ? 1 : 2;
    var proto = groupPrototype(group);
    observe(Bernoulli({p: proto.antennae}), alien.antennae);
    observe(Bernoulli({p: proto.green}), alien.green);
    observe(Bernoulli({p: proto.blarghNoise}), alien.blarghNoise);
  });

  // Mystery alien: heard blargh noise but can't see it
  var mysteryGroup = flip(0.5) ? 1 : 2;
  var mysteryProto = groupPrototype(mysteryGroup);
  observe(Bernoulli({p: mysteryProto.blarghNoise}), true);

  var proto1 = groupPrototype(1);
  var proto2 = groupPrototype(2);

  return {
    group1: {
      antennae: proto1.antennae,
      green: proto1.green,
      blarghNoise: proto1.blarghNoise
    },
    group2: {
      antennae: proto2.antennae,
      green: proto2.green,
      blarghNoise: proto2.blarghNoise
    },
    mysteryGroup: mysteryGroup
  };
};

var ANSWER = Infer({
  method: 'MCMC',
  kernel: {HMC: {steps: 10, stepSize: 0.01}},
  samples: 3000
}, model);
var data = [
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: false, green: false, blarghNoise: false}
];

var model = function() {

  // Memoised per-group, per-property Beta(.5,.5) prototype weights
  var groupPrototype = mem(function(group, property) {
    return beta(0.5, 0.5);
  });

  // Memoised 50/50 group assignment for each observed alien
  var alienGroup = mem(function(i) {
    return flip(0.5) ? 1 : 2;
  });

  // Condition on all 10 observed aliens
  mapIndexed(function(i, alien) {
    var g = alienGroup(i);
    observe(Bernoulli({p: groupPrototype(g, 'antennae')}),   alien.antennae);
    observe(Bernoulli({p: groupPrototype(g, 'green')}),      alien.green);
    observe(Bernoulli({p: groupPrototype(g, 'blarghNoise')}), alien.blarghNoise);
  }, data);

  // Mystery alien: latent group, observe blarghNoise = true
  var mysteryGroup = flip(0.5) ? 1 : 2;
  observe(Bernoulli({p: groupPrototype(mysteryGroup, 'blarghNoise')}), true);

  return {
    group1: {
      antennae:    groupPrototype(1, 'antennae'),
      green:       groupPrototype(1, 'green'),
      blarghNoise: groupPrototype(1, 'blarghNoise')
    },
    group2: {
      antennae:    groupPrototype(2, 'antennae'),
      green:       groupPrototype(2, 'green'),
      blarghNoise: groupPrototype(2, 'blarghNoise')
    },
    mysteryGroup: mysteryGroup
  };
};

var ANSWER = Infer({
  method: 'MCMC',
  kernel: { HMC: { steps: 10, stepSize: 0.01 } },
  samples: 3000
}, model);
var data = [
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: true,  green: true,  blarghNoise: true},
  {antennae: false, green: false, blarghNoise: false},
  {antennae: false, green: false, blarghNoise: false}
];

var model = function() {

  // Mem'd group prototypes: two groups, three continuous Beta(.5,.5) properties each
  var groupPrototype = mem(function(group) {
    return {
      antennae:    beta(0.5, 0.5),
      green:       beta(0.5, 0.5),
      blarghNoise: beta(0.5, 0.5)
    };
  });

  // Observe each alien in the data
  mapData({data: data}, function(alien) {
    var group = flip(0.5) ? 1 : 2;
    var proto  = groupPrototype(group);
    observe(Bernoulli({p: proto.antennae}),    alien.antennae);
    observe(Bernoulli({p: proto.green}),       alien.green);
    observe(Bernoulli({p: proto.blarghNoise}), alien.blarghNoise);
  });

  // Mystery alien: heard blargh but alien is invisible
  var mysteryGroup = flip(0.5) ? 1 : 2;
  var mysteryProto = groupPrototype(mysteryGroup);
  observe(Bernoulli({p: mysteryProto.blarghNoise}), true);

  return {
    group1:       groupPrototype(1),
    group2:       groupPrototype(2),
    mysteryGroup: mysteryGroup
  };
};

var ANSWER = Infer({
  method: 'MCMC',
  kernel: { HMC: { steps: 10, stepSize: 0.01 } },
  samples: 3000
}, model);
groundtruth output
{"group1":{"antennae":0.0388987323799…0.0507
{"group1":{"antennae":0.0017869489723…0.0433
{"group1":{"antennae":0.0388987323799…0.0333
{"group1":{"antennae":0.0072867138426…0.0303
{"group1":{"antennae":0.0137240703587…0.0247
{"group1":{"antennae":0.0029449772063…0.0230
{"group1":{"antennae":0.0187876025654…0.0193
{"group1":{"antennae":0.0099347298344…0.0180
{"group1":{"antennae":0.0826757036952…0.0177
{"group1":{"antennae":0.0320260080856…0.0173
{"group1":{"antennae":0.0670978200531…0.0167
{"group1":{"antennae":0.0075057264635…0.0153
… 525 more
raw JSON
{
  "__kind": "distribution",
  "probs": [
    0.0003333333333333336,
    … (remaining sample weights elided: near-uniform small multiples of 1/3000) …

... (191853 more chars truncated)
feedback
not signed
probmods2-mixture-models/ex2.a distribution 8 runs TV=1.00 TV<1
prompt
system base instructions used by every run
You are a WebPPL code generator. Given an exercise, produce a single WebPPL program that binds the answer to a top-level variable named `ANSWER`.

Answer format (strict): emit exactly one fenced code block.

```js
<your WebPPL program ending with: var ANSWER = <expression>;>
```

The last statement of your program MUST be `var ANSWER = <expression>;` where `<expression>` is the answer the prompt asks for - typically an `Infer({...}, model)` for a distribution, a numeric/array value, or an object literal `{key: value, ...}` for a record of multiple sub-answers.

Do not write prose, explanations, or multiple code blocks. Do not use `return` at the top level - WebPPL doesn't allow it (return is only for function bodies).
system WebPPL primer appended to system on `*-primer-*` runs only
WebPPL is a probabilistic programming language with JavaScript-like syntax. A few quirks to remember:

Control flow: WebPPL is a functional subset of JavaScript. There are no `for` or `while` loops. Iterate with `map(fn, list)`, `mapData({data: list}, fn)`, `repeat(n, fn)`, or recursion. `_.range(start, end)` produces integer ranges.

Functions: define with `var f = function(args) { ...; return value; }`. The last expression of a function is auto-returned only if it is a bare expression statement; otherwise use explicit `return`. Memoize with `mem(fn)` so repeated calls with the same arguments return the same value within an inference run.

Random primitives - lowercase samples directly, uppercase constructs a Distribution object. They are NOT interchangeable:

  flip(p)            -> boolean             (sample, no `sample()` needed)
  uniform(a, b)      -> number              (sample)
  gaussian(mu, sg)   -> number              (sample)
  beta(a, b)         -> number              (sample)
  dirichlet(alpha)   -> tensor              (sample; alpha must be a Vector)
  randomInteger(n)   -> int 0..n-1          (sample)
  uniformDrift({a, b, width})        -> sample  (drift kernel; do NOT wrap in sample())
  dirichletDrift({alpha, conc.})     -> sample  (drift kernel; do NOT wrap in sample())

Distribution *constructors* (used with `sample(D)`, `observe(D, val)`, or as return value of inference):
  Bernoulli({p})              Beta({a, b})              Gaussian({mu, sigma})
  Uniform({a, b})             Categorical({vs, ps})     Binomial({p, n})
  Dirichlet({alpha})          Multinomial({ps, n})      Poisson({mu})

Common gotchas:
- `Dirichlet({alpha: ...})` requires a Vector, not a JS array. Use `ones([n, 1])` or `Vector([1, 1, ...])`.
- WebPPL only supports `var`, not `let` / `const`.
- WebPPL is *single-assignment*: `var X = ...;` only. You can't declare `var X;` and assign later, and you can't reassign `X = ...` after the declaration. Use ternaries or recursion to express conditional bindings.
- Array methods like `.fill`, `.indexOf`, `.map`, `.forEach`, `.concat` may fail. Prefer `repeat(n, fn)`, `_.indexOf(arr, x)`, `map(fn, arr)`, `mapData({data: arr}, fn)`, and `arr1.concat(arr2)` only at the top of a returned expression.
- Always end top-level statements with `;`. WebPPL inherits JS ASI rules, so `var x = f()
[a, b]` parses as `var x = f()[a, b]` (subscript), not two statements.

Inference: `Infer({method: ..., samples: N, ...}, modelFn)` runs `modelFn` under the chosen method and returns a Distribution over its return value. Methods: `'enumerate'`, `'rejection'`, `'forward'`, `'MCMC'`, `'SMC'`. For MCMC, optional kernels: `kernel: {HMC: {steps, stepSize}}`; the drift kernels above can replace `uniform`/`dirichlet` calls inside the model.

Conditioning: `condition(bool)` zeros out worlds where bool is false. `observe(dist, value)` factors in `dist.score(value)`. `factor(score)` adds `score` to the log-probability directly.

Tensors / utilities: `Vector([a, b, ...])`, `T.get(vec, i)`, `ones([rows, cols])`, `_.range`, `_.flatten`, `_.zipObject`, `_.fromPairs`, `_.includes`, `_.parseInt`, `_.uniq`, `_.merge`. Arrays use `Array.isArray`, `Object.keys` works on plain objects.

Display: there's no `viz`, `print`, `display`, etc. that affects the answer - those are browser-only. The answer is the value of your program's last expression.
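The ASI gotcha called out in the primer can be reproduced in plain Node.js (ordinary JavaScript, not WebPPL — the sketch below only illustrates the parse, and `f`, `a`, `b` are made-up names):

```javascript
// Plain-JS (not WebPPL) reproduction of the ASI gotcha from the primer:
// a newline does NOT end a statement when the next line starts with '['.
function f() { return [10, 20, 30]; }

var a = 0, b = 1;

// Intended as two statements, but parses as one expression:
// f()[a, b] -> comma operator -> f()[b] -> f()[1]
var x = f()
[a, b]

console.log(x); // 20, not the array [10, 20, 30]
```

The same parse applies in WebPPL, which is why the primer insists on terminating every top-level statement with `;`.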
user message
Suppose 22 participants take a memory test scored 0..45. Some are bona fide; others are malingerers (deliberately scoring low). Implement a mixture model inferring which group each participant belongs to. Fill in the blanks and end with the Infer(...):
var scores = [45, 45, 44, 45, 44, 45, 45, 45, 45, 45, 30, 20, 6, 44, 44, 27, 25, 17, 14, 27, 35, 30]
var subjIDs = _.range(scores.length)
var data = map(function(datum) {return _.zipObject(['subjID', 'score'], datum)}, _.zip(subjIDs, scores));

Infer({method: 'MCMC', samples: 10000}, function() {
  // Your code here: define group success probs, per-participant group membership
  var obsFn = function(datum){
    observe(// Your code here: Binomial({p, n: 45}))
  }
  mapData({data: data}, obsFn)
  // Your code here
  return // Your code here
});
Use Binomial({p, n: 45}) for each participant's score, with two latent group success probabilities: uniform(0.5, 1) for the bona-fide group and uniform(0, group_1_p) for malingerers, ensuring p_malingerer < p_bona_fide. Return participant memberships plus group rates, using the strings 'group1' (bona fide) and 'group2' (malingerer) as group labels in your output.
groundtruth code
var scores = [45, 45, 44, 45, 44, 45, 45, 45, 45, 45, 30, 20, 6, 44, 44, 27, 25, 17, 14, 27, 35, 30];
var subjIDs = _.range(scores.length);
var data = map(function(datum) {return _.zipObject(['subjID', 'score'], datum)}, _.zip(subjIDs, scores));
var ANSWER = (Infer({method: 'MCMC', samples: 10000}, function() {
  var group_1_p = uniform(0.5, 1);
  var group_2_p = uniform(0, group_1_p);
  var participant2Group = mem(function(participantID) {
    return flip() ? 'group1' : 'group2';
  });
  var group2Prob = mem(function(group) {
    return group == 'group1' ? group_1_p : group_2_p;
  });

  var obsFn = function(datum){
    var p = group2Prob(participant2Group(datum.subjID));
    observe(Binomial({p: p, n: 45}), datum.score);
  };
  mapData({data: data}, obsFn);

  var participantResults_ = map(function(datum) {return participant2Group(datum.subjID)}, data);
  var participantResults = _.zipObject(_.range(participantResults_.length), participantResults_);
  return _.merge(participantResults, {group_1_p: group_1_p, group_2_p: group_2_p});
}));
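As a sanity check on the likelihood the groundtruth model uses, the per-participant membership posterior can be computed by hand once the two group rates are frozen. The sketch below is plain JavaScript (not WebPPL), and the rates `p1 = 0.95`, `p2 = 0.4` are illustrative assumptions, not inferred values:

```javascript
// Plain-JS sketch of the per-participant membership posterior the mixture
// model computes, with the two group rates FROZEN at illustrative values
// (in the real model they are latent and inferred jointly).
function binomialPmf(k, n, p) {
  // C(n, k) * p^k * (1-p)^(n-k), with C(n, k) accumulated in log space
  var logC = 0;
  for (var i = 0; i < k; i++) {
    logC += Math.log(n - i) - Math.log(i + 1);
  }
  return Math.exp(logC + k * Math.log(p) + (n - k) * Math.log(1 - p));
}

// P(group1 | score) under the flip(0.5) membership prior, n = 45 trials
function posteriorGroup1(score, p1, p2) {
  var l1 = binomialPmf(score, 45, p1);
  var l2 = binomialPmf(score, 45, p2);
  return l1 / (l1 + l2); // the 0.5 prior cancels
}

// Illustrative rates only (hypothetical, not the inferred posterior means):
var p1 = 0.95, p2 = 0.4;
console.log(posteriorGroup1(45, p1, p2)); // near 1: looks bona fide
console.log(posteriorGroup1(6,  p1, p2)); // near 0: looks like malingering
```

This is why the posterior support above cleanly splits the perfect scorers from the low scorers even before the group rates converge.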
generated code
sonnet-46-primer-v3 TV<1 haiku-45-noprimer-v3 TV<1 haiku-45-primer-v3 TV<1 haiku-45-think-noprimer-v3 TV<1 haiku-45-think-primer-v3 TV<1 sonnet-46-noprimer-v3 TV<1 sonnet-46-think-noprimer-v3 TV<1 sonnet-46-think-primer-v3 TV<1
var scores = [45, 45, 44, 45, 44, 45, 45, 45, 45, 45, 30, 20, 6, 44, 44, 27, 25, 17, 14, 27, 35, 30];
var subjIDs = _.range(scores.length);
var data = map(function(datum) { return _.zipObject(['subjID', 'score'], datum); }, _.zip(subjIDs, scores));

var ANSWER = Infer({method: 'MCMC', samples: 10000}, function() {
  // Group success probabilities
  var group_1_p = uniform(0.5, 1);       // bona fide
  var group_2_p = uniform(0, group_1_p); // malingerer

  // Per-participant group membership
  var groupMembership = map(function(id) {
    return flip(0.5) ? 'group1' : 'group2';
  }, subjIDs);

  var obsFn = function(datum) {
    var id = datum.subjID;
    var score = datum.score;
    var group = groupMembership[id];
    var p = group === 'group1' ? group_1_p : group_2_p;
    observe(Binomial({p: p, n: 45}), score);
  };

  mapData({data: data}, obsFn);

  return {
    group_1_p: group_1_p,
    group_2_p: group_2_p,
    memberships: groupMembership
  };
});
var scores = [45, 45, 44, 45, 44, 45, 45, 45, 45, 45, 30, 20, 6, 44, 44, 27, 25, 17, 14, 27, 35, 30]
var subjIDs = _.range(scores.length)
var data = map(function(datum) {return _.zipObject(['subjID', 'score'], datum)}, _.zip(subjIDs, scores));

var ANSWER = Infer({method: 'MCMC', samples: 10000}, function() {
  // Define group success probabilities
  var p_bonafide = uniform(0.5, 1);
  var p_malingerer = uniform(0, p_bonafide);
  
  // Per-participant group membership
  var groupAssignments = map(function(subj) {
    return categorical({ps: [0.5, 0.5], vs: ['group1', 'group2']});
  }, subjIDs);
  
  // Observation function
  var obsFn = function(datum) {
    var group = groupAssignments[datum.subjID];
    var p = (group === 'group1') ? p_bonafide : p_malingerer;
    observe(Binomial({p: p, n: 45}), datum.score);
  };
  
  mapData({data: data}, obsFn);
  
  // Return participant memberships plus group rates
  return {
    groupAssignments: groupAssignments,
    p_bonafide: p_bonafide,
    p_malingerer: p_malingerer
  };
});
var scores = [45, 45, 44, 45, 44, 45, 45, 45, 45, 45, 30, 20, 6, 44, 44, 27, 25, 17, 14, 27, 35, 30]
var subjIDs = _.range(scores.length)
var data = map(function(datum) {return _.zipObject(['subjID', 'score'], datum)}, _.zip(subjIDs, scores));

var ANSWER = Infer({method: 'MCMC', samples: 10000}, function() {
  // Define group success probabilities
  var p_group1 = uniform(0.5, 1);  // bona fide group
  var p_group2 = uniform(0, p_group1);  // malingerer group (constrained to be lower)
  
  // Per-participant group membership
  var groupAssignment = map(function(i) {
    return flip(0.5) ? 'group1' : 'group2';
  }, subjIDs);
  
  // Observation function
  var obsFn = function(datum) {
    var subjID = datum.subjID;
    var score = datum.score;
    var group = groupAssignment[subjID];
    var p = group === 'group1' ? p_group1 : p_group2;
    observe(Binomial({p: p, n: 45}), score);
  };
  
  mapData({data: data}, obsFn);
  
  // Return participant memberships plus group rates
  return {
    groupAssignments: groupAssignment,
    p_bona_fide: p_group1,
    p_malingerer: p_group2
  };
});
var scores = [45, 45, 44, 45, 44, 45, 45, 45, 45, 45, 30, 20, 6, 44, 44, 27, 25, 17, 14, 27, 35, 30]
var subjIDs = _.range(scores.length)
var data = map(function(datum) {return _.zipObject(['subjID', 'score'], datum)}, _.zip(subjIDs, scores));

var ANSWER = Infer({method: 'MCMC', samples: 10000}, function() {
  // Define group success probs
  var p_group1 = uniform(0.5, 1);
  var p_group2 = uniform(0, p_group1);
  
  // Per-participant group membership
  var groups = map(function(subjID) {
    return categorical({ps: [0.5, 0.5], vs: ['group1', 'group2']});
  }, subjIDs);
  
  var obsFn = function(datum){
    var p = groups[datum.subjID] === 'group1' ? p_group1 : p_group2;
    observe(Binomial({p: p, n: 45}), datum.score);
  }
  mapData({data: data}, obsFn)
  
  return {
    group_memberships: groups,
    p_group1: p_group1,
    p_group2: p_group2
  };
});
var scores = [45, 45, 44, 45, 44, 45, 45, 45, 45, 45, 30, 20, 6, 44, 44, 27, 25, 17, 14, 27, 35, 30]
var subjIDs = _.range(scores.length)
var data = map(function(datum) {return _.zipObject(['subjID', 'score'], datum)}, _.zip(subjIDs, scores));

var ANSWER = Infer({method: 'MCMC', samples: 10000}, function() {
  // Sample group success probabilities
  var p_group1 = uniform(0.5, 1);  // bona fide group
  var p_group2 = uniform(0, p_group1);  // malingerers (lower than bona fide)
  
  // Sample prior probability of being bona fide
  var prior_bona_fide = uniform(0, 1);
  
  // For each participant, infer group membership and observe score
  var groupAssignments = map(function(datum){
    var group = categorical({vs: ['group1', 'group2'], ps: [prior_bona_fide, 1 - prior_bona_fide]});
    var p = group === 'group1' ? p_group1 : p_group2;
    observe(Binomial({p: p, n: 45}), datum.score);
    return {subjID: datum.subjID, group: group};
  }, data);
  
  return {
    participant_groups: groupAssignments,
    p_bona_fide: p_group1,
    p_malingerer: p_group2,
    prior_prob_bona_fide: prior_bona_fide
  };
});
var scores = [45, 45, 44, 45, 44, 45, 45, 45, 45, 45, 30, 20, 6, 44, 44, 27, 25, 17, 14, 27, 35, 30]
var subjIDs = _.range(scores.length)
var data = map(function(datum) {return _.zipObject(['subjID', 'score'], datum)}, _.zip(subjIDs, scores));

var ANSWER = Infer({method: 'MCMC', samples: 10000}, function() {
  // Group success probabilities
  var group1_p = uniform(0.5, 1);        // bona fide
  var group2_p = uniform(0, group1_p);   // malingerer, constrained < group1_p

  // Mixture weight: probability of being bona fide
  var pGroup1 = uniform(0, 1);

  // Per-participant group membership
  var memberships = map(function(subjID) {
    return flip(pGroup1) ? 'group1' : 'group2';
  }, subjIDs);

  var obsFn = function(datum) {
    var subjID = datum.subjID;
    var score = datum.score;
    var group = memberships[subjID];
    var p = group === 'group1' ? group1_p : group2_p;
    observe(Binomial({p: p, n: 45}), score);
  };

  mapData({data: data}, obsFn);

  return {
    group1_rate: group1_p,
    group2_rate: group2_p,
    memberships: memberships
  };
});
var scores = [45, 45, 44, 45, 44, 45, 45, 45, 45, 45, 30, 20, 6, 44, 44, 27, 25, 17, 14, 27, 35, 30]
var subjIDs = _.range(scores.length)
var data = map(function(datum) {return _.zipObject(['subjID', 'score'], datum)}, _.zip(subjIDs, scores));

var ANSWER = Infer({method: 'MCMC', samples: 10000}, function() {
  // Group success probabilities
  var group1_p = uniform(0.5, 1);          // bona fide
  var group2_p = uniform(0, group1_p);     // malingerer (constrained < group1_p)

  // Prior on group membership (mixing weight)
  var pGroup1 = uniform(0, 1);

  // Per-participant group membership
  var memberships = map(function(id) {
    return flip(pGroup1) ? 'group1' : 'group2';
  }, subjIDs);

  var obsFn = function(datum) {
    var subj = datum.subjID;
    var group = memberships[subj];
    var p = group === 'group1' ? group1_p : group2_p;
    observe(Binomial({p: p, n: 45}), datum.score);
  };

  mapData({data: data}, obsFn);

  return {
    group1_p: group1_p,
    group2_p: group2_p,
    memberships: memberships
  };
});
var scores = [45, 45, 44, 45, 44, 45, 45, 45, 45, 45, 30, 20, 6, 44, 44, 27, 25, 17, 14, 27, 35, 30];
var subjIDs = _.range(scores.length);
var data = map(function(datum) {
  return _.zipObject(['subjID', 'score'], datum);
}, _.zip(subjIDs, scores));

var ANSWER = Infer({method: 'MCMC', samples: 10000}, function() {
  // Latent group success probabilities
  var group1_p = uniform(0.5, 1);          // bona-fide group
  var group2_p = uniform(0, group1_p);     // malingerer group (always < group1_p)

  // Per-participant group membership (memoized so each ID gets one stable draw)
  var groupOf = mem(function(id) {
    return flip(0.5) ? 'group1' : 'group2';
  });

  var obsFn = function(datum) {
    var group = groupOf(datum.subjID);
    var p = group === 'group1' ? group1_p : group2_p;
    observe(Binomial({p: p, n: 45}), datum.score);
  };

  mapData({data: data}, obsFn);

  return {
    group1_p: group1_p,
    group2_p: group2_p,
    memberships: map(function(id) { return groupOf(id); }, subjIDs)
  };
});
groundtruth output
{"0":"group1","1":"group1","2":"group…0.0858
{"0":"group1","1":"group1","2":"group…0.0732
{"0":"group1","1":"group1","2":"group…0.0647
{"0":"group1","1":"group1","2":"group…0.0630
{"0":"group1","1":"group1","2":"group…0.0530
{"0":"group1","1":"group1","2":"group…0.0472
{"0":"group1","1":"group1","2":"group…0.0422
{"0":"group1","1":"group1","2":"group…0.0405
{"0":"group1","1":"group1","2":"group…0.0356
{"0":"group1","1":"group1","2":"group…0.0345
{"0":"group1","1":"group1","2":"group…0.0337
{"0":"group1","1":"group1","2":"group…0.0332
… 80 more
raw JSON
{
  "__kind": "distribution",
  "probs": [
    … (92 posterior probabilities elided, ranging from 0.0001 to 0.0858) …
  ],
  "support": [
    {
      "0": "group2",
      "1": "group1",
      "2": "group2",
      "3": "group2",
      "4": "group2",
      "5": "group1",
      "6": "group1",
      "7": "group2",
      "8": "group1",
      "9": "group2",
      "10": "group2",
      "11": "group1",
      "12": "group2",
      "13": "group1",
      "14": "group2",
      "15": "group2",
      "16": "group2",
      "17": "group1",
      "18": "group1",
      "19": "group1",
      "20": "group2",
      "21": "group2",
      "group_1_p": 0.500817096983986,
      "group_2_p": 0.4689940464253788
    },
    … (further support entries with similar membership assignments elided) …

... (50267 more chars truncated)
feedback
not signed
occams-razor 5 atoms 2✓ · 2✗
probmods2-occams-razor/ex1.2 distribution 8 runs TV=0.00 TV=0
prompt
system base instructions used by every run
(identical to the base instructions shown above)
system WebPPL primer appended to system on `*-primer-*` runs only
(identical to the primer shown above)
user message
Number-game model. The hypothesis space contains rule-based concepts (multiples_of_N, powers_of_N for N=1..11; evens; odds) over the integers in [1, 20]. Implement similarity-based interval hypotheses (integers in [a, b]) so that: - genSetFromInterval(a, b) returns all integers from a through b inclusive, - makeIntervalHypothesisSpace(start, end) returns names 'interval_a_b' for every (a, b) with start <= a < b <= end, - getSetFromHypothesis(rule) dispatches on the rule's prefix to produce the set of integers it denotes (handling multiples_, powers_, evens, odds, and interval_). These helpers are given (include them or equivalent in your program):
var maxNumber = 20;
var filterByInRange = function(set) {
  var inRange = function(v) { v <= maxNumber && v >= 0 };
  return _.uniq(filter(inRange, set));
};
var genEvens = function() {
  return filter(function(v) { return v % 2 == 0 }, _.range(1, maxNumber));
};
var genOdds = function() {
  return filter(function(v) { return (v + 1) % 2 == 0 }, _.range(1, maxNumber));
};
var genMultiples = function(base) {
  var multiples = map(function(v) { return base * v }, _.range(maxNumber));
  return filterByInRange(multiples);
};
var genPowers = function(base) {
  var powers = map(function(v) { return Math.pow(base, v) }, _.range(maxNumber));
  return filterByInRange(powers);
};
var inSet = function(val, set) { return _.includes(set, val); };
var makeRuleHypothesisSpace = function() {
  var multipleRules = map(function(b) { return 'multiples_of_' + b }, _.range(1, 12));
  var powerRules    = map(function(b) { return 'powers_of_' + b },    _.range(1, 12));
  return multipleRules.concat(powerRules).concat(['evens', 'odds']);
};
Build learnConcept(examples, testQuery) that mixes rule and interval hypotheses 50/50 in the prior, draws a hypothesis, observes each example via Categorical({vs: set}), and returns {hypothesis, testQueryResponse: inSet(testQuery, set)}. End your program with learnConcept([3, 10], 12) returning the joint posterior.
groundtruth code
var maxNumber = 20;
var filterByInRange =  function(set) {
  var inRange = function(v) {v <= maxNumber && v >= 0};
  return _.uniq(filter(inRange, set));
};
var genEvens = function() {
  return filter(function(v) {return v % 2 == 0}, _.range(1, maxNumber));
};
var genOdds = function() {
  return filter(function(v) {return (v + 1) % 2 == 0}, _.range(1, maxNumber));
};
var genMultiples = function(base) {
  var multiples = map(function(v) {return base * v}, _.range(maxNumber));
  return filterByInRange(multiples);
};
var genPowers = function(base) {
  var powers = map(function(v) {return Math.pow(base, v)}, _.range(maxNumber));
  return filterByInRange(powers);
};
var inSet = function(val, set) { return _.includes(set, val); };
var makeRuleHypothesisSpace = function() {
  var multipleRules = map(function(base) {return 'multiples_of_' + base}, _.range(1, 12));
  var powerRules = map(function(base) {return 'powers_of_' + base}, _.range(1, 12));
  return multipleRules.concat(powerRules).concat(['evens', 'odds']);
};
var genSetFromInterval = function(a, b) { return _.range(a, b+1); };

var makeIntervalHypothesisSpace = function(start, end) {
  var allIntervals = _.flatten(map(function(s) {
    return map(function(e) { [s, e] }, genSetFromInterval(s+1, end));
  }, genSetFromInterval(start, end)));
  return map(function(x) { 'interval_' + x[0] + '_' + x[1] }, allIntervals);
};

var getSetFromHypothesis = function(rule) {
  var parts = rule.split('_');
  return (parts[0] == 'multiples' ? genMultiples(_.parseInt(parts[2])) :
          parts[0] == 'powers' ? genPowers(_.parseInt(parts[2])) :
          parts[0] == 'evens' ? genEvens() :
          parts[0] == 'odds' ? genOdds() :
          parts[0] == 'interval' ? genSetFromInterval(_.parseInt(parts[1]), _.parseInt(parts[2])) :
          console.error('unknown rule' + rule));
};

var learnConcept = function(examples, testQuery) {
  return Infer({method: 'enumerate'}, function() {
    var rules = makeRuleHypothesisSpace();
    var intervals = makeIntervalHypothesisSpace(1, maxNumber);
    var hypothesis = flip(0.5) ? uniformDraw(rules) : uniformDraw(intervals);
    var set = getSetFromHypothesis(hypothesis);
    mapData({data: examples}, function(example) {
      observe(Categorical({vs: set}), example);
    });
    return {hypothesis: hypothesis,
            testQueryResponse: inSet(testQuery, set)};
  });
};
var ANSWER = (learnConcept([3, 10], 12));
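The reason this posterior concentrates on tight intervals is the size principle: `observe(Categorical({vs: set}), x)` contributes likelihood `1/|set|` per consistent example. A plain JavaScript (not WebPPL) sketch of that arithmetic, with set sizes read off the helpers above (`interval_3_10` denotes 8 integers; `multiples_of_1` denotes 20):

```javascript
// Plain-JS sketch (not WebPPL) of the size principle behind the number game.
// observe(Categorical({vs: set}), x) scores each consistent example as
// 1/|set|, so smaller sets that still contain the data are preferred.

// Interval hypotheses 'interval_a_b' with start <= a < b <= end:
function countIntervals(start, end) {
  var count = 0;
  for (var a = start; a < end; a++) {
    for (var b = a + 1; b <= end; b++) { count++; }
  }
  return count;
}
console.log(countIntervals(1, 20)); // 190 interval hypotheses

// Likelihood of n consistent examples under a hypothesis of a given set size:
function score(setSize, nExamples) {
  return Math.pow(1 / setSize, nExamples);
}

// Examples [3, 10]: 'interval_3_10' has 8 elements, 'multiples_of_1' has 20
// (sizes per the helpers above), so the tight interval is favored ~6.25x:
console.log(score(8, 2) / score(20, 2)); // ≈ 6.25
```

With only two examples the advantage is modest, which is why the posterior still spreads over many intervals containing both 3 and 10.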
generated code
sonnet-46-primer-v3 TV=0 haiku-45-noprimer-v3 fail haiku-45-primer-v3 fail haiku-45-think-noprimer-v3 fail haiku-45-think-primer-v3 fail sonnet-46-noprimer-v3 fail sonnet-46-think-noprimer-v3 fail sonnet-46-think-primer-v3 TV=0
var maxNumber = 20;

var filterByInRange = function(set) {
  var inRange = function(v) { return v <= maxNumber && v >= 1; };
  return _.uniq(filter(inRange, set));
};

var genEvens = function() {
  return filter(function(v) { return v % 2 == 0; }, _.range(1, maxNumber + 1));
};

var genOdds = function() {
  return filter(function(v) { return (v + 1) % 2 == 0; }, _.range(1, maxNumber + 1));
};

var genMultiples = function(base) {
  var multiples = map(function(v) { return base * v; }, _.range(1, maxNumber + 1));
  return filterByInRange(multiples);
};

var genPowers = function(base) {
  var powers = map(function(v) { return Math.pow(base, v); }, _.range(0, maxNumber + 1));
  return filterByInRange(powers);
};

var inSet = function(val, set) { return _.includes(set, val); };

var makeRuleHypothesisSpace = function() {
  var multipleRules = map(function(b) { return 'multiples_of_' + b; }, _.range(1, 12));
  var powerRules    = map(function(b) { return 'powers_of_' + b; },    _.range(1, 12));
  return multipleRules.concat(powerRules).concat(['evens', 'odds']);
};

var genSetFromInterval = function(a, b) {
  return _.range(a, b + 1);
};

var makeIntervalHypothesisSpace = function(start, end) {
  var pairs = [];
  var as = _.range(start, end);
  return _.flatten(map(function(a) {
    var bs = _.range(a + 1, end + 1);
    return map(function(b) {
      return 'interval_' + a + '_' + b;
    }, bs);
  }, as));
};

var getSetFromHypothesis = function(rule) {
  var isMultiples = rule.indexOf('multiples_of_') === 0;
  var isPowers    = rule.indexOf('powers_of_') === 0;
  var isInterval  = rule.indexOf('interval_') === 0;
  return isMultiples ? (function() {
    var base = _.parseInt(rule.slice('multiples_of_'.length));
    return genMultiples(base);
  })() :
  isPowers ? (function() {
    var base = _.parseInt(rule.slice('powers_of_'.length));
    return genPowers(base);
  })() :
  isInterval ? (function() {
    var rest = rule.slice('interval_'.length);
    // rest is like "a_b"
    // find the underscore separating a and b
    // a and b could be multi-digit, split on first underscore
    var firstUnderscore = rest.indexOf('_');
    var a = _.parseInt(rest.slice(0, firstUnderscore));
    var b = _.parseInt(rest.slice(firstUnderscore + 1));
    return genSetFromInterval(a, b);
  })() :
  rule === 'evens' ? genEvens() :
  rule === 'odds'  ? genOdds()  :
  [];
};

var ruleHypotheses     = makeRuleHypothesisSpace();
var intervalHypotheses = makeIntervalHypothesisSpace(1, maxNumber);

var numRules     = ruleHypotheses.length;
var numIntervals = intervalHypotheses.length;

var learnConcept = function(examples, testQuery) {
  return Infer({method: 'enumerate'}, function() {
    // 50/50 prior between rule-based and interval hypotheses
    var useRule = flip(0.5);
    var hypothesis = useRule
      ? ruleHypotheses[randomInteger(numRules)]
      : intervalHypotheses[randomInteger(numIntervals)];

    var set = getSetFromHypothesis(hypothesis);

    // Each example is observed as drawn uniformly from the set
    var setSize = set.length;
    condition(setSize > 0);

    // Observe each example
    mapData({data: examples}, function(ex) {
      observe(Categorical({vs: set}), ex);
    });

    var testQueryResponse = inSet(testQuery, set);

    return {hypothesis: hypothesis, testQueryResponse: testQueryResponse};
  });
};

var ANSWER = learnConcept([3, 10], 12);
Error: cpsInnerStatement — WebPPL's CPS transform rejects imperative statements; `for`/`while` loops and `Array.push` (used in several runs here to build the interval space) are the usual culprits
var maxNumber = 20;

var filterByInRange = function(set) {
  var inRange = function(v) { return v <= maxNumber && v >= 1; };
  return _.uniq(filter(inRange, set));
};

var genEvens = function() {
  return filter(function(v) { return v % 2 == 0 }, _.range(1, maxNumber + 1));
};

var genOdds = function() {
  return filter(function(v) { return v % 2 == 1 }, _.range(1, maxNumber + 1));
};

var genMultiples = function(base) {
  var multiples = map(function(v) { return base * v }, _.range(1, maxNumber + 1));
  return filterByInRange(multiples);
};

var genPowers = function(base) {
  var powers = map(function(v) { return Math.pow(base, v) }, _.range(0, maxNumber));
  return filterByInRange(powers);
};

var inSet = function(val, set) { 
  return _.includes(set, val); 
};

var makeRuleHypothesisSpace = function() {
  var multipleRules = map(function(b) { return 'multiples_of_' + b }, _.range(1, 12));
  var powerRules = map(function(b) { return 'powers_of_' + b }, _.range(1, 12));
  return multipleRules.concat(powerRules).concat(['evens', 'odds']);
};

var genSetFromInterval = function(a, b) {
  return _.range(a, b + 1);
};

var makeIntervalHypothesisSpace = function(start, end) {
  var intervals = [];
  for (var a = start; a < end; a++) {
    for (var b = a + 1; b <= end; b++) {
      intervals.push('interval_' + a + '_' + b);
    }
  }
  return intervals;
};

var getSetFromHypothesis = function(rule) {
  if (rule.indexOf('multiples_of_') === 0) {
    var base = parseInt(rule.split('_').pop());
    return genMultiples(base);
  } else if (rule.indexOf('powers_of_') === 0) {
    var base = parseInt(rule.split('_').pop());
    return genPowers(base);
  } else if (rule === 'evens') {
    return genEvens();
  } else if (rule === 'odds') {
    return genOdds();
  } else if (rule.indexOf('interval_') === 0) {
    var parts = rule.split('_');
    var a = parseInt(parts[1]);
    var b = parseInt(parts[2]);
    return genSetFromInterval(a, b);
  }
  return [];
};

var learnConcept = function(examples, testQuery) {
  return Infer({method: 'enumerate'}, function() {
    var ruleHypotheses = makeRuleHypothesisSpace();
    var intervalHypotheses = makeIntervalHypothesisSpace(1, maxNumber);
    var allHypotheses = ruleHypotheses.concat(intervalHypotheses);
    
    var hypothesis = categorical({
      vs: allHypotheses,
      ps: map(function(h) {
        return 0.5 / ruleHypotheses.length + 0.5 / intervalHypotheses.length;
      }, allHypotheses)
    });
    
    var set = getSetFromHypothesis(hypothesis);
    
    mapData({data: examples}, function(example) {
      observe(Categorical({vs: set, ps: map(function() { return 1; }, set)}), example);
    });
    
    var testQueryResponse = inSet(testQuery, set);
    
    return {hypothesis: hypothesis, testQueryResponse: testQueryResponse};
  });
};

var ANSWER = learnConcept([3, 10], 12);
Error: cpsInnerStatement
var maxNumber = 20;

var filterByInRange = function(set) {
  var inRange = function(v) { return v <= maxNumber && v >= 1; };
  return _.uniq(filter(inRange, set));
};

var genEvens = function() {
  return filter(function(v) { return v % 2 == 0; }, _.range(1, maxNumber + 1));
};

var genOdds = function() {
  return filter(function(v) { return v % 2 == 1; }, _.range(1, maxNumber + 1));
};

var genMultiples = function(base) {
  var multiples = map(function(v) { return base * v; }, _.range(0, maxNumber + 1));
  return filterByInRange(multiples);
};

var genPowers = function(base) {
  var powers = map(function(v) { return Math.pow(base, v); }, _.range(0, maxNumber + 1));
  return filterByInRange(powers);
};

var inSet = function(val, set) { return _.includes(set, val); };

var makeRuleHypothesisSpace = function() {
  var multipleRules = map(function(b) { return 'multiples_of_' + b; }, _.range(1, 12));
  var powerRules = map(function(b) { return 'powers_of_' + b; }, _.range(1, 12));
  return multipleRules.concat(powerRules).concat(['evens', 'odds']);
};

var genSetFromInterval = function(a, b) {
  return filter(function(v) { return v >= a && v <= b; }, _.range(1, maxNumber + 1));
};

var makeIntervalHypothesisSpace = function(start, end) {
  var intervals = [];
  var i = start;
  while (i < end) {
    var j = i + 1;
    while (j <= end) {
      intervals.push('interval_' + i + '_' + j);
      j = j + 1;
    }
    i = i + 1;
  }
  return intervals;
};

var getSetFromHypothesis = function(rule) {
  if (rule === 'evens') {
    return genEvens();
  } else if (rule === 'odds') {
    return genOdds();
  } else if (rule.substring(0, 10) === 'multiples_') {
    var base = _.parseInt(rule.substring(10));
    return genMultiples(base);
  } else if (rule.substring(0, 8) === 'powers_of_') {
    var base = _.parseInt(rule.substring(10));
    return genPowers(base);
  } else if (rule.substring(0, 9) === 'interval_') {
    var parts = rule.substring(9).split('_');
    var a = _.parseInt(parts[0]);
    var b = _.parseInt(parts[1]);
    return genSetFromInterval(a, b);
  } else {
    return [];
  }
};

var learnConcept = function(examples, testQuery) {
  return Infer({method: 'enumerate'}, function() {
    var ruleHypotheses = makeRuleHypothesisSpace();
    var intervalHypotheses = makeIntervalHypothesisSpace(1, maxNumber);
    var allHypotheses = ruleHypotheses.concat(intervalHypotheses);
    
    var hypothesisType = flip(0.5) ? 'rule' : 'interval';
    var hypothesis = hypothesisType === 'rule' 
      ? uniformDraw(ruleHypotheses)
      : uniformDraw(intervalHypotheses);
    
    var set = getSetFromHypothesis(hypothesis);
    
    mapData({data: examples}, function(example) {
      observe(Categorical({vs: set, ps: map(function() { return 1.0 / set.length; }, set)}), example);
    });
    
    var testQueryResponse = inSet(testQuery, set);
    
    return {hypothesis: hypothesis, testQueryResponse: testQueryResponse};
  });
};

var ANSWER = learnConcept([3, 10], 12);
Error: cpsInnerStatement
var maxNumber = 20;

var filterByInRange = function(set) {
  var inRange = function(v) { return v <= maxNumber && v >= 0; };
  return _.uniq(filter(inRange, set));
};

var genEvens = function() {
  return filter(function(v) { return v % 2 == 0; }, _.range(1, maxNumber + 1));
};

var genOdds = function() {
  return filter(function(v) { return (v + 1) % 2 == 0; }, _.range(1, maxNumber + 1));
};

var genMultiples = function(base) {
  var multiples = map(function(v) { return base * v; }, _.range(0, maxNumber + 1));
  return filterByInRange(multiples);
};

var genPowers = function(base) {
  var powers = map(function(v) { return Math.pow(base, v); }, _.range(0, maxNumber + 1));
  return filterByInRange(powers);
};

var inSet = function(val, set) { return _.includes(set, val); };

var makeRuleHypothesisSpace = function() {
  var multipleRules = map(function(b) { return 'multiples_of_' + b; }, _.range(1, 12));
  var powerRules = map(function(b) { return 'powers_of_' + b; }, _.range(1, 12));
  return multipleRules.concat(powerRules).concat(['evens', 'odds']);
};

var genSetFromInterval = function(a, b) {
  return _.range(a, b + 1);
};

var makeIntervalHypothesisSpace = function(start, end) {
  var intervals = [];
  for (var a = start; a < end; a++) {
    for (var b = a + 1; b <= end; b++) {
      intervals.push('interval_' + a + '_' + b);
    }
  }
  return intervals;
};

var getSetFromHypothesis = function(rule) {
  if (_.startsWith(rule, 'multiples_of_')) {
    var base = parseInt(rule.split('_')[2]);
    return genMultiples(base);
  } else if (_.startsWith(rule, 'powers_of_')) {
    var base = parseInt(rule.split('_')[2]);
    return genPowers(base);
  } else if (rule === 'evens') {
    return genEvens();
  } else if (rule === 'odds') {
    return genOdds();
  } else if (_.startsWith(rule, 'interval_')) {
    var parts = rule.split('_');
    var a = parseInt(parts[1]);
    var b = parseInt(parts[2]);
    return genSetFromInterval(a, b);
  }
  return [];
};

var learnConcept = function(examples, testQuery) {
  return Infer({method: 'enumerate'}, function() {
    var ruleHypotheses = makeRuleHypothesisSpace();
    var intervalHypotheses = makeIntervalHypothesisSpace(1, maxNumber);
    
    // 50/50 prior over rule vs. interval hypotheses
    var hypothesisType = uniformDraw(['rule', 'interval']);
    var hypothesis = hypothesisType === 'rule' ? 
      uniformDraw(ruleHypotheses) : 
      uniformDraw(intervalHypotheses);
    
    var set = getSetFromHypothesis(hypothesis);
    
    // Observe each example
    forEach(examples, function(example) {
      observe(Categorical({vs: set}), example);
    });
    
    return {
      hypothesis: hypothesis,
      testQueryResponse: inSet(testQuery, set)
    };
  });
};

var ANSWER = learnConcept([3, 10], 12);
Error: cpsInnerStatement
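The `cpsInnerStatement` failures in this exercise all appear to trace to the same construct: imperative `for`/`while` loops plus `Array.push` in `makeIntervalHypothesisSpace`, which WebPPL cannot CPS-transform. A loop-free equivalent, sketched in plain JavaScript with small shims standing in for WebPPL's `map`/`_.range`/`_.flatten` (the shims are illustrative; in WebPPL itself these builtins are provided):

```js
// Plain-JS stand-ins for the WebPPL builtins used below (argument order
// matches WebPPL's map(fn, list) convention):
var map = function (fn, xs) { return xs.map(function (x) { return fn(x); }); };
var range = function (a, b) {            // integers a .. b-1, like _.range(a, b)
  return Array.from({ length: b - a }, function (_, i) { return a + i; });
};
var flatten = function (xss) { return [].concat.apply([], xss); };

// Loop-free replacement for the for/while + push versions:
// 'interval_a_b' for every start <= a < b <= end.
var makeIntervalHypothesisSpace = function (start, end) {
  return flatten(map(function (a) {
    return map(function (b) { return 'interval_' + a + '_' + b; },
               range(a + 1, end + 1));
  }, range(start, end)));
};

makeIntervalHypothesisSpace(1, 4);
// ['interval_1_2', 'interval_1_3', 'interval_1_4',
//  'interval_2_3', 'interval_2_4', 'interval_3_4']
```

The runs that passed used exactly this `_.flatten(map(...))` shape.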
var maxNumber = 20;

var filterByInRange = function(set) {
  var inRange = function(v) { return v <= maxNumber && v >= 1; };
  return _.uniq(filter(inRange, set));
};

var genEvens = function() {
  return filter(function(v) { return v % 2 == 0; }, _.range(1, maxNumber + 1));
};

var genOdds = function() {
  return filter(function(v) { return (v + 1) % 2 == 0; }, _.range(1, maxNumber + 1));
};

var genMultiples = function(base) {
  var multiples = map(function(v) { return base * v; }, _.range(1, maxNumber + 1));
  return filterByInRange(multiples);
};

var genPowers = function(base) {
  var powers = map(function(v) { return Math.pow(base, v); }, _.range(0, 20));
  return filterByInRange(powers);
};

var inSet = function(val, set) { return _.includes(set, val); };

var makeRuleHypothesisSpace = function() {
  var multipleRules = map(function(b) { return 'multiples_of_' + b; }, _.range(1, 12));
  var powerRules    = map(function(b) { return 'powers_of_' + b; }, _.range(1, 12));
  return multipleRules.concat(powerRules).concat(['evens', 'odds']);
};

var genSetFromInterval = function(a, b) {
  return _.range(a, b + 1);
};

var makeIntervalHypothesisSpace = function(start, end) {
  var intervals = [];
  var i = start;
  while (i < end) {
    var j = i + 1;
    while (j <= end) {
      intervals.push('interval_' + i + '_' + j);
      j = j + 1;
    }
    i = i + 1;
  }
  return intervals;
};

var getSetFromHypothesis = function(rule) {
  if (rule === 'evens') {
    return genEvens();
  } else if (rule === 'odds') {
    return genOdds();
  } else if (_.includes(rule, 'multiples_of_')) {
    var parts = rule.split('_');
    var base = parseInt(parts[2]);
    return genMultiples(base);
  } else if (_.includes(rule, 'powers_of_')) {
    var parts = rule.split('_');
    var base = parseInt(parts[2]);
    return genPowers(base);
  } else if (_.includes(rule, 'interval_')) {
    var parts = rule.split('_');
    var a = parseInt(parts[1]);
    var b = parseInt(parts[2]);
    return genSetFromInterval(a, b);
  } else {
    return [];
  }
};

var learnConcept = function(examples, testQuery) {
  return Infer({method: 'enumerate'}, function() {
    var ruleHyps = makeRuleHypothesisSpace();
    var intervalHyps = makeIntervalHypothesisSpace(1, maxNumber);
    
    var isRule = flip(0.5);
    var hypothesis = isRule ? 
      uniformDraw(ruleHyps) : 
      uniformDraw(intervalHyps);
    
    var set = getSetFromHypothesis(hypothesis);
    
    mapData({data: examples}, function(example) {
      observe(Categorical({vs: set}), example);
    });
    
    return {
      hypothesis: hypothesis,
      testQueryResponse: inSet(testQuery, set)
    };
  });
};

var ANSWER = learnConcept([3, 10], 12);
program exited 0 but produced no output (likely silent failure or undefined ANSWER)
var maxNumber = 20;
var filterByInRange = function(set) {
  var inRange = function(v) { return v <= maxNumber && v >= 1; };
  return _.uniq(filter(inRange, set));
};
var genEvens = function() {
  return filter(function(v) { return v % 2 == 0; }, _.range(1, maxNumber + 1));
};
var genOdds = function() {
  return filter(function(v) { return (v + 1) % 2 == 0; }, _.range(1, maxNumber + 1));
};
var genMultiples = function(base) {
  var multiples = map(function(v) { return base * v; }, _.range(1, maxNumber + 1));
  return filterByInRange(multiples);
};
var genPowers = function(base) {
  var powers = map(function(v) { return Math.pow(base, v); }, _.range(0, maxNumber + 1));
  return filterByInRange(powers);
};
var inSet = function(val, set) { return _.includes(set, val); };

var makeRuleHypothesisSpace = function() {
  var multipleRules = map(function(b) { return 'multiples_of_' + b; }, _.range(1, 12));
  var powerRules    = map(function(b) { return 'powers_of_' + b; },    _.range(1, 12));
  return multipleRules.concat(powerRules).concat(['evens', 'odds']);
};

var genSetFromInterval = function(a, b) {
  return _.range(a, b + 1);
};

var makeIntervalHypothesisSpace = function(start, end) {
  var names = [];
  var as = _.range(start, end);
  return _.flatten(map(function(a) {
    var bs = _.range(a + 1, end + 1);
    return map(function(b) {
      return 'interval_' + a + '_' + b;
    }, bs);
  }, as));
};

var getSetFromHypothesis = function(rule) {
  if (rule.indexOf('multiples_of_') === 0) {
    var base = parseInt(rule.slice('multiples_of_'.length));
    return genMultiples(base);
  } else if (rule.indexOf('powers_of_') === 0) {
    var base = parseInt(rule.slice('powers_of_'.length));
    return genPowers(base);
  } else if (rule === 'evens') {
    return genEvens();
  } else if (rule === 'odds') {
    return genOdds();
  } else if (rule.indexOf('interval_') === 0) {
    var parts = rule.slice('interval_'.length).split('_');
    var a = parseInt(parts[0]);
    var b = parseInt(parts[1]);
    return genSetFromInterval(a, b);
  }
};

var learnConcept = function(examples, testQuery) {
  var ruleHypotheses = makeRuleHypothesisSpace();
  var intervalHypotheses = makeIntervalHypothesisSpace(1, maxNumber);
  
  var numRule = ruleHypotheses.length;
  var numInterval = intervalHypotheses.length;
  
  // 50/50 split between rule and interval hypotheses
  // Each rule hypothesis gets weight: 0.5 / numRule
  // Each interval hypothesis gets weight: 0.5 / numInterval
  
  var allHypotheses = ruleHypotheses.concat(intervalHypotheses);
  var ruleWeight = 0.5 / numRule;
  var intervalWeight = 0.5 / numInterval;
  var weights = map(function(h) {
    return _.includes(ruleHypotheses, h) ? ruleWeight : intervalWeight;
  }, allHypotheses);
  
  var model = function() {
    var hypothesis = categorical({ vs: allHypotheses, ps: weights });
    var set = getSetFromHypothesis(hypothesis);
    
    // Observe each example: it must be in the set
    // Using factor with log probability of observing from Categorical
    var setSize = set.length;
    
    // Each example is observed as drawn from Categorical over the set
    mapData({ data: examples }, function(example) {
      var exampleInSet = inSet(example, set);
      factor(exampleInSet ? -Math.log(setSize) : -Infinity);
    });
    
    var testQueryResponse = inSet(testQuery, set);
    return { hypothesis: hypothesis, testQueryResponse: testQueryResponse };
  };
  
  return Infer({ method: 'enumerate' }, model);
};

var ANSWER = learnConcept([3, 10], 12);
program exited 0 but produced no output (likely silent failure or undefined ANSWER)
var maxNumber = 20;

var filterByInRange = function(set) {
  var inRange = function(v) { return v <= maxNumber && v >= 1; };
  return _.uniq(filter(inRange, set));
};

var genEvens = function() {
  return filter(function(v) { return v % 2 === 0; }, _.range(1, maxNumber + 1));
};

var genOdds = function() {
  return filter(function(v) { return (v + 1) % 2 === 0; }, _.range(1, maxNumber + 1));
};

var genMultiples = function(base) {
  var multiples = map(function(v) { return base * v; }, _.range(1, maxNumber + 1));
  return filterByInRange(multiples);
};

var genPowers = function(base) {
  var powers = map(function(v) { return Math.pow(base, v); }, _.range(0, maxNumber + 1));
  return filterByInRange(powers);
};

var inSet = function(val, set) { return _.includes(set, val); };

var makeRuleHypothesisSpace = function() {
  var multipleRules = map(function(b) { return 'multiples_of_' + b; }, _.range(1, 12));
  var powerRules    = map(function(b) { return 'powers_of_' + b; },    _.range(1, 12));
  return multipleRules.concat(powerRules).concat(['evens', 'odds']);
};

// Returns all integers from a through b inclusive
var genSetFromInterval = function(a, b) {
  return _.range(a, b + 1);
};

// Returns 'interval_a_b' names for every (a,b) with start <= a < b <= end
var makeIntervalHypothesisSpace = function(start, end) {
  var as = _.range(start, end);          // a from start to end-1
  return _.flatten(map(function(a) {
    var bs = _.range(a + 1, end + 1);    // b from a+1 to end
    return map(function(b) {
      return 'interval_' + a + '_' + b;
    }, bs);
  }, as));
};

// Dispatch on rule prefix to produce the set of integers it denotes
var getSetFromHypothesis = function(rule) {
  if (rule === 'evens') {
    return genEvens();
  } else if (rule === 'odds') {
    return genOdds();
  } else if (rule.indexOf('multiples_of_') === 0) {
    var base = parseInt(rule.slice('multiples_of_'.length));
    return genMultiples(base);
  } else if (rule.indexOf('powers_of_') === 0) {
    var base2 = parseInt(rule.slice('powers_of_'.length));
    return genPowers(base2);
  } else if (rule.indexOf('interval_') === 0) {
    var suffix = rule.slice('interval_'.length);
    var underscoreIdx = suffix.indexOf('_');
    var a = parseInt(suffix.slice(0, underscoreIdx));
    var b = parseInt(suffix.slice(underscoreIdx + 1));
    return genSetFromInterval(a, b);
  }
};

var learnConcept = function(examples, testQuery) {
  var ruleHypotheses     = makeRuleHypothesisSpace();
  var intervalHypotheses = makeIntervalHypothesisSpace(1, maxNumber);

  return Infer({ method: 'enumerate' }, function() {
    // 50/50 prior: rule-based vs interval-based
    var useRule      = flip(0.5);
    var hypotheses   = useRule ? ruleHypotheses : intervalHypotheses;

    // Uniform draw within the chosen family
    var hypothesis   = uniformDraw(hypotheses);
    var set          = getSetFromHypothesis(hypothesis);

    // Observe each example as a draw from Categorical over the set
    mapData({ data: examples }, function(ex) {
      observe(Categorical({ vs: set }), ex);
    });

    var testQueryResponse = inSet(testQuery, set);

    return { hypothesis: hypothesis, testQueryResponse: testQueryResponse };
  });
};

var ANSWER = learnConcept([3, 10], 12);
var maxNumber = 20;

var filterByInRange = function(set) {
  var inRange = function(v) { return v <= maxNumber && v >= 1; };
  return _.uniq(filter(inRange, set));
};

var genEvens = function() {
  return filter(function(v) { return v % 2 === 0; }, _.range(1, maxNumber + 1));
};

var genOdds = function() {
  return filter(function(v) { return (v + 1) % 2 === 0; }, _.range(1, maxNumber + 1));
};

var genMultiples = function(base) {
  var multiples = map(function(v) { return base * v; }, _.range(1, maxNumber + 1));
  return filterByInRange(multiples);
};

var genPowers = function(base) {
  var powers = map(function(v) { return Math.pow(base, v); }, _.range(1, maxNumber + 1));
  return filterByInRange(powers);
};

var inSet = function(val, set) { return _.includes(set, val); };

var makeRuleHypothesisSpace = function() {
  var multipleRules = map(function(b) { return 'multiples_of_' + b; }, _.range(1, 12));
  var powerRules    = map(function(b) { return 'powers_of_' + b;    }, _.range(1, 12));
  return multipleRules.concat(powerRules).concat(['evens', 'odds']);
};

var genSetFromInterval = function(a, b) {
  return _.range(a, b + 1);
};

var makeIntervalHypothesisSpace = function(start, end) {
  var as = _.range(start, end);
  return _.flatten(map(function(a) {
    var bs = _.range(a + 1, end + 1);
    return map(function(b) {
      return 'interval_' + a + '_' + b;
    }, bs);
  }, as));
};

var getSetFromHypothesis = function(rule) {
  if (rule === 'evens') {
    return genEvens();
  } else if (rule === 'odds') {
    return genOdds();
  } else if (rule.indexOf('multiples_of_') === 0) {
    var mBase = _.parseInt(rule.slice('multiples_of_'.length));
    return genMultiples(mBase);
  } else if (rule.indexOf('powers_of_') === 0) {
    var pBase = _.parseInt(rule.slice('powers_of_'.length));
    return genPowers(pBase);
  } else {
    // interval_a_b
    var body  = rule.slice('interval_'.length);
    var parts = body.split('_');
    var a = _.parseInt(parts[0]);
    var b = _.parseInt(parts[1]);
    return genSetFromInterval(a, b);
  }
};

var learnConcept = function(examples, testQuery) {
  var ruleSpace     = makeRuleHypothesisSpace();
  var intervalSpace = makeIntervalHypothesisSpace(1, maxNumber);

  return Infer({method: 'enumerate'}, function() {
    // 50/50 prior between rule-based and interval hypotheses
    var useRule    = flip(0.5);
    var space      = useRule ? ruleSpace : intervalSpace;
    var hypothesis = space[randomInteger(space.length)];
    var set        = getSetFromHypothesis(hypothesis);

    // Each example is observed as drawn uniformly from the concept's extension
    mapData({data: examples}, function(example) {
      observe(Categorical({vs: set}), example);
    });

    var testQueryResponse = inSet(testQuery, set);
    return {hypothesis: hypothesis, testQueryResponse: testQueryResponse};
  });
};

var ANSWER = learnConcept([3, 10], 12);
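The shape of the ground truth below is the size principle at work: `observe(Categorical({vs: set}), ex)` scores each example at 1/|set|, so tighter hypotheses consistent with [3, 10] outrank wider ones (e.g. interval_3_10 beats interval_1_20). A plain-JavaScript sketch of that likelihood (helper names here are illustrative, not from the harness):

```js
// Size principle: log-likelihood of drawing `examples` i.i.d. uniformly from
// a hypothesis's extension `set`; -Infinity if any example falls outside it.
var logLikelihood = function (examples, set) {
  return examples.reduce(function (acc, ex) {
    return acc + (set.indexOf(ex) >= 0 ? -Math.log(set.length) : -Infinity);
  }, 0);
};

var interval_3_10 = [3, 4, 5, 6, 7, 8, 9, 10];
var interval_1_20 = Array.from({ length: 20 }, function (_, i) { return i + 1; });

logLikelihood([3, 10], interval_3_10);                    // 2 * -ln 8  ≈ -4.16
logLikelihood([3, 10], interval_1_20);                    // 2 * -ln 20 ≈ -5.99
logLikelihood([3, 10], [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]); // -Infinity: 3 is not even
```

multiples_of_1 still tops the posterior despite sharing interval_1_20's extension because the rule family splits its 0.5 prior mass over far fewer hypotheses than the 190 intervals.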
groundtruth output
{"hypothesis":"multiples_of_1","testQ…0.0893
{"hypothesis":"interval_3_10","testQu…0.0705
{"hypothesis":"interval_3_11","testQu…0.0557
{"hypothesis":"interval_2_10","testQu…0.0557
{"hypothesis":"interval_3_12","testQu…0.0451
{"hypothesis":"interval_2_11","testQu…0.0451
{"hypothesis":"interval_1_10","testQu…0.0451
{"hypothesis":"interval_3_13","testQu…0.0373
{"hypothesis":"interval_2_12","testQu…0.0373
{"hypothesis":"interval_1_11","testQu…0.0373
{"hypothesis":"interval_3_14","testQu…0.0313
{"hypothesis":"interval_2_13","testQu…0.0313
… 22 more
raw JSON
{
  "__kind": "distribution",
  "probs": [
    0.013932635144786142,
    0.015619978501421161,
    0.017633491355119973,
    0.020062994608492056,
    0.023031498912809725,
    0.02671108749651309,
    0.03134842907576883,
    0.0373072213794274,
    0.04514173786910714,
    0.055730540579144575,
    0.0705339654204799,
    0.012504636528838532,
    0.013932635144786142,
    0.015619978501421161,
    0.017633491355119973,
    0.020062994608492056,
    0.023031498912809725,
    0.02671108749651309,
    0.03134842907576883,
    0.0373072213794274,
    0.04514173786910714,
    0.055730540579144575,
    0.011285434467276784,
    0.012504636528838532,
    0.013932635144786142,
    0.015619978501421161,
    0.017633491355119973,
    0.020062994608492056,
    0.023031498912809725,
    0.02671108749651309,
    0.03134842907576883,
    0.0373072213794274,
    0.04514173786910714,
    0.08934302286594116
  ],
  "support": [
    {
      "hypothesis": "interval_3_20",
      "testQueryResponse": true
    },
    {
      "hypothesis": "interval_3_19",
      "testQueryResponse": true
    },
    {
      "hypothesis": "interval_3_18",
      "testQueryResponse": true
    },
    {
      "hypothesis": "interval_3_17",
      "testQueryResponse": true
    },
    {
      "hypothesis": "interval_3_16",
      "testQueryResponse": true
    },
    {
      "hypothesis": "interval_3_15",
      "testQueryResponse": true
    },
    {
      "hypothesis": "interval_3_14",
      "testQueryResponse": true
    },
    {
      "hypothesis": "interval_3_13",
      "testQueryResponse": true
    },
    {
      "hypothesis": "interval_3_12",
      "testQueryResponse": true
    },
    {
      "hypothesis": "interval_3_11",
      "testQueryResponse": false
    },
    {
      "hypothesis": "interval_3_10",
      "testQueryResponse": false
    },
    {
      "hypothesis": "interval_2_20",
      "testQueryResponse": true
    },
    {
      "hypothesis": "interval_2_19",
      "testQueryResponse": true
    },
    {
      "hypothesis": "interval_2_18",
      "testQueryResponse": true
    },
    {
      "hypothesis": "interval_2_17",
      "testQueryResponse": true
    },
    {
      "hypothesis": "interval_2_16",
      "testQueryResponse": true
    },
    {
      "hypothesis": "interval_2_15",
      "testQueryResponse": true
    },
    {
      "hypothesis": "interval_2_14",
      "testQueryResponse": true
    },
    {
      "hypothesis": "interval_2_13",
      "testQueryResponse": true
    },
    {
      "hypothesis": "interval_2_12",
      "testQueryResponse": true
    },
    {
      "hypothesis": "interval_2_11",
      "testQueryResponse": false
    },
    {
      "hypothesis": "interval_2_10",
      "testQueryResponse": false
    },
    {
      "hypothesis": "interval_1_20",
      "testQueryResponse": true
    },
    {
      "hypothesis": "interval_1_19",
      "testQueryResponse": true
    },
    {
      "hypothesis": "interval_1_18",
      "testQueryResponse": true
    },
    {
      "hypothesis": "interval_1_17",
      "testQueryResponse": true
    },
    {
      "hypothesis": "interval_1_16",
      "testQueryResponse": true
    },
    {
      "hypothesis": "interval_1_15",
      "testQueryResponse": true
    },
    {
      "hypothesis": "interval_1_14",
      "testQueryResponse": true
    },
    {
      "hypothesis": "interval_1_13",
      "testQueryResponse": true
    },
    {
      "hypothesis": "interval_1_12",
      "testQueryResponse": true
    },
    {
      "hypothesis": "interval_1_11",
      "testQueryResponse": false
    },
    {
      "hypothesis": "interval_1_10",
      "testQueryResponse": false
    },
    {
      "hypothesis": "multiples_of_1",
      "testQueryResponse": true
    }
  ]
}
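The TV buckets in the page header score each run against this ground truth by total variation distance. A minimal sketch of that comparison — this is a guess at what the harness does, not its actual code — assuming distributions serialized in the `{probs, support}` shape of the raw JSON above:

```js
// Total variation distance between two discrete distributions serialized as
// {probs: [...], support: [...]}; support entries are compared structurally,
// so the two supports may be ordered differently or only partially overlap.
var tvDistance = function (p, q) {
  var key = function (s) { return JSON.stringify(s); };
  var toMap = function (d) {
    var m = {};
    d.support.forEach(function (s, i) {
      m[key(s)] = (m[key(s)] || 0) + d.probs[i];
    });
    return m;
  };
  var pm = toMap(p);
  var qm = toMap(q);
  var keys = Object.keys(Object.assign({}, pm, qm));
  // TV = (1/2) * sum over the joint support of |p(x) - q(x)|
  return keys.reduce(function (acc, k) {
    return acc + Math.abs((pm[k] || 0) - (qm[k] || 0));
  }, 0) / 2;
};

var d = { probs: [0.5, 0.5], support: [{ x: 1 }, { x: 2 }] };
tvDistance(d, d);  // 0 -> lands in the TV=0 bucket
```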
feedback
not signed
probmods2-occams-razor/ex1.3 value 8 runs val-
prompt
system base instructions used by every run
(identical to the base instructions shown for the previous exercise)
system WebPPL primer appended to system on `*-primer-*` runs only
WebPPL is a probabilistic programming language with JavaScript-like syntax. A few quirks to remember:

Control flow: WebPPL is a functional subset of JavaScript. There are no `for` or `while` loops. Iterate with `map(fn, list)`, `mapData({data: list}, fn)`, `repeat(n, fn)`, or recursion. `_.range(start, end)` produces integer ranges.

Functions: define with `var f = function(args) { ...; return value; }`. The last expression of a function is auto-returned only if it is a bare expression statement; otherwise use explicit `return`. Memoize with `mem(fn)` so repeated calls with the same arguments return the same value within an inference run.

Random primitives - lowercase samples directly, uppercase constructs a Distribution object. They are NOT interchangeable:

  flip(p)            -> boolean             (sample, no `sample()` needed)
  uniform(a, b)      -> number              (sample)
  gaussian(mu, sg)   -> number              (sample)
  beta(a, b)         -> number              (sample)
  dirichlet(alpha)   -> tensor              (sample; alpha must be a Vector)
  randomInteger(n)   -> int 0..n-1          (sample)
  uniformDrift({a, b, width})        -> sample  (drift kernel; do NOT wrap in sample())
  dirichletDrift({alpha, conc.})     -> sample  (drift kernel; do NOT wrap in sample())

Distribution *constructors* (used with `sample(D)`, `observe(D, val)`, or as return value of inference):
  Bernoulli({p})              Beta({a, b})              Gaussian({mu, sigma})
  Uniform({a, b})             Categorical({vs, ps})     Binomial({p, n})
  Dirichlet({alpha})          Multinomial({ps, n})      Poisson({mu})

Common gotchas:
- `Dirichlet({alpha: ...})` requires a Vector, not a JS array. Use `ones([n, 1])` or `Vector([1, 1, ...])`.
- WebPPL only supports `var`, not `let` / `const`.
- WebPPL is *single-assignment*: `var X = ...;` only. You can't declare `var X;` and assign later, and you can't reassign `X = ...` after the declaration. Use ternaries or recursion to express conditional bindings.
- Array methods like `.fill`, `.indexOf`, `.map`, `.forEach`, `.concat` may fail. Prefer `repeat(n, fn)`, `_.indexOf(arr, x)`, `map(fn, arr)`, `mapData({data: arr}, fn)`, and `arr1.concat(arr2)` only at the top of a returned expression.
- Always end top-level statements with `;`. WebPPL inherits JS ASI rules, so `var x = f()
[a, b]` parses as `var x = f()[a, b]` (subscript), not two statements.

Inference: `Infer({method: ..., samples: N, ...}, modelFn)` runs `modelFn` under the chosen method and returns a Distribution over its return value. Methods: `'enumerate'`, `'rejection'`, `'forward'`, `'MCMC'`, `'SMC'`. For MCMC, optional kernels: `kernel: {HMC: {steps, stepSize}}`; the drift kernels above can replace `uniform`/`dirichlet` calls inside the model.

Conditioning: `condition(bool)` zeros out worlds where bool is false. `observe(dist, value)` factors in `dist.score(value)`. `factor(score)` adds `score` to the log-probability directly.

Tensors / utilities: `Vector([a, b, ...])`, `T.get(vec, i)`, `ones([rows, cols])`, `_.range`, `_.flatten`, `_.zipObject`, `_.fromPairs`, `_.includes`, `_.parseInt`, `_.uniq`, `_.merge`. Arrays use `Array.isArray`, `Object.keys` works on plain objects.

Display: there's no `viz`, `print`, `display`, etc. that affects the answer - those are browser-only. The answer is the value of your program's last expression.
user message
Using the number-game model with rule + interval hypotheses (50/50 mix of rule-based and interval_a_b hypotheses) over integers [1, 20]: compute for each query in [1, 20] the expected probability that the query is in the inferred concept, given examples = [3, 6, 9]. Helpers (include them in your program):
var maxNumber = 20;
var filterByInRange = function(set) {
  var inRange = function(v) { v <= maxNumber && v >= 0 };
  return _.uniq(filter(inRange, set));
};
var genEvens = function() {
  return filter(function(v) { return v % 2 == 0 }, _.range(1, maxNumber));
};
var genOdds = function() {
  return filter(function(v) { return (v + 1) % 2 == 0 }, _.range(1, maxNumber));
};
var genMultiples = function(base) {
  var multiples = map(function(v) { return base * v }, _.range(maxNumber));
  return filterByInRange(multiples);
};
var genPowers = function(base) {
  var powers = map(function(v) { return Math.pow(base, v) }, _.range(maxNumber));
  return filterByInRange(powers);
};
var inSet = function(val, set) { return _.includes(set, val); };
var makeRuleHypothesisSpace = function() {
  var multipleRules = map(function(b) { return 'multiples_of_' + b }, _.range(1, 12));
  var powerRules    = map(function(b) { return 'powers_of_' + b },    _.range(1, 12));
  return multipleRules.concat(powerRules).concat(['evens', 'odds']);
};
var genSetFromInterval = function(a, b) { return _.range(a, b+1); };
var makeIntervalHypothesisSpace = function(start, end) {
  var allIntervals = _.flatten(map(function(s) {
    return map(function(e) { [s, e] }, genSetFromInterval(s+1, end));
  }, genSetFromInterval(start, end)));
  return map(function(x) { 'interval_' + x[0] + '_' + x[1] }, allIntervals);
};
var getSetFromHypothesis = function(rule) {
  var parts = rule.split('_');
  return parts[0] == 'multiples' ? genMultiples(_.parseInt(parts[2])) :
         parts[0] == 'powers'    ? genPowers(_.parseInt(parts[2])) :
         parts[0] == 'evens'     ? genEvens() :
         parts[0] == 'odds'      ? genOdds() :
         parts[0] == 'interval'  ? genSetFromInterval(_.parseInt(parts[1]), _.parseInt(parts[2])) :
         null;
};
var learnConcept = function(examples, testQuery) {
  return Infer({method: 'enumerate'}, function() {
    var rules = makeRuleHypothesisSpace();
    var intervals = makeIntervalHypothesisSpace(1, maxNumber);
    var hypothesis = flip(0.5) ? uniformDraw(rules) : uniformDraw(intervals);
    var set = getSetFromHypothesis(hypothesis);
    mapData({data: examples}, function(example) {
      observe(Categorical({vs: set}), example);
    });
    return {hypothesis: hypothesis, testQueryResponse: inSet(testQuery, set)};
  });
};
End your program with the array pQueries of length 20: for each query 1..20, compute expectation(marginalize(learnConcept([3, 6, 9], query), function(x) { x.testQueryResponse })).
groundtruth code
var maxNumber = 20;
var filterByInRange =  function(set) {
  var inRange = function(v) {v <= maxNumber && v >= 0};
  return _.uniq(filter(inRange, set));
};
var genEvens = function() {
  return filter(function(v) {return v % 2 == 0}, _.range(1, maxNumber));
};
var genOdds = function() {
  return filter(function(v) {return (v + 1) % 2 == 0}, _.range(1, maxNumber));
};
var genMultiples = function(base) {
  var multiples = map(function(v) {return base * v}, _.range(maxNumber));
  return filterByInRange(multiples);
};
var genPowers = function(base) {
  var powers = map(function(v) {return Math.pow(base, v)}, _.range(maxNumber));
  return filterByInRange(powers);
};
var inSet = function(val, set) { return _.includes(set, val); };
var makeRuleHypothesisSpace = function() {
  var multipleRules = map(function(base) {return 'multiples_of_' + base}, _.range(1, 12));
  var powerRules = map(function(base) {return 'powers_of_' + base}, _.range(1, 12));
  return multipleRules.concat(powerRules).concat(['evens', 'odds']);
};
var genSetFromInterval = function(a, b) { return _.range(a, b+1); };
var makeIntervalHypothesisSpace = function(start, end) {
  var allIntervals = _.flatten(map(function(s) {
    return map(function(e) { [s, e] }, genSetFromInterval(s+1, end));
  }, genSetFromInterval(start, end)));
  return map(function(x) { 'interval_' + x[0] + '_' + x[1] }, allIntervals);
};
var getSetFromHypothesis = function(rule) {
  var parts = rule.split('_');
  return (parts[0] == 'multiples' ? genMultiples(_.parseInt(parts[2])) :
          parts[0] == 'powers' ? genPowers(_.parseInt(parts[2])) :
          parts[0] == 'evens' ? genEvens() :
          parts[0] == 'odds' ? genOdds() :
          parts[0] == 'interval' ? genSetFromInterval(_.parseInt(parts[1]), _.parseInt(parts[2])) :
          console.error('unknown rule' + rule));
};
var learnConcept = function(examples, testQuery) {
  return Infer({method: 'enumerate'}, function() {
    var rules = makeRuleHypothesisSpace();
    var intervals = makeIntervalHypothesisSpace(1, maxNumber);
    var hypothesis = flip(0.5) ? uniformDraw(rules) : uniformDraw(intervals);
    var set = getSetFromHypothesis(hypothesis);
    mapData({data: examples}, function(example) {
      observe(Categorical({vs: set}), example);
    });
    return {hypothesis: hypothesis,
            testQueryResponse: inSet(testQuery, set)};
  });
};

var examples = [3, 6, 9];
var queries = genSetFromInterval(1, maxNumber);
var ANSWER = (map(function(query) {
  var post = learnConcept(examples, query);
  return expectation(marginalize(post, function(x) { x.testQueryResponse }));
}, queries));
generated code
sonnet-46-primer-v3 (val-), haiku-45-noprimer-v3 (val+), haiku-45-primer-v3 (val-), haiku-45-think-noprimer-v3 (val+), haiku-45-think-primer-v3 (val+), sonnet-46-noprimer-v3 (val+), sonnet-46-think-noprimer-v3 (val+), sonnet-46-think-primer-v3 (val-)
var maxNumber = 20;
var filterByInRange = function(set) {
  var inRange = function(v) { return v <= maxNumber && v >= 0; };
  return _.uniq(filter(inRange, set));
};
var genEvens = function() {
  return filter(function(v) { return v % 2 == 0; }, _.range(1, maxNumber + 1));
};
var genOdds = function() {
  return filter(function(v) { return (v + 1) % 2 == 0; }, _.range(1, maxNumber + 1));
};
var genMultiples = function(base) {
  var multiples = map(function(v) { return base * v; }, _.range(1, maxNumber + 1));
  return filterByInRange(multiples);
};
var genPowers = function(base) {
  var powers = map(function(v) { return Math.pow(base, v); }, _.range(1, maxNumber + 1));
  return filterByInRange(powers);
};
var inSet = function(val, set) { return _.includes(set, val); };
var makeRuleHypothesisSpace = function() {
  var multipleRules = map(function(b) { return 'multiples_of_' + b; }, _.range(1, 12));
  var powerRules    = map(function(b) { return 'powers_of_' + b; },    _.range(1, 12));
  return multipleRules.concat(powerRules).concat(['evens', 'odds']);
};
var genSetFromInterval = function(a, b) { return _.range(a, b + 1); };
var makeIntervalHypothesisSpace = function(start, end) {
  var allIntervals = _.flatten(map(function(s) {
    return map(function(e) { return [s, e]; }, genSetFromInterval(s + 1, end));
  }, genSetFromInterval(start, end)));
  return map(function(x) { return 'interval_' + x[0] + '_' + x[1]; }, allIntervals);
};
var getSetFromHypothesis = function(rule) {
  var parts = rule.split('_');
  return parts[0] == 'multiples' ? genMultiples(_.parseInt(parts[2])) :
         parts[0] == 'powers'    ? genPowers(_.parseInt(parts[2])) :
         parts[0] == 'evens'     ? genEvens() :
         parts[0] == 'odds'      ? genOdds() :
         parts[0] == 'interval'  ? genSetFromInterval(_.parseInt(parts[1]), _.parseInt(parts[2])) :
         null;
};
var learnConcept = function(examples, testQuery) {
  return Infer({method: 'enumerate'}, function() {
    var rules = makeRuleHypothesisSpace();
    var intervals = makeIntervalHypothesisSpace(1, maxNumber);
    var hypothesis = flip(0.5) ? uniformDraw(rules) : uniformDraw(intervals);
    var set = getSetFromHypothesis(hypothesis);
    mapData({data: examples}, function(example) {
      observe(Categorical({vs: set, ps: map(function(x) { return 1/set.length; }, set)}), example);
    });
    return {hypothesis: hypothesis, testQueryResponse: inSet(testQuery, set)};
  });
};

var queries = _.range(1, 21);
var pQueries = map(function(query) {
  var dist = learnConcept([3, 6, 9], query);
  var marginal = marginalize(dist, function(x) { return x.testQueryResponse; });
  return expectation(marginal);
}, queries);

var ANSWER = pQueries;
var maxNumber = 20;
var filterByInRange = function(set) {
  var inRange = function(v) { v <= maxNumber && v >= 0 };
  return _.uniq(filter(inRange, set));
};
var genEvens = function() {
  return filter(function(v) { return v % 2 == 0 }, _.range(1, maxNumber));
};
var genOdds = function() {
  return filter(function(v) { return (v + 1) % 2 == 0 }, _.range(1, maxNumber));
};
var genMultiples = function(base) {
  var multiples = map(function(v) { return base * v }, _.range(maxNumber));
  return filterByInRange(multiples);
};
var genPowers = function(base) {
  var powers = map(function(v) { return Math.pow(base, v) }, _.range(maxNumber));
  return filterByInRange(powers);
};
var inSet = function(val, set) { return _.includes(set, val); };
var makeRuleHypothesisSpace = function() {
  var multipleRules = map(function(b) { return 'multiples_of_' + b }, _.range(1, 12));
  var powerRules    = map(function(b) { return 'powers_of_' + b },    _.range(1, 12));
  return multipleRules.concat(powerRules).concat(['evens', 'odds']);
};
var genSetFromInterval = function(a, b) { return _.range(a, b+1); };
var makeIntervalHypothesisSpace = function(start, end) {
  var allIntervals = _.flatten(map(function(s) {
    return map(function(e) { return [s, e] }, genSetFromInterval(s+1, end));
  }, genSetFromInterval(start, end)));
  return map(function(x) { return 'interval_' + x[0] + '_' + x[1] }, allIntervals);
};
var getSetFromHypothesis = function(rule) {
  var parts = rule.split('_');
  return parts[0] == 'multiples' ? genMultiples(_.parseInt(parts[2])) :
         parts[0] == 'powers'    ? genPowers(_.parseInt(parts[2])) :
         parts[0] == 'evens'     ? genEvens() :
         parts[0] == 'odds'      ? genOdds() :
         parts[0] == 'interval'  ? genSetFromInterval(_.parseInt(parts[1]), _.parseInt(parts[2])) :
         null;
};
var learnConcept = function(examples, testQuery) {
  return Infer({method: 'enumerate'}, function() {
    var rules = makeRuleHypothesisSpace();
    var intervals = makeIntervalHypothesisSpace(1, maxNumber);
    var hypothesis = flip(0.5) ? uniformDraw(rules) : uniformDraw(intervals);
    var set = getSetFromHypothesis(hypothesis);
    mapData({data: examples}, function(example) {
      observe(Categorical({vs: set}), example);
    });
    return {hypothesis: hypothesis, testQueryResponse: inSet(testQuery, set)};
  });
};

var examples = [3, 6, 9];
var pQueries = map(function(query) {
  var dist = learnConcept(examples, query);
  var marginal = marginalize(dist, function(x) { return x.testQueryResponse; });
  return expectation(marginal);
}, _.range(1, 21));

var ANSWER = pQueries;
var maxNumber = 20;
var filterByInRange = function(set) {
  var inRange = function(v) { return v <= maxNumber && v >= 0; };
  return _.uniq(filter(inRange, set));
};
var genEvens = function() {
  return filter(function(v) { return v % 2 == 0; }, _.range(1, maxNumber + 1));
};
var genOdds = function() {
  return filter(function(v) { return (v + 1) % 2 == 0; }, _.range(1, maxNumber + 1));
};
var genMultiples = function(base) {
  var multiples = map(function(v) { return base * v; }, _.range(0, maxNumber + 1));
  return filterByInRange(multiples);
};
var genPowers = function(base) {
  var powers = map(function(v) { return Math.pow(base, v); }, _.range(0, maxNumber + 1));
  return filterByInRange(powers);
};
var inSet = function(val, set) { return _.includes(set, val); };
var makeRuleHypothesisSpace = function() {
  var multipleRules = map(function(b) { return 'multiples_of_' + b; }, _.range(1, 12));
  var powerRules = map(function(b) { return 'powers_of_' + b; }, _.range(1, 12));
  return multipleRules.concat(powerRules).concat(['evens', 'odds']);
};
var genSetFromInterval = function(a, b) { return _.range(a, b + 1); };
var makeIntervalHypothesisSpace = function(start, end) {
  var allIntervals = _.flatten(map(function(s) {
    return map(function(e) { return [s, e]; }, genSetFromInterval(s + 1, end));
  }, genSetFromInterval(start, end)));
  return map(function(x) { return 'interval_' + x[0] + '_' + x[1]; }, allIntervals);
};
var getSetFromHypothesis = function(rule) {
  var parts = rule.split('_');
  return parts[0] == 'multiples' ? genMultiples(_.parseInt(parts[2])) :
         parts[0] == 'powers' ? genPowers(_.parseInt(parts[2])) :
         parts[0] == 'evens' ? genEvens() :
         parts[0] == 'odds' ? genOdds() :
         parts[0] == 'interval' ? genSetFromInterval(_.parseInt(parts[1]), _.parseInt(parts[2])) :
         null;
};
var learnConcept = function(examples, testQuery) {
  return Infer({method: 'enumerate'}, function() {
    var rules = makeRuleHypothesisSpace();
    var intervals = makeIntervalHypothesisSpace(1, maxNumber);
    var hypothesis = flip(0.5) ? uniformDraw(rules) : uniformDraw(intervals);
    var set = getSetFromHypothesis(hypothesis);
    mapData({data: examples}, function(example) {
      observe(Categorical({vs: set}), example);
    });
    return {hypothesis: hypothesis, testQueryResponse: inSet(testQuery, set)};
  });
};

var examples = [3, 6, 9];
var pQueries = map(function(query) {
  var dist = learnConcept(examples, query);
  var marginal = marginalize(dist, function(x) { return x.testQueryResponse; });
  return expectation(marginal);
}, _.range(1, maxNumber + 1));

var ANSWER = pQueries;
var maxNumber = 20;
var filterByInRange = function(set) {
  var inRange = function(v) { v <= maxNumber && v >= 0 };
  return _.uniq(filter(inRange, set));
};
var genEvens = function() {
  return filter(function(v) { return v % 2 == 0 }, _.range(1, maxNumber));
};
var genOdds = function() {
  return filter(function(v) { return (v + 1) % 2 == 0 }, _.range(1, maxNumber));
};
var genMultiples = function(base) {
  var multiples = map(function(v) { return base * v }, _.range(maxNumber));
  return filterByInRange(multiples);
};
var genPowers = function(base) {
  var powers = map(function(v) { return Math.pow(base, v) }, _.range(maxNumber));
  return filterByInRange(powers);
};
var inSet = function(val, set) { return _.includes(set, val); };
var makeRuleHypothesisSpace = function() {
  var multipleRules = map(function(b) { return 'multiples_of_' + b }, _.range(1, 12));
  var powerRules    = map(function(b) { return 'powers_of_' + b },    _.range(1, 12));
  return multipleRules.concat(powerRules).concat(['evens', 'odds']);
};
var genSetFromInterval = function(a, b) { return _.range(a, b+1); };
var makeIntervalHypothesisSpace = function(start, end) {
  var allIntervals = _.flatten(map(function(s) {
    return map(function(e) { return [s, e]; }, genSetFromInterval(s+1, end));
  }, genSetFromInterval(start, end)));
  return map(function(x) { return 'interval_' + x[0] + '_' + x[1]; }, allIntervals);
};
var getSetFromHypothesis = function(rule) {
  var parts = rule.split('_');
  return parts[0] == 'multiples' ? genMultiples(_.parseInt(parts[2])) :
         parts[0] == 'powers'    ? genPowers(_.parseInt(parts[2])) :
         parts[0] == 'evens'     ? genEvens() :
         parts[0] == 'odds'      ? genOdds() :
         parts[0] == 'interval'  ? genSetFromInterval(_.parseInt(parts[1]), _.parseInt(parts[2])) :
         null;
};
var learnConcept = function(examples, testQuery) {
  return Infer({method: 'enumerate'}, function() {
    var rules = makeRuleHypothesisSpace();
    var intervals = makeIntervalHypothesisSpace(1, maxNumber);
    var hypothesis = flip(0.5) ? uniformDraw(rules) : uniformDraw(intervals);
    var set = getSetFromHypothesis(hypothesis);
    mapData({data: examples}, function(example) {
      observe(Categorical({vs: set}), example);
    });
    return {hypothesis: hypothesis, testQueryResponse: inSet(testQuery, set)};
  });
};

var pQueries = map(function(query) {
  var dist = learnConcept([3, 6, 9], query);
  var marginal = marginalize(dist, function(x) { return x.testQueryResponse; });
  return expectation(marginal, function(x) { return x ? 1 : 0; });
}, _.range(1, 21));

var ANSWER = pQueries;
var maxNumber = 20;
var filterByInRange = function(set) {
  var inRange = function(v) { return v <= maxNumber && v >= 0; };
  return _.uniq(filter(inRange, set));
};
var genEvens = function() {
  return filter(function(v) { return v % 2 == 0; }, _.range(1, maxNumber));
};
var genOdds = function() {
  return filter(function(v) { return (v + 1) % 2 == 0; }, _.range(1, maxNumber));
};
var genMultiples = function(base) {
  var multiples = map(function(v) { return base * v; }, _.range(maxNumber));
  return filterByInRange(multiples);
};
var genPowers = function(base) {
  var powers = map(function(v) { return Math.pow(base, v); }, _.range(maxNumber));
  return filterByInRange(powers);
};
var inSet = function(val, set) { return _.includes(set, val); };
var makeRuleHypothesisSpace = function() {
  var multipleRules = map(function(b) { return 'multiples_of_' + b; }, _.range(1, 12));
  var powerRules    = map(function(b) { return 'powers_of_' + b; },    _.range(1, 12));
  return multipleRules.concat(powerRules).concat(['evens', 'odds']);
};
var genSetFromInterval = function(a, b) { return _.range(a, b+1); };
var makeIntervalHypothesisSpace = function(start, end) {
  var allIntervals = _.flatten(map(function(s) {
    return map(function(e) { return [s, e]; }, genSetFromInterval(s+1, end));
  }, genSetFromInterval(start, end)));
  return map(function(x) { return 'interval_' + x[0] + '_' + x[1]; }, allIntervals);
};
var getSetFromHypothesis = function(rule) {
  var parts = rule.split('_');
  return parts[0] == 'multiples' ? genMultiples(_.parseInt(parts[2])) :
         parts[0] == 'powers'    ? genPowers(_.parseInt(parts[2])) :
         parts[0] == 'evens'     ? genEvens() :
         parts[0] == 'odds'      ? genOdds() :
         parts[0] == 'interval'  ? genSetFromInterval(_.parseInt(parts[1]), _.parseInt(parts[2])) :
         null;
};
var learnConcept = function(examples, testQuery) {
  return Infer({method: 'enumerate'}, function() {
    var rules = makeRuleHypothesisSpace();
    var intervals = makeIntervalHypothesisSpace(1, maxNumber);
    var hypothesis = flip(0.5) ? uniformDraw(rules) : uniformDraw(intervals);
    var set = getSetFromHypothesis(hypothesis);
    mapData({data: examples}, function(example) {
      observe(Categorical({vs: set}), example);
    });
    return {hypothesis: hypothesis, testQueryResponse: inSet(testQuery, set)};
  });
};

var pQueries = map(function(q) {
  var dist = learnConcept([3, 6, 9], q);
  var margDist = marginalize(dist, function(x) { return x.testQueryResponse; });
  return expectation(margDist);
}, _.range(1, 21));

var ANSWER = pQueries;
var maxNumber = 20;
var filterByInRange = function(set) {
  var inRange = function(v) { v <= maxNumber && v >= 0 };
  return _.uniq(filter(inRange, set));
};
var genEvens = function() {
  return filter(function(v) { return v % 2 == 0 }, _.range(1, maxNumber));
};
var genOdds = function() {
  return filter(function(v) { return (v + 1) % 2 == 0 }, _.range(1, maxNumber));
};
var genMultiples = function(base) {
  var multiples = map(function(v) { return base * v }, _.range(maxNumber));
  return filterByInRange(multiples);
};
var genPowers = function(base) {
  var powers = map(function(v) { return Math.pow(base, v) }, _.range(maxNumber));
  return filterByInRange(powers);
};
var inSet = function(val, set) { return _.includes(set, val); };
var makeRuleHypothesisSpace = function() {
  var multipleRules = map(function(b) { return 'multiples_of_' + b }, _.range(1, 12));
  var powerRules    = map(function(b) { return 'powers_of_' + b },    _.range(1, 12));
  return multipleRules.concat(powerRules).concat(['evens', 'odds']);
};
var genSetFromInterval = function(a, b) { return _.range(a, b+1); };
var makeIntervalHypothesisSpace = function(start, end) {
  var allIntervals = _.flatten(map(function(s) {
    return map(function(e) { return [s, e]; }, genSetFromInterval(s+1, end));
  }, genSetFromInterval(start, end)));
  return map(function(x) { return 'interval_' + x[0] + '_' + x[1]; }, allIntervals);
};
var getSetFromHypothesis = function(rule) {
  var parts = rule.split('_');
  return parts[0] == 'multiples' ? genMultiples(_.parseInt(parts[2])) :
         parts[0] == 'powers'    ? genPowers(_.parseInt(parts[2])) :
         parts[0] == 'evens'     ? genEvens() :
         parts[0] == 'odds'      ? genOdds() :
         parts[0] == 'interval'  ? genSetFromInterval(_.parseInt(parts[1]), _.parseInt(parts[2])) :
         null;
};
var learnConcept = function(examples, testQuery) {
  return Infer({method: 'enumerate'}, function() {
    var rules = makeRuleHypothesisSpace();
    var intervals = makeIntervalHypothesisSpace(1, maxNumber);
    var hypothesis = flip(0.5) ? uniformDraw(rules) : uniformDraw(intervals);
    var set = getSetFromHypothesis(hypothesis);
    mapData({data: examples}, function(example) {
      observe(Categorical({vs: set}), example);
    });
    return {hypothesis: hypothesis, testQueryResponse: inSet(testQuery, set)};
  });
};

var queries = _.range(1, 21);
var examples = [3, 6, 9];

var pQueries = map(function(query) {
  var dist = learnConcept(examples, query);
  var marginal = marginalize(dist, function(x) { return x.testQueryResponse; });
  return expectation(marginal);
}, queries);

var ANSWER = pQueries;
var maxNumber = 20;
var filterByInRange = function(set) {
  var inRange = function(v) { return v <= maxNumber && v >= 0; };
  return _.uniq(filter(inRange, set));
};
var genEvens = function() {
  return filter(function(v) { return v % 2 == 0; }, _.range(1, maxNumber + 1));
};
var genOdds = function() {
  return filter(function(v) { return (v + 1) % 2 == 0; }, _.range(1, maxNumber + 1));
};
var genMultiples = function(base) {
  var multiples = map(function(v) { return base * v; }, _.range(maxNumber));
  return filterByInRange(multiples);
};
var genPowers = function(base) {
  var powers = map(function(v) { return Math.pow(base, v); }, _.range(maxNumber));
  return filterByInRange(powers);
};
var inSet = function(val, set) { return _.includes(set, val); };
var makeRuleHypothesisSpace = function() {
  var multipleRules = map(function(b) { return 'multiples_of_' + b; }, _.range(1, 12));
  var powerRules    = map(function(b) { return 'powers_of_' + b; },    _.range(1, 12));
  return multipleRules.concat(powerRules).concat(['evens', 'odds']);
};
var genSetFromInterval = function(a, b) { return _.range(a, b + 1); };
var makeIntervalHypothesisSpace = function(start, end) {
  var allIntervals = _.flatten(map(function(s) {
    return map(function(e) { return [s, e]; }, genSetFromInterval(s + 1, end));
  }, genSetFromInterval(start, end)));
  return map(function(x) { return 'interval_' + x[0] + '_' + x[1]; }, allIntervals);
};
var getSetFromHypothesis = function(rule) {
  var parts = rule.split('_');
  return parts[0] == 'multiples' ? genMultiples(_.parseInt(parts[2])) :
         parts[0] == 'powers'    ? genPowers(_.parseInt(parts[2])) :
         parts[0] == 'evens'     ? genEvens() :
         parts[0] == 'odds'      ? genOdds() :
         parts[0] == 'interval'  ? genSetFromInterval(_.parseInt(parts[1]), _.parseInt(parts[2])) :
         null;
};
var learnConcept = function(examples, testQuery) {
  return Infer({method: 'enumerate'}, function() {
    var rules = makeRuleHypothesisSpace();
    var intervals = makeIntervalHypothesisSpace(1, maxNumber);
    var hypothesis = flip(0.5) ? uniformDraw(rules) : uniformDraw(intervals);
    var set = getSetFromHypothesis(hypothesis);
    mapData({data: examples}, function(example) {
      observe(Categorical({vs: set}), example);
    });
    return {hypothesis: hypothesis, testQueryResponse: inSet(testQuery, set)};
  });
};

var examples = [3, 6, 9];
var queries = _.range(1, maxNumber + 1);

var pQueries = map(function(query) {
  var dist = learnConcept(examples, query);
  var marginal = marginalize(dist, function(x) { return x.testQueryResponse; });
  return expectation(marginal);
}, queries);

var ANSWER = pQueries;
var maxNumber = 20;
var filterByInRange = function(set) {
  var inRange = function(v) { return v <= maxNumber && v >= 0; };
  return _.uniq(filter(inRange, set));
};
var genEvens = function() {
  return filter(function(v) { return v % 2 == 0; }, _.range(1, maxNumber + 1));
};
var genOdds = function() {
  return filter(function(v) { return (v + 1) % 2 == 0; }, _.range(1, maxNumber + 1));
};
var genMultiples = function(base) {
  var multiples = map(function(v) { return base * v; }, _.range(1, maxNumber + 1));
  return filterByInRange(multiples);
};
var genPowers = function(base) {
  var powers = map(function(v) { return Math.pow(base, v); }, _.range(1, maxNumber + 1));
  return filterByInRange(powers);
};
var inSet = function(val, set) { return _.includes(set, val); };
var makeRuleHypothesisSpace = function() {
  var multipleRules = map(function(b) { return 'multiples_of_' + b; }, _.range(1, 12));
  var powerRules    = map(function(b) { return 'powers_of_' + b; },    _.range(1, 12));
  return multipleRules.concat(powerRules).concat(['evens', 'odds']);
};
var genSetFromInterval = function(a, b) { return _.range(a, b + 1); };
var makeIntervalHypothesisSpace = function(start, end) {
  var allIntervals = _.flatten(map(function(s) {
    return map(function(e) { return [s, e]; }, genSetFromInterval(s + 1, end));
  }, genSetFromInterval(start, end - 1)));
  return map(function(x) { return 'interval_' + x[0] + '_' + x[1]; }, allIntervals);
};
var getSetFromHypothesis = function(rule) {
  var parts = rule.split('_');
  return parts[0] == 'multiples' ? genMultiples(_.parseInt(parts[2])) :
         parts[0] == 'powers'    ? genPowers(_.parseInt(parts[2])) :
         rule == 'evens'         ? genEvens() :
         rule == 'odds'          ? genOdds() :
         parts[0] == 'interval'  ? genSetFromInterval(_.parseInt(parts[1]), _.parseInt(parts[2])) :
         null;
};
var learnConcept = function(examples, testQuery) {
  return Infer({method: 'enumerate'}, function() {
    var rules = makeRuleHypothesisSpace();
    var intervals = makeIntervalHypothesisSpace(1, maxNumber);
    var hypothesis = flip(0.5) ? uniformDraw(rules) : uniformDraw(intervals);
    var set = getSetFromHypothesis(hypothesis);
    mapData({data: examples}, function(example) {
      condition(inSet(example, set));
      factor(-Math.log(set.length));
    });
    return {hypothesis: hypothesis, testQueryResponse: inSet(testQuery, set)};
  });
};

var queries = _.range(1, 21);
var pQueries = map(function(query) {
  var dist = learnConcept([3, 6, 9], query);
  var margDist = marginalize(dist, function(x) { return x.testQueryResponse; });
  return expectation(margDist);
}, queries);

var ANSWER = pQueries;
groundtruth output
[
  0.14064111230440013,
  0.2988916366670469,
  1,
  0.515305102429516,
  0.515305102429516,
  1,
  0.515305102429516,
  0.515305102429516,
  1,
  0.38425812748876287,
  0.2934357269948065,
  0.7125462913648225,
  0.17892090222552293,
  0.14143194588649405,
  0.5967624538101919,
  0.08863373462910144,
  0.0696314580679196,
  0.5387027733025224,
  0.041005698089744455,
  0.009287513387930357
]
feedback
not signed
probmods2-occams-razor/ex2.1 distribution 8 runs TV=0.00 TV=0
prompt
system base instructions used by every run
You are a WebPPL code generator. Given an exercise, produce a single WebPPL program that binds the answer to a top-level variable named `ANSWER`.

Answer format (strict): emit exactly one fenced code block.

```js
<your WebPPL program ending with: var ANSWER = <expression>;>
```

The last statement of your program MUST be `var ANSWER = <expression>;` where `<expression>` is the answer the prompt asks for - typically an `Infer({...}, model)` for a distribution, a numeric/array value, or an object literal `{key: value, ...}` for a record of multiple sub-answers.

Do not write prose, explanations, or multiple code blocks. Do not use `return` at the top level - WebPPL doesn't allow it (return is only for function bodies).
system WebPPL primer appended to system on `*-primer-*` runs only
WebPPL is a probabilistic programming language with JavaScript-like syntax. A few quirks to remember:

Control flow: WebPPL is a functional subset of JavaScript. There are no `for` or `while` loops. Iterate with `map(fn, list)`, `mapData({data: list}, fn)`, `repeat(n, fn)`, or recursion. `_.range(start, end)` produces integer ranges.

Functions: define with `var f = function(args) { ...; return value; }`. The last expression of a function is auto-returned only if it is a bare expression statement; otherwise use explicit `return`. Memoize with `mem(fn)` so repeated calls with the same arguments return the same value within an inference run.

Random primitives - lowercase samples directly, uppercase constructs a Distribution object. They are NOT interchangeable:

  flip(p)            -> boolean             (sample, no `sample()` needed)
  uniform(a, b)      -> number              (sample)
  gaussian(mu, sg)   -> number              (sample)
  beta(a, b)         -> number              (sample)
  dirichlet(alpha)   -> tensor              (sample; alpha must be a Vector)
  randomInteger(n)   -> int 0..n-1          (sample)
  uniformDrift({a, b, width})        -> sample  (drift kernel; do NOT wrap in sample())
  dirichletDrift({alpha, conc.})     -> sample  (drift kernel; do NOT wrap in sample())

Distribution *constructors* (used with `sample(D)`, `observe(D, val)`, or as return value of inference):
  Bernoulli({p})              Beta({a, b})              Gaussian({mu, sigma})
  Uniform({a, b})             Categorical({vs, ps})     Binomial({p, n})
  Dirichlet({alpha})          Multinomial({ps, n})      Poisson({mu})

Common gotchas:
- `Dirichlet({alpha: ...})` requires a Vector, not a JS array. Use `ones([n, 1])` or `Vector([1, 1, ...])`.
- WebPPL only supports `var`, not `let` / `const`.
- WebPPL is *single-assignment*: `var X = ...;` only. You can't declare `var X;` and assign later, and you can't reassign `X = ...` after the declaration. Use ternaries or recursion to express conditional bindings.
- Array methods like `.fill`, `.indexOf`, `.map`, `.forEach`, `.concat` may fail. Prefer `repeat(n, fn)`, `_.indexOf(arr, x)`, `map(fn, arr)`, `mapData({data: arr}, fn)`, and `arr1.concat(arr2)` only at the top of a returned expression.
- Always end top-level statements with `;`. WebPPL inherits JS ASI rules, so `var x = f()
[a, b]` parses as `var x = f()[a, b]` (subscript), not two statements.

Inference: `Infer({method: ..., samples: N, ...}, modelFn)` runs `modelFn` under the chosen method and returns a Distribution over its return value. Methods: `'enumerate'`, `'rejection'`, `'forward'`, `'MCMC'`, `'SMC'`. For MCMC, optional kernels: `kernel: {HMC: {steps, stepSize}}`; the drift kernels above can replace `uniform`/`dirichlet` calls inside the model.

Conditioning: `condition(bool)` zeros out worlds where bool is false. `observe(dist, value)` factors in `dist.score(value)`. `factor(score)` adds `score` to the log-probability directly.

Tensors / utilities: `Vector([a, b, ...])`, `T.get(vec, i)`, `ones([rows, cols])`, `_.range`, `_.flatten`, `_.zipObject`, `_.fromPairs`, `_.includes`, `_.parseInt`, `_.uniq`, `_.merge`. Arrays use `Array.isArray`, `Object.keys` works on plain objects.

Display: there's no `viz`, `print`, `display`, etc. that affects the answer - those are browser-only. The answer is the value of your program's last expression.
user message
Given this Causal Power model where C is a possible cause of E:
var observedData = [{C:true, E:false}];
Infer({method: 'MCMC', samples: 10000, lag:2}, function() {
  var cp = uniform(0, 1);  // Causal power of C to cause E
  var b = uniform(0, 1);   // Background probability of E
  mapData({data: observedData}, function(datum) {
    var E = (datum.C && flip(cp)) || flip(b);
    condition(E == datum.E);
  })
  return {cp, b};
})
Modify it into a Causal Support model: also infer whether there is a causal relation at all (relation = flip()). When relation is false, C does not affect E. Return {relation, cp, b}. End with the Infer(...).
groundtruth code
var observedData = [{C:true, E:false}];
var ANSWER = (Infer({method: 'MCMC', samples: 10000, lag:2}, function() {
  var relation = flip();
  var cp = uniform(0, 1);
  var b = uniform(0, 1);

  mapData({data: observedData}, function(datum) {
    var E = (relation && datum.C && flip(cp)) || flip(b);
    condition(E == datum.E);
  });

  return {relation, cp, b};
}));
generated code
sonnet-46-primer-v3 TV=0 haiku-45-noprimer-v3 TV=0 haiku-45-primer-v3 TV=0 haiku-45-think-noprimer-v3 TV<1 haiku-45-think-primer-v3 TV<1 sonnet-46-noprimer-v3 TV=0 sonnet-46-think-noprimer-v3 TV=0 sonnet-46-think-primer-v3 TV=0
var observedData = [{C:true, E:false}];

var ANSWER = Infer({method: 'MCMC', samples: 10000, lag:2}, function() {
  var relation = flip();
  var cp = uniform(0, 1);
  var b = uniform(0, 1);
  mapData({data: observedData}, function(datum) {
    var E = (relation && datum.C && flip(cp)) || flip(b);
    condition(E == datum.E);
  });
  return {relation, cp, b};
});
var observedData = [{C:true, E:false}];
var ANSWER = Infer({method: 'MCMC', samples: 10000, lag:2}, function() {
  var relation = flip();
  var cp = uniform(0, 1);  // Causal power of C to cause E
  var b = uniform(0, 1);   // Background probability of E
  mapData({data: observedData}, function(datum) {
    var E = (relation && datum.C && flip(cp)) || flip(b);
    condition(E == datum.E);
  })
  return {relation, cp, b};
});
var observedData = [{C:true, E:false}];
var ANSWER = Infer({method: 'MCMC', samples: 10000, lag:2}, function() {
  var relation = flip(0.5);  // Whether there is a causal relation
  var cp = uniform(0, 1);    // Causal power of C to cause E
  var b = uniform(0, 1);     // Background probability of E
  mapData({data: observedData}, function(datum) {
    var E = (relation && datum.C && flip(cp)) || flip(b);
    condition(E == datum.E);
  })
  return {relation, cp, b};
});
var observedData = [{C:true, E:false}];
Infer({method: 'MCMC', samples: 10000, lag:2}, function() {
  var relation = flip();  // Whether there is a causal relation
  var cp = uniform(0, 1);  // Causal power of C to cause E
  var b = uniform(0, 1);   // Background probability of E
  mapData({data: observedData}, function(datum) {
    var E = relation ? ((datum.C && flip(cp)) || flip(b)) : flip(b);
    condition(E == datum.E);
  })
  return {relation, cp, b};
})
var ANSWER = Infer({method: 'MCMC', samples: 10000, lag:2}, function() {
  var relation = flip();  // Whether there is a causal relation
  var cp = uniform(0, 1);  // Causal power of C to cause E
  var b = uniform(0, 1);   // Background probability of E
  mapData({data: observedData}, function(datum) {
    var E = relation ? ((datum.C && flip(cp)) || flip(b)) : flip(b);
    condition(E == datum.E);
  })
  return {relation, cp, b};
});
var observedData = [{C:true, E:false}];
var ANSWER = Infer({method: 'MCMC', samples: 10000, lag:2}, function() {
  var relation = flip();   // Whether there is a causal relation
  var cp = uniform(0, 1);  // Causal power of C to cause E (only used if relation is true)
  var b = uniform(0, 1);   // Background probability of E
  mapData({data: observedData}, function(datum) {
    var E = relation ? ((datum.C && flip(cp)) || flip(b)) : flip(b);
    condition(E == datum.E);
  })
  return {relation, cp, b};
});
var observedData = [{C:true, E:false}];
var ANSWER = Infer({method: 'MCMC', samples: 10000, lag:2}, function() {
  var relation = flip();   // Whether C causally influences E
  var cp = uniform(0, 1); // Causal power of C to cause E
  var b = uniform(0, 1);  // Background probability of E

  mapData({data: observedData}, function(datum) {
    var E = (relation && datum.C && flip(cp)) || flip(b);
    condition(E == datum.E);
  });

  return {relation, cp, b};
});
var observedData = [{C:true, E:false}];

var ANSWER = Infer({method: 'MCMC', samples: 10000, lag: 2}, function() {
  var relation = flip();   // Whether C causally influences E at all
  var cp = uniform(0, 1); // Causal power of C to cause E
  var b = uniform(0, 1);  // Background probability of E

  mapData({data: observedData}, function(datum) {
    var E = (relation && datum.C && flip(cp)) || flip(b);
    condition(E == datum.E);
  });

  return {relation, cp, b};
});
var observedData = [{C:true, E:false}];

var ANSWER = Infer({method: 'MCMC', samples: 10000, lag: 2}, function() {
  var relation = flip();   // Whether C causally influences E at all
  var cp = uniform(0, 1); // Causal power of C to cause E
  var b = uniform(0, 1);  // Background probability of E

  mapData({data: observedData}, function(datum) {
    var E = (relation && datum.C && flip(cp)) || flip(b);
    condition(E == datum.E);
  });

  return {relation, cp, b};
});
groundtruth output
{"relation":true,"cp":0.2185296397704…0.0007
{"relation":true,"cp":0.0729233522079…0.0007
{"relation":true,"cp":0.0146991513624…0.0006
{"relation":false,"cp":0.522970429212…0.0006
{"relation":false,"cp":0.110245068953…0.0006
{"relation":false,"cp":0.485246544614…0.0006
{"relation":true,"cp":0.1102450689533…0.0005
{"relation":true,"cp":0.2362638740934…0.0005
{"relation":true,"cp":0.0332195106849…0.0005
{"relation":false,"cp":0.726618962517…0.0005
{"relation":true,"cp":0.1387407450624…0.0005
{"relation":true,"cp":0.0243044972065…0.0005
… 7966 more
raw JSON
{
  "__kind": "distribution",
  "probs": [
    0.00019999999999999985,
    0.0003000000000000001,
    0.00010000000000000009,
    … (≈10,000 per-sample weights, each a small multiple of 1/10000) …

... (1021124 more chars truncated)
feedback
not signed
probmods2-occams-razor/ex2.2 distribution 8 runs TV=0.67 TV<1
prompt
system base instructions used by every run
You are a WebPPL code generator. Given an exercise, produce a single WebPPL program that binds the answer to a top-level variable named `ANSWER`.

Answer format (strict): emit exactly one fenced code block.

```js
<your WebPPL program ending with: var ANSWER = <expression>;>
```

The last statement of your program MUST be `var ANSWER = <expression>;` where `<expression>` is the answer the prompt asks for - typically an `Infer({...}, model)` for a distribution, a numeric/array value, or an object literal `{key: value, ...}` for a record of multiple sub-answers.

Do not write prose, explanations, or multiple code blocks. Do not use `return` at the top level - WebPPL doesn't allow it (return is only for function bodies).
system WebPPL primer appended to system on `*-primer-*` runs only
WebPPL is a probabilistic programming language with JavaScript-like syntax. A few quirks to remember:

Control flow: WebPPL is a functional subset of JavaScript. There are no `for` or `while` loops. Iterate with `map(fn, list)`, `mapData({data: list}, fn)`, `repeat(n, fn)`, or recursion. `_.range(start, end)` produces integer ranges.

Functions: define with `var f = function(args) { ...; return value; }`. The last expression of a function is auto-returned only if it is a bare expression statement; otherwise use explicit `return`. Memoize with `mem(fn)` so repeated calls with the same arguments return the same value within an inference run.

Random primitives - lowercase samples directly, uppercase constructs a Distribution object. They are NOT interchangeable:

  flip(p)            -> boolean             (sample, no `sample()` needed)
  uniform(a, b)      -> number              (sample)
  gaussian(mu, sg)   -> number              (sample)
  beta(a, b)         -> number              (sample)
  dirichlet(alpha)   -> tensor              (sample; alpha must be a Vector)
  randomInteger(n)   -> int 0..n-1          (sample)
  uniformDrift({a, b, width})        -> sample  (drift kernel; do NOT wrap in sample())
  dirichletDrift({alpha, conc.})     -> sample  (drift kernel; do NOT wrap in sample())

Distribution *constructors* (used with `sample(D)`, `observe(D, val)`, or as return value of inference):
  Bernoulli({p})              Beta({a, b})              Gaussian({mu, sigma})
  Uniform({a, b})             Categorical({vs, ps})     Binomial({p, n})
  Dirichlet({alpha})          Multinomial({ps, n})      Poisson({mu})

Common gotchas:
- `Dirichlet({alpha: ...})` requires a Vector, not a JS array. Use `ones([n, 1])` or `Vector([1, 1, ...])`.
- WebPPL only supports `var`, not `let` / `const`.
- WebPPL is *single-assignment*: `var X = ...;` only. You can't declare `var X;` and assign later, and you can't reassign `X = ...` after the declaration. Use ternaries or recursion to express conditional bindings.
- Array methods like `.fill`, `.indexOf`, `.map`, `.forEach`, `.concat` may fail. Prefer `repeat(n, fn)`, `_.indexOf(arr, x)`, `map(fn, arr)`, `mapData({data: arr}, fn)`, and `arr1.concat(arr2)` only at the top of a returned expression.
- Always end top-level statements with `;`. WebPPL inherits JS ASI rules, so `var x = f()
[a, b]` parses as `var x = f()[a, b]` (subscript), not two statements.

Inference: `Infer({method: ..., samples: N, ...}, modelFn)` runs `modelFn` under the chosen method and returns a Distribution over its return value. Methods: `'enumerate'`, `'rejection'`, `'forward'`, `'MCMC'`, `'SMC'`. For MCMC, optional kernels: `kernel: {HMC: {steps, stepSize}}`; the drift kernels above can replace `uniform`/`dirichlet` calls inside the model.

Conditioning: `condition(bool)` zeros out worlds where bool is false. `observe(dist, value)` factors in `dist.score(value)`. `factor(score)` adds `score` to the log-probability directly.

Tensors / utilities: `Vector([a, b, ...])`, `T.get(vec, i)`, `ones([rows, cols])`, `_.range`, `_.flatten`, `_.zipObject`, `_.fromPairs`, `_.includes`, `_.parseInt`, `_.uniq`, `_.merge`. Arrays use `Array.isArray`, `Object.keys` works on plain objects.

Display: there's no `viz`, `print`, `display`, etc. that affects the answer - those are browser-only. The answer is the value of your program's last expression.
user message
Given the Causal Support model (relation, cp, b latent; observedData = [{C:true, E:false}]; noisy-OR effect E = (relation && C && flip(cp)) || flip(b)): Single-site MH MCMC won't be efficient because changing relation alone rarely accepts. Improve efficiency by constructing the *marginal probability of E* directly via a small Infer, and use that in an observe (instead of sampling E and conditioning). Return {relation, cp, b}.
groundtruth code
var observedData = [{C:true, E:false}];
var ANSWER = (Infer({method: 'MCMC', samples: 10000, lag:2}, function() {
  var relation = flip();
  var cp = uniform(0, 1);
  var b = uniform(0, 1);

  var noisyOrMarginal = function(C) {
    return Infer({method: 'enumerate'}, function() {
      return (relation && C && flip(cp)) || flip(b);
    });
  };

  mapData({data: observedData}, function(datum) {
    observe(noisyOrMarginal(datum.C), datum.E);
  });

  return {relation, cp, b};
}));
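For reference, the enumerated noisy-OR marginal in the groundtruth has a simple closed form: with C = true, P(E = false) = (1 − b)(1 − relation·cp). A plain-JavaScript check of that identity (hypothetical illustration, not part of any run; `pEFalse` is an assumed helper enumerating the two flips):

```javascript
// Plain JS (not WebPPL): enumerate the two coin flips in
// E = (relation && C && flip(cp)) || flip(b) with C = true, and sum the
// probability mass of the worlds where E is false.
var pEFalse = function (relation, cp, b) {
  var total = 0;
  [true, false].forEach(function (f1) {     // flip(cp)
    [true, false].forEach(function (f2) {   // flip(b)
      var E = (relation && f1) || f2;       // C = true
      var p = (f1 ? cp : 1 - cp) * (f2 ? b : 1 - b);
      if (!E) { total += p; }
    });
  });
  return total;
};
// Matches the closed form (1 - b) * (1 - (relation ? cp : 0)):
console.log(pEFalse(true, 0.4, 0.2));  // 0.48
console.log(pEFalse(false, 0.4, 0.2)); // 0.8
```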
generated code
sonnet-46-primer-v3 TV<1 haiku-45-noprimer-v3 TV<1 haiku-45-primer-v3 TV<1 haiku-45-think-noprimer-v3 TV<1 haiku-45-think-primer-v3 TV<1 sonnet-46-noprimer-v3 TV<1 sonnet-46-think-noprimer-v3 TV<1 sonnet-46-think-primer-v3 TV<1
var model = function() {
  var relation = flip(0.5);
  var cp = uniform(0, 1);
  var b = uniform(0, 1);

  // Compute marginal probability of E given C=true using a small inner Infer
  var eDistGivenCTrue = Infer({method: 'enumerate'}, function() {
    var E = (relation && flip(cp)) || flip(b);
    return E;
  });

  // Observed data: {C: true, E: false}
  // P(E=false | C=true, relation, cp, b)
  observe(eDistGivenCTrue, false);

  return {relation: relation, cp: cp, b: b};
};

var ANSWER = Infer({
  method: 'MCMC',
  samples: 10000,
  burn: 2000,
  kernel: 'MH'
}, model);
// Causal Support model with improved efficiency
// Instead of sampling E and conditioning, we compute P(E|relation,cp,b) directly
// and use that in observe for better MCMC mixing

var model = function() {
  var relation = flip(0.5);
  var cp = uniform(0, 1);
  var b = uniform(0, 1);
  
  // Compute marginal probability of E given relation, cp, b
  // E = (relation && C && flip(cp)) || flip(b)
  // where C is observed to be true
  var probE = function(rel, c_prob, b_prob) {
    // C is true (observed)
    // E = (rel && true && flip(c_prob)) || flip(b_prob)
    // E = (rel && flip(c_prob)) || flip(b_prob)
    
    if (rel) {
      // E = flip(c_prob) || flip(b_prob)
      // P(E=true) = 1 - P(E=false) = 1 - (1-c_prob)*(1-b_prob)
      return 1 - (1 - c_prob) * (1 - b_prob);
    } else {
      // E = flip(b_prob)
      // P(E=true) = b_prob
      return b_prob;
    }
  };
  
  var pE = probE(relation, cp, b);
  
  // Observe E=false with probability (1 - pE)
  observe(Bernoulli({p: pE}), false);
  
  return {relation: relation, cp: cp, b: b};
};

var ANSWER = Infer({method: "MCMC", samples: 10000, lag: 10}, model);
var model = function() {
  var relation = flip(0.5);
  var cp = uniform(0, 1);
  var b = uniform(0, 1);
  
  // Marginal probability of E given relation, cp, b
  // E = (relation && C && flip(cp)) || flip(b), with C = true
  // P(E=false) = P(not relation) * P(flip(b)=false) + P(relation) * P(flip(cp)=false) * P(flip(b)=false)
  //            = (1-relation) * (1-b) + relation * (1-cp) * (1-b)
  //            = (1-b) * ((1-relation) + relation*(1-cp))
  //            = (1-b) * (1 - relation*cp)
  
  var pE_false = (1 - b) * (1 - (relation ? cp : 0));
  
  // Observe E = false
  observe(Bernoulli({p: 1 - pE_false}), false);
  
  return {relation: relation, cp: cp, b: b};
};

var ANSWER = Infer({method: 'MCMC', samples: 5000, burn: 1000}, model);
var model = function() {
  var relation = flip();
  var cp = uniform(0, 1);
  var b = uniform(0, 1);
  
  // Compute P(E = false | relation, cp, b, C = true) directly
  // E = (relation && C && flip(cp)) || flip(b)
  // With C = true, E = (relation && flip(cp)) || flip(b)
  // E = false iff both disjuncts are false:
  //   - (relation && flip(cp)) is false with prob: relation ? (1-cp) : 1
  //   - flip(b) is false with prob: (1-b)
  var probEFalse = function(relation, cp, b) {
    var pFirstDisjunctFalse = relation ? (1 - cp) : 1.0;
    var pSecondDisjunctFalse = 1 - b;
    return pFirstDisjunctFalse * pSecondDisjunctFalse;
  };
  
  // Observe E = false using the marginal probability
  // This avoids resampling E, making MH more efficient
  observe(Bernoulli({p: probEFalse(relation, cp, b)}), false);
  
  return {relation: relation, cp: cp, b: b};
};

var ANSWER = Infer({method: 'MCMC', samples: 5000}, model);
var model = function() {
  var relation = flip(0.5);
  var cp = uniform(0, 1);
  var b = uniform(0, 1);
  
  // Compute marginal probability of E given current latent values
  // by running a small inner inference that marginalizes over the stochastic flip()s
  var eDistribution = Infer({method: 'enumerate'}, function() {
    // With C = true from observed data
    var E = (relation && flip(cp)) || flip(b);
    return E;
  });
  
  // Observe E = false directly against the marginal distribution
  // This is much more efficient than sampling E and conditioning,
  // because proposals to relation/cp/b are now scored by the exact marginal,
  // avoiding rejection from joint sampling
  observe(eDistribution, false);
  
  return {relation: relation, cp: cp, b: b};
};

var ANSWER = Infer({method: 'MCMC', samples: 10000}, model);
// Causal Support model with marginal likelihood approach
// observedData = [{C: true, E: false}]
// E = (relation && C && flip(cp)) || flip(b)  [noisy-OR]

var model = function() {
  // Sample latent variables
  var relation = flip(0.5);
  var cp = uniform(0, 1);
  var b = uniform(0, 1);
  
  // For each observation, compute marginal P(E | C, relation, cp, b)
  // and use factor/observe instead of sampling E
  
  // Observation: C = true, E = false
  var C = true;
  var E_obs = false;
  
  // Compute P(E = true | C, relation, cp, b) analytically
  // E = (relation && C && flip(cp)) || flip(b)
  // Using noisy-OR:
  // P(E=1) = 1 - P(E=0)
  // P(E=0) = P(not from causal path) * P(not from background)
  // P(not from background) = (1 - b)
  // P(not from causal path) = if (relation && C) then (1 - cp) else 1
  
  var p_causal_fires = (relation && C) ? cp : 0;
  var p_E_true = 1 - (1 - p_causal_fires) * (1 - b);
  var p_E_false = 1 - p_E_true;
  
  // Observe E = false
  var p_E_obs = E_obs ? p_E_true : p_E_false;
  factor(Math.log(p_E_obs));
  
  return {relation: relation, cp: cp, b: b};
};

var ANSWER = Infer({method: 'MCMC', samples: 50000, burn: 5000, kernel: 'MH'}, model);
// Causal Support model with marginalised E for efficient MCMC
// Observed: [{C:true, E:false}]
// Noisy-OR: E = (relation && C && flip(cp)) || flip(b)

var model = function() {
  // Priors over latent structure and parameters
  var relation = flip(0.5);
  var cp       = uniform(0, 1);   // causal power
  var b        = uniform(0, 1);   // background rate

  // For each observation, build a *small enumeration* over E
  // to get P(E | relation, cp, b, C) analytically, then observe.
  map(function(obs) {
    var C = obs.C;
    var observedE = obs.E;

    // Marginal distribution of E under noisy-OR given current latents
    var eDist = Infer({method: 'enumerate'}, function() {
      var causalFires = relation && C && flip(cp);
      var bgFires     = flip(b);
      return causalFires || bgFires;
    });

    // Soft-condition on the observed E value
    observe(eDist, observedE);
  }, [{C: true, E: false}]);

  return {relation: relation, cp: cp, b: b};
};

var ANSWER = Infer({method: 'MCMC', samples: 20000, burn: 2000, lag: 2}, model);
var observedData = [{C: true, E: false}];

// Compute P(E=true | C, relation, cp, b) by marginalising out the internal flips
// via a nested enumerate inference — this avoids ever sampling E and conditioning on it.
var marginalPofE = function(C, relation, cp, b) {
  var dist = Infer({method: 'enumerate'}, function() {
    var causalPath = relation && C && flip(cp);
    var background  = flip(b);
    return causalPath || background;   // noisy-OR
  });
  return Math.exp(dist.score(true));  // P(E = true)
};

var model = function() {
  // Priors
  var relation = flip(0.5);
  var cp = uniform(0, 1);
  var b  = uniform(0, 1);

  // Soft-condition on each observation via the marginal likelihood of E
  mapData({data: observedData}, function(datum) {
    var pE = marginalPofE(datum.C, relation, cp, b);
    // observe E_observed under Bernoulli(pE) — no hard condition needed
    observe(Bernoulli({p: pE}), datum.E);
  });

  return {relation: relation, cp: cp, b: b};
};

var ANSWER = Infer(
  {method: 'MCMC', samples: 20000, lag: 5, burn: 2000,
   kernel: {MH: {}}},
  model
);
groundtruth output
{"relation":true,"cp":0.1720657297410…0.0004
{"relation":false,"cp":0.167367757800…0.0004
{"relation":true,"cp":0.0026054214915…0.0004
{"relation":true,"cp":0.0779696075964…0.0004
{"relation":true,"cp":0.0520597709915…0.0004
{"relation":true,"cp":0.0082423988873…0.0004
{"relation":false,"cp":0.918602394004…0.0004
{"relation":false,"cp":0.709728818230…0.0003
{"relation":false,"cp":0.618708086566…0.0003
{"relation":true,"cp":0.5020014531893…0.0003
{"relation":false,"cp":0.627211768456…0.0003
{"relation":false,"cp":0.592535077541…0.0003
… 9265 more
raw JSON
{
  "__kind": "distribution",
  "probs": [
    0.00010000000000000009,
    0.00010000000000000009,
    0.00010000000000000009,
    … (≈10,000 per-sample weights, each a small multiple of 1/10000) …

... (1188065 more chars truncated)
feedback
not signed
probmods2-occams-razor/ex2.3 record(cpValues, csValues) 8 runs val-
prompt
system base instructions used by every run
You are a WebPPL code generator. Given an exercise, produce a single WebPPL program that binds the answer to a top-level variable named `ANSWER`.

Answer format (strict): emit exactly one fenced code block.

```js
<your WebPPL program ending with: var ANSWER = <expression>;>
```

The last statement of your program MUST be `var ANSWER = <expression>;` where `<expression>` is the answer the prompt asks for - typically an `Infer({...}, model)` for a distribution, a numeric/array value, or an object literal `{key: value, ...}` for a record of multiple sub-answers.

Do not write prose, explanations, or multiple code blocks. Do not use `return` at the top level - WebPPL doesn't allow it (return is only for function bodies).
system WebPPL primer appended to system on `*-primer-*` runs only
WebPPL is a probabilistic programming language with JavaScript-like syntax. A few quirks to remember:

Control flow: WebPPL is a functional subset of JavaScript. There are no `for` or `while` loops. Iterate with `map(fn, list)`, `mapData({data: list}, fn)`, `repeat(n, fn)`, or recursion. `_.range(start, end)` produces integer ranges.

Functions: define with `var f = function(args) { ...; return value; }`. The last expression of a function is auto-returned only if it is a bare expression statement; otherwise use explicit `return`. Memoize with `mem(fn)` so repeated calls with the same arguments return the same value within an inference run.

Random primitives - lowercase samples directly, uppercase constructs a Distribution object. They are NOT interchangeable:

  flip(p)            -> boolean             (sample, no `sample()` needed)
  uniform(a, b)      -> number              (sample)
  gaussian(mu, sg)   -> number              (sample)
  beta(a, b)         -> number              (sample)
  dirichlet(alpha)   -> tensor              (sample; alpha must be a Vector)
  randomInteger(n)   -> int 0..n-1          (sample)
  uniformDrift({a, b, width})        -> sample  (drift kernel; do NOT wrap in sample())
  dirichletDrift({alpha, conc.})     -> sample  (drift kernel; do NOT wrap in sample())

Distribution *constructors* (used with `sample(D)`, `observe(D, val)`, or as return value of inference):
  Bernoulli({p})              Beta({a, b})              Gaussian({mu, sigma})
  Uniform({a, b})             Categorical({vs, ps})     Binomial({p, n})
  Dirichlet({alpha})          Multinomial({ps, n})      Poisson({mu})

Common gotchas:
- `Dirichlet({alpha: ...})` requires a Vector, not a JS array. Use `ones([n, 1])` or `Vector([1, 1, ...])`.
- WebPPL only supports `var`, not `let` / `const`.
- WebPPL is *single-assignment*: `var X = ...;` only. You can't declare `var X;` and assign later, and you can't reassign `X = ...` after the declaration. Use ternaries or recursion to express conditional bindings.
- Array methods like `.fill`, `.indexOf`, `.map`, `.forEach`, `.concat` may fail. Prefer `repeat(n, fn)`, `_.indexOf(arr, x)`, `map(fn, arr)`, `mapData({data: arr}, fn)`, and `arr1.concat(arr2)` only at the top of a returned expression.
- Always end top-level statements with `;`. WebPPL inherits JS ASI rules, so `var x = f()
[a, b]` parses as `var x = f()[a, b]` (subscript), not two statements.

Inference: `Infer({method: ..., samples: N, ...}, modelFn)` runs `modelFn` under the chosen method and returns a Distribution over its return value. Methods: `'enumerate'`, `'rejection'`, `'forward'`, `'MCMC'`, `'SMC'`. For MCMC, optional kernels: `kernel: {HMC: {steps, stepSize}}`; the drift kernels above can replace `uniform`/`dirichlet` calls inside the model.

Conditioning: `condition(bool)` zeros out worlds where bool is false. `observe(dist, value)` factors in `dist.score(value)`. `factor(score)` adds `score` to the log-probability directly.

Tensors / utilities: `Vector([a, b, ...])`, `T.get(vec, i)`, `ones([rows, cols])`, `_.range`, `_.flatten`, `_.zipObject`, `_.fromPairs`, `_.includes`, `_.parseInt`, `_.uniq`, `_.merge`. Arrays use `Array.isArray`, `Object.keys` works on plain objects.

Display: there's no `viz`, `print`, `display`, etc. that affects the answer - those are browser-only. The answer is the value of your program's last expression.
user message
Compare Causal Power (CP) and Causal Support (CS) models on data where E occurs varying amounts with and without C.
var generateData = function(numEWithC, numEWithoutC) {
  var eWithC = repeat(numEWithC, function() {return {C: true, E: true}});
  var noEWithC = repeat(8 - numEWithC, function() {return {C: true, E: false}});
  var eWithoutC = repeat(numEWithoutC, function() {return {C: false, E: true}});
  var noEWithoutC = repeat(8 - numEWithoutC, function() {return {C: false, E: false}});
  return _.flatten([eWithC, noEWithC, eWithoutC, noEWithoutC]);
}
var dataParams = [[8,8],[6,6],[4,4],[2,2],[0,0],[8,6],[6,4],[4,2],[2,0],[8,4],[6,2],[4,0],[8,2],[6,0],[8,0]];
Implement cpPost(observedData) (returning posterior over cp) and csPost(observedData) (returning posterior over relation*cp), both using the marginal-noisy-OR observe from Ex 2.2. Return an object literal {cpValues, csValues} where each is an array of expectation(...) of the corresponding posterior across all 15 dataParams configurations.
groundtruth code
var generateData = function(numEWithC, numEWithoutC) {
  var eWithC = repeat(numEWithC, function() {return {C: true, E: true}});
  var noEWithC = repeat(8 - numEWithC, function() {return {C: true, E: false}});
  var eWithoutC = repeat(numEWithoutC, function() {return {C: false, E: true}});
  var noEWithoutC = repeat(8 - numEWithoutC, function() {return {C: false, E: false}});
  return _.flatten([eWithC, noEWithC, eWithoutC, noEWithoutC]);
};

var dataParams = [[8, 8], [6, 6], [4, 4], [2, 2], [0, 0], [8, 6],
                  [6, 4], [4, 2], [2, 0], [8, 4], [6, 2], [4, 0],
                  [8, 2], [6, 0], [8, 0]];

var data = map(function(x) { generateData(x[0], x[1]) }, dataParams);

var cpPost = function(observedData) {
  return Infer({method: 'MCMC', burn: 2000, samples: 1000, lag:2}, function() {
    var cp = uniform(0, 1);
    var b = uniform(0, 1);
    var noisyOrMarginal = function(C) {
      return Infer({method: 'enumerate'}, function() {
        return (C && flip(cp)) || flip(b);
      });
    };
    mapData({data: observedData}, function(datum) {
      observe(noisyOrMarginal(datum.C), datum.E);
    });
    return cp;
  });
};

var csPost = function(observedData) {
  return Infer({method: 'MCMC', burn: 2000, samples: 1000, lag:2}, function() {
    var relation = flip();
    var cp = uniform(0, 1);
    var b = uniform(0, 1);
    var noisyOrMarginal = function(C) {
      return Infer({method: 'enumerate'}, function() {
        return (relation && C && flip(cp)) || flip(b);
      });
    };
    mapData({data: observedData}, function(datum) {
      observe(noisyOrMarginal(datum.C), datum.E);
    });
    return relation * cp;
  });
};
var ANSWER = (({
  cpValues: map(function(d) { expectation(cpPost(d)) }, data),
  csValues: map(function(d) { expectation(csPost(d)) }, data)
}));
generated code
sonnet-46-primer-v3 val- haiku-45-noprimer-v3 val- haiku-45-primer-v3 fail haiku-45-think-noprimer-v3 fail haiku-45-think-primer-v3 fail sonnet-46-noprimer-v3 val- sonnet-46-think-noprimer-v3 val- sonnet-46-think-primer-v3 val-
var cpPost = function(observedData) {
  return Infer({method: 'MCMC', samples: 5000, burn: 1000}, function() {
    var b = uniform(0, 1);
    var cp = uniform(0, 1);
    
    mapData({data: observedData}, function(d) {
      var pE = d.C ? (1 - (1 - b) * (1 - cp)) : b;
      observe(Bernoulli({p: pE}), d.E);
    });
    
    return cp;
  });
};

var csPost = function(observedData) {
  return Infer({method: 'MCMC', samples: 5000, burn: 1000}, function() {
    var b = uniform(0, 1);
    var relation = flip(0.5);
    var cp = relation ? uniform(0, 1) : 0;
    
    mapData({data: observedData}, function(d) {
      var pE = d.C ? (1 - (1 - b) * (1 - cp)) : b;
      observe(Bernoulli({p: pE}), d.E);
    });
    
    return relation * cp;
  });
};

var generateData = function(numEWithC, numEWithoutC) {
  var eWithC = repeat(numEWithC, function() {return {C: true, E: true}});
  var noEWithC = repeat(8 - numEWithC, function() {return {C: true, E: false}});
  var eWithoutC = repeat(numEWithoutC, function() {return {C: false, E: true}});
  var noEWithoutC = repeat(8 - numEWithoutC, function() {return {C: false, E: false}});
  return _.flatten([eWithC, noEWithC, eWithoutC, noEWithoutC]);
};

var dataParams = [[8,8],[6,6],[4,4],[2,2],[0,0],[8,6],[6,4],[4,2],[2,0],[8,4],[6,2],[4,0],[8,2],[6,0],[8,0]];

var allData = map(function(params) {
  return generateData(params[0], params[1]);
}, dataParams);

var cpValues = map(function(data) {
  var post = cpPost(data);
  return expectation(post);
}, allData);

var csValues = map(function(data) {
  var post = csPost(data);
  return expectation(post);
}, allData);

var ANSWER = {cpValues: cpValues, csValues: csValues};
var generateData = function(numEWithC, numEWithoutC) {
  var eWithC = repeat(numEWithC, function() {return {C: true, E: true}});
  var noEWithC = repeat(8 - numEWithC, function() {return {C: true, E: false}});
  var eWithoutC = repeat(numEWithoutC, function() {return {C: false, E: true}});
  var noEWithoutC = repeat(8 - numEWithoutC, function() {return {C: false, E: false}});
  return _.flatten([eWithC, noEWithC, eWithoutC, noEWithoutC]);
}

var dataParams = [[8,8],[6,6],[4,4],[2,2],[0,0],[8,6],[6,4],[4,2],[2,0],[8,4],[6,2],[4,0],[8,2],[6,0],[8,0]];

// Marginal noisy-OR model from Ex 2.2
var noisyOrModel = function(cp, baseRate) {
  return function(c) {
    var pE = c ? (1 - (1 - baseRate) * (1 - cp)) : baseRate;
    return Bernoulli({p: pE});
  };
};

// Causal Power posterior
var cpPost = function(observedData) {
  return Infer({method: 'MCMC', samples: 2000}, function() {
    var cp = uniform(0, 1);
    var baseRate = uniform(0, 1);
    var model = noisyOrModel(cp, baseRate);
    
    mapData({data: observedData}, function(datum) {
      observe(model(datum.C), datum.E);
    });
    
    return cp;
  });
};

// Causal Support posterior (relation * cp)
var csPost = function(observedData) {
  return Infer({method: 'MCMC', samples: 2000}, function() {
    var cp = uniform(0, 1);
    var baseRate = uniform(0, 1);
    var model = noisyOrModel(cp, baseRate);
    
    mapData({data: observedData}, function(datum) {
      observe(model(datum.C), datum.E);
    });
    
    // Causal Support: relation * cp
    // relation = P(E|C) - P(E|¬C)
    var pEC = 1 - (1 - baseRate) * (1 - cp);
    var pEnotC = baseRate;
    var relation = pEC - pEnotC;
    var cs = relation * cp;
    
    return cs;
  });
};

// Compute expectations across all data configurations
var cpValues = map(function(params) {
  var data = generateData(params[0], params[1]);
  var post = cpPost(data);
  return expectation(post);
}, dataParams);

var csValues = map(function(params) {
  var data = generateData(params[0], params[1]);
  var post = csPost(data);
  return expectation(post);
}, dataParams);

var ANSWER = {cpValues: cpValues, csValues: csValues};
Timeout after 60s
var generateData = function(numEWithC, numEWithoutC) {
  var eWithC = repeat(numEWithC, function() {return {C: true, E: true}});
  var noEWithC = repeat(8 - numEWithC, function() {return {C: true, E: false}});
  var eWithoutC = repeat(numEWithoutC, function() {return {C: false, E: true}});
  var noEWithoutC = repeat(8 - numEWithoutC, function() {return {C: false, E: false}});
  return _.flatten([eWithC, noEWithC, eWithoutC, noEWithoutC]);
};

var dataParams = [[8,8],[6,6],[4,4],[2,2],[0,0],[8,6],[6,4],[4,2],[2,0],[8,4],[6,2],[4,0],[8,2],[6,0],[8,0]];

// Marginal-noisy-OR model from Ex 2.2
var cpPost = function(observedData) {
  return Infer({method: 'MCMC', samples: 5000}, function() {
    var cp = uniform(0, 1);
    var baseRate = uniform(0, 1);
    
    mapData({data: observedData}, function(datum) {
      var pE = datum.C ? (1 - (1 - baseRate) * (1 - cp)) : baseRate;
      observe(Bernoulli({p: pE}), datum.E ? 1 : 0);
    });
    
    return cp;
  });
};

var csPost = function(observedData) {
  return Infer({method: 'MCMC', samples: 5000}, function() {
    var cp = uniform(0, 1);
    var baseRate = uniform(0, 1);
    var relation = uniform(0, 1);
    
    mapData({data: observedData}, function(datum) {
      var pE = datum.C ? (1 - (1 - baseRate) * (1 - relation * cp)) : baseRate;
      observe(Bernoulli({p: pE}), datum.E ? 1 : 0);
    });
    
    return relation * cp;
  });
};

var cpValues = map(function(params) {
  var data = generateData(params[0], params[1]);
  var post = cpPost(data);
  return expectation(post);
}, dataParams);

var csValues = map(function(params) {
  var data = generateData(params[0], params[1]);
  var post = csPost(data);
  return expectation(post);
}, dataParams);

var ANSWER = {cpValues: cpValues, csValues: csValues};
Timeout after 60s
var generateData = function(numEWithC, numEWithoutC) {
  var eWithC = repeat(numEWithC, function() {return {C: true, E: true}});
  var noEWithC = repeat(8 - numEWithC, function() {return {C: true, E: false}});
  var eWithoutC = repeat(numEWithoutC, function() {return {C: false, E: true}});
  var noEWithoutC = repeat(8 - numEWithoutC, function() {return {C: false, E: false}});
  return _.flatten([eWithC, noEWithC, eWithoutC, noEWithoutC]);
};

var cpPost = function(observedData) {
  return Infer({method: 'MCMC', samples: 1000}, function() {
    var cp = uniform(0, 1);
    var bg = uniform(0, 1);
    
    mapData({data: observedData}, function(datum) {
      var probE = datum.C ? (bg + (1 - bg) * cp) : bg;
      observe(Bernoulli({p: probE}), datum.E ? 1 : 0);
    });
    
    return cp;
  });
};

var csPost = function(observedData) {
  return Infer({method: 'MCMC', samples: 1000}, function() {
    var cp = uniform(0, 1);
    var relation = uniform(0, 1);
    var bg = uniform(0, 1);
    
    mapData({data: observedData}, function(datum) {
      var probE = datum.C ? 
        (bg + (1 - bg) * relation * cp) : 
        bg;
      observe(Bernoulli({p: probE}), datum.E ? 1 : 0);
    });
    
    return relation * cp;
  });
};

var dataParams = [[8,8],[6,6],[4,4],[2,2],[0,0],[8,6],[6,4],[4,2],[2,0],[8,4],[6,2],[4,0],[8,2],[6,0],[8,0]];

var cpValues = map(function(params) {
  var data = generateData(params[0], params[1]);
  return expectation(cpPost(data));
}, dataParams);

var csValues = map(function(params) {
  var data = generateData(params[0], params[1]);
  return expectation(csPost(data));
}, dataParams);

var ANSWER = {cpValues: cpValues, csValues: csValues};
Timeout after 60s
var generateData = function(numEWithC, numEWithoutC) {
  var eWithC = repeat(numEWithC, function() {return {C: true, E: true}});
  var noEWithC = repeat(8 - numEWithC, function() {return {C: true, E: false}});
  var eWithoutC = repeat(numEWithoutC, function() {return {C: false, E: true}});
  var noEWithoutC = repeat(8 - numEWithoutC, function() {return {C: false, E: false}});
  return _.flatten([eWithC, noEWithC, eWithoutC, noEWithoutC]);
};

var cpPost = function(observedData) {
  return Infer({method: 'MCMC', samples: 2000}, function() {
    // Priors
    var cp = beta(1, 1);  // Causal power, uniform prior
    var b = beta(1, 1);   // Base rate (alternative causes)
    
    // Noisy-OR likelihood
    mapData({data: observedData}, function(datum) {
      var pE = datum.C ? (cp + (1 - cp) * b) : b;
      observe(Bernoulli({p: pE}), datum.E ? 1 : 0);
    });
    
    return cp;
  });
};

var csPost = function(observedData) {
  return Infer({method: 'MCMC', samples: 2000}, function() {
    // Priors
    var cp = beta(1, 1);
    var b = beta(1, 1);
    
    // Noisy-OR likelihood
    mapData({data: observedData}, function(datum) {
      var pE = datum.C ? (cp + (1 - cp) * b) : b;
      observe(Bernoulli({p: pE}), datum.E ? 1 : 0);
    });
    
    // Causal Support = relation * cp
    // relation measures how much E is more likely given C vs. without C
    var pEC = cp + (1 - cp) * b;
    var pNotEC = b;
    var relation = (pEC - pNotEC) / Math.max(1 - pNotEC, 0.001);
    
    return relation * cp;
  });
};

var dataParams = [[8,8],[6,6],[4,4],[2,2],[0,0],[8,6],[6,4],[4,2],[2,0],[8,4],[6,2],[4,0],[8,2],[6,0],[8,0]];

var cpValues = map(function(params) {
  var data = generateData(params[0], params[1]);
  var post = cpPost(data);
  return expectation(post);
}, dataParams);

var csValues = map(function(params) {
  var data = generateData(params[0], params[1]);
  var post = csPost(data);
  return expectation(post);
}, dataParams);

var ANSWER = {cpValues: cpValues, csValues: csValues};
// Marginal noisy-OR likelihood
// P(E=1 | C, cp, b) = 1 - (1-b) * (C ? (1-cp) : 1)
// We marginalize over b with a uniform prior

var noiseORObserve = function(datum, cp, b) {
  var pE = datum.C ? (1 - (1 - b) * (1 - cp)) : b;
  return datum.E ? Math.log(pE) : Math.log(1 - pE);
};

var cpPost = function(observedData) {
  return Infer({method: 'MCMC', samples: 5000, burn: 1000}, function() {
    var cp = uniform(0, 1);
    var b = uniform(0, 1);
    
    var logLik = sum(map(function(datum) {
      return noiseORObserve(datum, cp, b);
    }, observedData));
    
    factor(logLik);
    
    return cp;
  });
};

var csPost = function(observedData) {
  return Infer({method: 'MCMC', samples: 5000, burn: 1000}, function() {
    var relation = flip(0.5) ? 1 : 0; // causal relation exists or not
    var cp = uniform(0, 1);
    var b = uniform(0, 1);
    
    // If no relation, cp effectively = 0 (C has no causal power)
    var effectiveCp = relation === 1 ? cp : 0;
    
    var logLik = sum(map(function(datum) {
      return noiseORObserve(datum, effectiveCp, b);
    }, observedData));
    
    factor(logLik);
    
    return relation * cp;
  });
};

var generateData = function(numEWithC, numEWithoutC) {
  var eWithC = repeat(numEWithC, function() {return {C: true, E: true}});
  var noEWithC = repeat(8 - numEWithC, function() {return {C: true, E: false}});
  var eWithoutC = repeat(numEWithoutC, function() {return {C: false, E: true}});
  var noEWithoutC = repeat(8 - numEWithoutC, function() {return {C: false, E: false}});
  return _.flatten([eWithC, noEWithC, eWithoutC, noEWithoutC]);
};

var dataParams = [[8,8],[6,6],[4,4],[2,2],[0,0],[8,6],[6,4],[4,2],[2,0],[8,4],[6,2],[4,0],[8,2],[6,0],[8,0]];

var cpValues = map(function(params) {
  var data = generateData(params[0], params[1]);
  var post = cpPost(data);
  return expectation(post);
}, dataParams);

var csValues = map(function(params) {
  var data = generateData(params[0], params[1]);
  var post = csPost(data);
  return expectation(post);
}, dataParams);

var ANSWER = {cpValues: cpValues, csValues: csValues};
var generateData = function(numEWithC, numEWithoutC) {
  var eWithC = repeat(numEWithC, function() {return {C: true, E: true}});
  var noEWithC = repeat(8 - numEWithC, function() {return {C: true, E: false}});
  var eWithoutC = repeat(numEWithoutC, function() {return {C: false, E: true}});
  var noEWithoutC = repeat(8 - numEWithoutC, function() {return {C: false, E: false}});
  return _.flatten([eWithC, noEWithC, eWithoutC, noEWithoutC]);
};

var dataParams = [[8,8],[6,6],[4,4],[2,2],[0,0],[8,6],[6,4],[4,2],[2,0],[8,4],[6,2],[4,0],[8,2],[6,0],[8,0]];

// Marginal noisy-OR observe from Ex 2.2
var marginalNoisyORObs = function(data, b, cp) {
  mapData({data: data}, function(d) {
    var pE = d.C ? (b + cp - b * cp) : b;
    var pEclamped = Math.max(1e-10, Math.min(1 - 1e-10, pE));
    factor(d.E ? Math.log(pEclamped) : Math.log(1 - pEclamped));
  });
};

// CP model: infer causal power cp (with background b marginalized)
var cpPost = function(observedData) {
  return Infer({method: 'MCMC', samples: 10000, burn: 2000, lag: 2}, function() {
    var b  = uniform(0, 1);
    var cp = uniform(0, 1);
    marginalNoisyORObs(observedData, b, cp);
    return cp;
  });
};

// CS model: infer relation (binary) * cp, capturing causal support
var csPost = function(observedData) {
  return Infer({method: 'MCMC', samples: 10000, burn: 2000, lag: 2}, function() {
    var b        = uniform(0, 1);
    var relation = flip(0.5);
    var cp       = uniform(0, 1);
    // if no causal relation, effective cp is 0
    var effectiveCp = relation ? cp : 0;
    marginalNoisyORObs(observedData, b, effectiveCp);
    return relation ? cp : 0;
  });
};

var cpValues = map(function(params) {
  var data = generateData(params[0], params[1]);
  return expectation(cpPost(data));
}, dataParams);

var csValues = map(function(params) {
  var data = generateData(params[0], params[1]);
  return expectation(csPost(data));
}, dataParams);

var ANSWER = {cpValues: cpValues, csValues: csValues};
var generateData = function(numEWithC, numEWithoutC) {
  var eWithC = repeat(numEWithC, function() { return {C: true, E: true}; });
  var noEWithC = repeat(8 - numEWithC, function() { return {C: true, E: false}; });
  var eWithoutC = repeat(numEWithoutC, function() { return {C: false, E: true}; });
  var noEWithoutC = repeat(8 - numEWithoutC, function() { return {C: false, E: false}; });
  return _.flatten([eWithC, noEWithC, eWithoutC, noEWithoutC]);
};

var dataParams = [[8,8],[6,6],[4,4],[2,2],[0,0],[8,6],[6,4],[4,2],[2,0],[8,4],[6,2],[4,0],[8,2],[6,0],[8,0]];

// Causal Power: posterior over cp using noisy-OR marginal observe
var cpPost = function(observedData) {
  return Infer({method: 'MCMC', samples: 10000, burn: 2000}, function() {
    var b = beta(1, 1);
    var cp = beta(1, 1);

    mapData({data: observedData}, function(d) {
      var pE = d.C ? (1 - (1 - b) * (1 - cp)) : b;
      observe(Bernoulli({p: pE}), d.E);
    });

    return cp;
  });
};

// Causal Support: posterior over relation*cp using noisy-OR marginal observe
var csPost = function(observedData) {
  return Infer({method: 'MCMC', samples: 10000, burn: 2000}, function() {
    var b = beta(1, 1);
    var relation = flip(0.5);  // binary: does C have causal relation to E?
    var cp = beta(1, 1);

    var effectiveCp = relation ? cp : 0;

    mapData({data: observedData}, function(d) {
      var pE = d.C ? (1 - (1 - b) * (1 - effectiveCp)) : b;
      observe(Bernoulli({p: pE}), d.E);
    });

    return relation * cp;
  });
};

var allData = map(function(params) {
  return generateData(params[0], params[1]);
}, dataParams);

var cpValues = map(function(data) {
  return expectation(cpPost(data));
}, allData);

var csValues = map(function(data) {
  return expectation(csPost(data));
}, allData);

var ANSWER = {cpValues: cpValues, csValues: csValues};
groundtruth output
{
  "cpValues": [
    0.5556826131397045,
    0.3702535980921866,
    0.2700255045689427,
    0.1892047791618214,
    0.10600645878143336,
    0.6358724683821168,
    0.4481293977618758,
    0.33676185481308646,
    0.25430731291635106,
    0.7727284201484627,
    0.5424782657820121,
    0.4280589392830203,
    0.8453887507387302,
    0.6593040276151818,
    0.8872620837183924
  ],
  "csValues": [
    0.34298901139185173,
    0.15298054308492048,
    0.10216146467440754,
    0.04931054333555417,
    0.013087233596827976,
    0.5022983299207961,
    0.29668120661657926,
    0.16291500410135396,
    0.13221114517998647,
    0.7142026717003641,
    0.4885003554613696,
    0.36158814464404776,
    0.8495186831069386,
    0.6535112029435439,
    0.8830293854508352
  ]
}
feedback
not signed
social-cognition 6 atoms 5✓ · 0✗
probmods2-social-cognition/ex1.1 distribution 8 runs TV=0.00 TV=0
prompt
system base instructions used by every run
You are a WebPPL code generator. Given an exercise, produce a single WebPPL program that binds the answer to a top-level variable named `ANSWER`.

Answer format (strict): emit exactly one fenced code block.

```js
<your WebPPL program ending with: var ANSWER = <expression>;>
```

The last statement of your program MUST be `var ANSWER = <expression>;` where `<expression>` is the answer the prompt asks for - typically an `Infer({...}, model)` for a distribution, a numeric/array value, or an object literal `{key: value, ...}` for a record of multiple sub-answers.

Do not write prose, explanations, or multiple code blocks. Do not use `return` at the top level - WebPPL doesn't allow it (return is only for function bodies).
system WebPPL primer appended to system on `*-primer-*` runs only
WebPPL is a probabilistic programming language with JavaScript-like syntax. A few quirks to remember:

Control flow: WebPPL is a functional subset of JavaScript. There are no `for` or `while` loops. Iterate with `map(fn, list)`, `mapData({data: list}, fn)`, `repeat(n, fn)`, or recursion. `_.range(start, end)` produces integer ranges.

Functions: define with `var f = function(args) { ...; return value; }`. The last expression of a function is auto-returned only if it is a bare expression statement; otherwise use explicit `return`. Memoize with `mem(fn)` so repeated calls with the same arguments return the same value within an inference run.

Random primitives - lowercase samples directly, uppercase constructs a Distribution object. They are NOT interchangeable:

  flip(p)            -> boolean             (sample, no `sample()` needed)
  uniform(a, b)      -> number              (sample)
  gaussian(mu, sg)   -> number              (sample)
  beta(a, b)         -> number              (sample)
  dirichlet(alpha)   -> tensor              (sample; alpha must be a Vector)
  randomInteger(n)   -> int 0..n-1          (sample)
  uniformDrift({a, b, width})        -> sample  (drift kernel; do NOT wrap in sample())
  dirichletDrift({alpha, conc.})     -> sample  (drift kernel; do NOT wrap in sample())

Distribution *constructors* (used with `sample(D)`, `observe(D, val)`, or as return value of inference):
  Bernoulli({p})              Beta({a, b})              Gaussian({mu, sigma})
  Uniform({a, b})             Categorical({vs, ps})     Binomial({p, n})
  Dirichlet({alpha})          Multinomial({ps, n})      Poisson({mu})

Common gotchas:
- `Dirichlet({alpha: ...})` requires a Vector, not a JS array. Use `ones([n, 1])` or `Vector([1, 1, ...])`.
- WebPPL only supports `var`, not `let` / `const`.
- WebPPL is *single-assignment*: `var X = ...;` only. You can't declare `var X;` and assign later, and you can't reassign `X = ...` after the declaration. Use ternaries or recursion to express conditional bindings.
- Array methods like `.fill`, `.indexOf`, `.map`, `.forEach`, `.concat` may fail. Prefer `repeat(n, fn)`, `_.indexOf(arr, x)`, `map(fn, arr)`, and `mapData({data: arr}, fn)`; use `arr1.concat(arr2)` only as the top-level expression of a `return`.
- Always end top-level statements with `;`. WebPPL inherits JS ASI rules, so `var x = f()
[a, b]` parses as `var x = f()[a, b]` (subscript), not two statements.

Inference: `Infer({method: ..., samples: N, ...}, modelFn)` runs `modelFn` under the chosen method and returns a Distribution over its return value. Methods: `'enumerate'`, `'rejection'`, `'forward'`, `'MCMC'`, `'SMC'`. For MCMC, optional kernels: `kernel: {HMC: {steps, stepSize}}`; the drift kernels above can replace `uniform`/`dirichlet` calls inside the model.

Conditioning: `condition(bool)` zeros out worlds where bool is false. `observe(dist, value)` factors in `dist.score(value)`. `factor(score)` adds `score` to the log-probability directly.

Tensors / utilities: `Vector([a, b, ...])`, `T.get(vec, i)`, `ones([rows, cols])`, `_.range`, `_.flatten`, `_.zipObject`, `_.fromPairs`, `_.includes`, `_.parseInt`, `_.uniq`, `_.merge`. Arrays use `Array.isArray`, `Object.keys` works on plain objects.

Display: there's no `viz`, `print`, `display`, etc. that affects the answer - those are browser-only. The answer is the value of your program's last expression.
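The `mem` behavior mentioned above, sketched (the function and argument names are illustrative):

```js
// mem caches results per argument within a single inference run,
// so a memoized random choice is sampled once and then reused.
var eyeColor = mem(function(person) {
  return categorical({vs: ['blue', 'brown', 'green'], ps: [0.3, 0.5, 0.2]});
});
var dist = Infer({method: 'enumerate'}, function() {
  // Both calls return the same sampled value within one run:
  return eyeColor('alice') === eyeColor('alice');
});
var ANSWER = dist; // places all mass on true
```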
user message
Sally chooses an action 'a', 'b', or 'c' that maps to a vending machine outcome (each action gives 80% chance of one specific food, 10% each for the others). If Sally is *deceptive* she chooses an action that does NOT lead to her goal food; otherwise she chooses one that does. Fill in the condition(...) calls. End with the Infer(...) call returning the posterior over Sally's goal food given that she is deceptive AND chose 'b'.
var actionPrior = Categorical({vs: ['a','b','c'], ps: [1/3,1/3,1/3]});
var foodPrior = Categorical({vs: ['bagel','cookie','doughnut'], ps: [1/3,1/3,1/3]});
var vendingMachine = /* maps action -> categorical food */;

var chooseAction = function(goal, transition, state, deceive) {
  return Infer({method: 'enumerate'}, function() {
    var action = sample(actionPrior);
    condition(...)
    return action;
  })
};

Infer({method: 'enumerate'}, function() {
  var deceive = flip();
  var goalFood = sample(foodPrior);
  var goal = function(outcome) {return outcome == goalFood};
  var sallyActionDist = chooseAction(goal, vendingMachine, 'state', deceive);
  condition(...)
  return goalFood;
});
groundtruth code
var actionPrior = Categorical({vs: ['a', 'b', 'c'], ps: [1/3, 1/3, 1/3]});
var foodPrior = Categorical({vs: ['bagel', 'cookie', 'doughnut'], ps: [1/3, 1/3, 1/3]});

var vendingMachine = function(state, action) {
  return action == 'a' ? categorical({vs: ['bagel', 'cookie', 'doughnut'], ps: [.8, .1, .1]}) :
         action == 'b' ? categorical({vs: ['bagel', 'cookie', 'doughnut'], ps: [.1, .8, .1]}) :
         action == 'c' ? categorical({vs: ['bagel', 'cookie', 'doughnut'], ps: [.1, .1, .8]}) :
         'nothing';
};

var chooseAction = function(goal, transition, state, deceive) {
  return Infer({method: 'enumerate'}, function() {
    var action = sample(actionPrior);
    var outcome = transition(state, action);
    condition(deceive ? !goal(outcome) : goal(outcome));
    return action;
  });
};
var ANSWER = (Infer({method: 'enumerate'}, function() {
  var deceive = flip();
  var goalFood = sample(foodPrior);
  var goal = function(outcome) {return outcome == goalFood};
  var sallyActionDist = chooseAction(goal, vendingMachine, 'state', deceive);
  condition(deceive);
  condition(sample(sallyActionDist) == 'b');
  return goalFood;
}));
generated code
sonnet-46-primer-v3 TV=0 haiku-45-noprimer-v3 TV=0 haiku-45-primer-v3 fail haiku-45-think-noprimer-v3 TV<.5 haiku-45-think-primer-v3 TV=0 sonnet-46-noprimer-v3 TV=0 sonnet-46-think-noprimer-v3 TV=0 sonnet-46-think-primer-v3 TV=0
var actionPrior = Categorical({vs: ['a','b','c'], ps: [1/3,1/3,1/3]});
var foodPrior = Categorical({vs: ['bagel','cookie','doughnut'], ps: [1/3,1/3,1/3]});

var vendingMachine = function(action) {
  return Categorical({
    vs: ['bagel','cookie','doughnut'],
    ps: action === 'a' ? [0.8, 0.1, 0.1] :
        action === 'b' ? [0.1, 0.8, 0.1] :
                         [0.1, 0.1, 0.8]
  });
};

var chooseAction = function(goal, transition, state, deceive) {
  return Infer({method: 'enumerate'}, function() {
    var action = sample(actionPrior);
    var outcome = sample(transition(action));
    condition(deceive ? !goal(outcome) : goal(outcome));
    return action;
  });
};

var ANSWER = Infer({method: 'enumerate'}, function() {
  var deceive = flip();
  var goalFood = sample(foodPrior);
  var goal = function(outcome) { return outcome == goalFood; };
  var sallyActionDist = chooseAction(goal, vendingMachine, 'state', deceive);
  condition(deceive === true && sample(sallyActionDist) === 'b');
  return goalFood;
});
var actionPrior = Categorical({vs: ['a','b','c'], ps: [1/3,1/3,1/3]});
var foodPrior = Categorical({vs: ['bagel','cookie','doughnut'], ps: [1/3,1/3,1/3]});

var vendingMachine = function(action) {
  var actionToFood = {
    'a': 'bagel',
    'b': 'cookie',
    'c': 'doughnut'
  };
  var goalFood = actionToFood[action];
  return Categorical({
    vs: ['bagel','cookie','doughnut'],
    ps: [
      goalFood === 'bagel' ? 0.8 : 0.1,
      goalFood === 'cookie' ? 0.8 : 0.1,
      goalFood === 'doughnut' ? 0.8 : 0.1
    ]
  });
};

var chooseAction = function(goal, transition, state, deceive) {
  return Infer({method: 'enumerate'}, function() {
    var action = sample(actionPrior);
    var outcome = sample(transition(action));
    var goalAchieved = goal(outcome);
    if (deceive) {
      condition(!goalAchieved);
    } else {
      condition(goalAchieved);
    }
    return action;
  })
};

var ANSWER = Infer({method: 'enumerate'}, function() {
  var deceive = flip();
  var goalFood = sample(foodPrior);
  var goal = function(outcome) {return outcome == goalFood};
  var sallyActionDist = chooseAction(goal, vendingMachine, 'state', deceive);
  var action = sample(sallyActionDist);
  condition(deceive && action === 'b');
  return goalFood;
});
Error: Line 45: You tried to assign to a field of ps, but you can only assign to fields of globalStore
var actionPrior = Categorical({vs: ['a','b','c'], ps: [1/3,1/3,1/3]});
var foodPrior = Categorical({vs: ['bagel','cookie','doughnut'], ps: [1/3,1/3,1/3]});

var vendingMachine = function(action) {
  var actionToFood = {
    'a': 'bagel',
    'b': 'cookie',
    'c': 'doughnut'
  };
  var goalFood = actionToFood[action];
  return Categorical({
    vs: ['bagel','cookie','doughnut'],
    ps: (function() {
      var ps = [0, 0, 0];
      var foodIndex = {'bagel': 0, 'cookie': 1, 'doughnut': 2}[goalFood];
      ps[foodIndex] = 0.8;
      var others = _.range(3).filter(function(i) { return i !== foodIndex; });
      ps[others[0]] = 0.1;
      ps[others[1]] = 0.1;
      return ps;
    })()
  });
};

var chooseAction = function(goal, transition, state, deceive) {
  return Infer({method: 'enumerate'}, function() {
    var action = sample(actionPrior);
    var outcome = sample(transition(action));
    var goalAchieved = goal(outcome);
    condition(deceive ? !goalAchieved : goalAchieved);
    return action;
  });
};

var ANSWER = Infer({method: 'enumerate'}, function() {
  var deceive = flip();
  var goalFood = sample(foodPrior);
  var goal = function(outcome) { return outcome == goalFood; };
  var sallyActionDist = chooseAction(goal, vendingMachine, 'state', deceive);
  var sallyAction = sample(sallyActionDist);
  condition(deceive && sallyAction == 'b');
  return goalFood;
});
var actionPrior = Categorical({vs: ['a','b','c'], ps: [1/3,1/3,1/3]});
var foodPrior = Categorical({vs: ['bagel','cookie','doughnut'], ps: [1/3,1/3,1/3]});

var vendingMachine = function(action) {
  if (action === 'a') {
    return Categorical({vs: ['bagel','cookie','doughnut'], ps: [0.8, 0.1, 0.1]});
  } else if (action === 'b') {
    return Categorical({vs: ['bagel','cookie','doughnut'], ps: [0.1, 0.8, 0.1]});
  } else {
    return Categorical({vs: ['bagel','cookie','doughnut'], ps: [0.1, 0.1, 0.8]});
  }
};

var actionToMainFood = function(action) {
  if (action === 'a') return 'bagel';
  if (action === 'b') return 'cookie';
  return 'doughnut';
};

var chooseAction = function(goal, transition, state, deceive) {
  return Infer({method: 'enumerate'}, function() {
    var action = sample(actionPrior);
    var actionLeadsToGoal = goal(actionToMainFood(action));
    condition(deceive ? !actionLeadsToGoal : actionLeadsToGoal);
    return action;
  })
};

var ANSWER = Infer({method: 'enumerate'}, function() {
  var deceive = flip();
  var goalFood = sample(foodPrior);
  var goal = function(outcome) {return outcome == goalFood};
  var sallyActionDist = chooseAction(goal, vendingMachine, 'state', deceive);
  var sallyAction = sample(sallyActionDist);
  condition(deceive && sallyAction === 'b');
  return goalFood;
});
var actionPrior = Categorical({vs: ['a','b','c'], ps: [1/3,1/3,1/3]});
var foodPrior = Categorical({vs: ['bagel','cookie','doughnut'], ps: [1/3,1/3,1/3]});

var vendingMachine = function(action) {
  if (action === 'a') {
    return Categorical({vs: ['bagel','cookie','doughnut'], ps: [0.8, 0.1, 0.1]});
  } else if (action === 'b') {
    return Categorical({vs: ['bagel','cookie','doughnut'], ps: [0.1, 0.8, 0.1]});
  } else {
    return Categorical({vs: ['bagel','cookie','doughnut'], ps: [0.1, 0.1, 0.8]});
  }
};

var chooseAction = function(goal, transition, state, deceive) {
  return Infer({method: 'enumerate'}, function() {
    var action = sample(actionPrior);
    var outcome = sample(transition(action));
    condition(deceive ? !goal(outcome) : goal(outcome));
    return action;
  })
};

var ANSWER = Infer({method: 'enumerate'}, function() {
  var deceive = flip();
  var goalFood = sample(foodPrior);
  var goal = function(outcome) {return outcome == goalFood};
  var sallyActionDist = chooseAction(goal, vendingMachine, 'state', deceive);
  var action = sample(sallyActionDist);
  condition(deceive && action == 'b');
  return goalFood;
});
var actionPrior = Categorical({vs: ['a','b','c'], ps: [1/3,1/3,1/3]});
var foodPrior = Categorical({vs: ['bagel','cookie','doughnut'], ps: [1/3,1/3,1/3]});

var vendingMachine = function(state, action) {
  if (action === 'a') {
    return categorical({vs: ['bagel','cookie','doughnut'], ps: [0.8, 0.1, 0.1]});
  } else if (action === 'b') {
    return categorical({vs: ['bagel','cookie','doughnut'], ps: [0.1, 0.8, 0.1]});
  } else {
    return categorical({vs: ['bagel','cookie','doughnut'], ps: [0.1, 0.1, 0.8]});
  }
};

var chooseAction = function(goal, transition, state, deceive) {
  return Infer({method: 'enumerate'}, function() {
    var action = sample(actionPrior);
    var outcome = transition(state, action);
    if (deceive) {
      condition(!goal(outcome));
    } else {
      condition(goal(outcome));
    }
    return action;
  });
};

var ANSWER = Infer({method: 'enumerate'}, function() {
  var deceive = flip();
  var goalFood = sample(foodPrior);
  var goal = function(outcome) { return outcome == goalFood; };
  var sallyActionDist = chooseAction(goal, vendingMachine, 'state', deceive);
  var action = sample(sallyActionDist);
  condition(deceive === true && action === 'b');
  return goalFood;
});
var actionPrior = Categorical({vs: ['a','b','c'], ps: [1/3,1/3,1/3]});
var foodPrior   = Categorical({vs: ['bagel','cookie','doughnut'], ps: [1/3,1/3,1/3]});

// action 'a' → mostly bagel, 'b' → mostly cookie, 'c' → mostly doughnut
var vendingMachine = function(action) {
  if (action === 'a') {
    return Categorical({vs: ['bagel','cookie','doughnut'], ps: [0.8, 0.1, 0.1]});
  } else if (action === 'b') {
    return Categorical({vs: ['bagel','cookie','doughnut'], ps: [0.1, 0.8, 0.1]});
  } else {
    return Categorical({vs: ['bagel','cookie','doughnut'], ps: [0.1, 0.1, 0.8]});
  }
};

// chooseAction: if deceive, pick an action whose outcome does NOT satisfy goal;
//               otherwise pick one that does.
var chooseAction = function(goal, transition, state, deceive) {
  return Infer({method: 'enumerate'}, function() {
    var action  = sample(actionPrior);
    var outcome = sample(transition(action));
    condition(deceive ? !goal(outcome) : goal(outcome));
    return action;
  });
};

var ANSWER = Infer({method: 'enumerate'}, function() {
  var deceive    = flip();
  var goalFood   = sample(foodPrior);
  var goal       = function(outcome) { return outcome === goalFood; };
  var sallyActionDist = chooseAction(goal, vendingMachine, 'state', deceive);

  // Observe: Sally IS deceptive AND chose action 'b'
  condition(deceive === true);
  condition(sample(sallyActionDist) === 'b');

  return goalFood;
});
var actionPrior = Categorical({vs: ['a','b','c'], ps: [1/3,1/3,1/3]});
var foodPrior = Categorical({vs: ['bagel','cookie','doughnut'], ps: [1/3,1/3,1/3]});

var vendingMachine = function(action) {
  return Categorical({
    vs: ['bagel', 'cookie', 'doughnut'],
    ps: action == 'a' ? [0.8, 0.1, 0.1] :
        action == 'b' ? [0.1, 0.8, 0.1] :
                        [0.1, 0.1, 0.8]
  });
};

var chooseAction = function(goal, transition, state, deceive) {
  return Infer({method: 'enumerate'}, function() {
    var action = sample(actionPrior);
    // If deceiving, pick an action whose outcome does NOT satisfy goal;
    // otherwise pick one whose outcome DOES satisfy goal.
    condition(deceive
      ? !goal(sample(transition(action)))
      :  goal(sample(transition(action))));
    return action;
  });
};

var ANSWER = Infer({method: 'enumerate'}, function() {
  var deceive = flip();
  var goalFood = sample(foodPrior);
  var goal = function(outcome) { return outcome == goalFood; };
  var sallyActionDist = chooseAction(goal, vendingMachine, 'state', deceive);
  // Condition on Sally being deceptive AND having chosen action 'b'
  condition(deceive);
  condition(sample(sallyActionDist) == 'b');
  return goalFood;
});
groundtruth output
doughnut  0.4500
bagel     0.4500
cookie    0.1000
raw JSON
{
  "__kind": "distribution",
  "probs": [
    0.44999999999999996,
    0.10000000000000012,
    0.44999999999999996
  ],
  "support": [
    "doughnut",
    "cookie",
    "bagel"
  ]
}
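Editor's note (not part of the logged runs): this groundtruth posterior can be reproduced by exact enumeration of the model's semantics. The Python sketch below mirrors `chooseAction` with `deceive = true` conditioned on action `'b'`; the names `machine` and `p_action_given` are ours, introduced only for this check.

```python
from fractions import Fraction

foods = ['bagel', 'cookie', 'doughnut']
# vending machine: action -> outcome probabilities over foods
machine = {'a': [Fraction(8, 10), Fraction(1, 10), Fraction(1, 10)],
           'b': [Fraction(1, 10), Fraction(8, 10), Fraction(1, 10)],
           'c': [Fraction(1, 10), Fraction(1, 10), Fraction(8, 10)]}

def p_action_given(goal, deceive):
    # chooseAction: uniform action prior, condition on the sampled outcome
    # (not) matching the goal, then renormalize over actions
    raw = {}
    for a, ps in machine.items():
        p_goal = ps[foods.index(goal)]
        raw[a] = (1 - p_goal) if deceive else p_goal
    z = sum(raw.values())
    return {a: p / z for a, p in raw.items()}

# observe: deceive == true and action == 'b'; uniform prior over goal foods
post = {g: p_action_given(g, True)['b'] for g in foods}
z = sum(post.values())
post = {g: float(p / z) for g, p in post.items()}
print(post)  # bagel 0.45, cookie 0.10, doughnut 0.45
```

The `0.45 / 0.10 / 0.45` split falls out directly: conditioned on deception, action `'b'` is twice chosen when the goal is bagel or doughnut (miss probability 0.9, normalized to 0.45) versus cookie (miss probability 0.2, normalized to 0.10).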
feedback
not signed
probmods2-social-cognition/ex1.2 distribution 8 runs TV=0.06 TV<.5
prompt
system base instructions used by every run
You are a WebPPL code generator. Given an exercise, produce a single WebPPL program that binds the answer to a top-level variable named `ANSWER`.

Answer format (strict): emit exactly one fenced code block.

```js
<your WebPPL program ending with: var ANSWER = <expression>;>
```

The last statement of your program MUST be `var ANSWER = <expression>;` where `<expression>` is the answer the prompt asks for - typically an `Infer({...}, model)` for a distribution, a numeric/array value, or an object literal `{key: value, ...}` for a record of multiple sub-answers.

Do not write prose, explanations, or multiple code blocks. Do not use `return` at the top level - WebPPL doesn't allow it (return is only for function bodies).
system WebPPL primer appended to system on `*-primer-*` runs only
WebPPL is a probabilistic programming language with JavaScript-like syntax. A few quirks to remember:

Control flow: WebPPL is a functional subset of JavaScript. There are no `for` or `while` loops. Iterate with `map(fn, list)`, `mapData({data: list}, fn)`, `repeat(n, fn)`, or recursion. `_.range(start, end)` produces integer ranges.

Functions: define with `var f = function(args) { ...; return value; }`. The last expression of a function is auto-returned only if it is a bare expression statement; otherwise use explicit `return`. Memoize with `mem(fn)` so repeated calls with the same arguments return the same value within an inference run.

Random primitives - lowercase samples directly, uppercase constructs a Distribution object. They are NOT interchangeable:

  flip(p)            -> boolean             (sample, no `sample()` needed)
  uniform(a, b)      -> number              (sample)
  gaussian(mu, sg)   -> number              (sample)
  beta(a, b)         -> number              (sample)
  dirichlet(alpha)   -> tensor              (sample; alpha must be a Vector)
  randomInteger(n)   -> int 0..n-1          (sample)
  uniformDrift({a, b, width})        -> sample  (drift kernel; do NOT wrap in sample())
  dirichletDrift({alpha, conc.})     -> sample  (drift kernel; do NOT wrap in sample())

Distribution *constructors* (used with `sample(D)`, `observe(D, val)`, or as return value of inference):
  Bernoulli({p})              Beta({a, b})              Gaussian({mu, sigma})
  Uniform({a, b})             Categorical({vs, ps})     Binomial({p, n})
  Dirichlet({alpha})          Multinomial({ps, n})      Poisson({mu})

Common gotchas:
- `Dirichlet({alpha: ...})` requires a Vector, not a JS array. Use `ones([n, 1])` or `Vector([1, 1, ...])`.
- WebPPL only supports `var`, not `let` / `const`.
- WebPPL is *single-assignment*: `var X = ...;` only. You can't declare `var X;` and assign later, and you can't reassign `X = ...` after the declaration. Use ternaries or recursion to express conditional bindings.
- Array methods like `.fill`, `.indexOf`, `.map`, `.forEach`, `.concat` may fail. Prefer `repeat(n, fn)`, `_.indexOf(arr, x)`, `map(fn, arr)`, `mapData({data: arr}, fn)`, and `arr1.concat(arr2)` only at the top of a returned expression.
- Always end top-level statements with `;`. WebPPL inherits JS ASI rules, so `var x = f()
[a, b]` parses as `var x = f()[a, b]` (subscript), not two statements.

Inference: `Infer({method: ..., samples: N, ...}, modelFn)` runs `modelFn` under the chosen method and returns a Distribution over its return value. Methods: `'enumerate'`, `'rejection'`, `'forward'`, `'MCMC'`, `'SMC'`. For MCMC, optional kernels: `kernel: {HMC: {steps, stepSize}}`; the drift kernels above can replace `uniform`/`dirichlet` calls inside the model.

Conditioning: `condition(bool)` zeros out worlds where bool is false. `observe(dist, value)` factors in `dist.score(value)`. `factor(score)` adds `score` to the log-probability directly.

Tensors / utilities: `Vector([a, b, ...])`, `T.get(vec, i)`, `ones([rows, cols])`, `_.range`, `_.flatten`, `_.zipObject`, `_.fromPairs`, `_.includes`, `_.parseInt`, `_.uniq`, `_.merge`. Arrays use `Array.isArray`, `Object.keys` works on plain objects.

Display: there's no `viz`, `print`, `display`, etc. that affects the answer - those are browser-only. The answer is the value of your program's last expression.
user message
Sally chooses 'a', 'b', or 'c'. If deceptive she avoids her goal food; otherwise she picks an action whose outcome IS her goal food. The vending machine: action 'a' -> 80% bagel, 'b' -> 80% cookie, 'c' -> 80% doughnut (10% each for the others). You observe Sally choose 'b' twice in a row. End with the Infer(...) returning the posterior over her goal food.
groundtruth code
var actionPrior = Categorical({vs: ['a', 'b', 'c'], ps: [1/3, 1/3, 1/3]});
var foodPrior = Categorical({vs: ['bagel', 'cookie', 'doughnut'], ps: [1/3, 1/3, 1/3]});

var vendingMachine = function(state, action) {
  return action == 'a' ? categorical({vs: ['bagel', 'cookie', 'doughnut'], ps: [.8, .1, .1]}) :
         action == 'b' ? categorical({vs: ['bagel', 'cookie', 'doughnut'], ps: [.1, .8, .1]}) :
         action == 'c' ? categorical({vs: ['bagel', 'cookie', 'doughnut'], ps: [.1, .1, .8]}) :
         'nothing';
};

var chooseAction = function(goal, transition, state, deceive) {
  return Infer({method: 'enumerate'}, function() {
    var action = sample(actionPrior);
    var outcome = transition(state, action);
    condition(deceive ? !goal(outcome) : goal(outcome));
    return action;
  });
};
var ANSWER = (Infer({method: 'enumerate'}, function() {
  var deceive = flip();
  var goalFood = sample(foodPrior);
  var goal = function(outcome) {return outcome == goalFood};
  var sallyActionDist = chooseAction(goal, vendingMachine, 'state', deceive);
  condition(sample(sallyActionDist) == 'b');
  condition(sample(sallyActionDist) == 'b');
  return goalFood;
}));
generated code
sonnet-46-primer-v3 TV<.5 haiku-45-noprimer-v3 fail haiku-45-primer-v3 TV<.5 haiku-45-think-noprimer-v3 TV<.5 haiku-45-think-primer-v3 TV<.5 sonnet-46-noprimer-v3 TV<.5 sonnet-46-think-noprimer-v3 TV<.5 sonnet-46-think-primer-v3 TV<.5
var model = function() {
  // Prior over goal food
  var goal = categorical({vs: ['bagel', 'cookie', 'doughnut'], ps: [1/3, 1/3, 1/3]});
  
  // Prior over deceptive
  var deceptive = flip(0.5);
  
  // Vending machine outcomes given action
  var vendingMachine = function(action) {
    var ps = action === 'a' ? [0.8, 0.1, 0.1] :
             action === 'b' ? [0.1, 0.8, 0.1] :
                              [0.1, 0.1, 0.8];
    return categorical({vs: ['bagel', 'cookie', 'doughnut'], ps: ps});
  };
  
  // Sally picks action based on goal and deceptiveness
  // If not deceptive: picks action whose outcome IS her goal food (most likely)
  // If deceptive: avoids her goal food
  var chooseAction = function() {
    if (!deceptive) {
      // Pick the action most likely to give goal food
      // 'a' -> bagel, 'b' -> cookie, 'c' -> doughnut
      return goal === 'bagel' ? 'a' :
             goal === 'cookie' ? 'b' : 'c';
    } else {
      // Deceptive: avoid goal food, pick uniformly among non-goal actions
      var nonGoalActions = goal === 'bagel' ? ['b', 'c'] :
                           goal === 'cookie' ? ['a', 'c'] :
                                              ['a', 'b'];
      return nonGoalActions[randomInteger(2)];
    }
  };
  
  // Observe Sally choosing 'b' twice
  var action1 = chooseAction();
  condition(action1 === 'b');
  
  var action2 = chooseAction();
  condition(action2 === 'b');
  
  return goal;
};

var ANSWER = Infer({method: 'enumerate'}, model);
Error: sample() expected a distribution but received "false".
var model = function() {
  var goal = uniformDraw(['bagel', 'cookie', 'doughnut']);
  var deceptive = flip(0.5);
  
  var actionDist = function(goal, deceptive) {
    if (deceptive) {
      // Avoid goal food: choose action that doesn't give goal with high probability
      if (goal === 'bagel') return uniformDraw(['b', 'c']);
      if (goal === 'cookie') return uniformDraw(['a', 'c']);
      if (goal === 'doughnut') return uniformDraw(['a', 'b']);
    } else {
      // Pick action whose outcome IS goal food
      if (goal === 'bagel') return 'a';
      if (goal === 'cookie') return 'b';
      if (goal === 'doughnut') return 'c';
    }
  };
  
  var outcomeDist = function(action) {
    if (action === 'a') return categorical({ps: [0.8, 0.1, 0.1], vs: ['bagel', 'cookie', 'doughnut']});
    if (action === 'b') return categorical({ps: [0.1, 0.8, 0.1], vs: ['bagel', 'cookie', 'doughnut']});
    if (action === 'c') return categorical({ps: [0.1, 0.1, 0.8], vs: ['bagel', 'cookie', 'doughnut']});
  };
  
  // Sally chooses 'b' twice
  var action1 = actionDist(goal, deceptive);
  observe(action1 === 'b');
  
  var action2 = actionDist(goal, deceptive);
  observe(action2 === 'b');
  
  return goal;
};

var ANSWER = Infer({method: 'enumerate'}, model);
var model = function() {
  // Sally's goal food
  var goalFood = uniformDrift({a: 0, b: 1, width: 0.25});
  var goal = categorical({vs: ['bagel', 'cookie', 'doughnut'], ps: [goalFood, 1-goalFood, 0]});
  
  // Sally's deceptiveness
  var deceptive = flip(0.5);
  
  // Vending machine outcomes given action
  var machineOutcome = function(action) {
    var outcomes = ['bagel', 'cookie', 'doughnut'];
    var probs = (action === 'a') ? [0.8, 0.1, 0.1] :
                (action === 'b') ? [0.1, 0.8, 0.1] :
                (action === 'c') ? [0.1, 0.1, 0.8] :
                [1/3, 1/3, 1/3];
    return categorical({vs: outcomes, ps: probs});
  };
  
  // Sally's action choice
  var sallyAction = function(deceptive, goal) {
    if (deceptive) {
      // Deceptive: avoid goal food - uniform over actions that don't lead to goal
      return uniformDrift({a: 0, b: 1, width: 0.25});
    } else {
      // Honest: pick action whose outcome IS goal food
      return (goal === 'bagel') ? 'a' :
             (goal === 'cookie') ? 'b' :
             'c';
    }
  };
  
  // Observe Sally chooses 'b' twice
  var action1 = sallyAction(deceptive, goal);
  condition(action1 === 'b');
  
  var action2 = sallyAction(deceptive, goal);
  condition(action2 === 'b');
  
  return goal;
};

var ANSWER = Infer({method: 'MCMC', samples: 10000}, model);
var model = function() {
  var goalFood = uniformDraw(['bagel', 'cookie', 'doughnut']);
  var deceptive = flip(0.5);
  
  // Probability of choosing action 'b' given goal and deceptiveness
  var probChooseB = function() {
    if (!deceptive) {
      // If not deceptive, pick action whose outcome IS goal food
      if (goalFood === 'bagel') {
        return 0; // Would choose 'a', never 'b'
      } else if (goalFood === 'cookie') {
        return 1; // Would choose 'b'
      } else {
        return 0; // Would choose 'c', never 'b'
      }
    } else {
      // If deceptive, avoid goal food
      if (goalFood === 'bagel') {
        return 0.5; // Can choose 'b' or 'c', uniform
      } else if (goalFood === 'cookie') {
        return 0; // Cannot choose 'b' (avoids goal)
      } else {
        return 0.5; // Can choose 'a' or 'b', uniform
      }
    }
  };
  
  var prob = probChooseB();
  // Observe 'b' twice in a row
  factor(Math.log(prob) + Math.log(prob));
  
  return goalFood;
};

var ANSWER = Infer({method: 'enumerate'}, model);
var ANSWER = Infer({method: 'enumerate'}, function() {
  var goalFood = uniformDraw(['bagel', 'cookie', 'doughnut']);
  var deceptive = flip(0.5);
  
  var getProb = function(goal, action, isDeceptive) {
    if (!isDeceptive) {
      // Non-deceptive: picks the action that yields her goal food
      var goalAction = (goal === 'bagel' ? 'a' : goal === 'cookie' ? 'b' : 'c');
      return action === goalAction ? 1 : 1e-10;
    } else {
      // Deceptive: avoids the action for her goal, chooses uniformly from other two
      var avoidAction = (goal === 'bagel' ? 'a' : goal === 'cookie' ? 'b' : 'c');
      return action !== avoidAction ? 0.5 : 1e-10;
    }
  };
  
  // Condition on observing 'b' twice in a row
  factor(Math.log(getProb(goalFood, 'b', deceptive)));
  factor(Math.log(getProb(goalFood, 'b', deceptive)));
  
  return goalFood;
});
var model = function() {
  // Prior over goal food
  var goal = categorical({ps: [1/3, 1/3, 1/3], vs: ['bagel', 'cookie', 'doughnut']});
  
  // Prior over deceptive
  var deceptive = flip(0.5);
  
  // Vending machine outcomes given action
  var vendingMachine = function(action) {
    if (action === 'a') {
      return categorical({ps: [0.8, 0.1, 0.1], vs: ['bagel', 'cookie', 'doughnut']});
    } else if (action === 'b') {
      return categorical({ps: [0.1, 0.8, 0.1], vs: ['bagel', 'cookie', 'doughnut']});
    } else { // 'c'
      return categorical({ps: [0.1, 0.1, 0.8], vs: ['bagel', 'cookie', 'doughnut']});
    }
  };
  
  // Expected outcome of an action (most likely outcome)
  // For choosing action: if not deceptive, pick action whose outcome IS goal food
  // If deceptive, pick action whose outcome AVOIDS goal food
  
  // Probability that a given action is chosen
  // Non-deceptive: picks action whose outcome matches goal food
  // Deceptive: picks action whose outcome avoids goal food
  
  // We need to define what "picks an action whose outcome IS her goal food" means
  // and "avoids her goal food" means.
  // Let's interpret: non-deceptive picks the action most likely to give goal food
  // Deceptive picks an action that does NOT most likely give goal food
  
  // Best action for each goal:
  // goal=bagel -> best action is 'a'
  // goal=cookie -> best action is 'b'
  // goal=doughnut -> best action is 'c'
  
  var bestAction = function(g) {
    if (g === 'bagel') return 'a';
    if (g === 'cookie') return 'b';
    return 'c';
  };
  
  // Non-deceptive actions (actions whose outcome IS goal food = best action)
  // Deceptive actions: any action that is NOT the best action for goal
  // Among deceptive actions, uniform over the other two
  
  var actionLikelihood = function(action, g, dec) {
    var best = bestAction(g);
    if (!dec) {
      // picks the best action deterministically
      return action === best ? 1.0 : 0.0;
    } else {
      // avoids best action, uniform over the other two
      return action !== best ? 0.5 : 0.0;
    }
  };
  
  // Observe Sally chose 'b' twice
  factor(Math.log(actionLikelihood('b', goal, deceptive)));
  factor(Math.log(actionLikelihood('b', goal, deceptive)));
  
  return goal;
};

var ANSWER = Infer({method: 'enumerate'}, model);
var model = function() {
  // Prior over Sally's true goal food
  var goal = uniformDraw(['bagel', 'cookie', 'doughnut']);

  // Prior over whether Sally is deceptive
  var deceptive = flip(0.5);

  // Action-selection policy
  var pickAction = function() {
    if (!deceptive) {
      // Non-deceptive: choose the action whose primary outcome IS the goal food
      if (goal === 'bagel')   return 'a';
      if (goal === 'cookie')  return 'b';
      return 'c'; // doughnut
    } else {
      // Deceptive: choose uniformly among actions that do NOT primarily give the goal food
      if (goal === 'bagel')   return uniformDraw(['b', 'c']);
      if (goal === 'cookie')  return uniformDraw(['a', 'c']);
      return uniformDraw(['a', 'b']); // doughnut
    }
  };

  // Observe 'b' twice (independent choices each time)
  var action1 = pickAction();
  condition(action1 === 'b');

  var action2 = pickAction();
  condition(action2 === 'b');

  return goal;
};

var ANSWER = Infer({method: 'enumerate'}, model);
var model = function() {
  // Prior over goal food
  var goal = categorical({vs: ['bagel', 'cookie', 'doughnut'],
                          ps: [1/3, 1/3, 1/3]});

  // Prior over deceptive vs honest
  var deceptive = flip(0.5);

  // Map each goal food to its "aligned" action and its "avoiding" actions
  var alignedAction = {bagel: 'a', cookie: 'b', doughnut: 'c'};
  var avoidingActions = {bagel: ['b', 'c'], cookie: ['a', 'c'], doughnut: ['a', 'b']};

  // Given goal and deceptive flag, return the probability of each action
  var actionProb = function(action) {
    if (!deceptive) {
      // Honest: deterministically picks the action aligned with goal
      return action === alignedAction[goal] ? 1.0 : 0.0;
    } else {
      // Deceptive: picks uniformly among actions that avoid goal food
      var avoiding = avoidingActions[goal];
      return _.includes(avoiding, action) ? (1.0 / avoiding.length) : 0.0;
    }
  };

  // Observe Sally choosing 'b' twice
  observe(Categorical({vs: ['a', 'b', 'c'], ps: [actionProb('a'), actionProb('b'), actionProb('c')]}), 'b');
  observe(Categorical({vs: ['a', 'b', 'c'], ps: [actionProb('a'), actionProb('b'), actionProb('c')]}), 'b');

  return goal;
};

var ANSWER = Infer({method: 'enumerate'}, model);
groundtruth output
cookie    0.6047
doughnut  0.1977
bagel     0.1977
raw JSON
{
  "__kind": "distribution",
  "probs": [
    0.19767441860465113,
    0.6046511627906979,
    0.19767441860465113
  ],
  "support": [
    "doughnut",
    "cookie",
    "bagel"
  ]
}
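Editor's note (not part of the logged runs): the `0.6047 / 0.1977 / 0.1977` groundtruth can be checked by exact enumeration, marginalizing over `deceive` and squaring the per-observation likelihood of action `'b'` for the two independent choices. The helper names below (`machine`, `p_b`) are ours, not from the runs.

```python
from fractions import Fraction

foods = ['bagel', 'cookie', 'doughnut']
machine = {'a': [Fraction(8, 10), Fraction(1, 10), Fraction(1, 10)],
           'b': [Fraction(1, 10), Fraction(8, 10), Fraction(1, 10)],
           'c': [Fraction(1, 10), Fraction(1, 10), Fraction(8, 10)]}

def p_b(goal, deceive):
    # P(action == 'b') under chooseAction for this goal/deceive setting
    raw = {a: (1 - ps[foods.index(goal)]) if deceive else ps[foods.index(goal)]
           for a, ps in machine.items()}
    return raw['b'] / sum(raw.values())

# uniform prior over (goal, deceive); likelihood of observing 'b' twice
post = {g: sum(p_b(g, d) ** 2 for d in (False, True)) for g in foods}
z = sum(post.values())
post = {g: float(p / z) for g, p in post.items()}
print(post)  # cookie ~0.6047, bagel and doughnut ~0.1977 each
```

The honest-cookie branch dominates (likelihood 0.8² = 0.64), while each off-goal food only reaches `'b'` twice via the deceptive branch (0.45² = 0.2025) plus a tiny honest contribution (0.1² = 0.01), giving 0.65 : 0.2125 : 0.2125 before normalization.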
feedback
not signed
probmods2-social-cognition/ex2.1 record(stay, switch) 8 runs TV=0.00 TV=0
prompt
system base instructions used by every run
system WebPPL primer appended to system on `*-primer-*` runs only
user message
Monty Hall variant. Alice picks a door uniformly from {1, 2, 3} (one hides a prize, the others are empty). Monty opens a different door, and we condition on his door being neither Alice's nor the prize. In this variant, Monty picks a door uniformly at random from {1, 2, 3}, including possibly Alice's or the prize door (we filter those via the outer condition). Helpers (include them in your program):
var removeBadItems = function(l, badItems) {
  return reduce(function(badItem, remainingL) {
    return remove(badItem, remainingL);
  }, l, badItems);
};
var doors = [1, 2, 3];
Build montyRandom(aliceDoor, prizeDoor) returning a Distribution over Monty's door under this variant. Build model(switches) that draws aliceDoor and prizeDoor uniformly, samples montyDoor from montyRandom, conditions on montyDoor != prizeDoor && montyDoor != aliceDoor, and returns aliceDoor == prizeDoor (whether Alice wins) under her strategy (switches true means switch to the remaining unopened door, false means stay). End your program with an object literal:
  stay: P(win) when Alice doesn't switch
  switch: P(win) when Alice switches
Each value should be Infer({method: 'enumerate'}, function() { return model(<flag>); }).
groundtruth code
var removeBadItems = function(l, badItems) {
  return reduce(function(badItem, remainingL) {
    return remove(badItem, remainingL)
  }, l, badItems);
};

var doors = [1, 2, 3];

var montyRandom = function(aliceDoor, prizeDoor) {
  return Infer({method: 'enumerate'}, function() {
    return categorical({vs: doors});
  });
};

var model = function(switches) {
  var aliceDoor = categorical({vs: doors});
  var prizeDoor = categorical({vs: doors});
  var montyDoorDist = montyRandom(aliceDoor, prizeDoor);
  var montyDoor = sample(montyDoorDist);
  condition(montyDoor != prizeDoor);
  condition(montyDoor != aliceDoor);
  var aliceDoor = switches ? removeBadItems(doors, [aliceDoor, montyDoor])[0] : aliceDoor;
  return aliceDoor == prizeDoor;
};
var ANSWER = (({
  stay: Infer({method: 'enumerate'}, function() { return model(false); }),
  switch: Infer({method: 'enumerate'}, function() { return model(true); })
}));
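Editor's note (not part of the logged runs): in this random-Monty variant, staying and switching are equally good, which the groundtruth model should reproduce. A direct enumeration over the 27 uniform `(alice, prize, monty)` triples, keeping the 12 worlds the outer condition allows, confirms it:

```python
from itertools import product

doors = [1, 2, 3]
stay_wins = switch_wins = total = 0
# enumerate uniform (alice, prize, monty) triples, keeping only worlds where
# Monty's randomly chosen door is neither Alice's door nor the prize door
for alice, prize, monty in product(doors, repeat=3):
    if monty == alice or monty == prize:
        continue
    total += 1
    stay_wins += (alice == prize)
    remaining = next(d for d in doors if d not in (alice, monty))
    switch_wins += (remaining == prize)

print(stay_wins / total, switch_wins / total)  # 0.5 0.5
```

Intuitively, conditioning a uniformly random Monty on avoiding both doors leaks no information about where the prize is relative to Alice's pick, unlike the classic variant where Monty knowingly avoids the prize.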
generated code
sonnet-46-primer-v3 TV=0 haiku-45-noprimer-v3 fail haiku-45-primer-v3 TV<1 haiku-45-think-noprimer-v3 TV=0 haiku-45-think-primer-v3 TV<.5 sonnet-46-noprimer-v3 TV<.5 sonnet-46-think-noprimer-v3 TV<.5 sonnet-46-think-primer-v3 TV<.5
var removeBadItems = function(l, badItems) {
  return reduce(function(badItem, remainingL) {
    return remove(badItem, remainingL);
  }, l, badItems);
};
var doors = [1, 2, 3];

var montyRandom = function(aliceDoor, prizeDoor) {
  return Infer({method: 'enumerate'}, function() {
    var montyDoor = doors[randomInteger(3)];
    return montyDoor;
  });
};

var model = function(switches) {
  var aliceDoor = doors[randomInteger(3)];
  var prizeDoor = doors[randomInteger(3)];
  var montyDoor = sample(montyRandom(aliceDoor, prizeDoor));
  condition(montyDoor != prizeDoor && montyDoor != aliceDoor);
  var finalDoor = switches
    ? removeBadItems(doors, [aliceDoor, montyDoor])[0]
    : aliceDoor;
  return finalDoor == prizeDoor;
};

var ANSWER = {
  stay: Infer({method: 'enumerate'}, function() { return model(false); }),
  switch: Infer({method: 'enumerate'}, function() { return model(true); })
};
Error: Line 52: Did you mean var finalDoor = ?
var removeBadItems = function(l, badItems) {
  return reduce(function(badItem, remainingL) {
    return remove(badItem, remainingL);
  }, l, badItems);
};
var doors = [1, 2, 3];

var montyRandom = function(aliceDoor, prizeDoor) {
  return Infer({method: 'enumerate'}, function() {
    var montyDoor = uniformDraw(doors);
    return montyDoor;
  });
};

var model = function(switches) {
  var aliceDoor = uniformDraw(doors);
  var prizeDoor = uniformDraw(doors);
  var montyDoor = sample(montyRandom(aliceDoor, prizeDoor));
  
  condition(montyDoor !== prizeDoor && montyDoor !== aliceDoor);
  
  var finalDoor;
  if (switches) {
    // Switch to the remaining unopened door
    var remainingDoors = removeBadItems(doors, [aliceDoor, montyDoor]);
    finalDoor = remainingDoors[0];
  } else {
    // Stay with Alice's original door
    finalDoor = aliceDoor;
  }
  
  return finalDoor === prizeDoor;
};

var ANSWER = {
  stay: Infer({method: 'enumerate'}, function() { return model(false); }),
  switch: Infer({method: 'enumerate'}, function() { return model(true); })
};
var removeBadItems = function(l, badItems) {
  return reduce(function(badItem, remainingL) {
    return remove(badItem, remainingL);
  }, l, badItems);
};
var doors = [1, 2, 3];

var montyRandom = function(aliceDoor, prizeDoor) {
  return Infer({method: 'enumerate'}, function() {
    var montyDoor = uniformDraw(doors);
    return montyDoor;
  });
};

var model = function(switches) {
  var aliceDoor = uniformDraw(doors);
  var prizeDoor = uniformDraw(doors);
  var montyDoor = sample(montyRandom(aliceDoor, prizeDoor));
  
  condition(montyDoor !== prizeDoor && montyDoor !== aliceDoor);
  
  var finalDoor = switches ? 
    uniformDraw(removeBadItems(doors, [aliceDoor, montyDoor]))[0] :
    aliceDoor;
  
  return finalDoor === prizeDoor;
};

var ANSWER = {
  stay: Infer({method: 'enumerate'}, function() { return model(false); }),
  switch: Infer({method: 'enumerate'}, function() { return model(true); })
};
var removeBadItems = function(l, badItems) {
  return reduce(function(badItem, remainingL) {
    return remove(badItem, remainingL);
  }, l, badItems);
};
var doors = [1, 2, 3];

var montyRandom = function(aliceDoor, prizeDoor) {
  return Categorical({vs: doors, ps: [1/3, 1/3, 1/3]});
};

var model = function(switches) {
  var aliceDoor = uniformDraw(doors);
  var prizeDoor = uniformDraw(doors);
  var montyDoor = sample(montyRandom(aliceDoor, prizeDoor));
  
  condition(montyDoor != prizeDoor && montyDoor != aliceDoor);
  
  if (switches) {
    var remainingDoors = removeBadItems(doors, [aliceDoor, montyDoor]);
    var remainingDoor = remainingDoors[0];
    return remainingDoor == prizeDoor;
  } else {
    return aliceDoor == prizeDoor;
  }
};

var ANSWER = {
  stay: Infer({method: 'enumerate'}, function() { return model(false); }),
  switch: Infer({method: 'enumerate'}, function() { return model(true); })
};
var removeBadItems = function(l, badItems) {
  return reduce(function(badItem, remainingL) {
    return remove(badItem, remainingL);
  }, l, badItems);
};
var doors = [1, 2, 3];

var montyRandom = function(aliceDoor, prizeDoor) {
  return Infer({method: 'enumerate'}, function() {
    var montyDoor = uniformDraw(doors);
    condition(montyDoor !== aliceDoor && montyDoor !== prizeDoor);
    return montyDoor;
  });
};

var model = function(switches) {
  var aliceDoor = uniformDraw(doors);
  var prizeDoor = uniformDraw(doors);
  var montyDoor = sample(montyRandom(aliceDoor, prizeDoor));
  
  if (switches) {
    var remainingDoors = removeBadItems(doors, [aliceDoor, montyDoor]);
    var newDoor = remainingDoors[0];
    return newDoor === prizeDoor;
  } else {
    return aliceDoor === prizeDoor;
  }
};

var ANSWER = {
  stay: Infer({method: 'enumerate'}, function() { return model(false); }),
  switch: Infer({method: 'enumerate'}, function() { return model(true); })
};
var removeBadItems = function(l, badItems) {
  return reduce(function(badItem, remainingL) {
    return remove(badItem, remainingL);
  }, l, badItems);
};
var doors = [1, 2, 3];

var montyRandom = function(aliceDoor, prizeDoor) {
  return Infer({method: 'enumerate'}, function() {
    var montyDoor = uniformDraw(doors);
    condition(montyDoor != aliceDoor && montyDoor != prizeDoor);
    return montyDoor;
  });
};

var model = function(switches) {
  var aliceDoor = uniformDraw(doors);
  var prizeDoor = uniformDraw(doors);
  var montyDoor = sample(montyRandom(aliceDoor, prizeDoor));
  condition(montyDoor != prizeDoor && montyDoor != aliceDoor);
  
  var finalDoor = switches
    ? uniformDraw(removeBadItems(doors, [aliceDoor, montyDoor]))
    : aliceDoor;
  
  return finalDoor == prizeDoor;
};

var ANSWER = {
  stay: Infer({method: 'enumerate'}, function() { return model(false); }),
  switch: Infer({method: 'enumerate'}, function() { return model(true); })
};
var removeBadItems = function(l, badItems) {
  return reduce(function(badItem, remainingL) {
    return remove(badItem, remainingL);
  }, l, badItems);
};
var doors = [1, 2, 3];

var montyRandom = function(aliceDoor, prizeDoor) {
  return Infer({method: 'enumerate'}, function() {
    var montyDoor = uniformDraw(doors);
    condition(montyDoor != aliceDoor && montyDoor != prizeDoor);
    return montyDoor;
  });
};

var model = function(switches) {
  var aliceDoor = uniformDraw(doors);
  var prizeDoor = uniformDraw(doors);
  var montyDoor = sample(montyRandom(aliceDoor, prizeDoor));
  condition(montyDoor != prizeDoor && montyDoor != aliceDoor);
  var finalDoor = switches
    ? uniformDraw(removeBadItems(doors, [aliceDoor, montyDoor]))
    : aliceDoor;
  return finalDoor == prizeDoor;
};

var ANSWER = {
  stay: Infer({method: 'enumerate'}, function() { return model(false); }),
  switch: Infer({method: 'enumerate'}, function() { return model(true); })
};
var removeBadItems = function(l, badItems) {
  return reduce(function(badItem, remainingL) {
    return remove(badItem, remainingL);
  }, l, badItems);
};
var doors = [1, 2, 3];

var montyRandom = function(aliceDoor, prizeDoor) {
  return Infer({method: 'enumerate'}, function() {
    var montyDoor = uniformDraw(doors);
    condition(montyDoor != aliceDoor && montyDoor != prizeDoor);
    return montyDoor;
  });
};

var model = function(switches) {
  var aliceDoor = uniformDraw(doors);
  var prizeDoor = uniformDraw(doors);
  var montyDoor = sample(montyRandom(aliceDoor, prizeDoor));
  condition(montyDoor != prizeDoor && montyDoor != aliceDoor);
  var remainingDoor = uniformDraw(removeBadItems(doors, [aliceDoor, montyDoor]));
  var finalDoor = switches ? remainingDoor : aliceDoor;
  return finalDoor == prizeDoor;
};

var ANSWER = {
  stay: Infer({method: 'enumerate'}, function() { return model(false); }),
  switch: Infer({method: 'enumerate'}, function() { return model(true); })
};
groundtruth output
{
  "stay": {
    "__kind": "distribution",
    "probs": [
      0.5,
      0.5
    ],
    "support": [
      true,
      false
    ]
  },
  "switch": {
    "__kind": "distribution",
    "probs": [
      0.5,
      0.5
    ],
    "support": [
      false,
      true
    ]
  }
}
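The 0.5/0.5 marginals above are easy to sanity-check outside the harness. In this variant Monty's door is uniform over all three doors, so every `(alice, prize, monty)` triple is equally likely a priori and a plain count over the worlds that survive the condition gives the win probability. An editor's cross-check in plain JavaScript (not WebPPL, and not part of any run; `winProb` is an invented name):

```js
// Brute-force enumeration of the fully-random-Monty variant.
// Worlds where Monty reveals the prize or opens Alice's door are
// discarded, mirroring condition(montyDoor != prizeDoor && montyDoor != aliceDoor).
var doors = [1, 2, 3];

function winProb(switches) {
  var wins = 0, total = 0;
  doors.forEach(function (alice) {
    doors.forEach(function (prize) {
      doors.forEach(function (monty) {                  // Monty is fully random here
        if (monty === prize || monty === alice) return; // conditioned out
        var remaining = doors.filter(function (d) {
          return d !== alice && d !== monty;
        })[0];
        var finalDoor = switches ? remaining : alice;
        total += 1;                                     // surviving worlds are equiprobable
        if (finalDoor === prize) wins += 1;
      });
    });
  });
  return wins / total;
}

var stayWin = winProb(false);   // 0.5
var switchWin = winProb(true);  // 0.5
```

Both strategies win with probability 1/2, matching the groundtruth output: conditioning a fully random Monty on not revealing anything destroys the usual switching advantage.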
feedback
not signed
probmods2-social-cognition/ex2.2 record(stay, switch) 8 runs TV=0.00 TV=0
prompt
system base instructions used by every run
You are a WebPPL code generator. Given an exercise, produce a single WebPPL program that binds the answer to a top-level variable named `ANSWER`.

Answer format (strict): emit exactly one fenced code block.

```js
<your WebPPL program ending with: var ANSWER = <expression>;>
```

The last statement of your program MUST be `var ANSWER = <expression>;` where `<expression>` is the answer the prompt asks for - typically an `Infer({...}, model)` for a distribution, a numeric/array value, or an object literal `{key: value, ...}` for a record of multiple sub-answers.

Do not write prose, explanations, or multiple code blocks. Do not use `return` at the top level - WebPPL doesn't allow it (return is only for function bodies).
system WebPPL primer appended to system on `*-primer-*` runs only
WebPPL is a probabilistic programming language with JavaScript-like syntax. A few quirks to remember:

Control flow: WebPPL is a functional subset of JavaScript. There are no `for` or `while` loops. Iterate with `map(fn, list)`, `mapData({data: list}, fn)`, `repeat(n, fn)`, or recursion. `_.range(start, end)` produces integer ranges.

Functions: define with `var f = function(args) { ...; return value; }`. The last expression of a function is auto-returned only if it is a bare expression statement; otherwise use explicit `return`. Memoize with `mem(fn)` so repeated calls with the same arguments return the same value within an inference run.

Random primitives - lowercase samples directly, uppercase constructs a Distribution object. They are NOT interchangeable:

  flip(p)            -> boolean             (sample, no `sample()` needed)
  uniform(a, b)      -> number              (sample)
  gaussian(mu, sg)   -> number              (sample)
  beta(a, b)         -> number              (sample)
  dirichlet(alpha)   -> tensor              (sample; alpha must be a Vector)
  randomInteger(n)   -> int 0..n-1          (sample)
  uniformDrift({a, b, width})        -> sample  (drift kernel; do NOT wrap in sample())
  dirichletDrift({alpha, conc.})     -> sample  (drift kernel; do NOT wrap in sample())

Distribution *constructors* (used with `sample(D)`, `observe(D, val)`, or as return value of inference):
  Bernoulli({p})              Beta({a, b})              Gaussian({mu, sigma})
  Uniform({a, b})             Categorical({vs, ps})     Binomial({p, n})
  Dirichlet({alpha})          Multinomial({ps, n})      Poisson({mu})

Common gotchas:
- `Dirichlet({alpha: ...})` requires a Vector, not a JS array. Use `ones([n, 1])` or `Vector([1, 1, ...])`.
- WebPPL only supports `var`, not `let` / `const`.
- WebPPL is *single-assignment*: `var X = ...;` only. You can't declare `var X;` and assign later, and you can't reassign `X = ...` after the declaration. Use ternaries or recursion to express conditional bindings.
- Array methods like `.fill`, `.indexOf`, `.map`, `.forEach`, `.concat` may fail. Prefer `repeat(n, fn)`, `_.indexOf(arr, x)`, `map(fn, arr)`, `mapData({data: arr}, fn)`, and `arr1.concat(arr2)` only at the top of a returned expression.
- Always end top-level statements with `;`. WebPPL inherits JS ASI rules, so `var x = f()
[a, b]` parses as `var x = f()[a, b]` (subscript), not two statements.

Inference: `Infer({method: ..., samples: N, ...}, modelFn)` runs `modelFn` under the chosen method and returns a Distribution over its return value. Methods: `'enumerate'`, `'rejection'`, `'forward'`, `'MCMC'`, `'SMC'`. For MCMC, optional kernels: `kernel: {HMC: {steps, stepSize}}`; the drift kernels above can replace `uniform`/`dirichlet` calls inside the model.

Conditioning: `condition(bool)` zeros out worlds where bool is false. `observe(dist, value)` factors in `dist.score(value)`. `factor(score)` adds `score` to the log-probability directly.

Tensors / utilities: `Vector([a, b, ...])`, `T.get(vec, i)`, `ones([rows, cols])`, `_.range`, `_.flatten`, `_.zipObject`, `_.fromPairs`, `_.includes`, `_.parseInt`, `_.uniq`, `_.merge`. Arrays use `Array.isArray`, `Object.keys` works on plain objects.

Display: there's no `viz`, `print`, `display`, etc. that affects the answer - those are browser-only. The answer is the value of your program's last expression.
user message
Monty Hall variant. Alice picks a door uniformly from {1, 2, 3} (one hides a prize, the others are empty). Monty opens a different door, and we condition on his door being neither Alice's nor the prize. In this variant, Monty deliberately picks a door that is neither Alice's nor the prize (this is the original Monty Hall setup). Inside montyAvoidBoth, sample Monty's door uniformly and condition that it is not Alice's and not the prize. Helpers (include them in your program):
var removeBadItems = function(l, badItems) {
  return reduce(function(badItem, remainingL) {
    return remove(badItem, remainingL);
  }, l, badItems);
};
var doors = [1, 2, 3];
Build montyAvoidBoth(aliceDoor, prizeDoor) returning a Distribution over Monty's door under this variant. Build model(switches) that draws aliceDoor and prizeDoor uniformly, samples montyDoor from montyAvoidBoth, conditions on montyDoor != prizeDoor && montyDoor != aliceDoor, and returns aliceDoor == prizeDoor (whether Alice wins) under her strategy (switches true means switch to the remaining unopened door, false means stay). End your program with an object literal:
stay: P(win) when Alice doesn't switch
switch: P(win) when Alice switches
Each value should be Infer({method: 'enumerate'}, function() { return model(<flag>); }).
groundtruth code
var removeBadItems = function(l, badItems) {
  return reduce(function(badItem, remainingL) {
    return remove(badItem, remainingL)
  }, l, badItems);
};

var doors = [1, 2, 3];

var montyAvoidBoth = function(aliceDoor, prizeDoor) {
  return Infer({method: 'enumerate'}, function() {
    var montyDoor = categorical({vs: doors});
    condition(montyDoor != aliceDoor);
    condition(montyDoor != prizeDoor);
    return montyDoor;
  });
};

var model = function(switches) {
  var aliceDoor = categorical({vs: doors});
  var prizeDoor = categorical({vs: doors});
  var montyDoorDist = montyAvoidBoth(aliceDoor, prizeDoor);
  var montyDoor = sample(montyDoorDist);
  condition(montyDoor != prizeDoor);
  condition(montyDoor != aliceDoor);
  var aliceDoor = switches ? removeBadItems(doors, [aliceDoor, montyDoor])[0] : aliceDoor;
  return aliceDoor == prizeDoor;
};
var ANSWER = (({
  stay: Infer({method: 'enumerate'}, function() { return model(false); }),
  switch: Infer({method: 'enumerate'}, function() { return model(true); })
}));
generated code
sonnet-46-primer-v3 TV=0 · haiku-45-noprimer-v3 fail · haiku-45-primer-v3 TV=0 · haiku-45-think-noprimer-v3 TV=0 · haiku-45-think-primer-v3 TV=0 · sonnet-46-noprimer-v3 TV=0 · sonnet-46-think-noprimer-v3 TV=0 · sonnet-46-think-primer-v3 TV=0
var removeBadItems = function(l, badItems) {
  return reduce(function(badItem, remainingL) {
    return remove(badItem, remainingL);
  }, l, badItems);
};
var doors = [1, 2, 3];

var montyAvoidBoth = function(aliceDoor, prizeDoor) {
  return Infer({method: 'enumerate'}, function() {
    var montyDoor = uniformDraw(doors);
    condition(montyDoor != aliceDoor && montyDoor != prizeDoor);
    return montyDoor;
  });
};

var model = function(switches) {
  var aliceDoor = uniformDraw(doors);
  var prizeDoor = uniformDraw(doors);
  var montyDoor = sample(montyAvoidBoth(aliceDoor, prizeDoor));
  condition(montyDoor != prizeDoor && montyDoor != aliceDoor);
  var finalDoor = switches
    ? uniformDraw(removeBadItems(doors, [aliceDoor, montyDoor]))
    : aliceDoor;
  return finalDoor == prizeDoor;
};

var ANSWER = {
  stay: Infer({method: 'enumerate'}, function() { return model(false); }),
  switch: Infer({method: 'enumerate'}, function() { return model(true); })
};
Error: Line 53: Did you mean var finalDoor = ?
var removeBadItems = function(l, badItems) {
  return reduce(function(badItem, remainingL) {
    return remove(badItem, remainingL);
  }, l, badItems);
};
var doors = [1, 2, 3];

var montyAvoidBoth = function(aliceDoor, prizeDoor) {
  return Infer({method: 'enumerate'}, function() {
    var montyDoor = uniformDraw(doors);
    condition(montyDoor !== aliceDoor && montyDoor !== prizeDoor);
    return montyDoor;
  });
};

var model = function(switches) {
  var aliceDoor = uniformDraw(doors);
  var prizeDoor = uniformDraw(doors);
  var montyDoor = sample(montyAvoidBoth(aliceDoor, prizeDoor));
  
  condition(montyDoor !== prizeDoor && montyDoor !== aliceDoor);
  
  var finalDoor;
  if (switches) {
    // Switch to the remaining unopened door
    var remainingDoors = removeBadItems(doors, [aliceDoor, montyDoor]);
    finalDoor = remainingDoors[0];
  } else {
    // Stay with Alice's original door
    finalDoor = aliceDoor;
  }
  
  return finalDoor === prizeDoor;
};

var ANSWER = {
  stay: Infer({method: 'enumerate'}, function() { return model(false); }),
  switch: Infer({method: 'enumerate'}, function() { return model(true); })
};
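The `Did you mean var finalDoor = ?` failure comes from the declare-then-assign pattern in the run above: WebPPL is single-assignment, so `var finalDoor;` followed by later `finalDoor = ...` assignments is rejected by the parser. The rewrite the successful runs used is a single ternary binding, which is valid in both WebPPL and plain JavaScript (`pickFinalDoor` below is an illustrative name, not part of the exercise):

```js
// Rejected by WebPPL (declare now, assign later):
//   var finalDoor;
//   if (switches) { finalDoor = remainingDoors[0]; } else { finalDoor = aliceDoor; }

// Accepted: one binding, chosen with a ternary.
var pickFinalDoor = function (switches, aliceDoor, remainingDoor) {
  var finalDoor = switches ? remainingDoor : aliceDoor;
  return finalDoor;
};
```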
var removeBadItems = function(l, badItems) {
  return reduce(function(badItem, remainingL) {
    return remove(badItem, remainingL);
  }, l, badItems);
};
var doors = [1, 2, 3];

var montyAvoidBoth = function(aliceDoor, prizeDoor) {
  return Infer({method: 'enumerate'}, function() {
    var montyDoor = uniformDraw(doors);
    condition(montyDoor !== aliceDoor && montyDoor !== prizeDoor);
    return montyDoor;
  });
};

var model = function(switches) {
  var aliceDoor = uniformDraw(doors);
  var prizeDoor = uniformDraw(doors);
  var montyDoor = sample(montyAvoidBoth(aliceDoor, prizeDoor));
  
  condition(montyDoor !== prizeDoor && montyDoor !== aliceDoor);
  
  var finalDoor = switches ? 
    uniformDraw(removeBadItems(doors, [aliceDoor, montyDoor])) :
    aliceDoor;
  
  return finalDoor === prizeDoor;
};

var ANSWER = {
  stay: Infer({method: 'enumerate'}, function() { return model(false); }),
  switch: Infer({method: 'enumerate'}, function() { return model(true); })
};
var removeBadItems = function(l, badItems) {
  return reduce(function(badItem, remainingL) {
    return remove(badItem, remainingL);
  }, l, badItems);
};
var doors = [1, 2, 3];

var montyAvoidBoth = function(aliceDoor, prizeDoor) {
  return Infer({method: 'enumerate'}, function() {
    var montyDoor = uniformDraw(doors);
    condition(montyDoor != aliceDoor && montyDoor != prizeDoor);
    return montyDoor;
  });
};

var model = function(switches) {
  var aliceDoor = uniformDraw(doors);
  var prizeDoor = uniformDraw(doors);
  var montyDoor = sample(montyAvoidBoth(aliceDoor, prizeDoor));
  
  // The remaining unopened door (not Alice's, not Monty's)
  var remainingDoors = removeBadItems(doors, [aliceDoor, montyDoor]);
  var switchDoor = remainingDoors[0];
  
  var finalDoor = switches ? switchDoor : aliceDoor;
  return finalDoor == prizeDoor;
};

var ANSWER = {
  stay: Infer({method: 'enumerate'}, function() { return model(false); }),
  switch: Infer({method: 'enumerate'}, function() { return model(true); })
};
var removeBadItems = function(l, badItems) {
  return reduce(function(badItem, remainingL) {
    return remove(badItem, remainingL);
  }, l, badItems);
};

var doors = [1, 2, 3];

var montyAvoidBoth = function(aliceDoor, prizeDoor) {
  return Infer({method: 'enumerate'}, function() {
    var montyDoor = uniformDraw(doors);
    condition(montyDoor != aliceDoor && montyDoor != prizeDoor);
    return montyDoor;
  });
};

var model = function(switches) {
  var aliceDoor = uniformDraw(doors);
  var prizeDoor = uniformDraw(doors);
  var montyDoor = sample(montyAvoidBoth(aliceDoor, prizeDoor));
  
  condition(montyDoor != prizeDoor && montyDoor != aliceDoor);
  
  if (switches) {
    var remainingDoors = removeBadItems(doors, [aliceDoor, montyDoor]);
    var finalDoor = remainingDoors[0];
    return finalDoor == prizeDoor;
  } else {
    return aliceDoor == prizeDoor;
  }
};

var ANSWER = {
  stay: Infer({method: 'enumerate'}, function() { return model(false); }),
  switch: Infer({method: 'enumerate'}, function() { return model(true); })
};
var removeBadItems = function(l, badItems) {
  return reduce(function(badItem, remainingL) {
    return remove(badItem, remainingL);
  }, l, badItems);
};
var doors = [1, 2, 3];

var montyAvoidBoth = function(aliceDoor, prizeDoor) {
  return Infer({method: 'enumerate'}, function() {
    var montyDoor = uniformDraw(doors);
    condition(montyDoor != aliceDoor && montyDoor != prizeDoor);
    return montyDoor;
  });
};

var model = function(switches) {
  var aliceDoor = uniformDraw(doors);
  var prizeDoor = uniformDraw(doors);
  var montyDoor = sample(montyAvoidBoth(aliceDoor, prizeDoor));
  condition(montyDoor != prizeDoor && montyDoor != aliceDoor);
  
  var finalDoor = switches
    ? uniformDraw(removeBadItems(doors, [aliceDoor, montyDoor]))
    : aliceDoor;
  
  return finalDoor == prizeDoor;
};

var ANSWER = {
  stay: Infer({method: 'enumerate'}, function() { return model(false); }),
  switch: Infer({method: 'enumerate'}, function() { return model(true); })
};
var removeBadItems = function(l, badItems) {
  return reduce(function(badItem, remainingL) {
    return remove(badItem, remainingL);
  }, l, badItems);
};
var doors = [1, 2, 3];

var montyAvoidBoth = function(aliceDoor, prizeDoor) {
  return Infer({method: 'enumerate'}, function() {
    var montyDoor = uniformDraw(doors);
    condition(montyDoor !== aliceDoor && montyDoor !== prizeDoor);
    return montyDoor;
  });
};

var model = function(switches) {
  var aliceDoor = uniformDraw(doors);
  var prizeDoor = uniformDraw(doors);
  var montyDoor = sample(montyAvoidBoth(aliceDoor, prizeDoor));
  condition(montyDoor !== prizeDoor && montyDoor !== aliceDoor);
  var finalDoor = switches
    ? uniformDraw(removeBadItems(doors, [aliceDoor, montyDoor]))
    : aliceDoor;
  return finalDoor === prizeDoor;
};

var ANSWER = {
  stay: Infer({method: 'enumerate'}, function() { return model(false); }),
  switch: Infer({method: 'enumerate'}, function() { return model(true); })
};
var removeBadItems = function(l, badItems) {
  return reduce(function(badItem, remainingL) {
    return remove(badItem, remainingL);
  }, l, badItems);
};
var doors = [1, 2, 3];

var montyAvoidBoth = function(aliceDoor, prizeDoor) {
  return Infer({method: 'enumerate'}, function() {
    var montyDoor = uniformDraw(doors);
    condition(montyDoor != aliceDoor && montyDoor != prizeDoor);
    return montyDoor;
  });
};

var model = function(switches) {
  var aliceDoor = uniformDraw(doors);
  var prizeDoor = uniformDraw(doors);
  var montyDoor = sample(montyAvoidBoth(aliceDoor, prizeDoor));
  condition(montyDoor != prizeDoor && montyDoor != aliceDoor);
  var finalDoor = switches
    ? uniformDraw(removeBadItems(doors, [aliceDoor, montyDoor]))
    : aliceDoor;
  return finalDoor == prizeDoor;
};

var ANSWER = {
  stay: Infer({method: 'enumerate'}, function() { return model(false); }),
  switch: Infer({method: 'enumerate'}, function() { return model(true); })
};
groundtruth output
{
  "stay": {
    "__kind": "distribution",
    "probs": [
      0.33333333333333337,
      0.6666666666666666
    ],
    "support": [
      true,
      false
    ]
  },
  "switch": {
    "__kind": "distribution",
    "probs": [
      0.33333333333333337,
      0.6666666666666666
    ],
    "support": [
      false,
      true
    ]
  }
}
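The 1/3-vs-2/3 split above can also be checked by direct enumeration. Unlike the fully random variant, Monty's number of legal doors here depends on whether Alice already holds the prize (two choices if so, one if not), so each world must be weighted by 1/|choices| rather than counted uniformly. An editor's sketch in plain JavaScript (not WebPPL; `winProb` is an invented name):

```js
// Weighted enumeration of the montyAvoidBoth (classic Monty Hall) variant.
// Monty is uniform over the doors that are neither Alice's nor the prize,
// so each world carries weight 1 / (number of doors Monty may open).
var doors = [1, 2, 3];

function winProb(switches) {
  var wins = 0, total = 0;
  doors.forEach(function (alice) {
    doors.forEach(function (prize) {
      var choices = doors.filter(function (d) {
        return d !== alice && d !== prize;
      });
      choices.forEach(function (monty) {
        var w = 1 / choices.length;
        var remaining = doors.filter(function (d) {
          return d !== alice && d !== monty;
        })[0];
        var finalDoor = switches ? remaining : alice;
        total += w;
        if (finalDoor === prize) wins += w;
      });
    });
  });
  return wins / total;
}

var stayWin = winProb(false);   // 1/3
var switchWin = winProb(true);  // 2/3
```

This recovers the classic result in the groundtruth output: staying wins 1/3 of the time, switching 2/3.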
feedback
not signed
probmods2-social-cognition/ex2.4 record(stay, switch) 8 runs TV=0.00 TV=0
prompt
system base instructions used by every run
system WebPPL primer appended to system on `*-primer-*` runs only
user message
Monty Hall variant. Alice picks a door uniformly from {1, 2, 3} (one hides a prize, the others are empty). Monty opens a different door, and we condition on his door being neither Alice's nor the prize. In this variant, Monty picks a door uniformly at random but only avoids Alice's door (he may inadvertently reveal the prize). Inside montyAvoidAlice, sample Monty's door uniformly and condition only that it is not Alice's. Helpers (include them in your program):
var removeBadItems = function(l, badItems) {
  return reduce(function(badItem, remainingL) {
    return remove(badItem, remainingL);
  }, l, badItems);
};
var doors = [1, 2, 3];
Build montyAvoidAlice(aliceDoor, prizeDoor) returning a Distribution over Monty's door under this variant. Build model(switches) that draws aliceDoor and prizeDoor uniformly, samples montyDoor from montyAvoidAlice, conditions on montyDoor != prizeDoor && montyDoor != aliceDoor, and returns aliceDoor == prizeDoor (whether Alice wins) under her strategy (switches true means switch to the remaining unopened door, false means stay). End your program with an object literal:
stay: P(win) when Alice doesn't switch
switch: P(win) when Alice switches
Each value should be Infer({method: 'enumerate'}, function() { return model(<flag>); }).
groundtruth code
var removeBadItems = function(l, badItems) {
  return reduce(function(badItem, remainingL) {
    return remove(badItem, remainingL)
  }, l, badItems);
};

var doors = [1, 2, 3];

var montyAvoidAlice = function(aliceDoor, prizeDoor) {
  return Infer({method: 'enumerate'}, function() {
    var montyDoor = categorical({vs: doors});
    condition(montyDoor != aliceDoor);
    return montyDoor;
  });
};

var model = function(switches) {
  var aliceDoor = categorical({vs: doors});
  var prizeDoor = categorical({vs: doors});
  var montyDoorDist = montyAvoidAlice(aliceDoor, prizeDoor);
  var montyDoor = sample(montyDoorDist);
  condition(montyDoor != prizeDoor);
  condition(montyDoor != aliceDoor);
  var aliceDoor = switches ? removeBadItems(doors, [aliceDoor, montyDoor])[0] : aliceDoor;
  return aliceDoor == prizeDoor;
};
var ANSWER = (({
  stay: Infer({method: 'enumerate'}, function() { return model(false); }),
  switch: Infer({method: 'enumerate'}, function() { return model(true); })
}));
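For this variant the same weighted-enumeration style used for the other cards applies: Monty is uniform over the two doors that are not Alice's, and worlds where he reveals the prize are discarded afterwards. If the arithmetic below is right, both strategies win with probability 1/2, as in the fully random variant. An editor's plain-JavaScript sketch (not WebPPL, not part of any run; `winProb` is an invented name):

```js
// Enumeration of the montyAvoidAlice variant: Monty avoids only Alice's
// door (weight 1/2 each), then condition(montyDoor != prizeDoor) discards
// the worlds where he inadvertently revealed the prize.
var doors = [1, 2, 3];

function winProb(switches) {
  var wins = 0, total = 0;
  doors.forEach(function (alice) {
    doors.forEach(function (prize) {
      var choices = doors.filter(function (d) { return d !== alice; });
      choices.forEach(function (monty) {
        if (monty === prize) return;   // condition(montyDoor != prizeDoor)
        var w = 1 / choices.length;    // always 1/2 in this variant
        var remaining = doors.filter(function (d) {
          return d !== alice && d !== monty;
        })[0];
        var finalDoor = switches ? remaining : alice;
        total += w;
        if (finalDoor === prize) wins += w;
      });
    });
  });
  return wins / total;
}
```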
generated code
sonnet-46-primer-v3 TV=0 · haiku-45-noprimer-v3 fail · haiku-45-primer-v3 TV=0 · haiku-45-think-noprimer-v3 fail · haiku-45-think-primer-v3 fail · sonnet-46-noprimer-v3 TV=0 · sonnet-46-think-noprimer-v3 TV=0 · sonnet-46-think-primer-v3 TV=0
var removeBadItems = function(l, badItems) {
  return reduce(function(badItem, remainingL) {
    return remove(badItem, remainingL);
  }, l, badItems);
};
var doors = [1, 2, 3];

var montyAvoidAlice = function(aliceDoor, prizeDoor) {
  return Infer({method: 'enumerate'}, function() {
    var montyDoor = uniformDraw(removeBadItems(doors, [aliceDoor]));
    return montyDoor;
  });
};

var model = function(switches) {
  var aliceDoor = uniformDraw(doors);
  var prizeDoor = uniformDraw(doors);
  var montyDoor = sample(montyAvoidAlice(aliceDoor, prizeDoor));
  condition(montyDoor != prizeDoor && montyDoor != aliceDoor);
  var finalDoor = switches
    ? uniformDraw(removeBadItems(doors, [aliceDoor, montyDoor]))
    : aliceDoor;
  return finalDoor == prizeDoor;
};

var ANSWER = {
  stay: Infer({method: 'enumerate'}, function() { return model(false); }),
  switch: Infer({method: 'enumerate'}, function() { return model(true); })
};
Error: Line 52: Did you mean var finalDoor = ?
var removeBadItems = function(l, badItems) {
  return reduce(function(badItem, remainingL) {
    return remove(badItem, remainingL);
  }, l, badItems);
};
var doors = [1, 2, 3];

var montyAvoidAlice = function(aliceDoor, prizeDoor) {
  return Infer({method: 'enumerate'}, function() {
    var montyDoor = uniformDraw(doors);
    condition(montyDoor !== aliceDoor);
    return montyDoor;
  });
};

var model = function(switches) {
  var aliceDoor = uniformDraw(doors);
  var prizeDoor = uniformDraw(doors);
  var montyDoor = sample(montyAvoidAlice(aliceDoor, prizeDoor));
  
  condition(montyDoor !== prizeDoor && montyDoor !== aliceDoor);
  
  var finalDoor;
  if (switches) {
    var remainingDoors = removeBadItems(doors, [aliceDoor, montyDoor]);
    finalDoor = remainingDoors[0];
  } else {
    finalDoor = aliceDoor;
  }
  
  return finalDoor === prizeDoor;
};

var ANSWER = {
  stay: Infer({method: 'enumerate'}, function() { return model(false); }),
  switch: Infer({method: 'enumerate'}, function() { return model(true); })
};
var removeBadItems = function(l, badItems) {
  return reduce(function(badItem, remainingL) {
    return remove(badItem, remainingL);
  }, l, badItems);
};
var doors = [1, 2, 3];

var montyAvoidAlice = function(aliceDoor, prizeDoor) {
  return Infer({method: 'enumerate'}, function() {
    var montyDoor = uniformDraw(doors);
    condition(montyDoor !== aliceDoor);
    return montyDoor;
  });
};

var model = function(switches) {
  var aliceDoor = uniformDraw(doors);
  var prizeDoor = uniformDraw(doors);
  var montyDoor = sample(montyAvoidAlice(aliceDoor, prizeDoor));
  
  condition(montyDoor !== prizeDoor && montyDoor !== aliceDoor);
  
  var finalDoor = switches ? 
    uniformDraw(removeBadItems(doors, [aliceDoor, montyDoor])) :
    aliceDoor;
  
  return finalDoor === prizeDoor;
};

var ANSWER = {
  stay: Infer({method: 'enumerate'}, function() { return model(false); }),
  switch: Infer({method: 'enumerate'}, function() { return model(true); })
};
Error: Line 52: Did you mean var finalDoor = ?
var removeBadItems = function(l, badItems) {
  return reduce(function(badItem, remainingL) {
    return remove(badItem, remainingL);
  }, l, badItems);
};
var doors = [1, 2, 3];

var montyAvoidAlice = function(aliceDoor, prizeDoor) {
  return Infer({method: 'enumerate'}, function() {
    var montyDoor = uniformDraw(doors);
    condition(montyDoor !== aliceDoor);
    return montyDoor;
  });
};

var model = function(switches) {
  var aliceDoor = uniformDraw(doors);
  var prizeDoor = uniformDraw(doors);
  var montyDoor = sample(montyAvoidAlice(aliceDoor, prizeDoor));
  
  condition(montyDoor !== prizeDoor && montyDoor !== aliceDoor);
  
  var finalDoor;
  if (switches) {
    var remainingDoors = removeBadItems(doors, [aliceDoor, montyDoor]);
    finalDoor = remainingDoors[0];
  } else {
    finalDoor = aliceDoor;
  }
  
  return finalDoor === prizeDoor;
};

var ANSWER = {
  "stay": Infer({method: 'enumerate'}, function() { return model(false); }),
  "switch": Infer({method: 'enumerate'}, function() { return model(true); })
};
Error: Line 52: Did you mean var finalDoor = ?
var removeBadItems = function(l, badItems) {
  return reduce(function(badItem, remainingL) {
    return remove(badItem, remainingL);
  }, l, badItems);
};
var doors = [1, 2, 3];

var montyAvoidAlice = function(aliceDoor, prizeDoor) {
  return Infer({method: 'enumerate'}, function() {
    var validDoors = remove(aliceDoor, doors);
    var montyDoor = sample(Categorical({vs: validDoors, ps: [0.5, 0.5]}));
    return montyDoor;
  });
};

var model = function(switches) {
  var aliceDoor = sample(Categorical({vs: doors, ps: [1/3, 1/3, 1/3]}));
  var prizeDoor = sample(Categorical({vs: doors, ps: [1/3, 1/3, 1/3]}));
  var montyDoor = sample(montyAvoidAlice(aliceDoor, prizeDoor));
  
  condition(montyDoor != prizeDoor && montyDoor != aliceDoor);
  
  var finalDoor;
  if (switches) {
    var remainingDoors = removeBadItems(doors, [aliceDoor, montyDoor]);
    finalDoor = remainingDoors[0];
  } else {
    finalDoor = aliceDoor;
  }
  
  return finalDoor == prizeDoor;
};

var ANSWER = {
  stay: Infer({method: 'enumerate'}, function() { return model(false); }),
  switch: Infer({method: 'enumerate'}, function() { return model(true); })
};
var removeBadItems = function(l, badItems) {
  return reduce(function(badItem, remainingL) {
    return remove(badItem, remainingL);
  }, l, badItems);
};
var doors = [1, 2, 3];

var montyAvoidAlice = function(aliceDoor, prizeDoor) {
  return Infer({method: 'enumerate'}, function() {
    var montyDoor = uniformDraw(removeBadItems(doors, [aliceDoor]));
    return montyDoor;
  });
};

var model = function(switches) {
  var aliceDoor = uniformDraw(doors);
  var prizeDoor = uniformDraw(doors);
  var montyDoor = sample(montyAvoidAlice(aliceDoor, prizeDoor));
  condition(montyDoor != prizeDoor && montyDoor != aliceDoor);
  var finalDoor = switches
    ? uniformDraw(removeBadItems(doors, [aliceDoor, montyDoor]))
    : aliceDoor;
  return finalDoor == prizeDoor;
};

var ANSWER = {
  stay: Infer({method: 'enumerate'}, function() { return model(false); }),
  switch: Infer({method: 'enumerate'}, function() { return model(true); })
};
var removeBadItems = function(l, badItems) {
  return reduce(function(badItem, remainingL) {
    return remove(badItem, remainingL);
  }, l, badItems);
};
var doors = [1, 2, 3];

var montyAvoidAlice = function(aliceDoor, prizeDoor) {
  return Infer({method: 'enumerate'}, function() {
    // Monty picks uniformly from doors that are not Alice's door
    var montyDoor = uniformDraw(removeBadItems(doors, [aliceDoor]));
    return montyDoor;
  });
};

var model = function(switches) {
  var aliceDoor = uniformDraw(doors);
  var prizeDoor = uniformDraw(doors);
  var montyDoor = sample(montyAvoidAlice(aliceDoor, prizeDoor));
  // Condition that Monty didn't accidentally reveal the prize and didn't open Alice's door
  condition(montyDoor != prizeDoor && montyDoor != aliceDoor);
  // Determine Alice's final door based on strategy
  var finalDoor = switches
    ? uniformDraw(removeBadItems(doors, [aliceDoor, montyDoor]))
    : aliceDoor;
  return finalDoor == prizeDoor;
};

var ANSWER = {
  stay:   Infer({method: 'enumerate'}, function() { return model(false); }),
  switch: Infer({method: 'enumerate'}, function() { return model(true); })
};
var removeBadItems = function(l, badItems) {
  return reduce(function(badItem, remainingL) {
    return remove(badItem, remainingL);
  }, l, badItems);
};

var doors = [1, 2, 3];

var montyAvoidAlice = function(aliceDoor, prizeDoor) {
  return Infer({method: 'enumerate'}, function() {
    var availableDoors = removeBadItems(doors, [aliceDoor]);
    var montyDoor = uniformDraw(availableDoors);
    return montyDoor;
  });
};

var model = function(switches) {
  var aliceDoor = uniformDraw(doors);
  var prizeDoor = uniformDraw(doors);
  var montyDoor = sample(montyAvoidAlice(aliceDoor, prizeDoor));
  condition(montyDoor != prizeDoor && montyDoor != aliceDoor);
  var remainingDoor = removeBadItems(doors, [aliceDoor, montyDoor])[0];
  var finalDoor = switches ? remainingDoor : aliceDoor;
  return finalDoor == prizeDoor;
};

var ANSWER = {
  stay: Infer({method: 'enumerate'}, function() { return model(false); }),
  switch: Infer({method: 'enumerate'}, function() { return model(true); })
};
groundtruth output
{
  "stay": {
    "__kind": "distribution",
    "probs": [
      0.5,
      0.5
    ],
    "support": [
      true,
      false
    ]
  },
  "switch": {
    "__kind": "distribution",
    "probs": [
      0.5,
      0.5
    ],
    "support": [
      false,
      true
    ]
  }
}
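The 0.5/0.5 groundtruth above (Monty picks uniformly among the two non-Alice doors, then we condition on him not revealing the prize) can be cross-checked outside WebPPL with a brute-force enumeration of weighted worlds. This is a plain Node.js sketch, not part of any run; the variable names are mine:

```javascript
// Enumerate (aliceDoor, prizeDoor, montyDoor) with their prior weights,
// drop worlds failing the condition, and normalize. Monty draws uniformly
// from the two doors that are not Alice's (the montyAvoidAlice variant),
// so monty != alice holds by construction and only monty != prize is checked.
var doors = [1, 2, 3];

function winProb(switches) {
  var win = 0, total = 0;
  doors.forEach(function (alice) {
    doors.forEach(function (prize) {
      doors.filter(function (d) { return d !== alice; }).forEach(function (monty) {
        if (monty === prize) return;              // condition(montyDoor != prizeDoor)
        var w = (1 / 3) * (1 / 3) * (1 / 2);      // uniform priors x Monty's 2 choices
        var finalDoor = switches
          ? doors.filter(function (d) { return d !== alice && d !== monty; })[0]
          : alice;
        total += w;
        if (finalDoor === prize) win += w;
      });
    });
  });
  return win / total;
}

console.log(winProb(false), winProb(true)); // 0.5 0.5
```

Intuition for why this variant is 1/2 rather than the classic 1/3 vs 2/3: when Alice's door hides the prize, both of Monty's options survive the condition; when it doesn't, only one of his two options does, which halves the weight of exactly the worlds where switching wins.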
feedback
not signed
probmods2-social-cognition/ex2.5 record(stay, switch) 8 runs TV=0.00 TV=0
prompt
system base instructions used by every run
You are a WebPPL code generator. Given an exercise, produce a single WebPPL program that binds the answer to a top-level variable named `ANSWER`.

Answer format (strict): emit exactly one fenced code block.

```js
<your WebPPL program ending with: var ANSWER = <expression>;>
```

The last statement of your program MUST be `var ANSWER = <expression>;` where `<expression>` is the answer the prompt asks for - typically an `Infer({...}, model)` for a distribution, a numeric/array value, or an object literal `{key: value, ...}` for a record of multiple sub-answers.

Do not write prose, explanations, or multiple code blocks. Do not use `return` at the top level - WebPPL doesn't allow it (return is only for function bodies).
system WebPPL primer appended to system on `*-primer-*` runs only
WebPPL is a probabilistic programming language with JavaScript-like syntax. A few quirks to remember:

Control flow: WebPPL is a functional subset of JavaScript. There are no `for` or `while` loops. Iterate with `map(fn, list)`, `mapData({data: list}, fn)`, `repeat(n, fn)`, or recursion. `_.range(start, end)` produces integer ranges.

Functions: define with `var f = function(args) { ...; return value; }`. The last expression of a function is auto-returned only if it is a bare expression statement; otherwise use explicit `return`. Memoize with `mem(fn)` so repeated calls with the same arguments return the same value within an inference run.

Random primitives - lowercase samples directly, uppercase constructs a Distribution object. They are NOT interchangeable:

  flip(p)            -> boolean             (sample, no `sample()` needed)
  uniform(a, b)      -> number              (sample)
  gaussian(mu, sg)   -> number              (sample)
  beta(a, b)         -> number              (sample)
  dirichlet(alpha)   -> tensor              (sample; alpha must be a Vector)
  randomInteger(n)   -> int 0..n-1          (sample)
  uniformDrift({a, b, width})        -> sample  (drift kernel; do NOT wrap in sample())
  dirichletDrift({alpha, conc.})     -> sample  (drift kernel; do NOT wrap in sample())

Distribution *constructors* (used with `sample(D)`, `observe(D, val)`, or as return value of inference):
  Bernoulli({p})              Beta({a, b})              Gaussian({mu, sigma})
  Uniform({a, b})             Categorical({vs, ps})     Binomial({p, n})
  Dirichlet({alpha})          Multinomial({ps, n})      Poisson({mu})

Common gotchas:
- `Dirichlet({alpha: ...})` requires a Vector, not a JS array. Use `ones([n, 1])` or `Vector([1, 1, ...])`.
- WebPPL only supports `var`, not `let` / `const`.
- WebPPL is *single-assignment*: `var X = ...;` only. You can't declare `var X;` and assign later, and you can't reassign `X = ...` after the declaration. Use ternaries or recursion to express conditional bindings.
- Array methods like `.fill`, `.indexOf`, `.map`, `.forEach`, `.concat` may fail. Prefer `repeat(n, fn)`, `_.indexOf(arr, x)`, `map(fn, arr)`, `mapData({data: arr}, fn)`, and `arr1.concat(arr2)` only at the top of a returned expression.
- Always end top-level statements with `;`. WebPPL inherits JS ASI rules, so `var x = f()
[a, b]` parses as `var x = f()[a, b]` (subscript), not two statements.

Inference: `Infer({method: ..., samples: N, ...}, modelFn)` runs `modelFn` under the chosen method and returns a Distribution over its return value. Methods: `'enumerate'`, `'rejection'`, `'forward'`, `'MCMC'`, `'SMC'`. For MCMC, optional kernels: `kernel: {HMC: {steps, stepSize}}`; the drift kernels above can replace `uniform`/`dirichlet` calls inside the model.

Conditioning: `condition(bool)` zeros out worlds where bool is false. `observe(dist, value)` factors in `dist.score(value)`. `factor(score)` adds `score` to the log-probability directly.

Tensors / utilities: `Vector([a, b, ...])`, `T.get(vec, i)`, `ones([rows, cols])`, `_.range`, `_.flatten`, `_.zipObject`, `_.fromPairs`, `_.includes`, `_.parseInt`, `_.uniq`, `_.merge`. Arrays use `Array.isArray`, `Object.keys` works on plain objects.

Display: there's no `viz`, `print`, `display`, etc. that affects the answer - those are browser-only. The answer is the value of your program's last expression.
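The single-assignment gotcha in the primer is exactly what the `Did you mean var finalDoor = ?` errors further down are complaining about: a `var finalDoor;` declaration followed by branch assignments. A minimal sketch of the ternary idiom the primer recommends, written as plain JS that is also valid WebPPL (the helper name is mine):

```javascript
// WebPPL forbids `var finalDoor;` followed by later assignment.
// Bind conditionally in a single step with a ternary instead.
var pickFinalDoor = function (switches, aliceDoor, remainingDoor) {
  return switches ? remainingDoor : aliceDoor;
};

console.log(pickFinalDoor(false, 1, 3)); // 1 (stay)
console.log(pickFinalDoor(true, 1, 3));  // 3 (switch)
```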
user message
Monty Hall variant. Alice picks a door uniformly from {1, 2, 3} (one hides a prize, the others are empty). Monty opens a different door, and we condition on his door being neither Alice's nor the prize. In this variant, Monty picks a door uniformly at random but only avoids the prize door (he may inadvertently pick Alice's). Inside montyAvoidPrize, sample Monty's door uniformly and condition only that it is not the prize. Helpers (include them in your program):
var removeBadItems = function(l, badItems) {
  return reduce(function(badItem, remainingL) {
    return remove(badItem, remainingL);
  }, l, badItems);
};
var doors = [1, 2, 3];
Build montyAvoidPrize(aliceDoor, prizeDoor) returning a Distribution over Monty's door under this variant. Build model(switches) that draws aliceDoor and prizeDoor uniformly, samples montyDoor from montyAvoidPrize, conditions on montyDoor != prizeDoor && montyDoor != aliceDoor, and returns aliceDoor == prizeDoor (whether Alice wins) under her strategy (switches true means switch to the remaining unopened door, false means stay). End your program with an object literal:
  stay: P(win) when Alice doesn't switch
  switch: P(win) when Alice switches
Each value should be Infer({method: 'enumerate'}, function() { return model(<flag>); }).
groundtruth code
var removeBadItems = function(l, badItems) {
  return reduce(function(badItem, remainingL) {
    return remove(badItem, remainingL)
  }, l, badItems);
};

var doors = [1, 2, 3];

var montyAvoidPrize = function(aliceDoor, prizeDoor) {
  return Infer({method: 'enumerate'}, function() {
    var montyDoor = categorical({vs: doors});
    condition(montyDoor != prizeDoor);
    return montyDoor;
  });
};

var model = function(switches) {
  var aliceDoor = categorical({vs: doors});
  var prizeDoor = categorical({vs: doors});
  var montyDoorDist = montyAvoidPrize(aliceDoor, prizeDoor);
  var montyDoor = sample(montyDoorDist);
  condition(montyDoor != prizeDoor);
  condition(montyDoor != aliceDoor);
  var aliceDoor = switches ? removeBadItems(doors, [aliceDoor, montyDoor])[0] : aliceDoor;
  return aliceDoor == prizeDoor;
};
var ANSWER = (({
  stay: Infer({method: 'enumerate'}, function() { return model(false); }),
  switch: Infer({method: 'enumerate'}, function() { return model(true); })
}));
generated code
sonnet-46-primer-v3 TV=0 · haiku-45-noprimer-v3 fail · haiku-45-primer-v3 TV=0 · haiku-45-think-noprimer-v3 fail · haiku-45-think-primer-v3 TV<1 · sonnet-46-noprimer-v3 TV=0 · sonnet-46-think-noprimer-v3 TV=0 · sonnet-46-think-primer-v3 TV=0
var removeBadItems = function(l, badItems) {
  return reduce(function(badItem, remainingL) {
    return remove(badItem, remainingL);
  }, l, badItems);
};
var doors = [1, 2, 3];

var montyAvoidPrize = function(aliceDoor, prizeDoor) {
  return Infer({method: 'enumerate'}, function() {
    var montyDoor = uniformDraw(doors);
    condition(montyDoor != prizeDoor);
    return montyDoor;
  });
};

var model = function(switches) {
  var aliceDoor = uniformDraw(doors);
  var prizeDoor = uniformDraw(doors);
  var montyDoor = sample(montyAvoidPrize(aliceDoor, prizeDoor));
  condition(montyDoor != aliceDoor);
  var finalDoor = switches
    ? uniformDraw(removeBadItems(doors, [aliceDoor, montyDoor]))
    : aliceDoor;
  return finalDoor == prizeDoor;
};

var ANSWER = {
  stay: Infer({method: 'enumerate'}, function() { return model(false); }),
  switch: Infer({method: 'enumerate'}, function() { return model(true); })
};
Error: Line 52: Did you mean var finalDoor = ?
var removeBadItems = function(l, badItems) {
  return reduce(function(badItem, remainingL) {
    return remove(badItem, remainingL);
  }, l, badItems);
};
var doors = [1, 2, 3];

var montyAvoidPrize = function(aliceDoor, prizeDoor) {
  return Infer({method: 'enumerate'}, function() {
    var montyDoor = uniformDraw(doors);
    condition(montyDoor !== prizeDoor);
    return montyDoor;
  });
};

var model = function(switches) {
  var aliceDoor = uniformDraw(doors);
  var prizeDoor = uniformDraw(doors);
  var montyDoor = sample(montyAvoidPrize(aliceDoor, prizeDoor));
  
  condition(montyDoor !== prizeDoor && montyDoor !== aliceDoor);
  
  var finalDoor;
  if (switches) {
    var remainingDoors = removeBadItems(doors, [aliceDoor, montyDoor]);
    finalDoor = remainingDoors[0];
  } else {
    finalDoor = aliceDoor;
  }
  
  return finalDoor === prizeDoor;
};

var ANSWER = {
  stay: Infer({method: 'enumerate'}, function() { return model(false); }),
  switch: Infer({method: 'enumerate'}, function() { return model(true); })
};
var removeBadItems = function(l, badItems) {
  return reduce(function(badItem, remainingL) {
    return remove(badItem, remainingL);
  }, l, badItems);
};
var doors = [1, 2, 3];

var montyAvoidPrize = function(aliceDoor, prizeDoor) {
  return Infer({method: 'enumerate'}, function() {
    var montyDoor = uniformDraw(doors);
    condition(montyDoor !== prizeDoor);
    return montyDoor;
  });
};

var model = function(switches) {
  var aliceDoor = uniformDraw(doors);
  var prizeDoor = uniformDraw(doors);
  var montyDoor = sample(montyAvoidPrize(aliceDoor, prizeDoor));
  
  condition(montyDoor !== prizeDoor && montyDoor !== aliceDoor);
  
  var finalDoor = switches ? 
    uniformDraw(removeBadItems(doors, [aliceDoor, montyDoor])) :
    aliceDoor;
  
  return finalDoor === prizeDoor;
};

var ANSWER = {
  stay: Infer({method: 'enumerate'}, function() { return model(false); }),
  switch: Infer({method: 'enumerate'}, function() { return model(true); })
};
Error: Line 54: Did you mean var finalDoor = ?
var removeBadItems = function(l, badItems) {
  return reduce(function(badItem, remainingL) {
    return remove(badItem, remainingL);
  }, l, badItems);
};
var doors = [1, 2, 3];

var montyAvoidPrize = function(aliceDoor, prizeDoor) {
  return Infer({method: 'enumerate'}, function() {
    var montyDoor = uniformDraw(doors);
    condition(montyDoor != prizeDoor);
    return montyDoor;
  });
};

var model = function(switches) {
  return Infer({method: 'enumerate'}, function() {
    var aliceDoor = uniformDraw(doors);
    var prizeDoor = uniformDraw(doors);
    var montyDoor = sample(montyAvoidPrize(aliceDoor, prizeDoor));
    
    condition(montyDoor != prizeDoor && montyDoor != aliceDoor);
    
    var finalDoor;
    if (switches) {
      // Alice switches to the remaining unopened door
      var remainingDoors = removeBadItems(doors, [aliceDoor, montyDoor]);
      finalDoor = remainingDoors[0];
    } else {
      // Alice stays with her original door
      finalDoor = aliceDoor;
    }
    
    return finalDoor == prizeDoor;
  });
};

var ANSWER = {
  stay: Infer({method: 'enumerate'}, function() { return model(false); }),
  switch: Infer({method: 'enumerate'}, function() { return model(true); })
};
var removeBadItems = function(l, badItems) {
  return reduce(function(badItem, remainingL) {
    return remove(badItem, remainingL);
  }, l, badItems);
};
var doors = [1, 2, 3];

var montyAvoidPrize = function(aliceDoor, prizeDoor) {
  return Infer({method: 'enumerate'}, function() {
    var montyDoor = sample(Categorical({vs: [1, 2, 3], ps: [1/3, 1/3, 1/3]}));
    condition(montyDoor != prizeDoor);
    return montyDoor;
  });
};

var model = function(switches) {
  var aliceDoor = sample(Categorical({vs: [1, 2, 3], ps: [1/3, 1/3, 1/3]}));
  var prizeDoor = sample(Categorical({vs: [1, 2, 3], ps: [1/3, 1/3, 1/3]}));
  var montyDoor = sample(montyAvoidPrize(aliceDoor, prizeDoor));
  
  // Condition on montyDoor being neither Alice's nor the prize
  condition(montyDoor != prizeDoor && montyDoor != aliceDoor);
  
  // Determine the final door
  var finalDoor = switches ? 
    // If switching, pick the remaining door (not Alice's, not Monty's)
    (_.filter(function(d) { return d != aliceDoor && d != montyDoor; }, doors)[0]) :
    // If staying, keep Alice's door
    aliceDoor;
  
  return finalDoor == prizeDoor;
};

var ANSWER = {
  stay: Infer({method: 'enumerate'}, function() { return model(false); }),
  switch: Infer({method: 'enumerate'}, function() { return model(true); })
};
var removeBadItems = function(l, badItems) {
  return reduce(function(badItem, remainingL) {
    return remove(badItem, remainingL);
  }, l, badItems);
};
var doors = [1, 2, 3];

var montyAvoidPrize = function(aliceDoor, prizeDoor) {
  return Infer({method: 'enumerate'}, function() {
    var montyDoor = uniformDraw(doors);
    condition(montyDoor != prizeDoor);
    return montyDoor;
  });
};

var model = function(switches) {
  var aliceDoor = uniformDraw(doors);
  var prizeDoor = uniformDraw(doors);
  var montyDoor = sample(montyAvoidPrize(aliceDoor, prizeDoor));
  condition(montyDoor != aliceDoor);
  
  var finalDoor = switches
    ? uniformDraw(removeBadItems(doors, [aliceDoor, montyDoor]))
    : aliceDoor;
  
  return finalDoor == prizeDoor;
};

var ANSWER = {
  stay: Infer({method: 'enumerate'}, function() { return model(false); }),
  switch: Infer({method: 'enumerate'}, function() { return model(true); })
};
var removeBadItems = function(l, badItems) {
  return reduce(function(badItem, remainingL) {
    return remove(badItem, remainingL);
  }, l, badItems);
};
var doors = [1, 2, 3];

var montyAvoidPrize = function(aliceDoor, prizeDoor) {
  return Infer({method: 'enumerate'}, function() {
    var montyDoor = uniformDraw(doors);
    condition(montyDoor != prizeDoor);
    return montyDoor;
  });
};

var model = function(switches) {
  var aliceDoor = uniformDraw(doors);
  var prizeDoor = uniformDraw(doors);
  var montyDoor = sample(montyAvoidPrize(aliceDoor, prizeDoor));
  condition(montyDoor != prizeDoor && montyDoor != aliceDoor);

  var finalDoor = switches
    ? uniformDraw(removeBadItems(doors, [aliceDoor, montyDoor]))
    : aliceDoor;

  return finalDoor == prizeDoor;
};

var ANSWER = {
  stay:   Infer({method: 'enumerate'}, function() { return model(false); }),
  switch: Infer({method: 'enumerate'}, function() { return model(true);  })
};
var removeBadItems = function(l, badItems) {
  return reduce(function(badItem, remainingL) {
    return remove(badItem, remainingL);
  }, l, badItems);
};
var doors = [1, 2, 3];

var montyAvoidPrize = function(aliceDoor, prizeDoor) {
  return Infer({method: 'enumerate'}, function() {
    var montyDoor = uniformDraw(doors);
    condition(montyDoor != prizeDoor);
    return montyDoor;
  });
};

var model = function(switches) {
  var aliceDoor = uniformDraw(doors);
  var prizeDoor = uniformDraw(doors);
  var montyDoor = sample(montyAvoidPrize(aliceDoor, prizeDoor));
  condition(montyDoor != prizeDoor && montyDoor != aliceDoor);
  var remainingDoor = removeBadItems(doors, [aliceDoor, montyDoor])[0];
  var finalDoor = switches ? remainingDoor : aliceDoor;
  return finalDoor == prizeDoor;
};

var ANSWER = {
  stay:   Infer({method: 'enumerate'}, function() { return model(false); }),
  switch: Infer({method: 'enumerate'}, function() { return model(true); })
};
groundtruth output
{
  "stay": {
    "__kind": "distribution",
    "probs": [
      0.5,
      0.5
    ],
    "support": [
      true,
      false
    ]
  },
  "switch": {
    "__kind": "distribution",
    "probs": [
      0.5,
      0.5
    ],
    "support": [
      false,
      true
    ]
  }
}
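The montyAvoidPrize variant lands on the same 0.5/0.5 answer, which a brute-force enumeration confirms. Again a plain Node.js sketch with names of my choosing; the inner `Infer` in the groundtruth conditions only on avoiding the prize, so exactly 2 of 3 doors always survive and each carries renormalized weight 1/2:

```javascript
// Monty draws uniformly from all three doors; the inner distribution
// conditions on monty != prize (renormalizing to weight 1/2 per survivor),
// and the outer model additionally conditions on monty != alice.
var doors = [1, 2, 3];

function winProb(switches) {
  var win = 0, total = 0;
  doors.forEach(function (alice) {
    doors.forEach(function (prize) {
      doors.forEach(function (monty) {
        if (monty === prize) return;              // montyAvoidPrize's condition
        var w = (1 / 3) * (1 / 3) * (1 / 2);      // priors x renormalized Monty weight
        if (monty === alice) return;              // outer condition
        var finalDoor = switches
          ? doors.filter(function (d) { return d !== alice && d !== monty; })[0]
          : alice;
        total += w;
        if (finalDoor === prize) win += w;
      });
    });
  });
  return win / total;
}

console.log(winProb(false), winProb(true)); // 0.5 0.5
```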
feedback
not signed