BEGIN:VCALENDAR
VERSION:2.0
PRODID:ILLC Website
X-WR-TIMEZONE:Europe/Amsterdam
BEGIN:VTIMEZONE
TZID:Europe/Amsterdam
X-LIC-LOCATION:Europe/Amsterdam
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19700329T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19701025T030000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
UID:/NewsandEvents/Archives/2003/newsitem/555/6-No
vember-2003-CWI-INS4-seminar-Peter-Grunwald
DTSTAMP:20031029T000000Z
SUMMARY:CWI INS4 seminar\, Peter Grunwald
ATTENDEE;ROLE=SPEAKER;CN="Peter Grunwald (CWI)":invalid:nomail
DTSTART;TZID=Europe/Amsterdam:20031106T160000
LOCATION:CWI\, Kruislaan 413c\, room C001
DESCRIPTION:(joint work with Joe Halpern\, Cornell University\, Ithaca\,
  NY) As examples such as the Monty Hall and the 3-prisoners puzzle
  show\, applying conditioning to update a probability distribution on a
  "naive space"\, which does not take into account the protocol used\,
  can often lead to counterintuitive results. We give a detailed
  explanation of this phenomenon. A criterion known as CAR ("coarsening
  at random") in the statistical literature characterizes when "naive"
  conditioning in a naive space works. We provide two new
  characterizations of CAR. First we show that in many situations\, CAR
  essentially *cannot* hold\, so that naive conditioning must give the
  wrong answer. Second\, we provide a procedural characterization of
  CAR\, giving a randomized algorithm that generates all and only
  distributions for which CAR holds. Both results complement earlier
  work by Gill\, van der Laan and Robins. We also consider more general
  notions of update such as Jeffrey conditioning and minimizing relative
  entropy (MRE). We give a generalization of the CAR condition that
  characterizes when Jeffrey conditioning leads to appropriate answers\,
  and show that there exist some very simple settings in which MRE
  essentially never gives the right results. This generalizes and
  interconnects previous results obtained in the literature on CAR and
  MRE.
X-ALT-DESC;FMTTYPE=text/html:<p>(joint work with Joe Halpern\, Cornell
  University\, Ithaca\, NY)</p>\n<p>As examples such as the Monty Hall
  and the 3-prisoners puzzle show\, applying conditioning to update a
  probability distribution on a "naive space"\, which does not take
  into account the protocol used\, can often lead to counterintuitive
  results. We give a detailed explanation of this phenomenon. A
  criterion known as CAR ("coarsening at random") in the statistical
  literature characterizes when "naive" conditioning in a naive space
  works. We provide two new characterizations of CAR. First we show
  that in many situations\, CAR essentially *cannot* hold\, so that
  naive conditioning must give the wrong answer. Second\, we provide a
  procedural characterization of CAR\, giving a randomized algorithm
  that generates all and only distributions for which CAR holds. Both
  results complement earlier work by Gill\, van der Laan and
  Robins.</p>\n<p>We also consider more general notions of update such
  as Jeffrey conditioning and minimizing relative entropy (MRE). We
  give a generalization of the CAR condition that characterizes when
  Jeffrey conditioning leads to appropriate answers\, and show that
  there exist some very simple settings in which MRE essentially never
  gives the right results. This generalizes and interconnects previous
  results obtained in the literature on CAR and MRE.</p>
URL:/NewsandEvents/Archives/2003/newsitem/555/6-No
vember-2003-CWI-INS4-seminar-Peter-Grunwald
END:VEVENT
END:VCALENDAR