Q1: The probability of rolling a one is 1/6 = 0.166667, or 16.6667%.

Q2: (i) For n rolls, you would expect to roll a one n*(1/6) times, or n/6 times.
(ii) For n rolls, you would expect to roll a two through six n*(5/6) times, or 5n/6 times.
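A minimal Python sketch of this expectation (illustrative only; the assignment's actual simulation tool is not shown here, and roll_counts is a hypothetical helper):

    import random

    def roll_counts(n, seed=None):
        """Roll a fair six-sided die n times and count each face."""
        rng = random.Random(seed)
        counts = {face: 0 for face in range(1, 7)}
        for _ in range(n):
            counts[rng.randint(1, 6)] += 1
        return counts

    n = 100
    counts = roll_counts(n)
    print("ones:", counts[1], "expected:", n / 6)             # about 16.7
    print("two-six:", n - counts[1], "expected:", 5 * n / 6)  # about 83.3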

Q3: Every time I run the simulation, the count for each face bounces around but stays grouped near 17 (sometimes higher, sometimes lower). Often one face ends up above 17, pushing the others slightly below 17, and this can happen to any face depending on the run.
This matches my expectations from Q2, which predict that any single face (e.g. one) should occur roughly n/6 times (16.6667 times for n = 100). Likewise, two through six together occur roughly 83 times, i.e. roughly 17 times per face, matching the prediction of 5n/6 (83.3333 for n = 100).

Q4: (i) For n = 1000, the results look like those for n = 100, but the counts cluster even more tightly around the predicted n/6 for each face, with most counts staying near 167.
(ii) For n = 10000, the simulated counts almost perfectly match the predicted values, with each face occurring around 1667 times.
(iii) The general trend from n = 100 to n = 10000 is that the more times the die is rolled, the more closely the observed counts match the predicted values, as the sketch below shows.
(iv) The higher the number of rolls, the closer the results are to the predicted values. n = 100 was fairly variable, though the counts were often within 2-3 of the prediction; n = 1000 was less variable but still not exact. To reasonably reproduce the predicted results on every run, n = 10000 is best, though n = 1000 also works reasonably well.
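A short convergence check, reusing the hypothetical roll_counts() from the Q2 sketch. Note that it is the relative deviation, shrinking roughly like 1/sqrt(n), that makes the larger runs look "almost perfect":

    # Reusing roll_counts() from the Q2 sketch above.
    for n in (100, 1000, 10000):
        counts = roll_counts(n)
        worst = max(abs(c - n / 6) for c in counts.values())
        print(f"n={n:>5}: largest deviation {worst:.1f} rolls, "
              f"or {100 * worst / (n / 6):.1f}% of the expected count")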

Q6: Since the average frequency is the reciprocal of the average return period, the average return period can be computed as 1/(average frequency). As the simulation confirmed, the expected frequency of a roll of one is 1/6, so the average return period for a roll of one is 1/(1/6) = 6.

This means that, on average, you have to roll the die six times to get a value of one.
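A quick empirical check of this value, as an illustrative Python sketch (not the assignment's tool): the average gap between successive ones should land near 6.

    import random

    rng = random.Random(1)
    rolls = [rng.randint(1, 6) for _ in range(100_000)]
    ones = [i for i, r in enumerate(rolls) if r == 1]
    gaps = [b - a for a, b in zip(ones, ones[1:])]
    print(sum(gaps) / len(gaps))  # close to 6, the average return period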

Q7: Overall, two consecutive rolls of one tend to occur closer together than the average return period of 6 rolls. Gaps of about 6 occur less frequently, and even larger gaps less frequently still: in the data, the larger the period, the less often it occurs. This may seem at odds with the Q6 prediction of an average return period of 6, but it is not. The gap between successive ones follows a geometric distribution, P(k) = (1/6)(5/6)^(k-1), whose most likely value is k = 1 (P(1) = 1/6, about 0.167) even though its mean is 6; for comparison, P(6) = (5/6)^5 * (1/6), about 0.067. Since the period between ones can in principle be as large as the number of rolls itself (excluding the first and last rolls, which would have to be ones), rare long gaps pull the average well above the most frequent value. The average return period is therefore not the most frequent return period, but a balance between the common short gaps and the occasional extreme ones.

Q8: As n increases, the histogram approaches the predicted trend: the most frequent return period is 1 (i.e. the very next roll is also a one), and the frequency of longer return periods decreases as the period increases. The histogram is monotonically decreasing, with a slope that rises toward zero, so the curve flattens out at long periods.
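A minimal sketch of this comparison, checking the simulated histogram against the geometric prediction P(k) = (1/6)(5/6)^(k-1) from Q7 (illustrative Python, not the assignment's own tool):

    import random
    from collections import Counter

    rng = random.Random(2)
    rolls = [rng.randint(1, 6) for _ in range(100_000)]
    ones = [i for i, r in enumerate(rolls) if r == 1]
    gaps = Counter(b - a for a, b in zip(ones, ones[1:]))

    total = sum(gaps.values())
    for k in range(1, 11):
        predicted = (1 / 6) * (5 / 6) ** (k - 1)  # geometric pmf from Q7
        print(f"period {k:2d}: observed {gaps[k] / total:.4f}, "
              f"predicted {predicted:.4f}")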

Q10: (i) The distribution is not representative of the average return period estimated in Q6: the most frequent return period is the shortest one, whereas the predicted average is 6.
(ii) In response to this: "‘Since Houston had Harvey last summer and since the average return period is 60 years, Houston is safe for another 59 years.’ Yes or no? And how do you substantiate your answer quantitatively? Another way to pose the question: if you were to bet on the next landfall of a Harvey-class hurricane on Houston, what would be your best bet — within the next 10 years, around 59 years from now, or 100 years?"
a. No, this does not mean that Houston is safe for another 59 years. The return period only reflects the gap of time in which, on average, the event recurs. The next event may come in the first year, midway through the interval, or at its end; it does not arrive on a fixed 60-year schedule.
b. Within the next 10 years would be the best bet, since the frequency is highest at the short end of the return-period distribution, as the sketch below shows. The probability that the event occurs at some point does grow as more time passes, but among windows of equal length, the earliest is the most likely to contain the next landfall.
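A rough quantitative check, under the simplifying assumption that landfalls are independent with a constant annual probability of 1/60 (an illustrative model, not real hurricane statistics):

    p = 1 / 60  # annual landfall probability implied by a 60-year return period

    # Chance of at least one landfall within the next 10 years:
    print(1 - (1 - p) ** 10)              # about 0.15

    # Chance that the next landfall arrives in a 10-year window around year 59
    # (no event in years 1-54, at least one in years 55-64):
    print((1 - p) ** 54 - (1 - p) ** 64)  # about 0.06

Under this model, a 10-year window starting now is roughly two and a half times as likely to contain the next landfall as a 10-year window centered on year 59.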

Q12: (i) The relative change is greater at 2 K. The probability at T1 = 0.5 K rises to 113.3148% of its former value (a change from 35.207% to 39.894%). The probability at T2 = 2 K rises to 239.8875% of its former value (a change from 5.3991% to 12.952%).
(ii) (a) The average return period for T1 = 0.5 K decreases by 0.33 years (1/0.39894 - 1/0.35207 = 2.5066 - 2.8403 = -0.3337).
(b) The average return period for T2 = 2 K decreases by 10.8 years (1/0.12952 - 1/0.053991 = 7.7208 - 18.5216 = -10.8008).
(iii) This tells us that global warming has a much larger relative impact on rare temperature extremes, causing them to occur far more often. A sketch reproducing these numbers follows below.
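A sketch reproducing these numbers, assuming (as the answers above imply) that the frequency of a temperature anomaly T is modeled by a normal density with standard deviation σ, shifted by a warming ΔT; freq is a hypothetical helper name:

    from math import exp, pi, sqrt

    def freq(T, sigma=1.0, dT=0.0):
        """Normal density at temperature anomaly T after shifting the mean by dT."""
        z = (T - dT) / sigma
        return exp(-0.5 * z * z) / (sigma * sqrt(2 * pi))

    for T in (0.5, 2.0):
        before, after = freq(T), freq(T, dT=0.5)
        print(f"T={T} K: {before:.5f} -> {after:.5f}, "
              f"ratio {100 * after / before:.2f}%, "
              f"return period change {1 / after - 1 / before:+.4f} years")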

Q13: (i) (At which temperature (0.5 K or 2 K) is the percentage change greater?) The relative change is greater at 2 K. The probability at T1 = 0.5 K rises to 105.7128% of its former value (a change from 25.159% to 26.596%). The probability at T2 = 2 K rises to 147.5341% of its former value (a change from 10.934% to 16.131%).
(ii) (How much do the average return periods change at these temperatures?)
(a) The average return period for T1 = 0.5 K decreases by 0.21 years (1/0.26596 - 1/0.25159 = 3.7600 - 3.9747 = -0.2148).
(b) The average return period for T2 = 2 K decreases by 2.95 years (1/0.16131 - 1/0.10934 = 6.1992 - 9.1458 = -2.9465).
(iii) (What does it tell about the impact of global warming on extreme heat?) This tells us that global warming has a larger relative impact on rare temperature extremes, causing them to occur more often.

(iv) As the variability increases (for example, using σ = 1.5 instead of σ = 1), the change in probability for extreme weather events after warming becomes less pronounced, since a larger standard deviation already produces more variability and more extremes. This is reflected in the change in probability at both T1 and T2: as the standard deviation increases, the probability change after an increase in ΔT is smaller, as the sketch below shows.
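The same sketch with σ = 1.5 reproduces the Q13 numbers:

    # Reusing freq() from the Q12 sketch above, now with sigma = 1.5.
    for T in (0.5, 2.0):
        before, after = freq(T, sigma=1.5), freq(T, sigma=1.5, dT=0.5)
        print(f"T={T} K: {before:.5f} -> {after:.5f}, "
              f"ratio {100 * after / before:.2f}%")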

Q15: (i) (At which temperature (0.5 K or 2 K) is the percentage change greater?) The relative change is greater at 2 K. The probability at T1 = 0.5 K falls to 75.5432% of its former value (a change from 35.207% to 26.596%). The probability at T2 = 2 K rises to 298.7793% of its former value (a change from 5.3991% to 16.131%).
(ii) (How much do the average return periods change at these temperatures?)
(a) The average return period for T1 = 0.5 K increases by 0.92 years (1/0.26596 - 1/0.35207 = 3.7600 - 2.8403 = 0.9196).
(b) The average return period for T2 = 2 K decreases by 12.32 years (1/0.16131 - 1/0.053991 = 6.1992 - 18.5216 = -12.3224).
(iii) When variability increases along with warming, mild anomalies (such as days that are 0.5 K hotter) become less frequent relative to the curve with less variability and no warming, while the more extreme events become more frequent. This means extreme weather events become more likely and stable, near-average temperatures become less likely. A sketch comparing the two curves follows below.
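Comparing the unwarmed σ = 1 baseline with the warmed σ = 1.5 curve reproduces the Q15 numbers:

    # Reusing freq() from the Q12 sketch above: unwarmed sigma = 1 baseline
    # versus the warmed, more variable sigma = 1.5 curve.
    for T in (0.5, 2.0):
        base, warmed = freq(T), freq(T, sigma=1.5, dT=0.5)
        print(f"T={T} K: ratio {100 * warmed / base:.2f}%, "
              f"return period change {1 / warmed - 1 / base:+.2f} years")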

Q17: (i) The scientific objective of the author's project is to create climate models at 25 km resolution using a superensemble. The superensemble is an ensemble of roughly 136,000 simulations which, grouped together, allow the compiled model to predict full probability distributions as well as the levels of uncertainty that only a large number of simulations can reveal. It uses volunteers' computers throughout the world to obtain this large number of simulations.
(ii) Its niche is a large number of simulations at such a fine (25 km) resolution. Normally there is a trade-off between spatial resolution and the number of simulated years, but the large number of simulations allows both a fine grid and larger-scale predictions. This makes it possible to study and predict extremes with more statistical rigor, and given the fine resolution, the results can better be applied to risk analysis at the regional level.

Q18: ClimatePrediction.net is unique in that it runs its climate models on a fine grid while still producing a very large number of simulations. To do this, it combines all the simulations into an ensemble, which provides more statistical rigor: distribution models can be built and uncertainty predicted far more accurately than is possible with smaller numbers of simulations.

This methodology is also unique in that it uses volunteers' computers around the world to reach this large number of simulations.

Q19: This approach augments other studies by better quantifying and measuring variability and uncertainty within climate models at the regional scale. There is a downside, however: most volunteers' computers do not have enough working memory to accommodate a large number of variables, so the superensemble is atmosphere-only, which does not fully reflect the complexity of other climate models. Overall, though, the model allows for a better understanding of how to deal with variability in models, which can help other, more complex models.
