Sequential Gaussian Simulation Flashcards

1
Q

State the steps of SGS

A
  1. Normalise data (Normal Score Transformation)
  2. Compute and model the variogram, covariance, or correlogram of the normalised data
  3. Define a random path that goes through each node of the grid representing the deposit
  4. Krige the normalised value at the selected node using both actual and simulated data to estimate the mean and variance of the normal local conditional distribution
  5. Simulate a value by randomly sampling the estimated normal local conditional distribution (via a uniform random number in (0,1))
  6. Add the new simulated value to the conditioning data set and move to the next grid node (Make every subsequent estimate dependent on initial and previously generated results)
  7. Repeat the process until all nodes are simulated
  8. Back transform the simulated values and validate the results
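The steps above can be sketched as a minimal 1-D implementation. This is an illustrative toy, not production code: it assumes already normal-scored data, simple kriging, and a hypothetical exponential covariance model; the function name `sgs_1d` and all parameters are made up for the example.

```python
import numpy as np

def sgs_1d(xs_data, zs_data, xs_grid, cov, rng, max_neighbors=8):
    """Toy 1-D sequential Gaussian simulation on normal-score data.

    xs_data/zs_data: conditioning locations and (normal-scored) values.
    xs_grid: grid nodes to simulate; cov: covariance as a function of distance.
    """
    xs = list(xs_data)
    zs = list(zs_data)
    path = rng.permutation(len(xs_grid))            # step 3: random path
    out = np.empty(len(xs_grid))
    for idx in path:
        x0 = xs_grid[idx]
        # nearest conditioning points: actual data + previously simulated values
        order = np.argsort(np.abs(np.asarray(xs) - x0))[:max_neighbors]
        xn = np.asarray(xs)[order]
        zn = np.asarray(zs)[order]
        # step 4: simple kriging -- solve C w = c0 for the weights
        C = cov(np.abs(xn[:, None] - xn[None, :]))
        c0 = cov(np.abs(xn - x0))
        w = np.linalg.solve(C, c0)
        mean = w @ zn
        var = max(cov(0.0) - w @ c0, 0.0)
        # step 5: draw from the local conditional distribution N(mean, var)
        z = rng.normal(mean, np.sqrt(var))
        xs.append(x0)                                # step 6: grow the
        zs.append(z)                                 # conditioning data set
        out[idx] = z
    return out

rng = np.random.default_rng(0)
cov = lambda h: np.exp(-np.asarray(h) / 10.0)        # assumed covariance model
sim = sgs_1d([0.0, 30.0], [1.2, -0.5], np.arange(1.0, 30.0, 1.0), cov, rng)
```

Back-transformation of `sim` to the original grade units (step 8) would follow separately.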
2
Q

Case study: Mine is looking to convert ______ to __________ to reduce costs by $0.04/ton. However, increased _____________ will result in __________.

A

Mine is looking to convert 20ft benches to 25ft benches to reduce costs by $0.04/ton. However, increased bench height will result in increased dilution.

3
Q

What is the goal of the case study in this section? What are the variables?

A

Determine whether it is worth it to increase bench heights. Variables include: bench heights and blasthole spacing/sampling

4
Q

Increased bench height combined with _________________ may show the same dilution effects as the current bench height with a _________________.

A

Increased bench height combined with a denser blasthole spacing may show the same dilution effects as the current bench height with a sparse blasthole spacing.

5
Q

What are some considerations for the case study about optimizing mining parameters?

A
  • Define parameters to test for maximum profit; e.g. bench height 20 ft vs 25 ft and blasthole spacing 15 ft vs 18 ft
  • Find a way to minimize misclassification (grade control) and consider related economics
6
Q

What are possible ore destinations for the case study about optimizing mining parameters?

A
  1. Oxide Leach
  2. Oxide Mill
  3. Refractory Mill
  4. Waste
7
Q

What steps did they take for the case study about optimizing mining parameters?

A
  1. Conditionally simulate several images of grade and material types (oxide and refractory)
  2. Sample the simulated images with the selected combinations of bench height and blasthole spacing
  3. Do grade control for each sampling (bench height and blasthole spacing)
  4. Add the related costs and evaluate the expected misclassification and dilution costs
  5. Decide
8
Q

What is a notable challenge for the case study about optimizing mining parameters? What is the solution?

A
  • All images are generated on a dense grid with a 5 by 5 by 5 ft resolution
  • The available blasthole data represent samples of a 20 ft length

Solution: to use them as conditioning data in the simulations, they need to be ‘de-regularized’ to represent 5 ft composites

9
Q

What is de-regularization?

A

It is the reverse of the typical procedure of compositing exploration data: each 20 ft length is split into four 5 ft samples with the same mean as the original sample and a variance equal to the variance difference between 20 ft and 5 ft composites. The latter variance is derived from geostatistical charts.
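A minimal sketch of this splitting step, assuming the added variance has already been read off the dispersion-variance charts (the values `2.5` and `0.4` below are hypothetical, and `deregularize` is an illustrative name):

```python
import numpy as np

def deregularize(composite, added_var, rng):
    """Split one 20 ft composite into four 5 ft pseudo-samples.

    added_var: variance difference between 5 ft and 20 ft support,
    assumed known from geostatistical charts.
    """
    draws = rng.normal(0.0, np.sqrt(added_var), size=4)
    draws -= draws.mean()        # re-centre so the mean is preserved exactly
    return composite + draws

rng = np.random.default_rng(1)
fives = deregularize(2.5, 0.4, rng)
# the four 5 ft values average back to the original 20 ft grade of 2.5
```

Note the re-centring trick: it forces the four values to reproduce the composite mean exactly while still carrying the extra short-scale variance.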

10
Q

Comment on the results and conclude the case study about optimizing mining parameters.

A

The change from 20 ft to 25 ft benches at 16 ft blasthole spacing has an expected dilution cost of $0.07/ton, while the operational savings were estimated at $0.04/ton. Conclusion: don’t do it; you would lose $0.03 per ton.

11
Q

How to SGS without samples

A

Suppose the distribution of possible grades at x0 is known to be normal, with a mean of 5% and a standard deviation of 2%.
1. Take that distribution as the local conditional distribution (the grey line: mean 5, std. dev. 2)
2. Draw a random number in (0,1), say 0.74, and read the corresponding grade value off the cumulative frequency plot.
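That inverse-CDF read-off can be reproduced with Python's standard library (a sketch of the same draw, using the card's mean of 5% and standard deviation of 2%):

```python
from statistics import NormalDist

# grade distribution at x0: normal, mean 5 %, std. dev. 2 %
grades = NormalDist(mu=5.0, sigma=2.0)

# draw a uniform number in (0, 1) -- say 0.74 -- and read the
# matching grade off the cumulative curve via the inverse CDF
u = 0.74
grade = grades.inv_cdf(u)   # roughly 6.29 %
```

Any other uniform draw would give a different grade, which is exactly how repeated SGS runs generate multiple equally probable realizations.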

12
Q

What is Screen-Effect Approximation?

A

The implementation considers a fixed-size neighborhood around each node: the posterior probability density function is approximated using only the data within this nearby region.

13
Q

Advantage of SEA?

A

In the context of SGS, the SEA approximates the posterior probability density function using only the localized neighborhood around each node of interest. This keeps the simulation computationally efficient and feasible for large datasets or models by significantly reducing computational and storage requirements.
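The screening step itself is just a neighborhood filter. A small sketch, with made-up coordinates and an illustrative function name, of keeping only the data inside a fixed search radius of a node:

```python
import numpy as np

def screen(points, node, radius):
    """Return the indices of conditioning data inside a fixed neighborhood
    of a node (far data are screened off by the nearer data)."""
    d = np.linalg.norm(points - node, axis=1)
    return np.flatnonzero(d <= radius)

pts = np.array([[0, 0], [1, 1], [8, 8], [2, 0]], dtype=float)
near = screen(pts, np.array([0.5, 0.5]), radius=2.0)
# only the nearby data (indices 0, 1, 3) enter the kriging system;
# the distant point at (8, 8) is screened out
```

The kriging system at each node is then solved over `near` only, which is what keeps its size bounded regardless of the total dataset.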

14
Q

Dilution may be seen as the ______________ above the cut-off introduced by the block support-size compared to the theoretical situation where _____________ is that of data support-size.

A

Dilution may be seen as the percent decrease of average grade above the cut-off introduced by the block support-size compared to the theoretical situation where selectivity is that of data support-size.
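A toy numeric version of that definition, with entirely hypothetical grades: four point-support samples are averaged into one block, and the percent decrease of the average grade above cut-off is computed.

```python
import numpy as np

# hypothetical point-support grades inside one selective mining unit
points = np.array([[1.2, 0.3], [0.2, 2.0]])
cutoff = 0.5

# data support-size selectivity: only samples above cut-off are "ore"
point_avg = points[points >= cutoff].mean()          # mean of 1.2 and 2.0

# block support-size: the whole block is mined at its average grade
block_avg = points.mean()                            # 0.925, still above cut-off

# dilution as the percent decrease of average grade above the cut-off
dilution_pct = 100 * (point_avg - block_avg) / point_avg
```

Here the block support drags low-grade material into the ore, so the average grade above cut-off drops by about 42% relative to point support.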

15
Q

Comment on this (2 main points)

A
  • As the block size increases, the average grade decreases because of dilution.
  • As the average grade decreases, more tonnes of material are available.
16
Q

Comment on this (2 main points)

A
  • The higher the degree of selectivity the greater the uncertainty of the grade-tonnage information (look at gaps between min/max)
  • Because at smaller block sizes there is less smoothing
17
Q

Comment on these

A
  • As block size decreases, there is less dilution and reduced smoothing of the data (more variability), resulting in higher recorded tonnages for the superior grades due to the availability of a greater number of high-grade blocks (or tonnes).
  • In essence, smaller blocks capture more detailed variability in grade, thus presenting a more accurate representation of high-grade zones in the data, leading to higher tonnages at those high grades.
18
Q

Comment on this and what it is due to

A

Here, kriging overestimates the lower grades while the conditional simulations sit higher at the upper end. This is due to the smoothing effect of the traditional OK model: the OK curve lies above the simulated blocks at lower grades and below them at higher grades.

19
Q

Difference between LU and SGS

A
  • SGS requires a new search neighborhood at every node, but only a small system to solve
  • LU requires one single setup, but a larger system to solve
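The LU side of that trade-off can be sketched in a few lines: one factorization of the full covariance matrix replaces all the per-node neighborhood searches, after which each realization is a single matrix-vector product. The grid and covariance model below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
xs = np.arange(10.0)                                   # small 1-D grid

# full covariance matrix for every pair of nodes (assumed exponential model)
C = np.exp(-np.abs(xs[:, None] - xs[None, :]) / 5.0)

# factor once (Cholesky, the symmetric form of the LU decomposition) ...
L = np.linalg.cholesky(C)

# ... then every unconditional realization is one matrix-vector product
realization = L @ rng.standard_normal(len(xs))
```

Because `C` has one row and column per grid node, this single large solve is what limits LU to modest grid sizes, whereas SGS trades it for many small per-node systems.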
20
Q

What do you understand from this

A

The larger the grid, the more using larger neighborhoods pays off: computational cost drops significantly because fewer search neighborhoods are required.

21
Q

What do you understand from this

A
  • For small group sizes, SGS is better
  • For larger group sizes, LU decomposition is better because fewer search neighborhoods are needed
  • Computational efficiency is worst around a group size of 15