Volume 2

G.M. Jacquez, ..., P.E. Goovaerts, in Encyclopedia of Environmental Health (Second Edition), 2019

Dynamic Polygons

Polygons are used in GIS to describe static objects such as zip code zones, census units, counties, states, and so on. Dynamic polygons can be thought of as polygons capable of the following: creation, extinction, movement, changes in extent, changes in the locations of their edges, and division and accretion. Attributes such as the crop type in a field defined by a polygon may also change through time. Inheritance may become an important aspect of dynamic polygons, as in cadastral and land management systems that record histories of land ownership.
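As an illustration only, a dynamic polygon with a time-stamped attribute history and parent links (supporting inheritance after division or accretion) might be sketched as follows; the class, field names, and example values are hypothetical, not from the chapter.

```python
from dataclasses import dataclass, field

@dataclass
class DynamicPolygon:
    # Vertex ring as (x, y) tuples; the geometry itself may change over time.
    vertices: list
    # Attribute history: (valid_from, valid_to, attributes) records,
    # e.g. crop type for a field, or owner in a cadastral system.
    history: list = field(default_factory=list)
    # Identifiers of parent polygons, recording inheritance after division/accretion.
    parents: list = field(default_factory=list)

    def attributes_at(self, t):
        """Return the attribute record valid at time t, or None."""
        for start, end, attrs in self.history:
            if start <= t < end:
                return attrs
        return None

field_poly = DynamicPolygon(vertices=[(0, 0), (100, 0), (100, 50), (0, 50)])
field_poly.history.append((2018, 2020, {"crop": "wheat"}))
field_poly.history.append((2020, 2023, {"crop": "maize"}))
print(field_poly.attributes_at(2019)["crop"])  # wheat
```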

URL: https://www.sciencedirect.com/science/article/pii/B9780124095489017048

Thiessen Polygon

L. Mu, in International Encyclopedia of Human Geography, 2009

Summary and Conclusion

Thiessen polygons are widely used in geography, computer science, and other fields. Their major application in human geography is to delineate dominant regions or service areas for point data, such as stores or hospitals. Thiessen polygons serve as models of spatial processes, nonparametric techniques in point pattern analysis, organizing structures for displaying spatial data, and information-theoretic approaches to point patterns. Although ordinary Thiessen polygons are the most commonly used in human geography and other fields because of their simplicity, their limitations have hindered applications. These limitations include using point location as the only factor in constructing the model, the lack of a weight for each point, and inadequate consideration of spatial relationships among points. Research developments in weighted Thiessen polygons and in Thiessen polygons for lines, polygons, and three-dimensional (3D) objects have effectively overcome the limitations of ordinary Thiessen polygons. Future research on Thiessen polygons in human geography may investigate Thiessen polygons of multiple dimensions (e.g., multiple socioeconomic variables), of dynamic phenomena (e.g., transportation networks and human movement), and of real-time inputs (e.g., wireless communication).
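As a minimal sketch of the service-area idea, with hypothetical store coordinates: assigning each customer to its nearest store reproduces Thiessen polygon membership, since each polygon contains exactly the locations closest to its seed.

```python
import math

# Hypothetical store locations (x, y); each Thiessen polygon's interior is the
# set of points closer to that store than to any other.
stores = {"A": (0.0, 0.0), "B": (10.0, 0.0), "C": (5.0, 8.0)}

def service_area_of(point):
    """Return the store whose Thiessen polygon contains the point."""
    return min(stores, key=lambda s: math.dist(point, stores[s]))

customers = [(1.0, 1.0), (9.0, 1.0), (5.0, 6.0)]
assignment = {c: service_area_of(c) for c in customers}
print(assignment)  # {(1.0, 1.0): 'A', (9.0, 1.0): 'B', (5.0, 6.0): 'C'}
```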

URL: https://www.sciencedirect.com/science/article/pii/B9780080449104005459

Conventional and Statistical Resource/Reserve Estimation

S.M. Gandhi, B.C. Sarkar, in Essentials of Mineral Exploration and Evaluation, 2016

11.2.1 Polygonal Method

Polygons may be constructed on plans, cross sections, or longitudinal sections by drawing perpendicular bisectors of the lines connecting sample points. The polygons are then planimetered to define the area of the mineral body. The thickness of above-cutoff-grade mineralization is applied to the entire polygon to establish the volume estimate. The individual volumes are then summed and converted to tonnage by applying an appropriate tonnage factor. The average grade of mineralization encountered at the sample point lying within a polygon is taken to represent the grade of the entire volume of material within that polygon. The global grade estimate is determined by summing the sample grades weighted by the areas of their polygons. The major drawbacks of this method are: (1) in an irregularly sampled area, the zone of influence of a drill hole varies inversely with the density of sampling, not according to the mineralization; and (2) the problem of closure at the boundary of a deposit depends on the interpretation and experience of the interpreter.
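The volume, tonnage, and grade bookkeeping described above can be sketched as follows; the polygon areas, thicknesses, grades, and tonnage factor are hypothetical illustrations, not data from the chapter.

```python
# Hypothetical polygons: (area m^2, thickness m, grade %) for the drill hole in each polygon.
polygons = [
    (12_000.0, 5.0, 1.8),
    (8_500.0, 4.2, 2.3),
    (15_200.0, 6.1, 1.2),
]
tonnage_factor = 2.7  # t/m^3 (bulk density)

# Volume per polygon = area * thickness; tonnage = volume * tonnage factor.
volumes = [area * thickness for area, thickness, _ in polygons]
total_tonnes = sum(v * tonnage_factor for v in volumes)

# Global grade: sample grades weighted by polygon area, as in the text
# (a tonnage-weighted mean is a common variant).
total_area = sum(area for area, _, _ in polygons)
global_grade = sum(area * grade for area, _, grade in polygons) / total_area
print(round(total_tonnes), round(global_grade, 2))  # 508734 1.66
```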

URL: https://www.sciencedirect.com/science/article/pii/B9780128053294000181

Data Room Management for Mergers and Acquisitions in the Oil and Gas Industry

Bob Harrison, in Developments in Petroleum Science, 2020

9.11 Thiessen polygon mapping

Thiessen polygon maps, also called Voronoi diagrams, define and delineate proximal regions around individual data points using polygonal boundaries. They are employed in many disciplines, including weather forecasting, hydrology, and geography, and by geologists in the mining sector, who use them to estimate resource volumes from exploratory boreholes.

Polygon maps can be used in the data room to create reasonably accurate PIIP estimates from the seller's base geomodel. The resulting model, based on the reservoir area around each well and extending out to the field oil-water contact (OWC), can be used in numerous ways for rapid sensitivity analysis and "what-if" scenarios.

Fig. 9.17 shows a polygon map and outcome for an oil field after the individual well petrophysical analysis results have been applied. The polygon mapping estimate was within 5% of the geomodel estimate. The sizes of the polygons in the table give an idea of how much of the in-place volume is influenced by each well. On an areal basis one might think the southern polygons are more important, but after applying the relevant petrophysical results it is obvious that the northern polygons contain the lion's share of the PIIP, polygon-5 in particular.

Figure 9.17

Figure 9.17. Workflow shows going from an oil field top structure map with well locations, to a corresponding Thiessen polygon map, then to a table which provides a PIIP estimate within 5% of the geomodel PIIP.

Note that any major structural faults can be used as boundaries for the polygons (i.e., well parameters of a polygon on one side of the fault are not applied to the polygon on the other side of the fault). License boundaries can also act as boundaries for the polygons, although the polygon crossing the lease line would have the same parameters on either side. This allows PIIP to be broken down by lease and so the resultant polygon map can be used for computing working interest based on PIIP in unitization studies.

The Thiessen polygon model permits different fluid properties to be assigned to different polygons if hydrocarbon quality varies across structure. Different well performance (and associated EUR) can be allocated to different polygons to reflect spatial variation in reservoir quality. Also, the field area itself can be altered if the OWC is uncertain by increasing the areas of the outer polygons accordingly.
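As a hedged sketch of how per-polygon PIIP might be tallied, assuming a standard volumetric oil-in-place formula (PIIP = 7758 x area x net pay x porosity x (1 - Sw) / Bo, in stock-tank barrels); the well names and reservoir parameters below are hypothetical, not the field in Fig. 9.17.

```python
ACRE_FT_TO_STB = 7758.0  # barrels of reservoir volume per acre-foot

# Hypothetical Thiessen polygons, one per well, each with its own petrophysics.
wells = {
    "W1": {"area_acres": 400.0, "net_pay_ft": 60.0, "phi": 0.22, "sw": 0.30, "bo": 1.25},
    "W2": {"area_acres": 250.0, "net_pay_ft": 45.0, "phi": 0.18, "sw": 0.40, "bo": 1.25},
}

def polygon_piip(p):
    """Volumetric oil in place (stb) for one polygon."""
    return (ACRE_FT_TO_STB * p["area_acres"] * p["net_pay_ft"]
            * p["phi"] * (1.0 - p["sw"]) / p["bo"])

piip = {w: polygon_piip(p) for w, p in wells.items()}
total = sum(piip.values())
for w, v in piip.items():
    print(f"{w}: {v / 1e6:.1f} MMstb ({100 * v / total:.0f}% of field PIIP)")
```

Changing a polygon's fluid properties, net pay, or area (e.g., to test OWC uncertainty) and re-running gives the rapid "what-if" sensitivities described above.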

The main advantages of a polygon mapping approach (which has been "adjusted" to tolerably match the geomodel's base PIIP) are flexibility and speed to allow the M&A team to make rapid and reasonable judgments on the relative importance of various assumptions made by the seller.

URL: https://www.sciencedirect.com/science/article/pii/B9780444637468000090

Mineral Resource and Ore Reserve Estimation

Swapan Kumar Haldar, in Mineral Exploration (Second Edition), 2018

8.3.4 Polygonal

Polygons are drawn around each drill sampling point so that each polygon extends halfway (d/2) along the line to each neighboring sample point, where d is the sampling distance between two points; the boundaries are thus equidistant between points. The surface area of each polygon (Fig. 8.6) is measured by geometric procedure, by planimeter, or using reserve estimation software. A planimeter, also known as a platometer, is a measuring device used to determine the area of an irregular 2D shape. The block, total reserves, and grade are estimated as described for the triangular, square, and rectangular methods.
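Where no planimeter or reserve estimation software is at hand, the area of an irregular polygon defined by vertex coordinates can be computed with the shoelace formula; a minimal sketch:

```python
def shoelace_area(vertices):
    """Area of a simple polygon given as an ordered list of (x, y) vertices."""
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]  # wrap around to close the ring
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# A 100 m x 50 m rectangular polygon around a drill hole: area 5000 m^2.
print(shoelace_area([(0, 0), (100, 0), (100, 50), (0, 50)]))  # 5000.0
```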

URL: https://www.sciencedirect.com/science/article/pii/B9780128140222000083

GIS Methods and Techniques

Linna Li, ..., Bo Xu, in Comprehensive Geographic Information Systems, 2018

1.22.3.1.5 Modeling positional uncertainty of polygons

Uncertainty indicators for polygons can be used to estimate uncertainty in the area, the perimeter, and the center of gravity (centroid) of a polygon. The most widely applied error indicator is for polygon area. Because polygons are composed of vertices and lines, it is natural to estimate polygon uncertainty from point and line uncertainty. Chrisman and Yandell (1988) proposed a simple statistical model that computes the variance of a polygon's area from the variances of its vertices, under the assumption that the vertex uncertainties are independent and identically distributed. A similar statistical model developed by Ghilani (2000) arrived at the same result using two simpler, less rigorous techniques. Zhang and Kirby (2000) suggested a conditional simulation approach to incorporate spatial correlation between vertices into the modeling of polygon uncertainty. Liu and Tong (2005) employed two approaches to compute the standard deviation of a polygon's area: one based on the variances of the polygon's vertices, the other based on the area of the standard error band of its line segments. A case study shows that the uncertainty of a polygon is caused by the positional uncertainty of its vertices and boundary lines, and that there is no significant difference between the two approaches (Kiiveri, 1997; Griffith, 1989; Prisley et al., 1989; Leung et al., 2004c).

Hunter and Goodchild (1996) integrated vector and raster approaches to address uncertainty in positional data. They enhanced the existing grid cell model and extended it to vector data. Two separate, normally distributed random error grids in the x and y directions are created, with mean and standard deviation equal to the estimate of positional error in the original polygon data set. The error grids are then overlaid with the polygon to create a new but equally probable version of the original polygon, by applying the x and y positional shifts in the error grids to the vertices of the original polygon. This process can be repeated a number of times to assess uncertainty in the final products. Hunter et al. (2000) applied this model to a group of six polygons: by perturbing the set of polygons 20 times, they obtained 20 realizations and calculated the mean polygon areas and their standard deviations to show the area uncertainty of the six polygons.
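The perturbation idea can be illustrated with a stdlib-only Monte Carlo sketch (a simplification, not Hunter and Goodchild's grid-based implementation): each vertex receives independent N(0, sigma^2) shifts in x and y, and the area is recomputed for each realization.

```python
import math
import random

def shoelace_area(vs):
    """Area of a simple polygon from its ordered (x, y) vertices."""
    s = 0.0
    for i in range(len(vs)):
        x1, y1 = vs[i]
        x2, y2 = vs[(i + 1) % len(vs)]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def perturbed_areas(vertices, sigma, n_realizations, seed=0):
    """Recompute the area after adding independent Gaussian shifts to each vertex."""
    rng = random.Random(seed)
    areas = []
    for _ in range(n_realizations):
        moved = [(x + rng.gauss(0, sigma), y + rng.gauss(0, sigma)) for x, y in vertices]
        areas.append(shoelace_area(moved))
    return areas

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
areas = perturbed_areas(square, sigma=0.5, n_realizations=1000)
mean = sum(areas) / len(areas)
sd = math.sqrt(sum((a - mean) ** 2 for a in areas) / (len(areas) - 1))
print(f"mean area {mean:.1f}, area std dev {sd:.1f}")
```

For this square the first-order (Chrisman and Yandell style) prediction is an area standard deviation of sigma x side x sqrt(2), about 7.1, which the simulated value approximates.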

URL: https://www.sciencedirect.com/science/article/pii/B978012409548909610X

Models of Basic Structures: Points and Fields

André Dauphiné, in Geographical Models with Mathematica, 2017

7.1.5.2 Interpolation by Voronoi polygons

Interpolation by Voronoi polygons consists first in determining the polygons, each of which contains the points closer to a specific site than to any other site. The Voronoi partition into polygons depends solely on the positions of the points and is by no means related to the values assigned to them. Afterward, the values of the points situated within each polygon are calculated by one of several algorithms. The simplest consists in assigning to all the points of a polygon the known value of the reference point around which that Voronoi polygon was drawn. The value of the points is often determined instead by a reciprocal function of their distance from the reference point.
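The simplest algorithm above, where every location inherits the known value of its polygon's reference point (i.e., of its nearest site), can be sketched as follows; the site coordinates and values are hypothetical.

```python
import math

# Hypothetical reference sites: ((x, y), observed value).
sites = [((2.0, 3.0), 14.0), ((8.0, 1.0), 9.5), ((5.0, 7.0), 21.0)]

def interpolate(point):
    """Value of the nearest site: constant inside each Voronoi polygon."""
    _, value = min(sites, key=lambda sv: math.dist(point, sv[0]))
    return value

print(interpolate((3.0, 3.5)))  # 14.0 -- nearest site is (2.0, 3.0)
```

The distance-decay variant mentioned in the text would replace the returned constant with some reciprocal function of the distance to the reference point.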

Program 7.7, developed with the help of Henrik Schachner, draws the Voronoi polygons for the positions of the 20 largest French cities, then converts these regions into images, which allows us to describe each polygon with a large number of criteria. The function ComponentMeasurements[] included in this program calculates more than 50 properties, including the area and elongation of each Voronoi cell. Thus, it becomes easy to define theoretical areas of influence around each city.

Program 7.7

Voronoi polygons for a group of places

ClearAll["Global`*"]
country = "France";
ny = ToExpression@DialogInput[DynamicModule[{name = ""},
    Column[{"How many cities?", InputField[Dynamic[name], String],
      ChoiceButtons[{DialogReturn[name], DialogReturn[]}]}]]];
coord = Take[Table[Reverse[CityData[c, "Coordinates"]], {c, CityData[{All, country}]}], ny];
border = ConvexHullMesh[coord];
voron = VoronoiMesh[coord];
chm = ConvexHullMesh @@@ MeshPrimitives[voron, 2];
ri = RegionIntersection[border, #] & /@ chm;
grLines = MeshPrimitives[#, 1] & /@ ri;
gr = Graphics[grLines];
ima = Image[gr] // Binarize;
ima2 = MorphologicalComponents[ima] // Colorize;
ima3 = RemoveBackground[ima2]
ComponentMeasurements[ima3, {"Area", "Elongation"}]

Geographers who choose this program can adapt it to their own needs. Above all, they can find graphical updates of this program in our discussions on the website community.wolfram.com.

URL: https://www.sciencedirect.com/science/article/pii/B9781785482250500105

Water Resources Systems Planning and Management

S.K. Jain, V.P. Singh, in Developments in Water Science, 2003

Thiessen Polygon

The Thiessen polygon method is based on the concept of proximal mapping: a weight is assigned to each station according to the area that is closer to that station than to any other station. This area is found by drawing perpendicular bisectors of the lines joining neighboring stations, so that polygons are formed around each station (Fig. 2.7). It is assumed that these polygons are the boundaries of the effective area represented by the station. The area governed by each station is planimetered and expressed as a percentage of the total area. The weighted average precipitation for the basin is computed by multiplying the precipitation received at each station by its weight and summing. The weighted average precipitation is given by:

Fig. 2.7. The Thiessen polygon method for computing the mean areal rainfall.

(2.9)   $P = \sum_{i=1}^{n} P_i W_i$

in which Wi = Ai/A, where Ai is the area represented by station i and A is the total catchment area. Clearly, the weights sum to unity. An advantage of this method is that data from stations outside the catchment may also be used. A major drawback is the assumption that precipitation between two stations varies linearly, and the method makes no allowance for variation due to orography. In this method, the precipitation depth changes abruptly at polygon boundaries. Also, whenever stations are added to or removed from the network, a new set of polygons has to be drawn.

The method gives no indication of the accuracy of the results. If a few observations are missing, it may be more convenient to estimate the missing data than to construct a new set of polygons.

Example 2.4: For a catchment, the rainfall data at six stations for the month of July, along with the station weights, are given in Table 2.4. Find the weighted average rainfall for the catchment using the Thiessen polygon method.

Table 2.4. Estimation of the mean areal rainfall by the Thiessen polygon method.

S.N. Station Name Station weight Rainfall (mm) Weighted rainfall (mm)
1. Sohela 0.06 262.0 15.7
2. Bijepur 0.12 521.0 62.5
3. Padampur 0.42 177.0 74.3
4. Paikmal 0.28 338.0 94.6
5. Binka 0.04 401.6 16.1
6. Bolangir 0.08 158.0 12.6
Weighted catchment rainfall 275.8

Solution: Using the observed rainfall and the station weights, the weighted rainfall at each station is computed; summation gives the weighted average rainfall for the catchment. The computations are shown in Table 2.4.
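As a quick check of Table 2.4 (the Binka and Bolangir depths used below are the ones consistent with the tabulated weighted rainfall column, which appears transposed in some printings):

```python
# Station name, Thiessen weight Wi, July rainfall Pi (mm).
stations = [
    ("Sohela", 0.06, 262.0),
    ("Bijepur", 0.12, 521.0),
    ("Padampur", 0.42, 177.0),
    ("Paikmal", 0.28, 338.0),
    ("Binka", 0.04, 401.6),
    ("Bolangir", 0.08, 158.0),
]

assert abs(sum(w for _, w, _ in stations) - 1.0) < 1e-9  # weights sum to unity

# Weighted rainfall per station, rounded to one decimal as in the table;
# the catchment value is the sum (Eq. 2.9).
weighted = [round(w * p, 1) for _, w, p in stations]
print(weighted, round(sum(weighted), 1))
# [15.7, 62.5, 74.3, 94.6, 16.1, 12.6] 275.8
```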

URL: https://www.sciencedirect.com/science/article/pii/S0167564803800564

GIS Applications for Environment and Resources

Jürgen Böhner, Benjamin Bechtel, in Comprehensive Geographic Information Systems, 2018

2.10.3.1.1 Thiessen polygons

The Thiessen polygon (TP) method, frequently also referred to as Voronoi tessellation (Zhu, 2016), was not originally aimed at estimating regular gridded surfaces but at geometrically defining the spatial "responsibility" of point observations or information at the points Pi, by delimiting the area that is closest to a particular data point. The optimal determination of TP is mostly performed using the Delaunay triangulation algorithm, which results in a gapless, non-overlapping triangulated irregular network (TIN) and moreover ensures maximization of the minimum angle in each triangle (McCloy, 2005). Based on the triangle faces, each defined by three vertices and edges, the Thiessen polygons are geometrically defined by the perpendicular bisectors of the edges, halfway between adjacent vertices. As a result, the plane is discretized into a finite set of areas, each assigned to a specific observation point (or seed). Since no new values are estimated or interpolated, and only the observed values vi at points Pi are assigned to the respective surface units, TP is not an interpolation method in the narrower sense. However, the TP vector data operation enables "Natural Neighbor" interpolation, which estimates weighted means from neighboring values vi (cf. Hofstra et al., 2008), or may be directly rasterized to determine the Nearest Neighbor, i.e., the nearest neighboring point value, for each grid node. Although both methods yield rather poor representations of continuous climate fields, the Nearest Neighbor method is useful when trying to separate spatially discrete climate phenomena, e.g., areas with and without precipitation.
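Rasterizing TP as a Nearest Neighbor grid can be sketched as follows: each grid node takes the observed value vi of its closest station Pi. The station coordinates, values, and grid geometry below are hypothetical; a zero-precipitation station shows how the method separates areas with and without precipitation.

```python
import math

# Hypothetical stations: ((x, y), observed precipitation vi).
stations = [((1.0, 1.0), 0.0), ((4.0, 1.0), 12.5), ((2.5, 4.0), 3.2)]

def nearest_value(x, y):
    """Observed value at the station closest to (x, y)."""
    return min(stations, key=lambda sv: math.dist((x, y), sv[0]))[1]

# Rasterize onto a 6 x 6 grid with unit cell size.
nx, ny, cell = 6, 6, 1.0
grid = [[nearest_value(j * cell, i * cell) for j in range(nx)] for i in range(ny)]
for row in grid:
    print(row)
```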

URL: https://www.sciencedirect.com/science/article/pii/B9780124095489096330

Applying modeling and optimization tools to existing city quarters

Mario Potente Prieto, ..., Giovanni Tardioli, in Urban Energy Systems for Low-Carbon Cities, 2019

10.6.2.3 Zoning and boundary conditions

Once polygons and vertices are defined, the simulation input file requires the definition of all boundary conditions and the association of each surface with a thermal zone. First, all surfaces are classified as glazing, floor or ceiling, roof, ground, or external wall. Zoning and boundary condition definition are then performed automatically by scripting. External walls, roofs, and ground floors require different types of boundary conditions; this can be handled easily in the scripting environment by considering the position of each surface. For internal partitions, however, such as ceiling-to-floor adjacencies and building-to-building adjacencies, more sophisticated calculations are required to estimate the relative orientations and contact areas between surfaces. Building adjacencies are evaluated considering the relative position of the buildings and the area surrounding the target building. The script defines the boundary conditions automatically. An example of building adjacency and complex boundary condition definition is shown in a SketchUp visualization of the building energy model in Fig. 10.75.

Fig. 10.75

Fig. 10.75. AUSBEMA: surface processing and zoning.
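A heavily simplified, hypothetical sketch of the classification step described above, not the authors' actual scripts: a boundary-condition category is assigned from the z-component of a surface's unit normal and its lowest elevation. The thresholds and category names are illustrative assumptions, and glazing (which would need opening or material information) is omitted.

```python
def classify_surface(normal_z, z_min, building_z_min, tol=0.01):
    """Assign a boundary-condition category from the z-component of a surface's
    unit normal and its lowest elevation (illustrative thresholds)."""
    if normal_z > 0.9:      # facing up: roof
        return "roof"
    if normal_z < -0.9:     # facing down: ground slab or intermediate floor
        return "ground" if abs(z_min - building_z_min) < tol else "floor"
    return "external_wall"  # near-vertical surfaces

print(classify_surface(1.0, 10.0, 0.0))   # roof
print(classify_surface(-1.0, 0.0, 0.0))   # ground
print(classify_surface(0.0, 3.0, 0.0))    # external_wall
```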

Thermal zones are defined from the derived information for each floor, and each surface is linked to its thermal zone. Further work and a higher level of information are required to define internal partitions and to better subdivide each thermal zone. A possible future development is the definition of core and perimeter zones following ASHRAE Standard 90.1, Annex G (ASHRAE, 2010). In addition, information on the internal partitions of the building is required to generate accurate thermal zones that take into account unheated surfaces and volumes.

URL: https://www.sciencedirect.com/science/article/pii/B978012811553400010X