The Dialafara area is part of the highly endowed Kédougou-Kéniéba Inlier (KKI), West-Malian gold belt, which corresponds to a Paleoproterozoic window through the West African Craton (WAC). This study first integrates geophysical data interpretation with litho-structural field reconnaissance and then proposes a new litho-structural map of the Dialafara area. The area shows a variety of lithologies characterized by volcanic and volcano-sedimentary units, metasediments and plutonic intrusions. These lithologies were affected by a complex superposition of structures of unequal importance, defining three deformation phases (D<sub>D1</sub> to D<sub>D3</sub>) under ductile to brittle regimes. These features make it possible to draw a new litho-structural map, which shows that the Dialafara area has a more complex lithological and structural context than the one presented on the regional map of the KKI. This suggests that the area, situated between two world-class gold districts, could be a promising site for exploration.
Multidatabase systems are designed to achieve schema integration and data interoperation among distributed and heterogeneous database systems, but data model heterogeneity and schema heterogeneity make this a challenging task. A multidatabase common data model based on XML, named the XML-based Integration Data Model (XIDM), is first introduced; it is well suited to integrating different types of schemas. An XIDM-based approach to schema mapping in multidatabase systems is then presented. The mappings include global mappings, which deal with horizontal and vertical partitioning between global schemas and export schemas, and local mappings, which handle the transformation between export schemas and local schemas. Finally, the illustration and implementation of these schema mappings in a multidatabase prototype, the Panorama system, are discussed. The implementation results demonstrate that XIDM is an efficient model for managing multiple heterogeneous data sources and that the XIDM-based schema mapping approach behaves very well when integrating relational and object-oriented database systems as well as file systems.
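To make the mapping idea concrete, here is a minimal Python sketch that unions the record elements of two XML-encoded export schemas into one global schema, in the spirit of horizontal partitioning; the tag names (export_schema, record, global_schema) are illustrative stand-ins, not the actual XIDM vocabulary.

```python
# Illustrative horizontal-partition mapping over XML-encoded export
# schemas; all element names are hypothetical, not the paper's XIDM.
import xml.etree.ElementTree as ET

EXPORT_A = "<export_schema site='A'><record id='1'><name>Ada</name></record></export_schema>"
EXPORT_B = "<export_schema site='B'><record id='2'><name>Bob</name></record></export_schema>"

def merge_horizontal(fragments):
    """Union the <record> elements of several export schemas into one
    global schema (horizontal partitioning: same structure, disjoint rows)."""
    global_root = ET.Element("global_schema")
    for frag in fragments:
        for rec in ET.fromstring(frag).iter("record"):
            global_root.append(rec)
    return global_root

root = merge_horizontal([EXPORT_A, EXPORT_B])
print(ET.tostring(root, encoding="unicode"))
```

A vertical mapping would instead join attributes of the same entity split across sites; the control flow is the same with a key-based join in place of the union.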
In the frame of landslide susceptibility assessment, a spectral library was created to support the identification of materials confined to a particular region using remote sensing images. This library, called the Pakistan spectral library (pklib) version 0.1, contains the analysis data of sixty rock samples taken in the Balakot region in Northern Pakistan. The spectral library is implemented as an SQLite database; its structure and naming are inspired by the conventions of the ASTER Spectral Library. The usability, application and benefit of the pklib were evaluated using two approaches, a multivariate one and a spectral-based one. The spectral information was used to create indices, which were applied to Landsat and ASTER data to support the spatial delineation of outcropping rock sequences in stratigraphic formations. The application of the indices introduced in this paper helps to identify spots where specific lithological characteristics occur. Especially in areas with sparse or missing detailed geological mapping, spectral discrimination via remote sensing data can speed up the survey. The library can be used not only to support the improvement of factor maps for landslide susceptibility analysis, but also to provide a geoscientific basis for further analysis of lithological spots in numerous regions of the Hindu Kush.
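As a hedged illustration of how such a library might be queried, the sketch below stores two reflectance values of a hypothetical sample in an in-memory SQLite table and derives a simple SWIR band-ratio index from them; the table layout, column names, and the particular ratio are assumptions, not the real pklib schema.

```python
# Minimal spectral-library lookup plus a band-ratio index; schema and
# ratio are illustrative, not the actual pklib convention.
import sqlite3
import numpy as np

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE spectra (sample TEXT, wavelength_nm REAL, reflectance REAL)")
con.executemany("INSERT INTO spectra VALUES (?, ?, ?)",
                [("limestone_01", 1650.0, 0.61), ("limestone_01", 2215.0, 0.42)])

# Derive a simple ratio index (SWIR1/SWIR2, roughly Landsat bands 5/7)
# from the stored reflectances of one sample.
rows = dict(con.execute(
    "SELECT wavelength_nm, reflectance FROM spectra WHERE sample = ?",
    ("limestone_01",)).fetchall())
index = rows[1650.0] / rows[2215.0]
print(f"ratio index for limestone_01: {index:.2f}")

# The same ratio applied per pixel to image arrays (one array per band)
# highlights where the targeted lithological signature occurs.
swir1 = np.array([[0.60, 0.58], [0.33, 0.31]])
swir2 = np.array([[0.41, 0.40], [0.30, 0.29]])
print(swir1 / swir2)
```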
A question about the analytical capability of Google maps is answered for three examples: pin maps, polyline maps and polygon maps programmed with the third version of the Google Maps API. One map reads XML data stored on the home server, another downloads its data from an online fusion table, and the third includes pre-programmed data. Each map permits users to query mashup layers after the map has loaded. However, an analytical capability comparable to GIS would require users to have access to their data for analysis with their own functions while the map is loading. The technical constraint of asynchronous data loading in Google maps is illustrated for each map. In conclusion, only one map has an analytical capability, and it is achieved by means of deprecated synchronous loading of data.
The statistical map is usually used to indicate the quantitative features of various socio-economic phenomena among regions, on a base map of administrative divisions or on other base maps connected with statistical units. Making use of geographic information system (GIS) techniques, and supported by AutoCAD software, the author of this paper puts forward a practical method for making statistical maps and develops a software package (SMT) for making small-scale statistical maps, written in the C language.
A land cover map for a part of North Sinai was produced using the FAO Land Cover Classification System (LCCS) of 2004. The standard FAO classification scheme provides a standardized system of classification that can be used to analyze spatial and temporal land cover variability in the study area. This approach also has the advantage of facilitating the integration of Sinai land cover mapping products into regional and global land cover datasets. The total study area is 7450 km2 (1,773,842 feddans). The landscape classification was performed on SPOT4 data acquired in 2011 using combined multi-spectral bands of 20 m spatial resolution. A geographic information system (GIS) was used to edit the classification result in order to reach the maximum possible accuracy and to include all necessary information. The identified vegetative land cover classes of the study area are irrigated herbaceous crops, irrigated tree crops and rain-fed tree crops. The non-vegetated land covers include bare rock, bare soil, stony bare soil, very stony bare soil, bare soil with salt crusts, loose and shifting sands, and sand dunes. The water bodies were classified as artificial perennial water bodies (fish ponds and irrigation canals) and natural perennial water bodies, either standing (lakes) or flowing (rivers). Artificial surfaces in the study area include linear and non-linear features. The produced maps and the statistics of the different land covers are presented.
Extracting and mining social network information from massive Web data is of both theoretical and practical significance. However, a defining feature of this task is large-scale data processing, which remains a great challenge to be addressed. MapReduce is a distributed programming model in which distributed tasks are expressed through just two functions, map and reduce. Nevertheless, the model does not directly support the processing of heterogeneous datasets, which are common on the Web. This article proposes a new framework, Map-Reduce-Merge, which extends the original MapReduce framework with a merge phase that can efficiently solve the problems of heterogeneous data processing. At the same time, several optimizations and improvements are made based on the characteristics of Web data.
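The following single-process Python sketch illustrates the Map-Reduce-Merge control flow described above: two heterogeneous datasets are map/reduced independently, and a final merge phase joins the two reduced outputs on their shared key. The data and function names are invented for the example.

```python
# Toy Map-Reduce-Merge: map/reduce each dataset, then merge by key.
from collections import defaultdict

pages = [("u1", "home"), ("u1", "cart"), ("u2", "home")]   # web log
users = [("u1", "alice"), ("u2", "bob")]                   # user table

def map_reduce(records, reducer):
    groups = defaultdict(list)
    for key, value in records:                 # map: emit (key, value)
        groups[key].append(value)
    return {k: reducer(v) for k, v in groups.items()}      # reduce per key

clicks = map_reduce(pages, len)                # u1 -> 2 clicks, u2 -> 1
names  = map_reduce(users, lambda v: v[0])     # u1 -> "alice", ...

# merge phase: relational-style join of the two reduced datasets
merged = {k: (names[k], clicks.get(k, 0)) for k in names}
print(merged)   # {'u1': ('alice', 2), 'u2': ('bob', 1)}
```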
Open-source and free tools are readily available to the public to process data and assist producers in making management decisions related to agricultural landscapes. On-the-go soil sensors are being used as a proxy to develop digital soil maps because of the data they can collect and their ability to cover a large area quickly. Machine learning, a subcomponent of artificial intelligence, makes predictions from data. Intermixing open-source tools, on-the-go sensor technologies, and machine learning may improve Mississippi soil mapping and crop production. This study aimed to evaluate machine learning for mapping apparent soil electrical conductivity (EC<sub>a</sub>) collected with an on-the-go sensor system at two sites (MF2 and MF9) on a research farm in Mississippi. Machine learning tools (a support vector machine) incorporated in Smart-Map, an open-source application, were used to evaluate the sites and derive the apparent electrical conductivity maps. Autocorrelation of the shallow (EC<sub>as</sub>) and deep (EC<sub>ad</sub>) readings was statistically significant at both locations (Moran’s I, p < 0.001); however, the spatial correlation was greater at MF2. According to the leave-one-out cross-validation results, better models were developed for EC<sub>as</sub> than for EC<sub>ad</sub>. Spatial patterns were observed for the EC<sub>as</sub> and EC<sub>ad</sub> readings in both fields, and the patterns for the EC<sub>ad</sub> readings were more distinct than those for the EC<sub>as</sub> measurements. The results indicated that machine learning was valuable for deriving apparent electrical conductivity maps in the two Mississippi fields, and that location and depth played a role in the machine learner’s ability to develop maps.
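A generic sketch of this workflow (not Smart-Map's actual code) is shown below: synthetic on-the-go readings are scored by leave-one-out cross-validation with a support vector regressor (scikit-learn's SVR), then interpolated to a regular grid to form the EC<sub>a</sub> map.

```python
# Generic SVM-based ECa mapping with leave-one-out cross-validation;
# the sensor data here are synthetic stand-ins.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(40, 2))               # sensor positions (m)
eca = 10 + 0.05 * xy[:, 0] + rng.normal(0, 0.5, 40)  # shallow ECa (mS/m)

model = SVR(kernel="rbf", C=10.0)
loo_mae = -cross_val_score(model, xy, eca, cv=LeaveOneOut(),
                           scoring="neg_mean_absolute_error").mean()
print("LOO mean absolute error:", loo_mae)

# Predict on a regular grid to draw the ECa map.
gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
eca_map = model.fit(xy, eca).predict(grid).reshape(gx.shape)
```

Mean absolute error is used for scoring because R<sup>2</sup> is undefined on the single-sample folds that leave-one-out produces.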
To design microstructure and microhardness in the additive manufacturing (AM) of nickel (Ni)-based superalloys, the present work develops a novel data-driven approach that combines physics-based models, experimental measurements, and a data-mining method. The simulation is based on a computational thermal-fluid dynamics (CtFD) model, which can obtain thermal behavior, solidification parameters such as cooling rate, and the dilution of the solidified clad. Based on the computed thermal information, dendrite arm spacing and microhardness are estimated using well-tested mechanistic models. Experimental microstructure and microhardness are determined and compared with the simulated values for validation. To visualize process-structure-property (PSP) linkages, the simulation and experimental datasets are input to a data-mining model, a self-organizing map (SOM). The design windows of the process parameters under multiple objectives can be obtained from the visualized maps. The proposed approaches can be utilized in AM and other data-intensive processes. Data-driven linkages between process, structure, and properties have the potential to benefit online process monitoring and control in order to derive an ideal microstructure and mechanical properties.
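To show how a SOM projects such records onto a 2-D map for visual design-window exploration, here is a minimal NumPy implementation trained on synthetic four-feature PSP records; the feature set and all hyperparameters are invented for illustration.

```python
# Minimal self-organizing map in NumPy; data are synthetic PSP records.
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=(200, 4))        # rows: [power, speed, cooling, HV]
data = (data - data.mean(0)) / data.std(0)

grid_w, grid_h = 8, 8
weights = rng.normal(size=(grid_w, grid_h, 4))
ii, jj = np.meshgrid(np.arange(grid_w), np.arange(grid_h), indexing="ij")

for t, x in enumerate(data[rng.permutation(len(data))]):
    lr = 0.5 * np.exp(-t / 200)                       # decaying learning rate
    sigma = 3.0 * np.exp(-t / 200)                    # shrinking neighbourhood
    d = ((weights - x) ** 2).sum(axis=2)
    bi, bj = np.unravel_index(d.argmin(), d.shape)    # best-matching unit
    h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
    weights += lr * h[..., None] * (x - weights)      # pull neighbourhood

# One "component plane" per feature is how design windows are read off
# the trained map; here, the hardness plane.
print(weights[..., 3].round(2))
```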
Compressive sensing is a powerful method for the reconstruction of sparsely-sampled data, based on statistical optimization. It can be applied to a range of flow measurement and visualization data, and in this work we show its use in groundwater mapping. Due to the scarcity of water in many regions of the world, including the southwestern United States, monitoring and management of groundwater is of utmost importance. A complete mapping of groundwater is difficult since the monitored sites are far from one another, and thus the data sets are considered extremely “sparse”. To overcome this difficulty, compressive sensing is an ideal tool, as it bypasses the classical Nyquist criterion. We show that compressive sensing can effectively be used for the reconstruction of groundwater level maps, by validating against data. This approach can have an impact on geographical sensing and information, as effective monitoring and management are enabled without constructing numerous or expensive measurement sites for groundwater.
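A toy version of such a reconstruction is sketched below: a 1-D profile that is sparse in the DCT basis is recovered from a few random point "wells" by iterative soft thresholding (ISTA). The signal, sizes, and penalty are illustrative; the paper's actual optimization may differ.

```python
# Toy compressive-sensing recovery of a DCT-sparse profile from few samples.
import numpy as np
from scipy.fft import idct

n, m = 256, 40                            # signal length, number of wells
rng = np.random.default_rng(2)
coef = np.zeros(n)
coef[[3, 17, 40]] = [5.0, -2.0, 1.0]      # sparse spectrum
signal = idct(coef, norm="ortho")         # the "true" groundwater profile

rows = rng.choice(n, m, replace=False)    # monitored locations
Psi = idct(np.eye(n), axis=0, norm="ortho")   # DCT synthesis matrix
A, y = Psi[rows], signal[rows]            # measurement matrix / data

x = np.zeros(n)
step = 1.0 / np.linalg.norm(A, 2) ** 2
for _ in range(500):                      # ISTA: gradient step + shrinkage
    x = x + step * A.T @ (y - A @ x)
    x = np.sign(x) * np.maximum(np.abs(x) - step * 0.01, 0)

print("relative error:",
      np.linalg.norm(Psi @ x - signal) / np.linalg.norm(signal))
```

The recovery succeeds with far fewer samples than the Nyquist criterion would demand, which is the point made in the abstract.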
Data warehouses (DW) must integrate information from the different areas and sources of an organization in order to extract knowledge relevant to decision-making. DW development is not an easy task, which is why various design approaches have been put forward. These approaches can be classified into three paradigms according to the origin of the information requirements: supply-driven, demand-driven, and hybrids of the two. This article compares methodologies for the multidimensional design of DW through a systematic mapping as the research methodology. For each paradigm, the study presents the main characteristics of the methodologies, their notations, and the problem areas exhibited by each. The results indicate that there is no follow-up of the complete process of implementing a DW in either an academic or an industrial environment; moreover, there is no evidence of attempts to address the design and development of a DW by applying and comparing the different methodologies existing in the field.
By employing the unique phenological feature of winter wheat extracted from the peak before winter (PBW), and the advantages of moderate resolution imaging spectroradiometer (MODIS) data with high temporal resolution and intermediate spatial resolution, a remote sensing-based model for mapping winter wheat on the North China Plain was built through integration with Landsat images and land-use data. First, a phenological window, the PBW, was drawn from time-series MODIS data. Next, feature extraction was performed for the PBW to reduce the feature dimension and enhance its information. Finally, a regression model was built to relate the phenological feature to the sample data. The amount of information in the PBW was evaluated and compared with that of the main peak (MP). The relative precision of the mapping reached up to 92% in comparison to the Landsat sample data, and ranged between 87% and 96% in comparison to the statistical data. These results are sufficient to satisfy the accuracy requirements for winter wheat mapping at a large scale. Moreover, the proposed method can obtain the distribution information for winter wheat earlier than previous studies. This study could shed light on the monitoring of winter wheat in China using this unique phenological feature.
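The sketch below illustrates the PBW idea on synthetic data: the maximum NDVI inside an assumed autumn window serves as the phenological feature, and the pixel-wise wheat fraction is regressed on it. The dates, window, and regression form are placeholders for the paper's calibrated model.

```python
# PBW-style feature extraction and regression on synthetic NDVI series.
import numpy as np

rng = np.random.default_rng(3)
n_pixels, n_dates = 500, 46                 # one year of 8-day composites
doy = np.linspace(1, 361, n_dates)
wheat_frac = rng.uniform(0, 1, n_pixels)    # "Landsat sample data"

# Winter wheat greens up in autumn, so its NDVI peaks before winter.
autumn = (doy > 280) & (doy < 340)
ndvi = 0.2 + 0.05 * rng.normal(size=(n_pixels, n_dates))
ndvi[:, autumn] += 0.5 * wheat_frac[:, None]

pbw = ndvi[:, autumn].max(axis=1)           # peak-before-winter feature
slope, intercept = np.polyfit(pbw, wheat_frac, 1)
pred = slope * pbw + intercept
print("RMSE of wheat fraction:", np.sqrt(((pred - wheat_frac) ** 2).mean()))
```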
Since the creation of spatial data is a costly and time-consuming process, researchers in this domain in most cases rely on open-source spatial attributes for their specific purpose. Likewise, the present research aims at mapping landslide susceptibility in the metropolitan area of Chittagong district of Bangladesh, utilizing open-source spatial data obtainable from various web portals. We targeted a study region where rainfall-induced landslides reportedly cause casualties as well as property damage each year. We employed a multi-criteria evaluation (MCE) technique, i.e., a heuristic, knowledge-driven approach based on expert opinions from various disciplines, for landslide susceptibility mapping in a geographic information system (GIS) environment, combining nine causative factors: geomorphology, geology, land use/land cover (LULC), slope, aspect, plan curvature, drainage distance, relative relief and vegetation. The final susceptibility map was divided into five hazard classes, viz. very low, low, moderate, high, and very high, representing 22 km2 (13%), 90 km2 (53%), 24 km2 (15%), 22 km2 (13%) and 10 km2 (6%) of the area, respectively. This study might be beneficial to the local authorities and other stakeholders concerned with disaster risk reduction and mitigation activities. Moreover, it can also be advantageous for risk-sensitive land use planning in the study area.
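As a minimal illustration of the heuristic MCE overlay, the sketch below combines standardized factor layers into a susceptibility score with expert-style weights and slices it into the five classes; the weights and random stand-in rasters are invented, whereas the paper weights nine factors from expert opinion.

```python
# Weighted-overlay MCE sketch with invented weights and factor rasters.
import numpy as np

rng = np.random.default_rng(4)
shape = (100, 100)
factors = {                                    # already scaled to 0..1
    "slope": rng.random(shape),
    "geology": rng.random(shape),
    "drainage_distance": rng.random(shape),
}
weights = {"slope": 0.5, "geology": 0.3, "drainage_distance": 0.2}

score = sum(w * factors[name] for name, w in weights.items())

# Five classes, very low .. very high, by equal-interval slicing.
bins = np.linspace(score.min(), score.max(), 6)
classes = np.digitize(score, bins[1:-1])       # values 0..4
labels = ["very low", "low", "moderate", "high", "very high"]
for k, lab in enumerate(labels):
    print(f"{lab:9s}: {100 * (classes == k).mean():4.1f} % of area")
```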
The basis of accurate mineral resource estimation is a geological model which replicates the nature and style of the orebody. Key inputs into the generation of a good geological model are the sample data and mapping information. The Obuasi Mine sample data, which carried many legacy issues, were subjected to a robust validation process and integrated with mapping information to generate an accurate geological orebody model for mineral resource estimation in Block 8 Lower. Validation of the sample data focused on replacing missing collar coordinates and missing assays, correcting the magnetic declination used to convert the downhole surveys from true to magnetic, fixing missing lithology, and finally assigning confidence numbers to all the sample data. The replaced coordinates ensured that the sample data plotted at their correct locations in space, as intended at the planning stage. The magnetic declination, which had been kept constant throughout the years even though it changes every year, was also corrected in the validation project. The corrected magnetic declination ensured that the drillholes plotted on their accurate trajectories as per the planned azimuths and reflected the true positions of the intercepted mineralized fissure(s), which was previously not the case and had marked a major blot in the modelling of the Obuasi orebody. The incorporation of mapped data with the validated sample data in the wireframes resulted in a better interpretation of the orebody. The updated mineral resource, generated by domaining quartz separately from the sulphides and compared with the old resource, showed that the sulphide tonnes in the old resource estimate were overestimated by 1% and the grade by 8.5%.
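One of the validation steps lends itself to a short sketch: converting magnetic downhole azimuths to true azimuths with a year-specific declination rather than a single constant. The declination values, column names, and sign convention below are hypothetical.

```python
# Hypothetical year-aware declination correction for downhole surveys.
import pandas as pd

surveys = pd.DataFrame({
    "hole_id": ["OB001", "OB001", "OB145"],
    "year":    [1998,    1998,    2015],
    "azimuth_magnetic": [118.0, 121.5, 96.0],
})

# Declination varies year to year; applying one constant across all
# years is the legacy error that the validation corrected.
declination_by_year = {1998: -7.2, 2015: -4.9}   # degrees, illustrative

surveys["azimuth_true"] = (
    surveys["azimuth_magnetic"] + surveys["year"].map(declination_by_year)
) % 360
print(surveys)
```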