Journal Articles
2,143 articles found
Local and global approaches of affinity propagation clustering for large scale data (Cited: 15)
1
Authors: Ding-yin XIA, Fei WU, Xu-qing ZHAN, Yue-ting ZHUANG. 《Journal of Zhejiang University-Science A (Applied Physics & Engineering)》, SCIE EI CAS CSCD, 2008, No. 10, pp. 1373-1381 (9 pages)
Recently a new clustering algorithm called 'affinity propagation' (AP) has been proposed, which efficiently clusters sparsely related data by passing messages between data points. In many cases, however, we want to cluster large scale data whose similarities are not sparse. This paper presents two variants of AP for grouping large scale data with a dense similarity matrix. The local approach is partition affinity propagation (PAP) and the global method is landmark affinity propagation (LAP). PAP passes messages within subsets of the data first and then merges them after a number of initial iterations; it can effectively reduce the number of clustering iterations. LAP passes messages between landmark data points first and then clusters the non-landmark data points; it is a global approximation method to speed up clustering. Experiments are conducted on many datasets, such as random data points, manifold subspaces, images of faces and Chinese calligraphy, and the results demonstrate that the two approaches are feasible and practicable.
Keywords: clustering, affinity propagation, large scale data, partition affinity propagation, landmark affinity propagation
Data Scale, Data Scope and Platform Enterprise Performance: Insights from Digital Platform M&As
2
Authors: Liu Yubin, Zhang Guijuan, Xu Honghai. 《China Economist》, 2024, No. 5, pp. 82-106 (25 pages)
Data is a key asset for digital platforms, and mergers and acquisitions (M&As) are an important way for platform enterprises to acquire it. The types of data obtained from intra-industry and cross-sector M&As differ, as does the extent to which they interact within or between platforms. The impact of such data on corporate market performance is an important question to consider when selecting strategies for digital platform M&As. Based on our research on advertising-driven platforms, we developed a two-stage Hotelling game model for comparing the market performance effects of intra-industry M&As and cross-sector M&As for digital platforms. We carried out an empirical test using relevant data from advertising-driven digital platforms between 2009 and 2021, as well as a case study on Baidu's M&A activities. Our research found that intra-industry M&As driven by "data economies of scale" and cross-sector M&As driven by "data economies of scope" are both beneficial to the market performance of platform enterprises. Intra-industry M&As have a more significant positive effect on market performance because the same types of data are easier to integrate and develop the "network effect of data scale". From a data factor perspective, this paper reveals the inherent economic logic by which different types of M&As influence the market performance of digital platforms, and offers policymaking recommendations for digital platforms selecting M&A strategies based on data scale, data scope, and the network effect of data.
Keywords: digital platforms, intra-industry M&A, cross-sector M&A, data economies of scale, data economies of scope
Trend Analysis of Large-Scale Twitter Data Based on Witnesses during a Hazardous Event: A Case Study on California Wildfire Evacuation
3
Authors: Syed A. Morshed, Khandakar Mamun Ahmed, Kamar Amine, Kazi Ashraf Moinuddin. 《World Journal of Engineering and Technology》, 2021, No. 2, pp. 229-239 (11 pages)
Social media data have created a paradigm shift in assessing situational awareness during natural disasters or emergencies such as wildfires, hurricanes and tropical storms. As an emerging data source, Twitter is an effective and innovative digital platform for observing trends from the perspective of social media users who are direct or indirect witnesses of a calamitous event. This paper collects and analyzes Twitter data related to the recent wildfire in California to perform a trend analysis by classifying firsthand and credible information from Twitter users. The work investigates tweets on the wildfire and classifies them by witness type: 1) direct witnesses and 2) indirect witnesses. The collected and analyzed information can be useful to law enforcement agencies and humanitarian organizations for communicating and verifying situational awareness during wildfire hazards. Trend analysis is an aggregated approach that includes sentiment analysis and topic modeling performed through domain-expert manual annotation and machine learning, and ultimately builds a fine-grained analysis to assess evacuation routes and provide valuable information to firsthand emergency responders.
Keywords: wildfire, evacuation, Twitter, large-scale data, topic model, sentiment analysis, trend analysis
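A minimal sketch of the sentiment-scoring step in such a pipeline, using a tiny hand-made lexicon as a stand-in for the paper's annotation-plus-machine-learning approach. The word lists and example tweets are illustrative assumptions, not the authors' data.

```python
# Lexicon-based sentiment scoring of tweets: count positive minus negative
# lexicon hits per tweet. The lexicons here are hypothetical placeholders.
import re

POSITIVE = {"safe", "rescued", "contained", "thankful", "help"}
NEGATIVE = {"fire", "smoke", "evacuate", "destroyed", "danger"}

def sentiment_score(tweet: str) -> int:
    """Return (#positive - #negative) lexicon hits in a tweet."""
    tokens = re.findall(r"[a-z]+", tweet.lower())
    return sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)

tweets = [
    "Huge smoke near the ridge, everyone evacuate now",    # likely negative
    "Family is safe, thankful for the help of responders", # likely positive
]
scores = [sentiment_score(t) for t in tweets]
print(scores)  # [-2, 3]
```

A production pipeline would replace the lexicon with a trained classifier, but the aggregation into per-tweet scores and per-topic trends follows the same shape.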
An analytical model for estimating rock strength parameters from small-scale drilling data (Cited: 13)
4
Authors: Sajjad Kalantari, Alireza Baghbanan, Hamid Hashemalhosseini. 《Journal of Rock Mechanics and Geotechnical Engineering》, SCIE CSCD, 2019, No. 1, pp. 135-145 (11 pages)
The small-scale drilling technique can be a fast and reliable method to estimate rock strength parameters. It needs to link the operational drilling parameters to the strength properties of rock. Parameters such as bit geometry, bit movement, contact frictions and the crushed zone affect the estimates. An analytical model considering operational drilling data and these effective parameters can be used for this purpose. In this research, an analytical model was developed based on the limit equilibrium of forces on a T-shaped drag bit, considering effective parameters such as bit geometry, the crushed zone and contact frictions in the drilling process. Based on the model, a method was used to estimate rock strength parameters such as cohesion, internal friction angle and uniaxial compressive strength of different rock types from operational drilling data. Drilling tests were conducted with a portable and powerful drilling machine developed for this work. The strength properties of different rock types obtained from the drilling experiments based on the proposed model are in good agreement with the results of standard tests. Experimental results show that the contact friction between the cutting face and rock is close to that between the bit end wearing face and rock, due to the same bit material. In this case, the strength parameters, especially internal friction angle and cohesion, can be estimated using only blunt-bit drilling data, and bit bluntness does not affect the estimated results.
Keywords: analytical model, rock strength parameters, small-scale drilling data
History and evaluation of national-scale geochemical data sets for the United States (Cited: 8)
5
Authors: David B. Smith, Steven M. Smith, John D. Horton. 《Geoscience Frontiers》, SCIE CAS CSCD, 2013, No. 2, pp. 167-183 (17 pages)
Six national-scale, or near national-scale, geochemical data sets for soils or stream sediments exist for the United States. The earliest of these, here termed the 'Shacklette' data set, was generated by a U.S. Geological Survey (USGS) project conducted from 1961 to 1975. This project used soil collected from a depth of about 20 cm as the sampling medium at 1323 sites throughout the conterminous U.S. The National Uranium Resource Evaluation Hydrogeochemical and Stream Sediment Reconnaissance (NURE-HSSR) Program of the U.S. Department of Energy was conducted from 1975 to 1984 and collected either stream sediments, lake sediments, or soils at more than 378,000 sites in both the conterminous U.S. and Alaska. The sampled area represented about 65% of the nation. The Natural Resources Conservation Service (NRCS), from 1978 to 1982, collected samples from multiple soil horizons at sites within the major crop-growing regions of the conterminous U.S. This data set contains analyses of more than 3000 samples. The National Geochemical Survey, a USGS project conducted from 1997 to 2009, used a subset of the NURE-HSSR archival samples as its starting point and then collected primarily stream sediments, with occasional soils, in the parts of the U.S. not covered by the NURE-HSSR Program. This data set contains chemical analyses for more than 70,000 samples. The USGS, in collaboration with the Mexican Geological Survey and the Geological Survey of Canada, initiated soil sampling for the North American Soil Geochemical Landscapes Project in 2007. Sampling of three horizons or depths at more than 4800 sites in the U.S. was completed in 2010, and chemical analyses are currently ongoing. The NRCS initiated a project in the 1990s to analyze the various soil horizons from selected pedons throughout the U.S. This data set currently contains data from more than 1400 sites. This paper (1) discusses each data set in terms of its purpose, sample collection protocols, and analytical methods; and (2) evaluates each data set in terms of its appropriateness as a national-scale geochemical database and its usefulness for national-scale geochemical mapping.
Keywords: geochemical mapping, national-scale geochemical data, geochemical baselines, United States
Deriving Operational Origin-Destination Matrices From Large Scale Mobile Phone Data (Cited: 1)
6
Authors: Jingtao Ma, Huan Li, Fang Yuan, Thomas Bauer. 《International Journal of Transportation Science and Technology》, 2013, No. 3, pp. 183-203 (21 pages)
A method is presented in this work that integrates both emerging and mature data sources to estimate operational travel demand at fine spatial and temporal resolutions. By analyzing individuals' mobility patterns revealed by their mobile phones, researchers and practitioners are now equipped to derive the largest trip samples for a region. Because of the ubiquitous use, extensive coverage and high penetration rates of telecommunication services, travel demand can be studied continuously at fine spatial and temporal resolutions. The derived sample or seed trip matrices are coupled with surveyed commute flow data and prevalent travel demand modeling techniques to provide estimates of the total regional travel demand in the form of origin-destination (OD) matrices. The methodology has been evaluated in a series of real-world transportation planning studies and proved its potential in application areas such as dynamic traffic assignment modeling, integrated corridor management and online traffic simulations.
Keywords: operational origin-destination matrix, large scale mobile phone data, matrix correction, trip imputation, path-matching, travel demand projection
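The "matrix correction" step that scales a seed OD matrix (derived from phone traces) to surveyed origin and destination totals is commonly done with Furness/iterative proportional fitting. A minimal sketch under that assumption, with made-up numbers:

```python
# Furness / iterative proportional fitting: alternately rescale rows and
# columns of a seed OD matrix until they match target trip-end totals.
# Row and column targets must sum to the same grand total.
import numpy as np

def furness(seed, row_targets, col_targets, iters=100):
    m = seed.astype(float).copy()
    for _ in range(iters):
        m *= (row_targets / m.sum(axis=1))[:, None]   # match origin totals
        m *= (col_targets / m.sum(axis=0))[None, :]   # match destination totals
    return m

seed = np.array([[5.0, 5.0], [5.0, 5.0]])             # hypothetical phone-derived seed
od = furness(seed,
             row_targets=np.array([60.0, 40.0]),
             col_targets=np.array([30.0, 70.0]))
print(od.sum(axis=1), od.sum(axis=0))  # ≈ [60. 40.] [30. 70.]
```

The seed preserves the observed trip *pattern*; the targets carry the surveyed *totals*, which is why the combination of the two data sources works.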
A method for rapid transmission of multi-scale vector river data via the Internet (Cited: 1)
7
Authors: Yang Weifang, Jonathon Li. 《Geodesy and Geodynamics》, 2012, No. 2, pp. 34-41 (8 pages)
Due to the conflict between the huge amount of map data and limited network bandwidth, rapid transmission of vector map data over the Internet has become a bottleneck of spatial data delivery in web-based environments. This paper proposes an approach to organizing and transmitting multi-scale vector river network data progressively via the Internet. The approach takes account of two levels of importance, i.e. the importance of river branches and the importance of the points belonging to each branch, and forms data packages accordingly. Our experiments have shown that the proposed approach can reduce the original data by 90% while preserving the river structure well.
Keywords: vector river data, multi-scale, progressive transmission, river structure
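One common way to rank the points of a polyline by importance, so that coarse shape is transmitted before fine detail, is the Douglas-Peucker split distance. A minimal sketch under that assumption (the paper does not specify its point-importance measure; the polyline is made up):

```python
# Rank polyline vertices by Douglas-Peucker importance: endpoints first,
# then interior points by their perpendicular offset at the recursion step
# where the simplification would insert them.
import math

def _offset(p, a, b):
    """Perpendicular distance from p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * px - dx * py + bx * ay - by * ax) / math.hypot(dx, dy)

def dp_importance(pts):
    """Assign every vertex its Douglas-Peucker split distance."""
    imp = [float("inf")] + [0.0] * (len(pts) - 2) + [float("inf")]
    def rec(i, j):
        if j <= i + 1:
            return
        k = max(range(i + 1, j), key=lambda m: _offset(pts[m], pts[i], pts[j]))
        imp[k] = _offset(pts[k], pts[i], pts[j])
        rec(i, k)
        rec(k, j)
    rec(0, len(pts) - 1)
    return imp

river = [(0, 0), (1, 3), (2, 0.5), (3, 4), (4, 0)]
imp = dp_importance(river)
order = sorted(range(len(river)), key=lambda i: -imp[i])
print(order)  # transmit in this order: endpoints, then big offsets first
```

Packaging vertices in this order lets a client render a recognizable river from the first packages and refine it as later packages arrive.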
Regularized focusing inversion for large-scale gravity data based on GPU parallel computing
8
Authors: WANG Haoran, DING Yidan, LI Feida, LI Jing. 《Global Geology》, 2019, No. 3, pp. 179-187 (9 pages)
Processing large-scale 3-D gravity data is an important topic in geophysics. Many existing inversion methods lack the capacity to process massive data and to be applied in practice. This study applies GPU parallel processing technology to the focusing inversion method, aiming at improving inversion accuracy while speeding up calculation and reducing memory consumption, thus obtaining fast and reliable inversion results for large complex models. In this paper, equivalent storage of a geometric trellis is used to calculate the sensitivity matrix, and the inversion is based on GPU parallel computing technology. The parallel computing program, optimized by reducing data transfer, access restrictions and instruction restrictions as well as by latency hiding, greatly reduces memory usage and speeds up calculation, making fast inversion of large models possible. By comparing and analyzing the computing speed of the traditional single-thread CPU method and CUDA-based GPU parallel technology, the excellent acceleration performance of GPU parallel computing is verified, which provides ideas for the practical application of theoretical inversion methods otherwise restricted by computing speed and computer memory. The model test verifies that the focusing inversion method can overcome the severe skin effect and the ambiguity of geological body boundaries. Moreover, increasing the model cells and inversion data can more clearly depict the boundary position of the abnormal body and delineate its specific shape.
Keywords: large-scale gravity data, GPU parallel computing, CUDA, equivalent geometric trellis, focusing inversion
Constructing Large Scale Cohort for Clinical Study on Heart Failure with Electronic Health Record in Regional Healthcare Platform: Challenges and Strategies in Data Reuse (Cited: 2)
9
Authors: Daowen Liu, Liqi Lei, Tong Ruan, Ping He. 《Chinese Medical Sciences Journal》, CAS CSCD, 2019, No. 2, pp. 90-102 (13 pages)
Regional healthcare platforms collect clinical data from hospitals in specific areas for the purpose of healthcare management. It is a common requirement to reuse the data for clinical research. However, we have to face challenges such as the inconsistency of terminology in electronic health records (EHR) and the complexity of data quality and data formats on a regional healthcare platform. In this paper, we propose a methodology and process for constructing large-scale cohorts, which form the basis of causality and comparative-effectiveness relationships in epidemiology. We first constructed a Chinese terminology knowledge graph to deal with the diversity of vocabularies on the regional platform. Second, we built special disease case repositories (i.e., a heart failure repository) that utilize the graph to search for related patients and to normalize the data. Based on the requirements of clinical research aimed at exploring the effect of taking statins on 180-day readmission in patients with heart failure, we built a large-scale retrospective cohort with 29,647 heart failure patients from the heart failure repository. After propensity score matching, a study group (n=6346) and a control group (n=6346) with parallel clinical characteristics were acquired. Logistic regression analysis showed that taking statins was negatively correlated with 180-day readmission in heart failure patients. This paper presents the workflow and an application example of big data mining based on regional EHR data.
Keywords: electronic health records, clinical terminology knowledge graph, clinical special disease case repository, evaluation of data quality, large scale cohort study
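A minimal sketch of the propensity-score-matching step the abstract describes: fit a propensity model, then pair each treated case with the nearest-score control, 1:1 without replacement. The covariates and treatment assignment below are synthetic, not the heart-failure cohort.

```python
# Greedy 1:1 nearest-neighbour propensity score matching on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 400
X = rng.normal(size=(n, 3))                      # baseline covariates
treated = X[:, 0] + rng.normal(size=n) > 0.5     # "statin use", confounded by X[:,0]

# Propensity score: P(treated | covariates).
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

t_idx = np.where(treated)[0]
c_idx = list(np.where(~treated)[0])
pairs = []
for i in t_idx:                                  # nearest control by |score gap|
    j = min(c_idx, key=lambda c: abs(ps[i] - ps[c]))
    pairs.append((i, j))
    c_idx.remove(j)                              # match without replacement

print(len(pairs))  # one distinct control per treated case
```

After matching, the outcome comparison (here, the logistic regression on 180-day readmission) is run on the matched groups, whose covariate distributions are approximately balanced.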
Research on the application of parallel computing technology in ultra-large-scale data processing (Cited: 1)
10
Author: 杨多海. 《科技创新与应用》, 2024, No. 17, pp. 181-184 (4 pages)
With the arrival of the artificial intelligence and big data era, ultra-large-scale data processing has become an important research area. This paper discusses the application of parallel computing technology in ultra-large-scale data processing. It first elaborates the basic theory and concepts of parallel computing and ultra-large-scale data processing, in particular parallel programming models and tools, and then illustrates the practical application of parallel computing technology through real-world cases in search engines, weather forecasting and financial analysis.
Keywords: parallel computing technology, ultra-large-scale data processing, programming models and tools, real-world cases, practical application
An accelerated SVR algorithm based on spatial projection and clustering partition
11
Authors: 王梅, 张天时, 王志宝, 任怡果. 《计算机技术与发展》, 2024, No. 4, pp. 24-29 (6 pages)
Data not only create value but also drive the scientific development of statistics. With the rapid progress of technology, massive data have emerged, yet large-scale data leave many traditional processing methods unable to meet the demands of data analysis across fields. Facing the inefficiency of learning algorithms in the era of massive data, divide-and-conquer is generally regarded as the most direct and widely used strategy for this problem. SVR is a powerful regression algorithm with wide applications in pattern recognition and data mining; however, its training is inefficient on large-scale data. This paper therefore uses the divide-and-conquer idea to propose an accelerated SVR algorithm based on spatial projection and clustering partition (PKM-SVR): projection vectors project the data into a two-dimensional space; a clustering method partitions the data space into k disjoint regions; an SVR model is trained on each region; and each region's SVR model predicts the unidentified samples that fall into that region. Comparative experiments with traditional data-partitioning methods on standard datasets show that the algorithm trains faster and exhibits better predictive performance.
Keywords: large-scale data, divide and conquer, support vector regression, principal component analysis, clustering
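A minimal sketch of the PKM-SVR idea described above: project to 2-D (here with PCA, matching the "principal component analysis" keyword), partition with k-means, train one SVR per region, and route each query to its region's model. Dataset, k, and hyperparameters are illustrative.

```python
# Divide-and-conquer SVR: PCA projection -> k-means partition -> per-region SVR.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(600, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=600)

pca = PCA(n_components=2).fit(X)                 # spatial projection to 2-D
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(pca.transform(X))

labels = km.labels_
models = {c: SVR().fit(X[labels == c], y[labels == c]) for c in range(4)}

def predict(Xq):
    """Predict each query with the SVR of the region it falls into."""
    region = km.predict(pca.transform(Xq))
    return np.array([models[r].predict(q[None, :])[0]
                     for r, q in zip(region, Xq)])

pred = predict(X[:5])
print(pred.shape)  # (5,)
```

Each SVR now trains on roughly n/k samples, which is the source of the speed-up, since SVR training cost grows super-linearly in the number of samples.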
Sample-data Decentralized Reliable H∞ Hyperbolic Control for Uncertain Fuzzy Large-scale Systems with Time-varying Delay (Cited: 2)
12
Authors: LIU Xin-Rui, ZHANG Hua-Guang. 《自动化学报》 (Acta Automatica Sinica), EI CSCD, PKU Core, 2009, No. 12, pp. 1534-1540 (7 pages)
This paper studies the problem of sampled-data reliable H∞ hyperbolic control for uncertain continuous-time fuzzy large-scale systems with time-varying delay. First, the fuzzy hyperbolic model (FHM) is used to model certain complex large-scale systems. Then, based on the Lyapunov direct method and the decentralized control theory of large-scale systems, linear matrix inequality (LMI)-based conditions are derived to guarantee H∞ performance not only when all control components operate well, but also in the face of possible actuator failures. Moreover, the exact actuator failure parameters are not required; only the lower and upper bounds of the failure parameters are needed. The conditions depend on the upper bound of the time delay but not on the derivative of the time-varying delay, so the obtained results are less conservative. Finally, two examples are provided to illustrate the design procedure and its effectiveness.
Keywords: fuzzy hyperbolic model, linear matrix inequality, decentralized control theory, actuator
Key problems of three-dimensional metallogenic prediction (Cited: 1)
13
Authors: 袁峰, 李晓晖, 田卫东, 周官群, 汪金菊, 葛粲, 国显正, 郑超杰. 《地学前缘》 (Earth Science Frontiers), EI CAS CSCD, PKU Core, 2024, No. 4, pp. 119-128 (10 pages)
Three-dimensional metallogenic prediction is an important method for deep prospecting prediction and exploration. Its methodology and practical applications have yielded substantial results, but several key scientific and technical problems constrain its further development. Starting from key problems such as the incomplete methodology for multi-scale 3D metallogenic prediction, weak research on uncertainty analysis and optimization, bottlenecks in mining 3D predictive factors, and the lack of 3D deep learning models and methods tailored to 3D metallogenic prediction, this paper comprehensively reviews current research progress in the relevant aspects of the field and proposes possible solutions and research directions for these key problems. It is expected that future research in 3D metallogenic prediction will develop a variety of innovative methods to achieve deep mining of 3D predictive information; build applicable 3D deep learning models and training methods to effectively enhance the predictive power of 3D metallogenic prediction results; systematically study prediction uncertainty to further optimize the prediction process and results and effectively improve the reliability and accuracy of 3D prediction methods; and form a methodology for multi-scale 3D metallogenic prediction that more effectively guides deep mineral exploration at different levels, such as ore concentration areas, ore fields, and exploration blocks (deposits). Solving these key problems will further deepen and improve the theory and methodology of 3D metallogenic prediction, promote its practical application, significantly improve the efficiency and quality of deep prospecting prediction and exploration, and help achieve breakthroughs in deep prospecting.
Keywords: 3D metallogenic prediction, key problems, multi-scale, predictive information mining, uncertainty, data fusion
Apple leaf disease image recognition based on the CycleGAN-IA method and the M-ConvNext network (Cited: 2)
14
Authors: 李云红, 张蕾涛, 李丽敏, 苏雪平, 谢蓉蓉, 史含驰. 《农业机械学报》 (Transactions of the Chinese Society for Agricultural Machinery), EI CAS CSCD, PKU Core, 2024, No. 4, pp. 204-212 (9 pages)
To address the difficulties of dataset acquisition, insufficient samples, and low recognition accuracy in apple leaf disease image recognition, a disease recognition network based on multi-scale feature extraction (M-ConvNext) is proposed. A data augmentation method combining an improved cycle-consistent generative adversarial network with affine transformation (CycleGAN-IA) is adopted. First, convolution kernels with smaller receptive fields and residual attention modules are used to optimize the CycleGAN structure, and a binary cross-entropy loss replaces CycleGAN's mean-squared-error loss, so as to generate high-quality sample images and increase sample feature complexity; the generated images are then affine-transformed to increase the spatial complexity of the data. This method solves the insufficient-sample problem and supports the subsequent disease recognition model. Next, the M-ConvNext network is built: its G-RFB module acquires and fuses feature information at each scale, and the GELU activation function strengthens the network's feature expression, improving the recognition accuracy of apple leaf disease images. Finally, experimental results show that the CycleGAN-IA augmentation method expands the dataset well; verified on common networks, the augmented dataset effectively improves recognition accuracy. Ablation experiments show that M-ConvNext reaches 99.18% accuracy, 0.41 percentage points higher than the original ConvNext and 3.78, 7.35, and 4.07 percentage points higher than ResNet50, MobileNetV3, and EfficientNetV2, respectively, providing a new approach for subsequent crop disease recognition.
Keywords: apple leaf, disease recognition, generative adversarial network, data augmentation, multi-scale feature extraction
Simulation research on adaptive multi-scale block compression of multimedia data (Cited: 1)
15
Authors: 段海涛, 陈建. 《计算机仿真》 (Computer Simulation), 2024, No. 6, pp. 318-321, 454 (5 pages)
In multimedia image data compression, some image detail and precision usually have to be sacrificed to reduce data volume, which leads to partial loss of information. To improve the compression effect, taking multimedia images as an example, this paper proposes a block-based lossless compression algorithm for multimedia data. A quadtree algorithm partitions the multimedia image into blocks; a multi-scale wavelet transform combining edge features and directional features obtains the adaptive sampling rate of each layer's sub-band image blocks; and an adaptive multi-scale block compressed-sensing method based on texture blocks and flat blocks completes the block-wise lossless compression of the multimedia image data. Experimental results show that the proposed algorithm achieves better compression: it compresses the data effectively without losing image information, with a short compression time and better overall application performance.
Keywords: multimedia data, multi-scale wavelet transform, adaptive sampling, adaptive blocking, block-wise lossless compression
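A minimal sketch of the quadtree blocking step the abstract describes: recursively split any block whose variance exceeds a threshold, so textured regions end up in small blocks and flat regions in large ones. The image and threshold are synthetic, not the paper's data.

```python
# Quadtree blocking: keep a block whole if it is flat (low variance),
# otherwise split it into four quadrants and recurse.
import numpy as np

def quadtree_blocks(img, thresh=100.0, min_size=4):
    """Return (row, col, size) leaf blocks of a square power-of-two image."""
    blocks = []
    def split(r, c, s):
        block = img[r:r + s, c:c + s]
        if s <= min_size or block.var() <= thresh:
            blocks.append((r, c, s))          # flat enough: keep whole block
        else:
            h = s // 2                        # textured: recurse into quadrants
            for dr in (0, h):
                for dc in (0, h):
                    split(r + dr, c + dc, h)
    split(0, 0, img.shape[0])
    return blocks

img = np.zeros((16, 16))
img[:8, :8] = 255.0 * (np.indices((8, 8)).sum(axis=0) % 2)  # one textured quadrant
blocks = quadtree_blocks(img, thresh=100.0, min_size=4)
print(len(blocks))  # 7: four 4x4 textured blocks + three flat 8x8 blocks
```

Downstream, each leaf is then sampled at a rate matched to its texture, which is the "adaptive sampling" part of the pipeline.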
Application of the MapReduce model in parallel mining of large-scale data
16
Authors: 唐婧, 杜微, 周翼. 《智能物联技术》, 2024, No. 2, pp. 38-42 (5 pages)
Through well-defined interfaces and a runtime support library, the MapReduce parallel programming model can automatically execute large-scale computing tasks in parallel, hiding low-level implementation details and reducing the difficulty of parallel programming. This paper systematically describes the basic working principle and workflow of MapReduce and, taking the TeraSort algorithm as an example, proposes optimizations such as dynamic data partitioning and data compression for its existing problems. The results show that the optimized TeraSort algorithm significantly shortens data processing time, improves system throughput, and balances resource allocation.
Keywords: MapReduce, large-scale data, parallel mining, TeraSort
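The map → shuffle → reduce workflow the article describes can be simulated in-process. A minimal sketch using word count (the classic teaching example, not TeraSort itself); the documents are made up:

```python
# MapReduce in miniature: map emits (key, value) pairs, shuffle groups
# values by key, reduce aggregates each group.
from collections import defaultdict

def map_phase(doc):
    return [(word, 1) for word in doc.split()]       # emit (word, 1)

def shuffle(pairs):
    groups = defaultdict(list)                       # group values by key
    for k, v in pairs:
        groups[k].append(v)
    return groups

def reduce_phase(groups):
    return {k: sum(vs) for k, vs in groups.items()}  # aggregate per key

docs = ["big data big compute", "data mining at scale"]
pairs = [p for d in docs for p in map_phase(d)]
counts = reduce_phase(shuffle(pairs))
print(counts["big"], counts["data"])  # 2 2
```

In a real framework, map tasks and reduce tasks run on different machines, and the shuffle key ranges are exactly where TeraSort's data-partitioning problem (and the paper's dynamic-partitioning fix) lives.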
Research on the transformation of centralized fund management modes under strategic demand: a longitudinal case study of Midea Group
17
Authors: 刘建勇, 张宁, 高浪洲. 《珞珈管理评论》 (Luojia Management Review), 2024, No. 3, pp. 143-168 (26 pages)
Based on a longitudinal single case of Midea Group, this paper analyzes the effects of dynamically adjusting an enterprise's centralized fund management mode as its strategic demands change. The study finds that, from the opportunity-growth stage through the scale-growth stage to the optimization-and-upgrading stage, Midea Group's strategic demands evolved from cross-regional expansion to industrial expansion to internationalization and diversification. Its centralized fund management mode was accordingly adjusted from a settlement-center remote-settlement mode to a centralized-data mode to a finance company, and through these adjustments the corresponding functions accumulated cross-regional fund management capability, cross-business coordination capability, and financial development capability. Dynamically adjusting the centralized fund management mode in line with changing strategic demands was the key to the success of Midea Group's fund management. This finding provides case evidence for the theoretical explanation of how strategic demand shapes centralized fund management modes, and offers insights for group enterprises choosing their own centralized fund management modes.
Keywords: strategic demand, centralized fund management, settlement-center remote settlement mode, settlement-center centralized data mode, finance company mode
Research on a data-driven crack identification model for semi-infinite media (Cited: 1)
18
Authors: 江守燕, 邓王涛, 孙立国, 杜成斌. 《力学学报》 (Chinese Journal of Theoretical and Applied Mechanics), EI CAS CSCD, PKU Core, 2024, No. 6, pp. 1727-1739 (13 pages)
Defect identification is an important topic in structural health monitoring and provides important guidance for assessing the safety of engineering structures; however, accurately determining the size of structural defects is very difficult. This paper proposes an innovative data-driven algorithm that combines the scaled boundary finite element method (SBFEM) with an autoencoder (AE) and a causal dilated convolutional neural network (CDCNN) for crack identification in semi-infinite media. In this model, SBFEM simulates wave propagation in semi-infinite media containing different crack-like defects; for different defects, only the scaling center at the crack tip and the positions of the nodes at the crack mouth need to be changed, which avoids complex remeshing and efficiently generates sufficient training data. When simulating wave propagation in the semi-infinite medium, an absorbing boundary model based on Rayleigh damping is established, avoiding computation over the full structural domain. The CDCNN preserves the ordering of the time-series data and obtains a larger receptive field without increasing network complexity, capturing more historical information; the AE, with its strong nonlinear feature extraction capability, maps the high-dimensional raw input feature space to a low-dimensional latent feature space, and the low-dimensional latent features are used for network training, effectively improving learning efficiency. Numerical examples show that the proposed model can efficiently and accurately identify quantitative crack information in semi-infinite media, and the AE-CDCNN model improves identification efficiency by about 2.7 times over a single CDCNN model.
Keywords: data-driven, scaled boundary finite element method, autoencoder, causal dilated convolutional neural network, crack identification
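A minimal sketch of the CDCNN's building block: a causal dilated 1-D convolution, where each output depends only on current and past samples and the dilation widens the receptive field without adding weights. Plain NumPy, not the paper's trained network; the signal and kernel are illustrative.

```python
# Causal dilated 1-D convolution:
#   y[t] = sum_k w[k] * x[t - k*dilation], with the past zero-padded,
# so y[t] never looks at future samples (causality) and a kernel of length K
# covers a span of (K-1)*dilation + 1 samples (dilated receptive field).
import numpy as np

def causal_dilated_conv1d(x, w, dilation=1):
    y = np.zeros_like(x, dtype=float)
    for t in range(len(x)):
        for k, wk in enumerate(w):
            idx = t - k * dilation
            if idx >= 0:
                y[t] += wk * x[idx]
    return y

x = np.arange(8, dtype=float)   # 0, 1, ..., 7
w = np.array([1.0, -1.0])       # differencing kernel
print(causal_dilated_conv1d(x, w, dilation=2))  # [0. 1. 2. 2. 2. 2. 2. 2.]
```

Stacking such layers with dilations 1, 2, 4, ... grows the receptive field exponentially with depth, which is how the CDCNN captures long histories of the simulated wave signals cheaply.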
Research status and prospects of estimating forest structural parameters with spaceborne LiDAR
19
Authors: 黄佳鹏, 李国元, 刘诏. 《农业机械学报》 (Transactions of the Chinese Society for Agricultural Machinery), EI CAS CSCD, PKU Core, 2024, No. 6, pp. 18-33 (16 pages)
Spaceborne LiDAR systems can cover remote areas that are difficult for airborne systems to reach and, in principle, overcome the technical limitations of optical imagery and synthetic aperture radar measurements, providing a reliable data source for rapidly and accurately acquiring forest structural parameters such as sub-canopy terrain, tree height, and biomass. This paper reviews the existing spaceborne LiDAR observation systems, discusses the applicability of spaceborne LiDAR data for estimating multi-scale forest structural parameters, and quantitatively analyzes the achievements and the strengths and weaknesses of existing research. Finally, it summarizes current problems and looks ahead to the prospects and development directions of spaceborne LiDAR technology, recommending further in-depth research on retrieving different forest structural parameters, product systems and standards, accuracy evaluation for forestry applications, and LiDAR parameter design for forestry use.
Keywords: forest structural parameters, spaceborne LiDAR, multi-scale, multi-source data
An edge-based corner detection method in scale space
20
Authors: 宋佳声, 李浩天. 《电子测量与仪器学报》 (Journal of Electronic Measurement and Instrumentation), CSCD, PKU Core, 2024, No. 2, pp. 58-66 (9 pages)
The positions of target corners in an image are key data for many computer vision tasks. To overcome the data redundancy produced by traditional detection algorithms, an edge-based corner detection method in scale space is proposed. First, a grouped multi-layer scale space is constructed, and projecting the original image into it yields multiple smoothed images. At the same time, a defined edge operator detects all edges in the smoothed images, yielding multiple ordered point sets; the transformation to larger scales stops when the number of point sets stabilizes. Then, at the current scale, a feature value reflecting corner strength is computed for each element of the point sets. Corner support intervals are detected from the variation of these feature values, and a Gaussian fitting function determines the final target corners within each interval. Experiments show that the method can detect salient target corners and their angles, with pixel-level accuracy on synthetic images; in application cases, the ratio of mean error to image size is about 1.5/100.
Keywords: corner detection, scale transformation, edge operator, data fitting
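A minimal sketch of an edge-based corner measure: the turning angle at each vertex of an ordered edge-point set, with the largest responses kept as corners. The contour is synthetic, and the paper's scale-space smoothing and Gaussian sub-pixel fitting are omitted.

```python
# Corner strength along a closed contour: the absolute turning angle between
# consecutive edge segments; straight runs give ~0, right-angle corners ~pi/2.
import math

def turning_angles(contour):
    """Absolute turning angle (radians) at each vertex of a closed polyline."""
    n = len(contour)
    angles = []
    for i in range(n):
        (x0, y0) = contour[i - 1]
        (x1, y1) = contour[i]
        (x2, y2) = contour[(i + 1) % n]
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        d = abs(a2 - a1)
        angles.append(min(d, 2 * math.pi - d))
    return angles

# A 4x4 axis-aligned square sampled at unit steps along its boundary.
square = ([(x, 0) for x in range(4)] + [(3, y) for y in range(1, 4)] +
          [(x, 3) for x in range(2, -1, -1)] + [(0, y) for y in range(2, 0, -1)])
angles = turning_angles(square)
corners = [i for i, a in enumerate(angles) if a > 1.0]  # ~pi/2 at true corners
print(len(corners))  # the square's 4 corners
```

The paper's scale-space stage would smooth the contour before this measurement so that noise-induced kinks do not fire as corners, and the Gaussian fit would then localize each corner to sub-pixel accuracy within its support interval.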