Abstract
The key to Web data mining is designing an intelligent, efficient spider. This paper analyzes in detail the workflow of a URL-oriented spider and the key technologies for implementing it. It proposes managing the URL list with multiple queues whose elements are ordered by document relevance, so that web pages can be downloaded in parallel at high speed. In addition, a convergent iterative threshold algorithm is designed for computing document relevance, which effectively eliminates the arbitrariness of setting the relevance threshold.
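The two ideas in the abstract, a URL frontier ordered by document relevance and a convergent iterative threshold, can be illustrated with a minimal sketch. The abstract does not give the paper's actual formulas or data structures, so the class and function below (`URLFrontier`, `iterative_threshold`) are hypothetical names, and the threshold routine follows the standard ISODATA-style iteration (threshold moves to the midpoint of the two class means until it converges), which matches the abstract's description only in spirit:

```python
import heapq

def iterative_threshold(scores, eps=1e-6):
    """Convergent iterative threshold selection (ISODATA-style sketch).

    Start from the overall mean of the relevance scores, then repeatedly
    set the threshold to the midpoint of the means of the two classes it
    induces, until the value stops changing. This removes the need to
    pick a relevance threshold by hand.
    """
    t = sum(scores) / len(scores)
    while True:
        low = [s for s in scores if s < t]
        high = [s for s in scores if s >= t]
        if not low or not high:          # all scores fell on one side
            return t
        new_t = (sum(low) / len(low) + sum(high) / len(high)) / 2
        if abs(new_t - t) < eps:         # converged
            return new_t
        t = new_t

class URLFrontier:
    """URL queue whose elements are ordered by document relevance.

    A max-heap (negated scores on Python's min-heap) hands the most
    relevant URL to each downloader thread first; a seen-set avoids
    re-queuing duplicates.
    """
    def __init__(self):
        self._heap = []
        self._seen = set()

    def push(self, url, relevance):
        if url not in self._seen:
            self._seen.add(url)
            heapq.heappush(self._heap, (-relevance, url))

    def pop(self):
        neg_relevance, url = heapq.heappop(self._heap)
        return url, -neg_relevance
```

In a multi-queue design like the one the abstract proposes, each downloader could own one such frontier, with newly extracted links routed to a queue by host or by relevance band.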
Source
《华东交通大学学报》
2007, No. 1, pp. 67-70 (4 pages)
Journal of East China Jiaotong University
Funding
Science and Technology Project of the Jiangxi Provincial Department of Education (Grant No. 赣教技字[2006]177号)
East China Jiaotong University Research Fund (Grant No. 01305120)