This article analyzes the performance and utility of Support Vector Machines (SVMs) for the critical task of forest fire detection using image datasets. With forest fires posing an increasing threat to ecosystems and human settlements, rapid and accurate detection systems are of utmost importance. SVMs, renowned for their strong classification capabilities, are proficient at recognizing fire-related patterns within images. By training on labeled data, SVMs learn to identify distinctive attributes of fire, such as flames, smoke, or alterations in the visual characteristics of the forest area. The article examines the use of SVMs in detail, covering crucial elements such as data preprocessing, feature extraction, and model training, and rigorously evaluates accuracy, efficiency, and practical applicability. The knowledge gained from this study aids the development of efficient forest fire detection systems, enabling prompt responses and improving disaster management. Moreover, the relationship between SVM accuracy and the difficulties presented by high-dimensional datasets is carefully investigated through a revealing case study, as is the relationship between accuracy scores and the resolutions used when resizing the training datasets. Together, these studies give a definitive overview of the difficulties faced and the areas requiring further improvement and focus.
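The pipeline the abstract describes (resize images, flatten to feature vectors, train an SVM classifier) can be sketched as follows. This is a minimal illustration with synthetic data standing in for real fire/no-fire images; the dimensions, kernel choice, and bright-vs-dark separation are assumptions for the sketch, not details from the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-ins for resized, flattened image vectors (e.g. 16x16 gray):
# "fire" frames skew bright, "no fire" frames skew dark.
n, dim = 200, 16 * 16
fire = rng.normal(0.7, 0.1, size=(n, dim))
no_fire = rng.normal(0.3, 0.1, size=(n, dim))
X = np.vstack([fire, no_fire])
y = np.array([1] * n + [0] * n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # a common default configuration
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

On real data, the resize resolution chosen before flattening directly sets `dim`, which is exactly the accuracy-versus-resolution trade-off the article studies.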
Disease mapping is the study of the distribution of disease relative risks or rates in space and time, and normally uses generalized linear mixed models (GLMMs), which include fixed effects and spatial, temporal, and spatio-temporal random effects. Model fitting and statistical inference are commonly accomplished through the empirical Bayes (EB) and fully Bayes (FB) approaches. The EB approach usually relies on the penalized quasi-likelihood (PQL), while the FB approach, which has become increasingly popular in the recent past, usually uses Markov chain Monte Carlo (McMC) techniques. However, conventional posterior sampling via McMC poses many challenges for inference, including the need to assess convergence of posterior samples, which often requires extensive simulation and can be very time-consuming. Spatio-temporal models used in disease mapping are often very complex, and McMC methods may incur large Monte Carlo errors when the data at hand are high-dimensional. To address these challenges, a new strategy based on integrated nested Laplace approximations (INLA) has recently been developed as a promising alternative to McMC. This technique is becoming more popular in disease mapping because of its ability to fit fairly complex space-time models much more quickly than McMC. In this paper, we show how to fit different spatio-temporal models for disease mapping with INLA using the Leroux CAR prior for the spatial component, and we compare it with McMC using Kenya HIV incidence data for the period 2013-2016.
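The Leroux CAR prior named in the abstract has a simple closed-form precision matrix, which can be constructed directly. This sketch shows only that construction (the standard Leroux et al. form); the toy four-area adjacency map is invented for illustration, and fitting the full model would of course require INLA or McMC machinery not shown here.

```python
import numpy as np

def leroux_car_precision(W: np.ndarray, lam: float) -> np.ndarray:
    """Precision matrix of the Leroux CAR prior:
        Q(lambda) = lambda * (D - W) + (1 - lambda) * I,
    where W is a symmetric 0/1 adjacency matrix and D = diag(row sums of W).
    lambda = 0 gives an i.i.d. prior; lambda = 1 gives the intrinsic CAR."""
    D = np.diag(W.sum(axis=1))
    I = np.eye(W.shape[0])
    return lam * (D - W) + (1.0 - lam) * I

# Toy map: four areas arranged on a path 1-2-3-4 (hypothetical example)
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Q = leroux_car_precision(W, lam=0.9)
# For 0 <= lambda < 1 the precision is positive definite, so the prior is proper.
```

The mixing parameter `lam` interpolates between unstructured and fully spatially structured variation, which is one reason this prior is popular for the spatial component in space-time disease-mapping models.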
Both environmental and genetic factors have roles in the development of some diseases. Complex diseases, such as Crohn's disease or Type II diabetes, are caused by a combination of environmental factors and mutations in multiple genes. Patients diagnosed with such diseases cannot easily be treated. However, many diseases can be avoided if people at high risk change their lifestyle, one example being their diet. But how can we assess their susceptibility to disease before symptoms appear, and help them make informed decisions about their health? With the development of DNA microarray techniques, it is possible to access human genetic information related to specific diseases. This paper uses a combinatorial method to analyze the genetic data for Crohn's disease and to search for disease-associated factors in given case/control samples. An optimized random-forest-based method has been applied to publicly available genotype data on Crohn's disease for an association study and achieved promising results.
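The random-forest association idea can be illustrated with a toy genotype matrix: fit a forest to case/control labels and rank SNPs by feature importance. Everything here is synthetic and hypothetical, including the choice of SNPs 3 and 7 as the "causal" pair; the paper's actual optimized method is more elaborate.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n, m = 300, 50  # samples x SNPs, genotypes coded as minor-allele counts 0/1/2
X = rng.integers(0, 3, size=(n, m)).astype(float)
# Hypothetical causal signal: disease status driven jointly by SNPs 3 and 7
y = ((X[:, 3] + X[:, 7]) >= 3).astype(int)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X, y)
# Rank SNPs by Gini importance; the causal pair should surface near the top
ranked = np.argsort(rf.feature_importances_)[::-1]
```

Because trees split on interactions, a forest can surface jointly acting loci that single-SNP tests would miss, which is the appeal of this approach for complex diseases.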
Video evidence is usually admissible in courts of law all over the world. However, individuals manipulate these videos either to defame or to incriminate innocent people; others indulge in video tampering to escape the reach of the law for their misconduct. One way impostors can forge such videos is through inter-frame video forgery, so the integrity of these videos is under threat: digital forgeries seriously debase the credibility of video content as a definitive record of events. This leads to increasing concern about the trustworthiness of video content, affecting the social and legal systems, forensic investigations, intelligence services, and security and surveillance systems alike. The problem of inter-frame video forgery grows as more video-editing software emerges. These editing tools can manipulate videos without leaving obvious traces, and the tampered videos can go viral. Alarmingly, even beginner users of these tools can alter the content of digital videos in a manner that renders them practically indistinguishable from the original by mere observation. This paper leverages correlation coefficients to produce a more elaborate and reliable inter-frame forgery detector to aid forensic investigations, especially in Nigeria. The model uses a threshold to efficiently distinguish forged videos from authentic ones. Benchmark and locally manipulated video datasets were used to evaluate the proposed model. Experimentally, our approach performed better than existing methods: the evaluation metrics of accuracy, recall, precision, and F1-score were all 100%. The proposed method, implemented in MATLAB, has proven effective at detecting inter-frame forgeries.
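The core correlation-plus-threshold idea is easy to sketch: compute the Pearson correlation between each pair of consecutive frames and flag positions where it drops sharply. This toy Python version (the paper's implementation is in MATLAB) uses an illustrative threshold and a synthetic clip; the actual threshold selection is the paper's contribution and is not reproduced here.

```python
import numpy as np

def consecutive_correlations(frames):
    """Pearson correlation between each pair of consecutive grayscale frames."""
    return np.array([
        np.corrcoef(frames[i].ravel(), frames[i + 1].ravel())[0, 1]
        for i in range(len(frames) - 1)
    ])

def flag_tampering(corrs, threshold=0.5):
    """Indices where inter-frame correlation drops below a threshold,
    marking possible insertion/deletion points. The threshold here is
    illustrative; the paper derives its own from the data."""
    return np.where(corrs < threshold)[0]

# Toy clip: near-identical frames with one unrelated frame spliced in at index 5
rng = np.random.default_rng(0)
base = rng.random((16, 16))
frames = [base + 0.01 * i for i in range(10)]
frames[5] = rng.random((16, 16))
suspects = flag_tampering(consecutive_correlations(frames))
```

An inserted foreign frame breaks correlation on both of its sides, so it shows up as two adjacent low-correlation positions, whereas a deletion typically shows up as a single dip.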
We analyzed DNA sequences using a new measure of entropy. The general aim was to analyze DNA sequences and find interesting sections of a genome using a new formulation of Shannon-like entropy. We developed this new measure for any non-trivial graph or, more broadly, for any square matrix whose non-zero elements represent probabilistic weights assigned to connections or transitions between pairs of vertices. The new measure, called the graph entropy, quantifies the aggregate indeterminacy effected by the variety of unique walks that exist between each pair of vertices. The new tool is shown to be uniquely capable of revealing CRISPR regions in bacterial genomes and of identifying tandem repeats and direct repeats in a genome. We ran experiments on 26 species and found many tandem repeats and direct repeats (CRISPR for bacteria or archaea). Several separate CRISPR-finding or tandem-repeat-finding tools exist, but our entropy measure can find both features when they are present in a genome.
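The intuition, that repetitive regions have low transition uncertainty, can be demonstrated with a much simpler stand-in measure: the Shannon entropy of a sequence's nucleotide-to-nucleotide transition distribution. This is not the paper's graph entropy (whose exact formula involves walks between vertex pairs), only an illustrative relative in the same spirit.

```python
import numpy as np

def transition_entropy(seq: str) -> float:
    """Shannon entropy (bits) of the nucleotide-to-nucleotide transition
    distribution of a DNA sequence. A simple illustrative measure, not the
    paper's graph entropy: low values indicate repetitive structure
    (e.g. tandem repeats), high values a more varied sequence."""
    idx = {"A": 0, "C": 1, "G": 2, "T": 3}
    counts = np.zeros((4, 4))
    for a, b in zip(seq, seq[1:]):
        counts[idx[a], idx[b]] += 1
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# A perfect tandem repeat uses very few distinct transitions
repeat = "ACGT" * 25
mixed = "ACGGTACTTGACCATGGTAACGTGCA" * 4
```

Sliding such a measure along a genome and looking for low-entropy windows is the general strategy for locating repeats; the paper's graph entropy additionally captures multi-step walk structure, which is what lets it flag CRISPR arrays.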
The case study presented here uses an interpretivist (qualitative, humanistic) approach to illustrate and describe a range of interactions and behaviors that occur during design meetings in which mentoring and design happen simultaneously within a software engineering firm, during a portion of the design phase of a software project. It examines the interaction between two design team members (one novice and one expert) and describes how these observations intersect with the theoretical and applied literature and with actual design processes. Taking cues from two theoretical descriptions of the design process, the study suggests that modes and models of mentorship should be added, when applicable, as a descriptive portion of the design process.
The ability of machine learning techniques to make accurate predictions is increasing. The aim of this work is to apply machine learning techniques such as Support Vector Machine, Naïve Bayes, Decision Tree, Logistic Regression, and K-Nearest Neighbour algorithms to predict the shelf life of okra. Predicting the shelf life of okra is important because okra becomes harmful for human consumption if consumed after its shelf life. Okra parameters such as weight loss, firmness, titratable acid, total soluble solids, vitamin C/ascorbic acid content, and pH were used as inputs to these machine learning techniques. Support Vector Machine, Naïve Bayes, and Decision Tree each predicted the shelf life of okra with 100% accuracy, while Logistic Regression and K-Nearest Neighbour achieved 88.89% and 88.33% accuracy, respectively. These results show that machine learning techniques, especially Support Vector Machine, Naïve Bayes, and Decision Tree, can be effectively applied to the prediction of okra shelf life.
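The setup, measured quality parameters in, shelf-life class out, can be sketched with one of the classifiers the abstract names. The feature ranges and the labeling rule below are invented stand-ins for real measurements, so this only illustrates the workflow, not the paper's results.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
# Synthetic stand-ins for the measured parameters, one column each:
# [weight loss %, firmness, titratable acid, TSS, vitamin C, pH]
n = 150
X = rng.uniform([0, 1, 0.1, 2, 5, 5.5], [15, 10, 0.5, 8, 30, 7.0], size=(n, 6))
# Hypothetical labeling rule: high weight loss plus low firmness => expired (1)
y = ((X[:, 0] > 8) & (X[:, 1] < 5)).astype(int)

tree = DecisionTreeClassifier(random_state=0).fit(X, y)
train_acc = tree.score(X, y)
```

Swapping `DecisionTreeClassifier` for `SVC`, `GaussianNB`, `LogisticRegression`, or `KNeighborsClassifier` reproduces the comparison structure the paper reports, with held-out evaluation replacing the training-set score used here for brevity.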
Studies of terrorist activities have recently attracted a great amount of research interest. In this paper, we investigate the use of the Apriori algorithm on the Global Terrorism Database (GTD) for forensic investigation purposes. Recently, the Apriori algorithm, which could be considered a forensic tool, has been used to study terrorist activities and patterns across the world. As such, our motivation is to apply the Apriori algorithm to the GTD to study terrorist activities and the areas/states in Nigeria with high frequencies of terrorist activity. We observe that the most common method of terrorist attack in Nigeria is armed assault, that attacks in Nigeria are mostly successful, and that most terrorist attacks in Nigeria are not suicidal. This work can be used by forensic experts to assist law enforcement agencies in decision-making when handling terrorist attacks in Nigeria.
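A minimal Apriori pass over GTD-style incident records looks like this. The records below are invented toy transactions with GTD-flavoured attribute names, not real GTD rows, and the support threshold is arbitrary.

```python
from itertools import combinations

def apriori_frequent(transactions, min_support):
    """Minimal Apriori: return frequent itemsets (frozensets) with their
    support counts. Candidate (k+1)-itemsets are joined from frequent
    k-itemsets and pruned by minimum support at each level."""
    items = {frozenset([i]) for t in transactions for i in t}
    freq, level = {}, items
    while level:
        counts = {c: sum(1 for t in transactions if c <= t) for c in level}
        kept = {c: n for c, n in counts.items() if n >= min_support}
        freq.update(kept)
        ks = list(kept)
        level = {a | b for a, b in combinations(ks, 2) if len(a | b) == len(a) + 1}
    return freq

# Toy incident records in the spirit of GTD attributes (hypothetical data)
records = [
    frozenset(t) for t in [
        {"armed_assault", "successful", "not_suicide"},
        {"armed_assault", "successful", "not_suicide"},
        {"bombing", "successful", "not_suicide"},
        {"armed_assault", "unsuccessful", "not_suicide"},
    ]
]
freq = apriori_frequent(records, min_support=3)
```

Frequent itemsets like `{armed_assault, not_suicide}` are exactly the kind of co-occurrence patterns the paper mines from the Nigerian subset of the GTD.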
The volume of information being created, generated, and stored is huge. Without adequate knowledge of Information Retrieval (IR) methods, retrieving information would be cumbersome and frustrating. Studies have further revealed that IR methods are essential in information centres (for example, digital library environments) for the storage and retrieval of information. With more than one billion people accessing the Internet and millions of queries issued daily, modern Web search engines face a problem of daunting scale; the main problem associated with existing search engines is how to retrieve relevant information while avoiding irrelevant results. In this study, the existing library retrieval system was studied and its associated problems were analyzed. The concepts behind existing information retrieval models were studied, and the knowledge gained was used to design a digital library information retrieval system, which was successfully implemented using real-life data. Continuous evaluation of IR methods for an effective and efficient full-text retrieval system is recommended.
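The classic vector-space retrieval model behind many such systems can be shown in a few lines: weight terms by TF-IDF and rank documents by cosine similarity to the query. The mini-collection below is invented for illustration, and real systems use smoothed IDF, stemming, and inverted indexes rather than this brute-force scan.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """TF-IDF vectors (as sparse dicts) for a list of tokenized documents.
    Uses the basic idf = log(N / df); production systems add smoothing."""
    N = len(docs)
    df = Counter(term for d in docs for term in set(d))
    return [
        {t: (c / len(d)) * math.log(N / df[t]) for t, c in Counter(d).items()}
        for d in docs
    ]

def cosine(u, v):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical mini-collection; the query is vectorized alongside the documents
library = [
    "digital library information retrieval".split(),
    "search engine query ranking".split(),
    "okra shelf life prediction".split(),
]
query = "library retrieval".split()
*doc_vecs, qv = tfidf_vectors(library + [query])
best = max(range(len(doc_vecs)), key=lambda i: cosine(qv, doc_vecs[i]))
```

Ranking by similarity rather than boolean matching is precisely what lets such a system return relevant results first instead of everything that merely contains a query word.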
The complexity of an imperfect production center with respect to job-maximization strategies is shown to admit criteria under which an optimal solution exists. 2010 Mathematics Subject Classification: 60K25, 97M40.
Energy generation and consumption are central aspects of social life, because modern people's need for energy is a crucial ingredient of existence. Energy efficiency is therefore regarded as the most economical approach to providing safer and more affordable energy for both utilities and consumers, through enhanced energy security and reduced energy emissions. One problem facing cloud computing service providers is the sharply rising cost of energy, together with the efficiency and carbon emissions involved in running their Internet data centres (IDCs). To mitigate these issues, smart micro-grids have been found suitable for increasing the energy efficiency, sustainability, and reliability of electrical services for IDCs. This paper therefore presents ideas on how smart micro-grids can bring down the troubling energy costs and carbon emissions of IDCs while improving energy efficiency, in an effort to attain green cloud computing services from service providers. Specifically, we aim at achieving green information and communication technology (ICT) in cloud computing with respect to energy efficiency, cost-effectiveness, and carbon emission reduction from the cloud data centre's perspective.
Recently, a new type of Radio Frequency IDentification (RFID) system with mobile readers has been introduced. In such a system, it is more desirable for mobile readers to identify tags without a back-end server, and the system is thus frequently referred to as a serverless mobile RFID system. In this paper, we formalize a serverless mobile RFID system model and propose a new encryption-based system that preserves the privacy of both tags and readers in this model. In addition, we define a new adversary model for the system model and show the security of the proposed system. Through comparisons with the alternatives, we show that our proposed system provides stronger reader privacy and greater robustness against reader forgery attacks than its competitors.
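The serverless setting can be illustrated generically: the reader carries an access list of tag keys and identifies a responding tag by exhaustive search, with no back-end lookup. This is a bare hash-based challenge-response sketch of that idea, not the paper's protocol, which is encryption-based and additionally protects reader privacy against tag-side adversaries.

```python
import hashlib
import hmac
import os

def tag_response(key: bytes, reader_nonce: bytes, tag_nonce: bytes) -> bytes:
    """The tag proves knowledge of its secret key without transmitting its ID:
    a generic keyed-hash challenge-response, illustrative only."""
    return hmac.new(key, reader_nonce + tag_nonce, hashlib.sha256).digest()

def identify(access_list, reader_nonce, tag_nonce, response):
    """A serverless reader holds (tag_id, key) pairs and identifies the tag
    by recomputing the expected response for each key in its list."""
    for tag_id, key in access_list:
        expected = tag_response(key, reader_nonce, tag_nonce)
        if hmac.compare_digest(expected, response):
            return tag_id
    return None

# Hypothetical access list issued to this reader for three tags
access_list = [(f"tag{i}", os.urandom(16)) for i in range(3)]
rn, tn = os.urandom(8), os.urandom(8)
resp = tag_response(access_list[1][1], rn, tn)
```

Fresh nonces on both sides keep responses unlinkable across sessions; the cost is that identification time grows linearly with the reader's access list, a well-known trade-off in privacy-preserving RFID identification.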
In this paper it is shown that a new Hilbert-type integral inequality can be established by introducing two parameters m (m ∈ N) and λ (λ > 0), and the constant factor, expressed in terms of the Bernoulli numbers and π, is proved to be the best possible. Some important special results are then enumerated and, as applications, some equivalent forms are given.
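For context, inequalities of this type generalize the classical Hilbert integral inequality, which for conjugate exponents p, q > 1 with 1/p + 1/q = 1 and non-negative f, g reads:

```latex
\int_{0}^{\infty}\!\!\int_{0}^{\infty} \frac{f(x)\,g(y)}{x+y}\,dx\,dy
< \frac{\pi}{\sin(\pi/p)}
\left( \int_{0}^{\infty} f^{p}(x)\,dx \right)^{1/p}
\left( \int_{0}^{\infty} g^{q}(y)\,dy \right)^{1/q},
```

where the constant factor π/sin(π/p) is the best possible (for p = q = 2 it reduces to π). The paper's two-parameter inequality and its Bernoulli-number constant belong to this family of refinements; its exact statement is given in the paper itself.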
Funding (serverless mobile RFID paper): Supported in part by the MKE (The Ministry of Knowledge Economy), Korea, under the ITRC (Information Technology Research Center) support program (No. NIPA-2012-H0301-12-4004) supervised by the NIPA (National IT Industry Promotion Agency); supported in part by the US National Science Foundation (NSF) CREST program (No. HRD-0833184) and the US Army Research Office (ARO) (No. W911NF-0810510).
Funding (Hilbert-type integral inequality paper): Supported by the Scientific Research Fund of Hunan Provincial Education Department (Grant No. 09C789).