Proceedings of the International Multiconference on Computer Science and Information Technology
Volume 5
October 18–20, 2010. Wisła, Poland
ISSN 1896-7094
ISBN 978-83-60810-27-9
IEEE Catalog Number CFP1064E-CDR
5th International Symposium Advances in Artificial Intelligence and Applications
- A Breast Cancer Classifier based on a Combination of Case-Based Reasoning and Ontology Approach
192 Case-Based Reasoning, Case-Based Reasoning Frameworks, CBR, CBR Frameworks, jCOLIBRI, myCBR, Breast Cancer Essam AbdRabou, AbdEl-Badeeh Salem, pages 3 – 10. Abstract. Breast cancer is the second most common form of cancer amongst females and the fifth most common cause of cancer deaths worldwide. For this particular type of malignancy, early detection is the best form of cure, and hence timely and accurate diagnosis of the tumor is extremely vital. Extensive research has been carried out on automating the critical diagnosis procedure, and various machine learning algorithms have been developed to aid physicians in optimizing the decision task effectively. In this research, we present a classification model based on a combination of ontology and case-based reasoning that classifies breast cancer tumors as either malignant or benign. The classification system makes use of clinical data. Two ontology-based object-oriented CBR frameworks are used: jCOLIBRI and myCBR. A breast cancer diagnostic prototype is built; during prototyping, we examine the use and functionality of the two frameworks. - Using data mining for assessing diagnosis of breast cancer
158 Breast Cancer, Classification Support Vector Machine (SVM), Decision Tree, Receiver Operating Characteristic Curve (ROC), Tree Boost, Tree Forest, Gain Medhat Mohamed Ahmed Abdelaal, Muhamed Wael Farouq, Hala Abou Sena, Abdel-Badeeh Mohamed Salem, pages 11 – 17. Abstract. The capability of SVM, Tree Boost and Tree Forest classifiers in analyzing the DDSM dataset was investigated for the extraction of the mammographic mass features, along with age, that discriminate true and false cases. In the present study, the SVM technique shows promising results for increasing the diagnostic accuracy of classifying the cases, witnessed by the largest area under the ROC curve (area under the empirical ROC curve = 0.79768 and area under the binomial ROC curve = 0.85323), compared to an empirical and binomial ROC of 0.57575 and 0.58548 respectively for Tree Forest, while the lowest values (0.53452 and 0.53882) were obtained by Tree Boost. These results are confirmed by an SVM average gain of 1.7323, a Tree Forest average gain of 1.5576 and a Tree Boost average gain of 1.5718. - Advanced scale-space invariant, low detailed feature recognition from images - car brand recognition
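The empirical area under the ROC curve used by Abdelaal et al. above to compare classifiers can be computed directly from classifier scores via the rank-based (Mann–Whitney) formulation. A minimal sketch with made-up scores, not the authors' implementation:

```python
def empirical_auc(scores, labels):
    """Empirical area under the ROC curve: the fraction of
    (positive, negative) pairs ranked correctly by the score,
    counting ties as half a win (Mann-Whitney U / (n_pos * n_neg))."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfect ranking gives 1.0, a reversed ranking 0.0, and a tie-only score 0.5, which matches the interpretation of the areas quoted in the abstract.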
146 SURF, Keypoint, car brand, recognition Štefan Badura, Stanislav Foltán, pages 19 – 23. Abstract. This paper presents the analysis of a model for car brand recognition. The method used is an invariant keypoint detector–descriptor. The input to the method is a set of images obtained from a real environment. Classifying a car according to its brand is not a trivial task: it is difficult to recognize objects when they appear at different scales, rotated, or with low contrast, or when a high level of detail must be taken into account. Our work is intended as part of an intelligent traffic system that collects statistics about the various cars passing a given area. We present a system for car brand recognition that uses a scale-space invariant keypoint detector and descriptor (SURF, Speeded-Up Robust Features) for this purpose. - Evaluation of Clustering Algorithms for Polish Word Sense Disambiguation
167 word sense disambiguation, clustering, evaluation of clustering Bartosz Broda, Wojciech Mazur, pages 25 – 32. Abstract. Word Sense Disambiguation in text is still a difficult problem, as the best supervised methods require laborious and costly manual preparation of training data. This work therefore focuses on the evaluation of selected clustering algorithms in the task of Word Sense Disambiguation for Polish. We tested six clustering algorithms (K-Means, K-Medoids, hierarchical agglomerative clustering, hierarchical divisive clustering, Growing Hierarchical Self-Organising Maps, and graph-partitioning based clustering) and five weighting schemes. For the agglomerative and divisive algorithms, 13 criterion functions were tested. The results are interesting because the best clustering algorithms come close, in terms of cluster purity, to the precision of a supervised algorithm on the same dataset using the same features. - Generation of First-Order Expressions from a Broad Coverage HPSG Grammar
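Cluster purity, the evaluation measure Broda and Mazur compare against supervised precision above, assigns each cluster its majority gold sense and counts the fraction of items that match. A minimal sketch with hypothetical cluster and sense labels:

```python
from collections import Counter

def purity(clusters, gold):
    """Cluster purity: each cluster votes for its most frequent gold
    label; purity is the fraction of all items covered by those
    majority labels. `clusters` and `gold` are parallel lists."""
    by_cluster = {}
    for c, g in zip(clusters, gold):
        by_cluster.setdefault(c, []).append(g)
    correct = sum(Counter(g).most_common(1)[0][1]
                  for g in by_cluster.values())
    return correct / len(gold)
```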
86 recognizing textual entailment; logical inference; HPSG-based text analysis; first-order logic Ravi Coote, Andreas Wotzlaw, pages 33 – 36. Abstract. This paper describes an application for computing first-order semantic representations of English texts. It is based on a combination of hybrid shallow-deep components arranged within the middleware framework Heart of Gold. The shallow-deep semantic analysis employs Robust Minimal Recursion Semantics (RMRS) as a common semantic underspecification formalism for natural language processing components. To compute first-order representations of the input text efficiently, the intermediate RMRS results of the shallow-deep analysis are transformed into the dominance constraints formalism and resolved by the underspecification resolver UTool. First-order expressions can serve as a formal knowledge representation of natural text and can thus be utilized in knowledge engineering or reasoning. At the end of the paper we describe their application to recognizing textual entailment. - PSO based modeling of Takagi-Sugeno fuzzy motion controller for dynamic object tracking with mobile platform
186 Sensor-motor control, Fuzzy control, mobile robot, particle swarm optimization Meenakshi Gupta, Laxmidhar Behera, Venkatesh K.S., pages 37 – 43. Abstract. Modeling an optimized motion controller is one of the interesting problems in behavior-based mobile robotics, since behavior-based mobile robots need an ideal controller to generate the right actions. In this paper, a Takagi-Sugeno fuzzy motion controller has been designed, via nonlinear identification, to track the position of a moving object with a mobile platform. The parameters of the controller are optimized with Particle Swarm Optimization (PSO) and a stochastic approximation method. A grey predictor has also been developed to predict the position of the object when it is beyond the robot's field of view. The combined model has been tested on a Pioneer robot that tracks a triangular red box using a CCD camera and a laser sensor. - Hierarchical Object Categorization with Automatic Feature Selection
115 Object Categorization, Object Class Hierarchy, Automatic Feature Selection Md. Saiful Islam, Andrzej Sluzek, pages 45 – 51. Abstract. We introduce a hierarchical object categorization method with automatic feature selection. A hierarchy obtained from natural similarities and properties is learnt with automatically selected features at different levels. Categorization is a top-down process yielding multiple labels for a test object. We tested our method and compared the experimental results with those of a non-hierarchical method, finding that the hierarchical method improves recognition performance at the level of basic classes and reduces error at a higher level. This makes the proposed method suitable for different applications of computer vision, including object categorization, semantic image retrieval, and automatic image annotation. - Selecting the best strategy in a software certification process
200 decision making, software certification, pairwise comparisons Waldemar Koczkodaj, Vova Babiy, Agnieszka D. Bogobowicz, Ryszard Janicki, Alan Wassyng, pages 53 – 58. Abstract. In this paper, we propose the use of the pairwise comparisons (PC) method for the selection of strategies in software certification. The method can also be used to rank alternative software certification strategies. The inconsistency analysis provided by the PC method improves the accuracy of the decision making. Some current methods of software certification are presented, together with how they could be modified by the proposed method. Areas of potential future research are discussed in order to make the software certification process more feasible and acceptable to industry. - Extrapolation of Non-Deterministic Processes Based on Conditional Relations
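The inconsistency analysis mentioned in the Koczkodaj et al. abstract is, in the pairwise-comparisons literature, often based on Koczkodaj's triad-based index; the exact index and matrix used by the authors may differ, so this is only an illustrative sketch:

```python
from itertools import combinations

def koczkodaj_inconsistency(M):
    """Koczkodaj-style inconsistency index of a positive, reciprocal
    pairwise-comparison matrix M. For each triad (i, j, k) with
    a = M[i][j], b = M[j][k], c = M[i][k], consistency means c = a*b;
    the local inconsistency is min(|1 - c/(a*b)|, |1 - a*b/c|), and
    the global index is the maximum over all triads."""
    worst = 0.0
    for i, j, k in combinations(range(len(M)), 3):
        a, b, c = M[i][j], M[j][k], M[i][k]
        worst = max(worst, min(abs(1 - c / (a * b)),
                               abs(1 - a * b / c)))
    return worst
```

A fully consistent matrix scores 0; larger values flag judgments that should be revisited before ranking certification strategies.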
201 process extrapolation, non-deterministic processes, conditional relations, relative logic Juliusz Kulikowski, pages 59 – 65. Abstract. The problem of extrapolating a large class of processes, forecasting their future states based on their occurrence in the past, is considered. Discrete-time, discrete-value processes are presented as instances of relations subject to the rules of general relation algebra. The notions of relative relations, parametric relations and non-deterministic relations are introduced. For the assessment of extrapolated process states, relative credibility levels of process trajectories are used. The variants of direct one-step, indirect one-step and direct multi-step process extrapolation are described. The method is illustrated by numerical examples. - Reasoning in RDF graphic formal system with quantifiers
114 RDF, RDFS, Associative network, Mapping Alena Lukasova, Marek Vajgl, Martin Žáček, pages 67 – 72. Abstract. Both associative networks and the RDF model (here we consider especially its graph version) belong to formal systems of knowledge representation based on the concept-oriented paradigm. It is therefore natural to treat the properties of both as common properties of these systems. The article shows the possibility of using the universally and existentially quantified statements introduced earlier for associative networks within the RDF graphic system as well, and of defining an RDF formal system with extended syntax and semantics that can use the inference rules of associative networks. As an example, the solution of a logical puzzle is presented. - Coevolutionary Algorithm For Rule Induction
178 data mining, rule induction, evolutionary algorithms, image annotation Pawel Myszkowski, pages 73 – 79. Abstract. This paper describes our latest research results in the field of evolutionary algorithms for rule extraction applied to classification (and image annotation). We focus on the data mining classification task and propose an evolutionary algorithm for rule extraction. The presented approach is based on a classical binary genetic algorithm with a representation of 'if-then' rules, and we propose two specialized genetic operators. We show that search-space reduction techniques make it possible to obtain solutions comparable to others from the literature. To demonstrate our method's ability to discover rule sets with a high F-score, we tested the approach on four benchmark datasets and the ImageCLEF competition dataset. - Evolutionary Algorithm in Forex trade strategy generation
143 financial data mining, evolutionary algorithm, genetic programming, decision tree induction, trade strategy, Forex Pawel Myszkowski, Adam Bicz, pages 81 – 88. Abstract. This paper shows an evolutionary algorithm applied to generating profitable strategies for trading futures contracts on the foreign exchange market (Forex). The strategy model in this approach is based on two decision trees, responsible for the decisions to open long or short positions on the Euro/US Dollar currency pair. The trees consider only technical analysis indicators, connected by logical operators, to identify the border values of these indicators for taking profitable decisions. We tested the efficiency of the presented approach on learning and test time-frames of various characteristics. - Emotion-based Image Retrieval—an Artificial Neural Network Approach
176 artificial neural network, feature selection, similarity measures, emotion recognition, image retrieval Katarzyna Agnieszka Olkiewicz, Urszula Markowska-Kaczmar, pages 89 – 96. Abstract. Nowadays more and more attention is paid to retrieving images on the basis of their mood or emotional content. The purpose of the study described in this paper was to assess the possibility of classifying image mood on the basis of visual features only, and then to check how much image retrieval can be improved by taking into account both the visual and the emotional image content. A Multilayer Perceptron (MLP) neural network was used as the classifier. The performance of the neural network (correct emotion assignment) and the accuracy of the retrieval results were assessed independently, with attention to the various factors that can influence performance. In the subsequent experiments the neural network answers were applied to annotate images with emotional descriptors, and both kinds of descriptors, visual and emotional, were used in image retrieval. In this part of the experiments, dedicated to overall system performance, the system was tested against many factors: various query images, image databases and learning sets; finally we evaluated the difference in performance introduced by the emotional descriptors. The results are discussed and conclusions for future work are drawn. - Automatic Visual Class Formation using Image Fragment Matching
52 low-level vision, semantic gap, visual class formation Mariusz Paradowski, Andrzej Śluzek, pages 97 – 104. Abstract. Low-level vision approaches, such as local image features, are an important component of bottom-up machine vision solutions. They can effectively identify local visual similarities between fragments of underlying physical objects. Such approaches are used here to build a learning system capable of forming meaningful visual classes out of unlabeled collections of images. By capturing similar fragments of images, the underlying physical objects are extracted and their visual appearances generalized. This leads to the formation of visual classes, which typically represent specific underlying physical objects in the form of automatically extracted multiple template images. - Learning taxonomic relations from a set of text documents
76 taxonomy learning, ontology, clustering, keyphrase extraction Mari-Sanna Paukkeri, Alberto Perez Garcia-Plaza, Sini Pessala, Timo Honkela, pages 105 – 112. Abstract. This paper presents a methodology for learning taxonomic relations from a set of documents, each of which explains one of the concepts. Three feature extraction approaches with varying degrees of language independence are compared in this study. The first feature extraction scheme is a language-independent approach based on statistical keyphrase extraction; the second is based on a combination of rule-based stemming and fuzzy logic-based feature weighting and selection; the third is the traditional tf-idf weighting scheme with commonly used rule-based stemming. The concept hierarchy is obtained by combining Self-Organizing Map clustering with agglomerative hierarchical clustering. Experiments are conducted for both English and Finnish. The results show that concept hierarchies can be constructed automatically using statistical methods alone, without heavy language-specific preprocessing. - Metric properties of populations in artificial immune systems
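The third feature-extraction scheme named by Paukkeri et al. above, plain tf-idf weighting, is compact enough to sketch; this minimal version (raw term frequency, natural-log idf, no stemming) is one common variant, not necessarily the exact formula the authors used:

```python
import math
from collections import Counter

def tfidf(docs):
    """tf-idf weighting: term frequency in a document times the log
    of the inverse document frequency across the collection.
    `docs` is a list of token lists; returns one {term: weight}
    dict per document."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))
    weighted = []
    for doc in docs:
        tf = Counter(doc)
        weighted.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return weighted
```

Terms occurring in every document receive weight 0, which is why tf-idf highlights terms that distinguish one concept document from the others.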
174 Genetic algorithm, binary coding, Hadamard representation, artificial immune system Zbigniew Pliszka, Olgierd Unold, pages 113 – 119. Abstract. A Hadamard representation, an alternative to the binary representation, is considered in this study. It operates on the numbers $+1$ and $-1$. Several properties of the representation so defined are pointed out, and properties of the immune system are expressed in terms of this representation. - The development features of the face recognition system
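One convenient metric property of the ±1 alphabet discussed by Pliszka and Unold is that the Hamming distance between two chromosomes can be read off their dot product. A small illustration; the 0 → +1, 1 → −1 orientation of the mapping is a convention chosen here, not taken from the paper:

```python
def to_hadamard(bits):
    """Map a binary vector over {0, 1} to the {+1, -1} alphabet
    (0 -> +1, 1 -> -1; an illustrative convention)."""
    return [1 - 2 * b for b in bits]

def hamming(u, v):
    """Hamming distance recovered from the dot product of two
    {+1, -1} vectors of length n: <u, v> = n - 2 * d_H(u, v)."""
    dot = sum(a * b for a, b in zip(u, v))
    return (len(u) - dot) // 2
```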
196 support vector machines, face detection, pattern recognition Rauf Sadykhov, Igor Frolov, pages 121 – 128. Abstract. Nowadays personal identification is a very important issue. It has a wide range of applications in different spheres, such as video surveillance security systems, document control and forensics. In this paper we consider the most significant aspects of a face identification system based on support vector machines. The presented system is intended to process low-quality images and photos with different facial expressions. Our goal is to develop face recognition techniques and create a system for face identification. - Multiscale Segmentation Based On Mode-Shift Clustering
100 image segmentation, image retrieval, clustering, multiscale representation Wojciech Tarnawski, Lukasz Miroslaw, Roman Pawlikowski, Krzysztof Ociepa, pages 129 – 133. Abstract. We present a novel segmentation technique that effectively segments natural images. The method is designed for the purpose of image retrieval and follows the principle of clustering the regions visible in the image. The concept is based on a multiscale approach in which the image undergoes a number of diffusions. The algorithm has been visually compared with a reference segmentation. - Relational database as a source of ontology creation
16 relational database, ontology, mapping Zdenka Telnarova, pages 135 – 139. Abstract. The article deals with mapping relational data into an ontology, i.e. filling an ontology with data from relational databases. It describes the issue of mapping database schemas (particularly relational models) to common data models expressed in the form of an ontology. Generous room is given to methods of acquiring an ontology from relational databases: rules are specified, and simple examples are used to demonstrate their use in mapping individual concepts of a relational data model into ontology concepts. - Emotional Speech Analysis using Artificial Neural Networks
120 emotional speech, classification, neural networks, MLNN, SOM Jana Tuckova, Martin Sramka, pages 141 – 147. Abstract. In this text we deal with the problem of classifying speech emotion. Problems of speech processing are addressed through the use of artificial neural networks (ANN). The results can be used in two research projects: prosody modelling and the analysis of disordered speech. The first ANN topology discussed is the multilayer neural network (MLNN) with the BPG learning algorithm; the supervised SOM (SSOM) is the second. Our aim is not only to apply knowledge from phonetics and ANNs, but also to attempt to classify speech signals described by musical theory. Finally, one solution to this problem is given, supplemented with a proof. - Usage of reflection in .NET to inference of knowledge base
151 .NET, reflection, knowledge base, description logic Marek Vajgl, pages 149 – 154. Abstract. This document describes how information generated by an integrated development environment (namely Visual Studio 2008) can be used to generate and use a knowledge base. The main aim is to explain how an existing software project can be extended to achieve knowledge representation, and how already created and implemented data types can be used to build knowledge bases through a modern language's development environment and runtime behavior. The article is oriented toward the Description Logic formal system, but the approach can be applied to any formal deduction mechanism. - On the evaluation of the linguistic summarization of temporally focused time series using a measure of informativeness
164 linguistic summaries, computing with words, fuzzy sets, time series analysis Anna Wilbik, Janusz Kacprzyk, pages 155 – 162. Abstract. We extend our previous work on deriving linguistic summaries of time series using a fuzzy logic approach to linguistic summarization. We proceed towards a multicriteria analysis of summaries by adopting, as a quality criterion, Yager's measure of informativeness of classic and temporal protoforms, which combines in a natural way the measures of truth, focus and specificity, to obtain a more advanced evaluation of summaries. The use of the informativeness measure for the multicriteria evaluation of linguistic summaries of time series seems to be an effective and efficient approach, yet simple enough for practical applications. Results on the summarization of quotations of an investment (mutual) fund are very encouraging.
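The truth measure that Wilbik and Kacprzyk's informativeness criterion builds on is, in the classic Zadeh calculus of linguistically quantified propositions, the degree to which "Most y's are P" holds. A minimal sketch; the piecewise-linear "most" quantifier below is an illustrative choice, not the one from the paper:

```python
def truth_most(memberships,
               most=lambda r: min(1.0, max(0.0, 2.0 * r - 0.6))):
    """Degree of truth of the protoform 'Most y's are P':
    T = mu_Most(mean of mu_P(y_i)), where `memberships` are the
    fuzzy membership degrees mu_P(y_i) of the data items and
    `most` is a non-decreasing quantifier on [0, 1]."""
    r = sum(memberships) / len(memberships)
    return most(r)
```

With all memberships equal to 1 the summary is fully true; with half of them satisfied, this particular quantifier yields a truth degree of 0.4.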
Workshop on Agent Based Computing: from Model to Implementation VII
- Java-based Mobile Agent Platforms for Wireless Sensor Networks
175 Mobile agent platforms, wireless sensor networks, Java Sun SPOT, finite state machines, intentional agents Francesco Aiello, Alessio Carbone, Giancarlo Fortino, Stefano Galzarano, pages 165 – 172. Abstract. This paper proposes an overview and comparison of mobile agent platforms for the development of wireless sensor network applications. In particular, the architecture, programming model and basic performance of two Java-based agent platforms, the Mobile Agent Platform for Sun SPOT (MAPS) and Agent Factory Micro Edition (AFME), are discussed and evaluated. Finally, a simple yet effective case study concerning a mobile agent-based monitoring system for remote sensing and aggregation is proposed. The case study is developed both in MAPS and in AFME, allowing an analysis of the differences between their programming models. - BeesyBees—Efficient and Reliable Execution of Service-based Workflow Applications for BeesyCluster using Distributed Agents
148 execution of service-based workflow applications by agents, agent based simulation, agent negotiation and cooperation, efficient and reliable execution by agents Paweł Czarnul, Mariusz Matuszek, Michał Wójcik, Karol Zalewski, pages 173 – 180. Abstract. The paper presents an architecture and implementation that allow distributed execution of workflow applications in BeesyCluster using agents. BeesyCluster is a middleware that allows users to access distributed resources as well as publish applications as services, define service costs, grant access to other users and consume services published by others. Workflows created in the BeesyCluster middleware are exported to BPEL and executed by agents in a distributed environment. As a proof of concept, we have implemented a real workflow for parallel processing of digital images and tested it in a real cluster-based environment. Firstly, we demonstrate that engaging several agents for distributed execution is more efficient than a centralized approach, although negotiation time increases when too many agents are involved. Secondly, we demonstrate that execution in the proposed environment is reliable even in the presence of failures: if a service fails, a task agent picks a new equivalent service at runtime, and if a task agent fails, another agent is spawned and takes over its responsibilities. The communication between the middleware, agents and services is encrypted. - A Technique based on Recursive Hierarchical State Machines for Application-level Capture of Agent Execution State
121 Agent state modeling and capture, translation techniques, recursive hierarchical state machines, JADE Giancarlo Fortino, Francesco Rango, pages 181 – 188. Abstract. The capture of the execution state of agents in agent-based and multi-agent systems is a system feature needed to enable agent checkpointing, persistency and strong mobility, which are basic mechanisms supporting more complex distributed policies and algorithms for fault tolerance, load balancing and transparent migration. Unfortunately, the majority of the currently available agent platforms, particularly those based on the standard Java Virtual Machine, do not provide this important feature at the system level. Several system-level and application-level approaches have been proposed to date for capturing agent execution state. Although system-level approaches are effective, they modify the underlying virtual machine, endangering compatibility. Conversely, application-level approaches do not modify any system layer, but they impose sophisticated agent programming models and/or agent converters that allow only a coarse-grained capture of the agent execution state. In this paper, we propose an application-level technique that allows for a programmable-grain capture of the execution state of agents, ranging from per-instruction to statement-driven state capture. The technique is based on the Distilled StateCharts Star (DSC*) formalism, which provides an agent-oriented type of recursive hierarchical state machine. According to the proposed technique, a single-threaded agent program can be translated into a DSC* machine while preserving its original semantics. Although the technique can be applied to any agent program written in an imperative-style programming language, it is currently implemented in Java and integrated into the JADE framework, JADE being one of the most widely used agent platforms.
In particular, agents specified in a generic Java-like agent language are translated into JADE agents according to the JADE DSCStarBehaviour framework. A simple yet effective example is used to illustrate the proposed technique. - Reorganization in Massive Multiagent Systems
22 modeling organizations, simulation, virtual society Henry Hexmoor, pages 189 – 195. Abstract. We have explored principled mechanisms for converting a hierarchical organization to an edge-type organization. Beyond structural differences, organizations differ in their information-flow networks and information-sharing strategies. Many other types of organizational adaptation are possible and require much further research, which we anticipate as future work. This article lays the foundation for automatic organizational adaptation. - Effectiveness of Solving Traveling Salesman Problem Using Ant Colony Optimization on Distributed Multi-Agent Middleware
169 distributed algorithm, ant colony optimization, traveling salesman problem Sorin Ilie, Costin Badica, pages 197 – 203. Abstract. Recently we set ourselves the goal of investigating new, truly distributed forms of Ant Colony Optimization. We proposed a new distributed approach to Ant Colony Optimization (ACO) algorithms called Ant Colony Optimization on a Distributed Architecture (ACODA). ACODA was designed to allow efficient implementation of ACO algorithms on state-of-the-art distributed multi-agent middleware. In this paper we present experimental results that support the feasibility of ACODA by considering a distributed version of the ACS system. In particular, we show the effectiveness of this approach for solving the Traveling Salesman Problem by comparing experimental results of ACODA versions of distributed ACS with distributed random searches on a high-speed cluster network. - Selected Security Aspects of Agent-based Computing
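At the heart of the ACO/ACS family that Ilie and Badica distribute is the probabilistic next-city rule: an ant moves to city j with probability proportional to pheromone raised to alpha times heuristic desirability raised to beta. A minimal single-node sketch of that rule (parameter values and matrix layout are illustrative, not ACODA's):

```python
import random

def next_city(current, unvisited, tau, eta, alpha=1.0, beta=2.0,
              rng=random):
    """ACO transition rule for the TSP: choose j from `unvisited`
    with probability proportional to tau[current][j]**alpha *
    eta[current][j]**beta, where tau is the pheromone matrix and
    eta the heuristic desirability (e.g. inverse distance)."""
    weights = [tau[current][j] ** alpha * eta[current][j] ** beta
               for j in unvisited]
    r = rng.random() * sum(weights)
    for j, w in zip(unvisited, weights):
        r -= w
        if r <= 0:
            return j
    return unvisited[-1]  # numerical-safety fallback
```

In a full ACS run this rule is iterated to build complete tours, after which pheromone is evaporated and reinforced along the best tours; ACODA's contribution is executing those colonies on distributed multi-agent middleware.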
145 Agent-based computing, privacy, confidentiality, distributed computing, agents Mariusz Matuszek, Piotr Szpryngier, pages 205 – 208. Abstract. The paper presents selected security aspects related to the confidentiality, privacy, trust and authenticity issues of distributed, agent-based computing. Particular attention is paid to the authenticity of, and trust in, migrating mobile agent executables, an agent's trust in its runtime environment, inter-agent communication, and the security of an agent's payload. Selected attack vectors against agent-based computing are described, and risk mitigation methods and strategies based on the presented cryptographic measures are proposed and discussed. In summary, the expected effectiveness of the proposed countermeasures is described. - Agent-Oriented Modelling for Simulation of Complex Environments
177 agent, agent-oriented modelling, agent-based simulation, military simulations Inna Shvartsman, Kuldar Taveter, Merle Parmak, Merik Meriste, pages 209 – 216. Abstract. Developing realistic scenarios for military simulations is not a trivial task. This article addresses the application of agent-oriented modelling to composing scenarios for simulating distributed problem domains. The article presents an overview of agent-oriented modelling and describes the problem domain of military operations in an urban environment. After that, the simulation scenario of an urban operation is designed by means of agent-oriented modelling. Finally, the article explores a platform for the possible execution of agent-based simulations. - Improving Fault-Tolerance of Distributed Multi-Agent Systems with Mobile Network-Management Agents
116 fault-tolerance, multi-agent systems, agent mobility, distributed computing Dejan Mitrović, Zoran Budimac, Mirjana Ivanović, Milan Vidaković, pages 217 – 222. Abstract. Large-scale agent-based software solutions need to assure the constant delivery of services to end-users, regardless of underlying software or hardware failures. Fault-tolerance of multi-agent systems is therefore an important issue. We present an easy and flexible way of introducing fault-tolerance to existing agent frameworks. The approach is based on two new types of mobile agents that manage the efficient construction and maintenance of fault-tolerant multi-agent system networks, and implement a robust agent tracking technique. - Argumentative agents
212 argumentation, agents Francesca Toni, pages 223 – 229. Abstract. Argumentation, initially studied in philosophy and law, has been researched extensively in computing in the last decade, especially for inference, decision making and decision support, dialogue, and negotiation. This paper focuses on the use of argumentation to support intelligent agents in multi-agent systems, in general and within the ARGUGRID project and the Agreement Technology action. In particular, the paper reviews how argumentation can help agents take decisions, either in isolation (by evaluating the pros and cons of conflicting decisions) or in an open and dynamic environment (by assessing the validity of information they become aware of). It also illustrates how argumentation can support negotiation and conflict resolution amongst agents (by allowing them to exchange information and fill gaps in their incomplete beliefs). Finally, the paper discusses how arguments can improve the assessment of the trustworthiness of agents in contract-regulated interactions (by supporting predictions of these agents' future behaviour). - An agent based planner for including network QoS in scientific workflows
97 Quality of Service, Semantic web, Advanced Network, Scientific Workflow, e-Science Zhiming Zhao, Paola Grosso, Ralph Koning, Jeroen van der Ham, Cees de Laat, pages 231 – 238. Abstract. Advanced network infrastructure plays an important role in the e-Science environment, providing high quality connections between largely distributed data sensors and computing and storage elements. However, the quality of network services has so far rarely been considered in composing and executing scientific workflows. Currently, scientific applications tune the execution quality of workflows by selecting only optimal software services and computing resources, neglecting network resources. One reason is that IP-based networks give workflow systems few possibilities to manage service quality, limiting or preventing bandwidth reservation and network path selection. Nonetheless, we see a strong need from both scientific applications and network operators to include network quality management in workflow systems, and novel network infrastructures open up new possibilities for network tuning at the application level. In this position paper, we discuss our vision on this issue and propose an agent-based solution that includes network resources in the loop of workflow composition, scheduling and execution when advanced network services are available. We present the first prototype of our approach in the context of the CineGrid project.
International Workshop on Advances in Business ICT
- A method for consolidating application landscapes during the post-merger-integration phase
59 Mergers & Acquisitions, Post-Merger-Integration, Capability, Application Landscape, IT Integration Andreas Freitag, Florian Matthes, Christopher Schulz, pages 241 – 248. Show abstract Abstract. Mergers and acquisitions (M&A) have become frequent events in today’s economy. They are complex strategic transformation projects affecting both business and information technology (IT). Still, empirical studies reveal high failure rates regarding the achievement of previously defined objectives. Given the role and importance of IT in modern business models, the consolidation of application landscapes and technical infrastructure represents a challenging exercise performed during the post-merger integration. Unfortunately, few artifacts in the form of tangible concepts, models, and methods exist to facilitate the endeavor of merging IT. After providing a broad overview of relevant literature in the area of M&A from a business and IT perspective, this article presents a method artifact for consolidating application landscapes in the course of a merger. It originates from the approach applied during a case study in the telecommunication industry, where the application landscapes of two formerly independent lines of business were merged. - Hybridization of Temporal Knowledge for Economic Environment Analysis
10 integration, hybridization, temporal knowledge, temporal intelligent system Maria Antonina Mach, pages 249 – 254. Show abstract Abstract. The paper is devoted to the concept of hybridization of temporal knowledge in an intelligent reasoning system. Hybridization is a special kind of integration in which heterogeneous knowledge is transformed into a uniform form, while information on the temporal characteristics and core features of the knowledge being transformed is preserved and may also be used for reasoning. It may be said that integration serves as a means of hybridization, but only if at least two conditions are fulfilled: the integration is source-oriented, and the knowledge sources are kept autonomous. In the paper we present in detail the concept of integration and the conditions that must be fulfilled if integration is to serve as the means of temporal hybridization. We show an application area for hybridized knowledge, namely the analysis of the economic environment of an enterprise, and present an example of a hybridization procedure. - Independent Operator of Measurements as a Virtual Enterprise on the Energy Market
56 virtual enterprise on Energy Market, independent operator of measurements, energy power and energy trade, ICT tools for energy market Bożena Ewa Matusiak, pages 255 – 258. Show abstract Abstract. The Independent Operator of Measurements (IOM) is a new energy market actor that implements remote access to metering data, storing, aggregating and delivering the data in real time to all market participants as an independent service enterprise placed “above the market”. In that sense the IOM, given an adequate ICT infrastructure, becomes a virtual company with real effect: thanks to its distributed and virtual activities on the energy market, it creates synergy across the whole market and enables the necessary, accurate real-time decisions of other market participants. The IOM is a virtual unit (not necessarily constrained to any single location) from the point of view of access to the market. The energy market (EM) in Poland, where the IOM will be set up, is moving closer to full liberalization, energy efficiency and enhanced competitiveness, and will contribute to the smart grid potential. This article presents the ICT issues and the creation of new business models for the above-described virtual enterprise, which manages measurement data on the EM in Poland. - A Two-level algorithm of time series change detection based on a unique deviations similarity method
69 Tomasz Pełech-Pilichowski, Jan T. Duda, pages 259 – 263. Show abstract Abstract. - STRATEGOS: A case-based approach to strategy making in SME
179 case-based reasoning, strategy management Jerzy Surma, pages 265 – 269. Show abstract Abstract. Making strategic decisions in an enterprise is one of the most difficult problems of management, a result of the unstructured character of such decisions, which are made under conditions of high uncertainty. This issue is particularly important in the case of Small and Medium Enterprises (SME), where Chief Executive Officers (CEO) lack support in this area and most often act intuitively, convinced that their business is unique. Recent research on decision-making points out the substantial influence of analogies and self-experience in strategic problems. Accordingly, we propose to use case-based reasoning to build STRATEGOS, a strategic decision support system. The system was then verified in a survey of dozens of CEOs from SMEs. The results of the survey are promising and show a remarkable correspondence between the proposed solution and the expectations and strategic behavior of CEOs. - Support of the E-business by business intelligence tools and data quality improvement
66 business intelligence, e-business, strategic management, business intelligence tools, data quality, competitive intelligence Milena Tvrdíková, Ondřej Koubek, pages 271 – 278. Show abstract Abstract. Categories of e-business tools are briefly described in the paper. The explanation of the categories' features is based on the models of e-business behavior proposed by Timmers. The next part of the article discusses the tools and activities that create the conditions for successful implementation of e-business within a company or organization. We focus on business intelligence tools, strategic management, forms of storing and improving data quality, and competitive intelligence. The conclusion includes not only general recommendations, but also a simple table that applies the various parts of the article to real examples of companies and organizations of different sizes. The aim of this paper is to evaluate which opportunities firms and organizations have and which are best suited to them.
Computer Aspects of Numerical Algorithms
- The experimental analysis of GMRES convergence for solution of Markov chains
184 GMRES, projection method, preconditioning, WZ, IWZ Beata Bylina, Jarosław Bylina, pages 281 – 288. Show abstract Abstract. The authors consider the impact of the structure of the matrix on the convergence behaviour of the GMRES projection method for solving large sparse linear equation systems resulting from Markov chain modeling. Studying experimental results, we investigate the number of steps and the rate of convergence of the GMRES method and of IWZ preconditioning for the GMRES method. The motivation is to better understand the convergence characteristics of Krylov subspace methods and the relationship between the Markov model, the nonzero structure of the coefficient matrix associated with this model, and the convergence of the preconditioned GMRES method. - On the Numerical Analysis of Stochastic Lotka-Volterra Models
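As a minimal illustration of the setup studied in the GMRES paper above (Bylina & Bylina), the sketch below solves a small sparse nonsymmetric system with preconditioned GMRES. It is an assumption-laden stand-in: SciPy's incomplete-LU factorization (`spilu`) plays the role of the paper's IWZ preconditioner, and the matrix is a generic diagonally dominant tridiagonal one, not a Markov-chain coefficient matrix.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Small sparse, nonsymmetric, diagonally dominant system (illustrative only).
n = 200
A = sp.diags([-1.0, 2.5, -1.2], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete LU factorization used as a preconditioner (a stand-in for IWZ).
ilu = spla.spilu(A)
M = spla.LinearOperator((n, n), matvec=ilu.solve)

x, info = spla.gmres(A, b, M=M)
# info == 0 signals convergence to the default tolerance
```

Comparing iteration counts with and without `M=M` reproduces, in miniature, the kind of preconditioning experiment the paper reports.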
96 stochastic Lotka-Volterra, master equation, Markov process, stochastic hybrid system Tugrul Dayar, Linar Mikeev, Verena Wolf, pages 289 – 296. Show abstract Abstract. The stochastic Lotka-Volterra model is an infinite Markov population model that has applications in various life science domains. Its analysis is challenging since, besides an infinite state space with unbounded rates, it shows strongly fluctuating dynamics and becomes unstable in the long-run. Traditional numerical methods are therefore not appropriate to solve the system. Here, we suggest adaptations and combinations of traditional methods that yield fast and accurate solutions for certain parameter ranges of the stochastic Lotka-Volterra model. We substantiate our theoretical investigations with a comparison based on experimental results. - Finite Element Approximate Inverse Preconditioning using POSIX threads on multicore systems
104 Sparse linear systems, finite element, parallel preconditioned conjugate gradient method, parallel computations, POSIX threads, multicore systems George A. Gravvanis, P. I. Matskanidis, K. M. Giannoutakis, E. A. Lipitakis, pages 297 – 302. Show abstract Abstract. Explicit finite element approximate inverse preconditioning methods have been extensively used for solving sparse linear systems efficiently on multiprocessor and multicomputer systems. New parallel computational techniques are proposed for the parallelization of explicit preconditioned bi-conjugate gradient type methods, based on Portable Operating System Interface for UniX (POSIX) Threads, for multicore systems. Parallelization is achieved by assigning every loop of the parallel explicit preconditioned bi-conjugate gradient stabilized (PEPBiCG-STAB) method to the desired number of threads, thus achieving for-loop parallelization. Theoretical estimates on speedups and efficiency are also presented. Finally, numerical results for the performance of the PEPBiCG-STAB method for solving characteristic two dimensional boundary value problems on multicore computer systems are presented, which compare favorably to corresponding results from multiprocessor systems. The implementation issues of the proposed method are also discussed using POSIX Threads on a multicore system. - On the implementation of public keys algorithms based on algebraic graphs over finite commutative rings
70 extremal digraph theory, digraph based public key, key-exchange protocols, family of directed graphs of high girth, group theoretical discrete logarithm problem Michał Klisowski, Vasyl Ustimenko, pages 303 – 308. Show abstract Abstract. We consider balanced directed graphs, i.e. graphs of binary relations for which the number of inputs and the number of outputs are the same for each vertex. A commutative diagram is formed by two directed paths whose shared starting and ending points form the full list of their common vertices. We refer to the length of the maximal path (number of arrows) as the rank of the diagram, and count a directed cycle of length m as a commutative diagram of rank m. We define the girth indicator gi, gi >= 2, of a directed graph as the minimal rank of its commutative diagrams. We briefly survey the applications in cryptography of finite automata related to balanced graphs of high girth. Finally, for each finite commutative ring K with more than two regular elements, we consider the explicit construction of a family of algebraic graphs over K of high girth and discuss the implementation of a public key algorithm based on the finite automata corresponding to members of the family. - Analysis of Pseudo-Random Properties of Nonlinear Congruential Generators with Power of Two Modulus by Numerical Computing of the b-adic Diaphony
62 Pseudo-Random Number, Nonlinear Congruential Generators, Numerical Computing of the b-adic Diaphony Ivan Lirkov, Stanislava Stoilova, pages 309 – 315. Show abstract Abstract. We consider two nonlinear methods for generating uniform pseudo-random numbers in [0, 1), namely quadratic congruential generator and inversive congruential generator. The combinations of the Van der Corput sequence with the considered nonlinear generators are proposed. We simplify the mixed sequences by a restriction of the b-adic representation of the points. We study numerically the b-adic diaphony of the nets obtained through quadratic congruential generator, inversive congruential generator, their combinations with the Van der Corput sequence, and the simplification of the mixed sequences. The value of the b-adic diaphony decreases with the increase of the number of the points of the simplified sequences which proves that the points of the simplified sequences are pseudo-random numbers. The analysis of the results shows that the combinations of the Van der Corput sequence with these nonlinear generators have good pseudo-random properties as well as the generators. - Assembling Recursively Stored Sparse Matrices
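The two nonlinear generators named in the b-adic diaphony paper above have simple closed-form recurrences; the sketch below implements both for a power-of-two modulus. The parameter choices are illustrative only, not the ones evaluated in the paper.

```python
def qcg(a, b, c, m, x0, n):
    """Quadratic congruential generator: x_{k+1} = (a*x_k**2 + b*x_k + c) mod m,
    mapped to [0, 1) by dividing by the modulus m (here a power of two)."""
    out, x = [], x0
    for _ in range(n):
        x = (a * x * x + b * x + c) % m
        out.append(x / m)
    return out

def icg(a, b, m, x0, n):
    """Inversive congruential generator: x_{k+1} = (a * x_k^{-1} + b) mod m.
    With m = 2**k, a odd and b even, every iterate stays odd, hence invertible."""
    out, x = [], x0
    for _ in range(n):
        x = (a * pow(x, -1, m) + b) % m   # modular inverse (Python >= 3.8)
        out.append(x / m)
    return out

u = qcg(5, 3, 1, 2**16, 7, 1000)   # illustrative parameters
v = icg(5, 2, 2**16, 1, 1000)
```

Feeding such sequences into a diaphony computation (as the paper does) then quantifies how uniformly the points fill [0, 1).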
205 sparse matrix, recursive CSR, spmv, multicore, recursive, blas Michele Martone, Salvatore Filippone, Marcin Paprzycki, Salvatore Tucci, pages 317 – 325. Show abstract Abstract. Recently, we have introduced an approach to multicore computations on sparse matrices using recursive partitioning, called Recursive Sparse Blocks (RSB). In this document, we discuss issues involved in assembling matrices in the RSB format. Since the main application area is iterative methods, we consider the performance of matrix assembly in this context, and so we report not only the scalability of the method, but also the ratio of assembly time to matrix-vector multiply time. - Use of Hybrid Recursive CSR/COO Data Structures in Sparse Matrices-Vector Multiplication
203 sparse matrix, recursive CSR, spmv, multicore, recursive, blas Michele Martone, Salvatore Filippone, Paweł Gepner, Marcin Paprzycki, Salvatore Tucci, pages 327 – 335. Show abstract Abstract. Recently, we have introduced an approach to basic sparse matrix computations on multicore cache-based machines using recursive partitioning. Here, the memory representation of a sparse matrix consists of a collection of submatrices forming the leaves of a quad-tree structure. In this paper, we evaluate the performance impact, on the Sparse Matrix-Vector Multiplication (SpMV), of a modification to our Recursive CSR implementation that allows the use of other data structures in leaf matrices (CSR/COO, with either 16- or 32-bit indices). - Higher order FEM numerical integration on GPUs with OpenCL
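The CSR leaf format that the two Recursive Sparse Blocks papers above combine with COO admits a very short reference SpMV; the sketch below (an illustrative serial version, not the authors' multicore code) shows the index arithmetic involved.

```python
import numpy as np

def csr_spmv(indptr, indices, data, x):
    """y = A @ x for a sparse matrix in CSR: indptr delimits each row's slice
    of the column-index and value arrays (the row-oriented leaf format that
    RSB mixes with COO leaves)."""
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):                       # one pass per row
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

# 3x3 example:  [[4, 0, 1],
#                [0, 3, 0],
#                [2, 0, 5]]
indptr  = [0, 2, 3, 5]
indices = [0, 2, 1, 0, 2]
data    = [4.0, 1.0, 3.0, 2.0, 5.0]
y = csr_spmv(indptr, indices, data, np.array([1.0, 1.0, 1.0]))
# y == [5.0, 3.0, 7.0]
```

A COO leaf would instead store one (row, col, value) triple per nonzero, which is cheaper for very sparse leaves at the cost of an explicit row index per entry.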
152 OpenCL, FEM, GPU, Numerical Integration Przemysław Płaszewski, Krzysztof Banaś, Paweł Macioł, pages 337 – 342. Show abstract Abstract. The paper presents results obtained when porting FEM 2D linear elastostatic local stiffness matrix calculations to the Tesla architecture with the OpenCL framework. A comparison with native NVIDIA CUDA implementations is provided. - Parallelization of SVD of a Matrix-Systolic Approach
182 Singular Value Decomposition, Jacobi rotations, Hestenes-Jacobi method, systolic arrays, complex matrix Halil Snopce, Ilir Spahiu, pages 343 – 348. Show abstract Abstract. This paper investigates the parallelization of the Hestenes-Jacobi method for computing the SVD of an MxN matrix using systolic arrays. In the case of a real matrix, an array of processors is proposed such that each row contains N columns. To extend this idea, we present three transformations which are used to transform a complex matrix into a real one. After the additional computations, we show how the same array may be used for the SVD of a complex matrix. - Solving a Kind of BVP for ODEs on heterogeneous CPU + CUDA-enabled GPU Systems
204 BVP for ODEs, GPU, divide and conquer Przemyslaw Stpiczynski, Joanna Potiopa, pages 349 – 353. Show abstract Abstract. The aim of this paper is to show that a kind of boundary value problem for second-order ordinary differential equations, which reduces to solving a tridiagonal system of linear equations with an almost Toeplitz structure, can be solved efficiently on modern heterogeneous computer architectures combining CPU and GPU processors. The algorithm is based on the divide and conquer method for solving linear recurrence systems with constant coefficients.
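The paper above reduces the BVP to a tridiagonal, almost Toeplitz linear system. As a sequential baseline for such systems (not the authors' GPU divide-and-conquer scheme), the classical Thomas algorithm solves them in O(n):

```python
def solve_tridiagonal(a, b, c, d):
    """Thomas algorithm: a is the sub-diagonal, b the main diagonal, c the
    super-diagonal, d the right-hand side; a[0] and c[-1] are unused.
    Sequential baseline only, not the paper's divide-and-conquer method."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n                               # back substitution
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Toeplitz tridiagonal [-1, 2, -1]; rhs chosen so the solution is all ones.
n = 8
x = solve_tridiagonal([-1.0] * n, [2.0] * n, [-1.0] * n,
                      [1.0] + [0.0] * (n - 2) + [1.0])
```

The elimination sweep is inherently sequential, which is exactly why the paper replaces it with a divide-and-conquer recurrence solver that maps onto CPU+GPU hardware.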
Computational Linguistics—Applications
- Using Self Organizing Map to Cluster Arabic Crime Documents
154 Arabic language, Crime, Information extraction, Rule based, Clustering Meshrif Alruily, Aladdin Ayesh, Abdulsamad Al-Marghilani, pages 357 – 363. Show abstract Abstract. This paper presents a system that combines two text mining techniques: information extraction and clustering. A rule based approach is used to perform the information extraction task, exploiting the dependency relation between certain intransitive verbs and prepositions. This relationship helps in extracting types of crime from documents within the crime domain. For the clustering task, a Self Organizing Map (SOM) is used to cluster Arabic crime documents based on crime types. This work is validated through two experiments, the results of which show that the techniques developed here are promising. - Quality Benchmarking Relational Databases and Lucene in the TREC4 Adhoc Task Environment
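The SOM clustering step described in the Arabic crime documents abstract above can be sketched in a few lines. This is an illustrative toy: random vectors stand in for document feature vectors, and the grid size, learning-rate and neighbourhood schedules are assumptions, not the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(X, grid=(4, 4), iters=2000, lr0=0.5, sigma0=1.5):
    """Minimal self-organizing map on a 2-D grid of nodes."""
    h, w = grid
    W = rng.random((h * w, X.shape[1]))              # one weight vector per node
    gy, gx = np.divmod(np.arange(h * w), w)          # node coordinates on the grid
    for t in range(iters):
        x = X[rng.integers(len(X))]
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))  # best-matching unit
        d2 = (gy - gy[bmu]) ** 2 + (gx - gx[bmu]) ** 2
        frac = 1.0 - t / iters                       # decay rate and radius
        W += (lr0 * frac) * np.exp(-d2 / (2 * (sigma0 * frac + 0.1) ** 2))[:, None] * (x - W)
    return W

X = rng.random((60, 8))   # 60 "documents" with 8 made-up features each
W = train_som(X)
clusters = np.argmin(((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2), axis=1)
```

In the paper's setting, `X` would hold feature vectors derived from the extracted crime-type information, and each SOM node would gather documents about related crime types.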
139 English, relational databases, stemming, text retrieval, TREC Ahmet Arslan, Ozgur Yilmazel, pages 365 – 372. Show abstract Abstract. The present work compares the text retrieval quality of open source relational databases and Lucene, a full text search engine library, over English documents. The TREC-4 adhoc task was completed to compare both search effectiveness and search efficiency. Two relational database management systems and four well-known English stemming algorithms were tried. It was found that language-specific preprocessing improves retrieval quality for all systems. The results of the English text retrieval experiments using Lucene are on par with the top six results presented at TREC-4 automatic adhoc. Although open source relational databases have integrated full text retrieval technology, their relevancy ranking mechanisms are not as good as Lucene's. - Parallel, Massive Processing in SuperMatrix — a General Tool for Distributional Semantic Analysis of Corpus
149 lexical knowledge acquisition, supermatrix, measure of semantic relatedness, semantic similarity, parallel processing, distributed computing Bartosz Broda, Damian Jaworski, Maciej Piasecki, pages 373 – 379. Show abstract Abstract. The paper presents an extended version of the SuperMatrix system — a general tool supporting automatic acquisition of lexical semantic relations from corpora. The extensions focus mainly on parallel processing of massive amounts of data. The construction of the system is discussed. Three distributed parts of the system are presented: distributed construction of co-incidence matrices from corpora, computation of the similarity matrix, and parallel solving of synonymy tests. An evaluation of the proposed approach to parallel processing is presented; parallelization of the similarity matrix computation demonstrates almost linear speedup. The smallest improvements were achieved for the construction of matrices, as this process is mostly bound by reading huge amounts of data. A few areas in which the functionality of SuperMatrix was improved are also described. - Development of a Voice Control Interface for Navigating Robots and Evaluation in Outdoor Environments
83 Noise vulnerability of SVSR, HMM based speech recognition, Acoustical modeling with HMMs Ravi Coote, pages 381 – 388. Show abstract Abstract. In this paper the development of a prototypic mobile voice control for navigating autonomous robots within a multi-robot system is described. As the basis for the voice control, a hidden Markov model based speech recognizer with a very small vocabulary of 30 words is utilized. It is investigated how many training samples per Markov model are required for normal operation of speaker-dependent speech recognition. To this end, hidden Markov models were developed incrementally alongside our own training data corpus, which finally contained 2290 utterances from 12 speakers. During this successive development of acoustical models and training corpus, the work revealed how many speakers are necessary to achieve an acceptable degree of speaker independence. We focused on an evaluation of the speech recognizer in adverse outdoor environments, ranging from almost calm conditions of about 39 dB up to very adverse noise conditions of 120 dB. It is investigated whether a small vocabulary attenuates the noise vulnerability and to what extent an increase in speaking volume can compensate for noise of different intensities. The voice control was tested in outdoor environments and aspects of its usage are described. - The Role of the Newly Introduced Word Types in the Translations of Novels
109 newly introduced word types, translation, vocabulary rich text segments Maria Csernoch, pages 389 – 396. Show abstract Abstract. The project detailed in this article finds the vocabulary-rich segments of novels in different languages. The method takes into account the frequency of the words of the text and, based on this information, creates artificial texts with the same parameters. Since the original and the artificial texts share these parameters, they are comparable, and we can find those segments of the original text which are richer in vocabulary than would be expected from a random selection of the words. The advantage of finding these vocabulary-rich segments, beyond the insight they give into the development of a novel's vocabulary, is that familiarity with these sections of the text is of great value in any translation or adaptation process. - SyMGiza++: A Tool for Parallel Computation of Symmetrized Word Alignment Models
172 Word Alignment, Symmetrization, Parallel Computing Marcin Junczys-Dowmunt, Arkadiusz Szał, pages 397 – 401. Show abstract Abstract. SyMGiza++ — a tool that computes symmetric word alignment models with the capability to take advantage of multi-processor systems — is presented. We achieve a relative alignment quality improvement of more than 17% compared to Giza++ and MGiza++, and more than 20% compared to the BerkeleyAligner, on the standard Canadian Hansards task, while maintaining the speed improvements provided by MGiza++'s capability for parallel computation. - Semi-Automatic Extension of Morphological Lexica
44 tools, lexical acquisition, morphological lexica Tobias Kaufmann, Beat Pfister, pages 403 – 409. Show abstract Abstract. We present a tool that facilitates the efficient extension of morphological lexica. The tool exploits information from a morphological lexicon, a morphological grammar and a text corpus to guide the acquisition process. In particular, it employs statistical models to analyze out-of-vocabulary words and predict lexical information. These models do not require any additional labeled data for training. Furthermore, they are based on generic features that are not specific to any particular language. This paper describes the general design of the tool and evaluates the accuracy of its machine learning components. - Automatic Extraction of Arabic Multi-Word Terms
134 Arabic language processing, automatic term recognition, multi-word terms Khalid Al Khatib, Amer Badarneh, pages 411 – 418. Show abstract Abstract. Whereas a wide range of methods has been applied to English multi-word term (MWT) extraction, relatively few studies have addressed Arabic MWT extraction. In this paper, we present an efficient approach for the automatic extraction of Arabic MWTs. The approach relies on two main filtering steps: a linguistic filter, where a simple part-of-speech (POS) tagger is used to extract candidate MWTs matching given syntactic patterns, and a statistical filter, where two statistical methods (log-likelihood ratio and C-value) are used to rank candidate MWTs. Many types of variation (e.g. inflectional variants) are taken into consideration to improve the quality of the extracted MWTs. In our experiments, based on an environment-domain corpus, we obtained promising results in both coverage and precision of multi-word term extraction. - “Beautiful picture of an ugly place”. Exploring photo collections using opinion and sentiment analysis of user comments
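The C-value measure used by the statistical filter in the Arabic MWT paper above is the standard termhood score of Frantzi and Ananiadou; a toy sketch follows, with made-up term strings and counts.

```python
import math

def c_value(term, freq, nested_in):
    """C-value termhood score. freq maps a candidate term to its corpus
    frequency; nested_in maps a term to the longer candidate terms that
    contain it. Nested terms are penalized by the mean frequency of their
    containers. All data here is invented for illustration."""
    length = len(term.split())                 # term length in words
    containers = nested_in.get(term, [])
    if not containers:
        return math.log2(length) * freq[term]
    avg = sum(freq[b] for b in containers) / len(containers)
    return math.log2(length) * (freq[term] - avg)

freq = {"soil erosion": 10, "severe soil erosion": 4}
nested_in = {"soil erosion": ["severe soil erosion"]}
score = c_value("soil erosion", freq, nested_in)   # log2(2) * (10 - 4) = 6.0
```

Candidates passing the linguistic (POS-pattern) filter would be ranked by this score, optionally combined with the log-likelihood ratio as in the paper.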
65 Slava Kisilevich, Christian Rohrdantz, Daniel Keim, pages 419 – 428. Show abstract Abstract. User generated content in the form of customer reviews, feedback and comments plays an important role in all kinds of Internet services and activities such as news, shopping, forums and blogs. Therefore, the analysis of user opinions is potentially beneficial for the understanding of user attitudes or the improvement of various Internet services. In this paper, we propose a practical unsupervised approach to improve users’ experience when exploring photo collections by using opinions and sentiments expressed in user comments on the uploaded photos. While most existing techniques concentrate on binary (negative or positive) opinion orientation, we use a real-valued scale for modeling opinion and sentiment strengths. We extract two types of sentiments: opinions that relate to the photo quality and general sentiments targeted towards objects depicted in the photo. Our approach combines linguistic features for part of speech tagging, traditional statistical methods for modeling word importance in the photo comment corpora (on a real-valued scale), and a predefined sentiment lexicon for detecting negative and positive opinion orientation. In addition, a semiautomatic photo feature detection method is applied and a set of syntactic patterns is introduced to resolve opinion references. We implemented a prototype system that incorporates the proposed approach and evaluated it on several regions of the world using real data extracted from Flickr. - LEXiTRON-Pro Editor: An Integrated Tool for developing Thai Pronunciation Dictionary
113 LEXiTRON-Pro Editor, pronunciation dictionary, word segmentation, grapheme-to-phoneme, Thai, database, statistics Supon Klaithin, Patcharika Chootrakool, Krit Kosawat, pages 429 – 433. Show abstract Abstract. A pronunciation dictionary is a crucial part of both Text-To-Speech and Automatic Speech Recognition systems. In this paper, we propose a tool to easily create and edit a Thai pronunciation dictionary, called LEXiTRON-Pro Editor. This tool integrates Thai word segmentation, Thai Grapheme-to-Phoneme (G2P) conversion, and a database system with statistics. LEXiTRON-Pro Editor can automatically propose a word's pronunciation to users from 3 options: pronunciation from the LEXiTRON-Pro database, pronunciation from Thai G2P, and pronunciation combined from the syllables with the highest frequency. In every case, users can switch to another option or directly input the pronunciation they want with our easy interface editor. Our LEXiTRON-Pro database initially contains 105,129 unique words with pronunciation and 24,736 unique syllables with pronunciation. Compared to the previous version of the pronunciation dictionary, our new program reduces the dictionary development process from 5 steps to only 1, and reduces the number of tools used by linguists from 3 programs to only 1. - Automatic Detection of Prominent Words in Russian Speech
118 phonology, prosody, prominence, pitch accent, speech signal processing Daniil Kocharov, pages 435 – 438. Show abstract Abstract. An experimental study aimed at automatically detecting prominent words in Russian speech is presented in this paper. The proposed automatic prominent word detection system could further be used as a module of an automatic speech recognition system, or as a tool to highlight prominent words within a speech corpus for unit selection text-to-speech synthesis. The detection procedure is based on prosodic features such as speech signal intensity, fundamental frequency and speech segment duration. A large corpus of Russian speech of over 200 000 running words was used to evaluate the proposed prosodic features and the statistical method of speech data processing. The proposed system is speaker-independent and achieves an efficiency of 84.2%. - Computing trees of named word usages from a crowdsourced lexical network
81 Natural Language Processing, lexical network, classification tree of labelled word usages for a term Mathieu Lafourcade, Alain Joubert, pages 439 – 446. Show abstract Abstract. Thanks to the participation of a large number of persons via web-based games, a large-sized evolutionary lexical network is available for French. With this resource, we approached the question of the determination of the word usages of a term, and then we introduced the notion of similarity between these various word usages. So, we were able to build for a term its word usage tree: the root groups together all possible usages of this term and a search in the tree corresponds to a refinement of these word usages. The labelling of the various nodes of the word usage tree of a term is made during a width-first search: the root is labelled by the term itself and each node of the tree is labelled by a term stemming from the clique or quasi-clique this node represents. We show on a precise example that it is possible that some nodes of the tree, often leaves, cannot be labelled without ambiguity. This paper ends with an evaluation about word usages detected in our lexical network. - RefGen: a Tool for Reference Chains Identification
140 reference chains, anaphora resolution, genre-based properties Laurence Longo, Amalia Todirascu, pages 447 – 454. Show abstract Abstract. In this paper, we present RefGen, a reference chain identification module for French. The RefGen algorithm uses genre-specific properties of reference chains and Ariel's (1990) accessibility theory to find the mentions. The module applies strong and weak filters (lexical, morphosyntactic and semantic) to automatically identify coreference relations between referential expressions. We evaluate the results obtained by RefGen on a corpus of public reports. - Is Shallow Semantic Analysis Really That Shallow? A Study on Improving Text Classification Performance
31 text classification, graph, shallow semantic analysis Przemysław Maciołek, Grzegorz Dobrowolski, pages 455 – 460. Show abstract Abstract. The paper presents a graph-based, shallow semantic analysis-driven approach to modeling document contents. This allows additional information about the meaning of the text to be extracted and results in improved document classification. Its performance is compared against the “legacy” bag-of-words and Schenker et al. approaches with k-NN classification, based on Polish and English news articles. - PerGram: A TRALE Implementation of an HPSG Fragment of Persian
111 Persian grammar, HPSG, TRALE system, parsing Stefan Müller, Masood Ghayoomi, pages 461 – 467. Show abstract Abstract. In this paper, we discuss an HPSG grammar of Persian (PerGram) that is implemented in the TRALE system. We describe some of the phenomena which are currently covered. While working on the grammar, we developed a test suite with positive and negative examples from the linguistic literature. To be able to test the coverage of the grammar with respect to naturally occurring sentences, we use a subcorpus of a big corpus of Persian. - WordnetLoom: a Graph-based Visual Wordnet Development Framework
173 wordnet, wordnet editor, semi-automated wordnet expansion Maciej Piasecki, Michał Marcińczuk, Adam Musiał, Radosław Ramocki, Marek Maziarz, pages 469 – 476. Show abstract Abstract. The paper presents WordnetLoom -- a new version of an application supporting the development of the Polish wordnet, plWordNet. The primary user interface of WordnetLoom is a graph-based, graphical, active presentation of the wordnet structure. Linguists can work directly on the structure of synsets linked by relation links. The new version is compared with the previous one to show the lines of development and to illustrate the differences introduced. A new version of WordnetWeaver -- a tool supporting semi-automated expansion of a wordnet -- is also presented. The new version is based on the same user interface as WordnetLoom, gives the linguist access to all types of relations, and is tightly integrated with the rest of the wordnet editor. The role of the system in the wordnet development process, as well as experience from its application, is discussed. A set of WWW-based tools supporting coordination of team work and verification is also presented. - Building and Using Existing Hunspell Dictionaries and TeX Hyphenators as Finite-State Automata
110 finite-state, spelling, hyphenation, hunspell, tex Tommi Pirinen, Krister Lindén, pages 477 – 484. Abstract. There are numerous formats for writing spell checkers for open source systems, and language descriptions written in those formats. Similarly, for word hyphenation by computer there are TeX rules for many languages. In this paper we demonstrate a method for converting all these old spell-checking lexicons and hyphenation rulesets into finite-state automata, and present a new finite-state based system for writers' tools used in current open source software such as Firefox, OpenOffice.org and enchant via the spell-checking library voikko. - The Polish Cyc lexicon as a bridge between Polish language and the Semantic Web
132 Cyc, ontology, semantic web, lexicon, machine translation Aleksander Pohl, pages 485 – 492. Abstract. In this paper we discuss the problem of building the Polish lexicon for the Cyc ontology. As the ontology is very large and complex, we describe a semi-automatic translation of part of it, which might be useful for tasks lying on the border between the fields of the Semantic Web and Natural Language Processing. We concentrate on the precise identification of lexemes, which is crucial for tasks such as natural language generation in massively inflected languages like Polish, and we also concentrate on multi-word entries, since in Cyc 9 out of every 10 concepts are mapped to expressions containing more than one word. - Tools for syntactic concordancing
38 concordancing, collocations, multi-word expressions, multilingualism, syntactic analysis Violeta Seretan, Eric Wehrli, pages 493 – 500. Abstract. Concordancers are tools that display the immediate context for the occurrences of a given word in a corpus. Also called KWIC – Key Word in Context – tools, they are essential in the work of lexicographers, corpus linguists, and translators alike. We present an enhanced type of concordancer, which relies on a syntactic parser and on statistical association measures in order to detect those words in the context that are syntactically related to the sought word and are the most relevant for it, because together they may participate in multi-word expressions (MWEs). Our syntax-based concordancer highlights the MWEs in a corpus, groups them into syntactically homogeneous classes (e.g., verb-object, adjective-noun), ranks MWEs according to the strength of association with the given word, and for each MWE occurrence displays the whole source sentence as context. In addition, parallel sentence alignment and MWE translation techniques are used to display the translation of the source sentence in another language, and to automatically find a translation for the identified MWEs. The tool also offers functionalities for building an MWE database, and is available both off-line and on-line for a number of languages (among which English, French, Spanish, Italian, German, Greek and Romanian). - Effective natural language parsing with probabilistic grammars
166 natural language parsing, A* algorithm, machine translation Paweł Skórzewski, pages 501 – 504. Abstract. This paper presents an example of application of a PCFG parsing algorithm based on the A* search procedure in a machine translation system. We modified the existing CYK-based parser used in the machine translation system Translatica and applied the A* parsing algorithm in order to improve the performance of the parser. - Finding Patterns in Strings using Suffixarrays
142 Herman Stehouwer, Menno Van Zaanen, pages 505 – 511. Abstract. Finding regularities in large data sets requires systems that are efficient in both time and space. Here, we describe a newly developed system that exploits the internal structure of the enhanced suffixarray to find significant patterns in a large collection of sequences. The system searches exhaustively for all significantly compressing patterns, where patterns may consist of symbols and skips or wildcards. We demonstrate a possible application of the system by detecting interesting patterns in a Dutch and an English corpus. - Entity Summarisation with Limited Edge Budget on Knowledge Graphs
185 semantic search, knowledge graph, summarisation, algorithm, evaluation experiment Marcin Sydow, Mariusz Pikuła, Ralf Schenkel, Adam Siemion, pages 513 – 516. Abstract. We formulate a novel problem of summarising entities with a limited presentation budget on entity-relationship knowledge graphs and propose an efficient algorithm for solving this problem. The algorithm has been implemented together with a visualising tool. An experimental user evaluation of the algorithm was conducted on large real semantic knowledge graphs extracted from the web. The reported results of the experimental user evaluation are promising and encourage us to continue work on improving the algorithm. - Multiple Noun Expression Analysis: An Implementation of Ontological Semantic Technology
160 multiple noun expressions, meaning interpretation, ontological semantic technology Julia Taylor, Victor Raskin, Maxim Petrenko, Christian F. Hempelmann, pages 517 – 524. Abstract. The paper analyzes multiple noun expressions, as part of the implementation of the Ontological Semantic Technology, which uses the lexicon, ontology and semantic text analyzer to access the meaning of text. Because the analysis and results depend on the lexical senses of words, general principles of lexical acquisition are discussed. The success in interpretation and classification of such expressions is demonstrated on 100 randomly selected sequences. - A web-based translation service at the UOC based on Apertium
71 Machine translation, free rule-based system, Apertium Luis Villarejo, Mireia Farrus, Gema Ramírez, Sergio Ortíz, pages 525 – 530. Abstract. In this paper, we describe the adaptation process of Apertium, a free/open-source rule-based machine translation platform operating in a number of different real-life contexts, to the linguistic needs of the Universitat Oberta de Catalunya (Open University of Catalonia, UOC), a private e-learning university based in Barcelona where linguistic and cultural diversity is a crucial factor. This paper describes the main features of the Apertium platform and the practical developments required to fully adapt it to UOC’s linguistic needs. The setting up of a translation service at UOC based on Apertium shows the growing interest of this kind of institution in open-source solutions, in which their investment is oriented toward adding value to the available features in order to offer the best possible adapted service to their user community. - Tools and Methodologies for Annotating Syntax and Named Entities in the National Corpus of Polish
89 National Corpus of Polish, corpus annotation, shallow parsing, named entity recognition, Spejd, Sprout, Tred Jakub Waszczuk, Katarzyna Głowińska, Agata Savary, Adam Przepiórkowski, pages 531 – 539. Abstract. The on-going project aiming at the creation of the National Corpus of Polish assumes several levels of linguistic annotation. We present the technical environment and methodological background developed for the three upper annotation levels: the levels of syntactic words, syntactic groups and named entities. We show how the knowledge-based platforms Spejd and Sprout are used for the automatic pre-annotation of the corpus, and we discuss some particular problems faced during the elaboration of the syntactic grammar, which contains over 800 rules and is one of the largest chunking grammars for Polish. We also show how the tree editor TrEd has been customized for manual post-editing of annotations, and for further revision of discrepancies. Our XML format converters and customized archiving repository ensure the automatic data flow and efficient corpus file management. We believe that this environment, or substantial parts of it, can be reused in or adapted to other corpus annotation tasks. - TREF – TRanslation Enhancement Framework for Japanese-English
144 Machine Translation, Syntactical Analysis, Sequence Alignment Bartholomäus Wloka, Werner Winiwarter, pages 541 – 546. Abstract. We present a method for improving existing statistical machine translation methods using an information-base compiled from a bilingual corpus as well as sequence alignment and pattern matching techniques from the area of machine learning and bioinformatics. An alignment algorithm identifies similar sentences, which are then used to construct a better word order for the translation. Our preliminary test results indicate a significant improvement of the translation quality. - Matura Evaluation Experiment Based on Human Evaluation of Machine Translation
168 machine translation, machine translation evaluation, comprehension test Aleksandra Wojak, Filip Graliński, pages 547 – 551. Abstract. A Web-based system for human evaluation of machine translation is presented in this paper. The system is based on comprehension tests similar to the ones used in Polish matura (secondary school-leaving) examinations. The results of preliminary experiments for Polish-English and English-Polish machine translation evaluation are presented and discussed. - German subordinate clause word order in dialogue-based CALL.
150 dialogue systems, CALL Magdalena Wolska, Sabrina Wilske, pages 553 – 559. Abstract. We present a dialogue system for exercising the German subordinate clause word order. The pedagogical methodology we adopt is based on focused tasks: the targeted linguistic structure is embedded in a naturalistic scenario, “Making appointments”, in which the structure can be plausibly elicited. We report on the system we built and an experimental methodology which we use in order to investigate whether the computer-based conversational focused task we designed promotes acquisition of the form. Our goal is two-fold: First, learners should improve their overall communicative skills in the task scenario and, second, they should improve their mastery of the structure. In this paper, we present a methodology for evaluating learners' progress on the latter. - Polish Phones Statistics
75 Natural language processing, triphone statistics, speech processing, Polish Bartosz Ziolko, Jakub Galka, pages 561 – 565. Abstract. Phonemic statistics were collected from several large Polish corpora. The paper presents the methodology of the acquisition process, a summary of the data and some phenomena observed in the statistics. Triphone statistics concern context-dependent speech units, which play an important role in speech technologies. The phonemic alphabet for Polish, SAMPA, and methods of providing phonemic transcriptions are described with detailed comments. - APyCA: Towards the Automatic Subtitling of Television Content in Spanish
87 Automatic subtitling, speech recognition, speech processing Aitor Álvarez, Arantza del Pozo, Andoni Arruti, pages 567 – 574. Abstract. Automatic subtitling of television content has become an approachable challenge due to the advancement of the technology involved. In addition, it has also become a priority need for many Spanish TV broadcasters, who will have to broadcast up to 90% of subtitled content by 2013 to comply with recently approved national audiovisual policies. APyCA, the prototype system described in this paper, has been developed in an attempt to automate the process of subtitling television content in Spanish through the application of state-of-the-art speech and language technologies. Voice activity detection, automatic speech recognition and alignment, discourse segment detection and speaker diarization have proved to be useful to generate time-coded, colour-assigned draft transcriptions for post-editing. The productivity benefit of the approach followed heavily depends on the performance of the speech recognition module, which achieves reasonable results on clean read speech but degrades as speech becomes noisier and/or more spontaneous.
10th International Multidisciplinary Conference on e-Commerce and e-Government
- Trusted Data in IBM’s MDM: Accuracy Dimension
8 Data Quality, Data Trust, Master Data Management, Accuracy Przemyslaw Pawluk, pages 577 – 584. Abstract. A good data model designed for e-Commerce or e-Government has little value if it lacks accurate, up-to-date data. In this paper, data quality measures and their processing and maintenance in IBM InfoSphere MDM Server and IBM InfoSphere Information Server are described. We also introduce a notion of trust, which extends the concept of data quality and allows businesses to consider additional factors that can influence the decision-making process. In the solutions presented here, we would like to utilize existing tools provided by IBM in an innovative way and provide new data structures and algorithms for calculating scores for persistent and transient quality and trust factors. - Multicriteria Evaluation of DVB-RCS Satellite Internet Performance Used for e-Government and e-Learning Purposes
188 Satellite internet, rankings, multicriteria decision making, evaluation algorithms, learning scenarios, e-government Andrzej M. J. Skulimowski, pages 585 – 592. Abstract. In this paper we report the findings of the EU 6th Framework Programme Project “Rural Wings” concerning the selection, performance and evaluation of satellite internet pilot sites, based on case studies of ten such sites in Poland. First, we present the methodology of ex-ante assessment of specific needs concerning the intensity and scope of use of the DVB-RCS bidirectional satellite internet technology for learning at all levels in mountain and rural areas, which led to the selection of the rural sites where the bidirectional satellite terminals were installed. Then we review the operation of the pilot sites and their final performance evaluation. We compare the rankings resulting from the initial needs assessment with the one derived from the final evaluation and analyse the divergences. Finally, we propose a learning scheme resulting from ex-post evaluation of the initial ranking procedure, which allows us to assess the adequacy of the multicriteria decision-making approach applied to derive the initial ranking of the pilot sites. - INFOMAT-E – public information system for people with sight and hearing dysfunctions
27 blind, deaf, kiosk, Infomat-E, e-government, information, ergonomy Michał Socha, Wojciech Górka, Adam Piasecki, Beata Sitek, pages 593 – 597. Abstract. The article features the results of the two initial stages of the Infomat-E project. The project is to provide access to information for people with sight and hearing dysfunctions through a hardware-software solution. So far, a number of analyses have been conducted within the project with respect to the way information content is presented as well as the interaction with the devices that present this information. These included the analysis of suitable colours, font sizes, ergonomic layout of screen menu bars, and ergonomic keyboards – to make them most convenient for people with sight and hearing dysfunctions. Analyses were also conducted of how written texts are understood, especially by the deaf. The project assumes the integration of elements which resulted from separate research projects. Within the project, the following will be used: speech synthesis, speech analysis, and presentation of ideas with the use of sign language. The project will result in the Infomat-E system, which will present information in kiosks specially designed to suit the needs of people with sight and hearing dysfunctions. The article features the results of the conducted analytical works which lie at the basis of the technical concept of the system. This concept is presented in the article too. - Bidirectional voting and continuous voting concepts as possible impact of Internet use on democratic voting process
207 internet, bidirectional voting, continuous voting Jacek Wachowicz, pages 599 – 603. Abstract. Democracies need elections for choosing their authorities and governments. This process has many factors that have shaped today’s procedures. However, the Internet is a medium that may change what is possible in elections. The main issue is how such changes may influence the whole democratic process. This paper presents two possible ideas – bidirectional voting and continuous voting – and considers possible reasons for introducing changes as well as their consequences. An introductory study of this matter gives additional hints. - The Double Jeopardy Phenomenon and the Electronic Distribution of Information
211 Double Jeopardy phenomenon Urszula Świerczyńska-Kaczor, Artur Borcuch, Paweł Kossecki, pages 605 – 608. Abstract. The aim of this paper is to attract attention to the double jeopardy phenomenon. Double jeopardy very often seems to go unnoticed by companies while they look for an explanation as to why their efforts to enhance the intensity of brand usage are unsuccessful. The clue is that the companies do not pay enough attention to raising their market share. Our discussion in this paper refers to informational websites. Our aim is not to form a final conclusion as to whether the double jeopardy phenomenon exists on this particular market. Instead, the conclusion is reached that although the double jeopardy pattern can be observed on the virtual market, the nature of virtual markets can oppose this phenomenon.
International Symposium on E-Learning—Applications
- Simple Blog Searching Framework Based on Social Network Analysis
106 e-learning 2.0, blog, searching, social network analysis, Web 2.0 Iwona Dolińska, pages 611 – 617. Abstract. Blogs are very popular Internet communication tools. The process of knowledge sharing is a very important activity in the contemporary information era. Blogs are used for knowledge sharing on any subject all over the world. Knowledge gathered on blogs can be used in personal e-learning, which is a more informal and personal way of learning than the one offered by traditional e-learning courses. However, it is not easy to find valuable knowledge in the huge amount of invalid information. In this study the Simple Blog Searching framework is proposed to improve the blog searching process. Social network analysis methods of centrality measurement help to choose more easily the best results from the long list of hits received from a blog search tool. To incorporate social network analysis methods, blog searching has to be expanded with blog link searching.
6th Workshop on Large Scale Computations on Grids and 1st Workshop on Scalable Computing in Distributed Systems
- Exploratory Programming in the Virtual Laboratory
183 virtual laboratory, distributed computing, workflows, e-science Eryk Ciepiela, Daniel Harężlak, Joanna Kocot, Tomasz Bartyński, Maciej Malawski, Tomasz Gubała, pages 621 – 628. Abstract. GridSpace 2 is a novel virtual laboratory framework enabling researchers to conduct virtual experiments on Grid-based resources and other HPC infrastructures. GridSpace 2 facilitates exploratory development of experiments by means of scripts which can be written in a number of popular languages, including Ruby, Python and Perl. The framework supplies a repository of gems enabling scripts to interface low-level resources such as PBS queues, EGEE computing elements, scientific applications and other types of Grid resources. Moreover, GridSpace 2 provides a Web 2.0-based Experiment Workbench supporting development and execution of virtual experiments by groups of collaborating scientists. We present an overview of the most important features of the Experiment Workbench, which is the main user interface of the Virtual laboratory, and discuss a sample experiment from the computational chemistry domain. - Modelling, Optimization and Execution of Workflow Applications with Data Distribution, Service Selection and Budget Constraints in BeesyCluster
131 workflow application management and scheduling, data distribution, service selection, QoS optimization, compute and data intensive application, applications as services Paweł Czarnul, pages 629 – 636. Abstract. The paper proposes a model which allows integration of services published by independent providers into scientific or business workflows. Optimization algorithms are proposed for both distribution of input data for parallel processing and service selection within the workflow. Furthermore, the author has implemented a workflow editor and execution engine on a platform called BeesyCluster which allows easy and fast publishing and integration of scientific and business services. Several tests have been implemented and run in BeesyCluster using services for a practical digital photography workflow with and without budget constraints. Two alternative goals are considered: minimization of the execution time with a budget constraint or a linear combination of cost and time. - Multi-level Parallelization with Parallel Computational Services in BeesyCluster
130 multi-level parallelization, parallel computations on clusters, computational services, dynamic master-slave framework, service discovery, integration of services, MPI applications as services Paweł Czarnul, pages 637 – 645. Abstract. The paper presents a concept, implementation and real examples of dynamic parallelization of computations using services derived from MPI applications deployed in the BeesyCluster environment. The load balancing algorithm invokes distributed services to solve subproblems of the original problem. Services may be installed on various clusters or servers by their providers and made available through the BeesyCluster middleware. It is possible to search for services and select them dynamically during parallelization to match the desired function the service should perform with descriptions of services. Dynamic discovery of services is useful when providers publish new services. Costs of services may be incorporated into the selection decision. A real example of integration of a given function using distributed services has been implemented, run on several different clusters without or with external load and optimized to hide communication latency. - Managing large datasets with iRODS—a performance analyses
80 Grid, iRODS, BenchIT, performance analysis Denis Hünich, Ralph Müller-Pfefferkorn, pages 647 – 654. Abstract. The integrated Rule-Oriented Data System (iRODS) [3] is a Grid data management system that organizes geographically distributed data and their metadata. A Rule Engine allows the user a flexible definition of data storage, data access and data processing. This paper presents scenarios and a tool to measure the performance of an iRODS environment, as well as results of such measurements with large datasets. The scenarios concentrate on data transfers, metadata transfers and stress tests. The user has the possibility to influence the scenarios to adapt them to his own use case. With the help of the results it is possible to find bottlenecks and to optimize the settings of an iRODS environment. - Service level agreements for job control in high-performance computing
165 Service level agreements, high performance computing, cloud computing Roland Kübert, Stefan Wesner, pages 655 – 661. Abstract. Service Level Agreements (SLAs) are a key element for outsourcing critical parts of a business process in Service Oriented Architectures. They make it possible to move from solely trust-based towards controlled cross-organisational collaborations. While originating in the domain of telecommunications, the SLA concept has gained particular attention in Grid computing environments. Significant focus has so far been given to automated negotiation and agreement of SLAs between parties (also considering legal constraints). However, how a provider that has agreed to a certain SLA can map and implement it on its own physical resources, or on those provided by collaboration partners, is not well covered. In this paper we present an approach for a High Performance Computing (HPC) service provider to organize its job submission and scheduling control driven by long-term SLAs. - A Modeling Language Approach for the Abstraction of the Berkeley Open Infrastructure for Network Computing (BOINC) Framework
32 Model-Driven Architecture, Unified Modeling Language, Domain-Specific Language, Public-Resource Computing, BOINC Christian Benjamin Ries, Thomas Hilbig, Christian Schröder, pages 663 – 670. Abstract. BOINC (Berkeley Open Infrastructure for Network Computing) is a framework for solving large-scale and complex computational problems by means of public resource computing. Here, the computational effort is distributed onto a large number of computers connected by the Internet. Each computer works on its own workunits independently of the others and sends back its result to a project server. There are quite a few BOINC-based projects in the world. Installing, configuring, and maintaining a BOINC-based project, however, is a highly sophisticated task. Scientists and developers need a lot of experience regarding the underlying communication and operating system technologies, even if only a handful of BOINC-related functions are actually needed for most applications. This limits the application of BOINC in scientific computing, although there is an ever-growing need for computational power in this field. In this paper we present a new approach for model-based development of BOINC projects based on the specification of a high-level abstraction language as well as a suitable development environment. This approach borrows standardized modeling concepts from the well-known Unified Modeling Language (UML) and Object Constraint Language (OCL). - Degisco Green Methodologies in Desktop Grids
191 green methodologies, desktop grids, energy consumption reduction Bernhard Schott, Ad Emmen, pages 671 – 676. Abstract. The key advantage of Desktop Grids over service Grids and datacenters based on clusters of servers is their minimal heat density. Compute clusters without energy-intensive air conditioning run into thermal disaster within minutes, whereas PCs participating in Desktop Grids usually do not make use of any air conditioning. They operate at power dissipation levels of 40-150 Watts – an amount that raises the ambient temperature only slightly. A green strategy for Desktop Grids would try to exploit this thermodynamic key advantage: proactively take care to run workload only where and when total energy efficiency is optimal. How much energy is consumed by air conditioning? The additional energy consumed by air conditioning typically ranges from 30% to over 200% of the energy dissipated by the IT device, depending on the temperature of the cool reservoir the heat pump can utilize to get rid of the heat. The Code of Conduct on Datacenters quotes that most European datacenters are actually worse: they consume more than 200% of the IT-related energy for cooling, UPS and power distribution losses. We describe several distinct green methodologies to optimize compute-unit-specific energy consumption; the energy efficiency of Desktop Grids may prove superior to that of classical service Grids. How “green” is Desktop-Grid computing? In this paper we take a closer look at several aspects of energy consumption and computational performance in Desktop Grids. - Resource Fabrics: the next level of grids and clouds
88 distributed systems; operating system; distributed application execution; heterogeneity; management of scale Lutz Schubert, Matthias Assel, Stefan Wesner, pages 677 – 684. Abstract. With the growing amount of computational resources available not only locally (multi-core) but also across the web, utility computing (aka Clouds and Grids) becomes more and more interesting as a means to outsource management and services. So far, these machines still act like external resources that have to be explicitly selected, integrated, accessed, etc. – much like the concept of “Virtual Organisation” prescribes. This document describes how dealing with the increased scale and heterogeneity of future systems will implicitly open the door for new ways of integrating and using remote resources through a kind of web-based “fabric”.
2nd International Workshop on Medical Informatics and Engineering
- Agile methodology and development of software for users with specific disorders
68 Agile methodology, autism, Down syndrome, Extreme Programming, mental retardation, VOKS, RUP, testing Rostislav Fojtik, pages 687 – 691. Abstract. The paper deals with the possibilities of information technologies for improving the communicative skills of children with specific disorders, such as autistic spectrum disorders, Down syndrome, mental retardation, etc. The development of an application stemming from the communication system PECS (The Picture Exchange Communication System) and its Czech variant VOKS forms the basis of this paper, showing the specifics of developing and verifying software for this group of handicapped users. The paper shows the suitability of agile methods of software development for a concrete application designed for users with specific disorders. It tries to show the advantages and disadvantages of new methodologies, particularly Extreme Programming. Agile methodologies of software development appeared in the second half of the 1990s; they thus represent approaches which have not yet become widespread.
3rd International Symposium on Multimedia—Applications and Processing
- An Hypergraph Object Oriented Model for Image Segmentation and Annotation
48 hypergraph, object oriented model, image segmentation, image annotation Eugen Ganea, Marius Brezovan, pages 695 – 701. Abstract. This paper presents a system for the segmentation of images into regions and the annotation of these regions for semantic identification of the objects present in the image. The unified method for image segmentation and image annotation uses a hypergraph model constructed on a hexagonal structure. The hypergraph structure is used for representing the initial image, the results of the segmentation process and the annotation information together with the RDF ontology format. Our technique has a time complexity much lower than the methods studied in the specialized literature, and the experimental results on the Berkeley Dataset show that the performance of the method is robust. - Classification of Image Regions Using the Wavelet Standard Deviation Descriptor
47 Image Region Classification, Wavelet Standard Deviation Descriptor Sönke Greve, Marcin Grzegorzek, Carsten Saathoff, Dietrich Paulus, pages 703 – 708. Abstract. This paper introduces and comprehensively evaluates a new approach for the classification of image regions. It is based on the so-called wavelet standard deviation descriptor. Experiments performed for almost one thousand images with region segmentation given provided reasonable results for a very general application domain: “holiday pictures.” - High Capacity Colored Two Dimensional Codes
79 two dimensional code, colored code, data density, high capacity Antonio Grillo, Alessandro Lentini, Marco Querini, Giuseppe F. Italiano, pages 709 – 716. Abstract. Barcodes enable automated work processes without human intervention, and are widely deployed because they are fast and accurate, eliminate many errors and often save time and money. In order to increase the data capacity of barcodes, two dimensional (2D) codes were developed; the main challenges of 2D codes lie in their need to store more information and more character types without compromising their practical efficiency. This paper proposes the High Capacity Colored Two Dimensional (HCC2D) code, a new 2D code which aims at increasing the space available for data, while preserving the strong reliability and robustness properties of QR. The use of colored modules in HCC2D poses some new and non-trivial computer vision challenges. We developed a prototype of HCC2D, which realizes the entire Print&Scan process. The performance of HCC2D was evaluated considering different operating scenarios and data densities. HCC2D was compared to other barcodes, such as QR and Microsoft’s HCCB; the experimental results showed that HCC2D codes obtain data densities close to HCCB and strong robustness similar to QR. - Region-based Measures for Evaluation of Color Image Segmentation
136 color segmentation; graph-based segmentation; Andreea Iancu, Bogdan Popescu, Marius Brezovan, Eugen Ganea, pages 717 – 722. Abstract. This paper compares the efficiency of a new segmentation method with several existing approaches, addressing the problem of image segmentation evaluation from the error measurement point of view. We introduce a new method of salient object recognition with very good results relative to other known object detection methods. We developed a simple evaluation framework to compare the results of our method with other segmentation methods. The experimental results provide a complete basis for parallel analysis of the precision of our algorithm, rather than its individual efficiency. - Undetectable Spread-time Stegosystem Based on Noisy Channels
19 Digital audio signal, error correcting codes, noisy Gaussian channel, relative entropy, stegosystems Valery Korzhik, Guillermo Morales-Luna, Ksenia Loban, Irina Marakova-Begoc, pages 723 – 728. Abstract. We consider a scenario where an attacker is able to receive a stegosignal only over a Gaussian channel. In order to provide security of this channel-noise-based stegosystem under the very strong condition that the attacker may know even the cover message, it is necessary to establish a very low signal-to-noise ratio in the channel. This requirement is very hard to meet in practice, so we propose to use a spread-time stegosystem (STS). We show that both the security and the reliability of such an STS can be guaranteed, and that its parameters can be optimized with the use of error correcting codes. We present simulation results with our own STS implementation for digital audio cover messages in WAV format. - Building Personalized Interfaces by Data Mining Integration
26 interface, data mining, e-Learning Marian Cristian Mihaescu, pages 729 – 734. Abstract. Building personalized high quality multimedia interfaces represents a great challenge. This paper presents a custom procedure for building personalized interfaces within e-Learning environments. The procedure takes an interdisciplinary approach, bringing together multimedia interfaces, data mining and e-Learning. A large variety of learners with possibly very different backgrounds and goals may access an e-Learning system, so the interface must be built dynamically according to the current state of the learner. The business logic that decides which resources are available for the learner is based on Bayesian network learning. - A Graphical Interface for Evaluating Three Graph-Based Image Segmentation Algorithms
73 image segmentation, segmentation evaluation, graph algorithm Gabriel Mihai, Alina Doringa, Liana Stanescu, pages 735 – 740. Abstract. Image segmentation has an essential role in image analysis, pattern recognition and low-level vision. Since multiple segmentation algorithms exist in the literature, numerical evaluations are needed to quantify the consistency between them. Error measures can be used for this quantification because they allow a principled comparison between segmentation results on different images, with differing numbers of regions, generated by different algorithms with different parameters. This paper presents a graphical interface for evaluating three graph-based image segmentation algorithms: the color set back-projection algorithm, an efficient graph-based image segmentation algorithm also known as the local variation algorithm, and a new and original segmentation algorithm using a hexagonal structure defined on the set of image pixels. - Basic Consideration of MPEG-2 Coded File Entropy and Lossless Re-encoding
40 coding entropy MPEG-2 reencoding lossless Kazuo Ohzeki, Yuǎn yù Wei, Eizaburo Iwata, Ulrich Speidel, pages 741 – 748. Abstract. Re-encoding of already compressed files is one of the difficult challenges in measuring the efficiency of coding methods. Variable length coding with a variable source delimiting scheme is a promising method for improving re-encoding efficiency. Analyses of coded files with fixed length delimiting and with variable length delimiting are reviewed. Motion vector codes of MPEG-2 encoded files are modified from a variable-to-variable coding point of view. Length, bit-rates, and varieties of videos are examined. The largest of the five video files is 16 seconds of full D1 size at 720×480. By entropy evaluation, an improvement of almost 20% in coding efficiency over conventional MPEG-2 is obtained. - Analyzes of the processing performances of a Multimedia Database
42 multimedia; database server; content based retrieval; insert operations Cosmin Stoica Spahiu, pages 749 – 753. Abstract. The paper presents an original dedicated integrated software system for managing and querying alphanumerical information and images from the medical domain. The software has a modularized architecture controlled by a multimedia relational database management server. The server is designed to manage database creation, updating and complex querying based on several criteria: simple text-based queries, or content-based image queries on color or texture features extracted from color and gray-scale images. - Constructive Volume Modeling
135 volumetric data, voxel, constructive solid geometry, volume modelling, constructive volume geometry Mihai Tudorache, Mihai Popescu, Razvan Tanasie, pages 755 – 758. Abstract. In this article we present a method of obtaining highly complex synthetic scenes by using simple volumes as building blocks. The method described below can be used to obtain both homogeneous and heterogeneous volumes by combining volumes of different voxel densities. - Real-Time Embedded Fault Detection Estimators in a Satellite’s Reaction Wheels
50 Real-Time control systems, Kalman Filter Estimation, Fault Detection Nicolae Tudoroiu, Eshan Sobhani-Tehrani, Kash Khorasani, Tiberiu Letia, Roxana-Elena Tudoroiu, pages 759 – 766. Abstract. The main idea of this paper is the real-time implementation of Fault Detection Kalman Filter Estimators (FDKFE) in a satellite’s reaction wheels during its scientific mission. We assume that the satellite’s reaction wheels are subject to several failures due to abnormal changes in power distribution, motor torque and winding current, as well as temperature rises caused by a motor current increase or friction. The proposed real-time FDKFE strategies consist of two embedded multiple-model banks of nonlinear Kalman filter (extended and unscented) estimators. This work builds on our previous results in this field; here we are interested only in the real-time implementation of some of these FDKFE strategies (FDDM_EKF and FDDM_UKF). Furthermore, we construct a benchmark to compare their results and obtain an overall picture of how these strategies perform. - Application of optimal settings of the LMS adaptive filter for speech signal processing
41 LMS adaptive filter (Least Mean Square), DTW criterion (Dynamic Time Warping), noise canceller Jan Vaňuš, Vítězslav Stýskala, pages 767 – 774. Abstract. This paper proposes a method for the optimal adjustment of the parameters of an adaptive filter with the LMS algorithm, in the practical application of suppressing additive noise in a speech signal for voice communication with a control system. With the proposed method, the optimal parameter values of the adaptive filter are calculated in a way that guarantees the stability and convergence of the LMS algorithm. The DTW criterion is used to assess the quality of the speech signal obtained at the output of the adaptive filter. The experimental section describes the verification of the proposed method on the structure of an LMS adaptive filter, and on an LMS adaptive filter applied to suppressing noise in a speech signal, both by simulations in MATLAB and by implementation on a DSK TMS320C6713. - Obfuscation Methods with Controlled Calculation Amounts and Table Function
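The LMS algorithm whose parameters the paper above tunes can be sketched generically. The filter order, step size and test signal below are illustrative assumptions, not the authors' optimal settings:

```python
import random

def lms_filter(x, d, order=4, mu=0.05):
    """Adaptive transversal filter trained with the LMS rule.
    x: reference input, d: desired signal; returns the error e = d - y
    (in a noise canceller, e is the cleaned output)."""
    w = [0.0] * order                       # filter weights
    errors = []
    for n in range(order, len(x)):
        tap = x[n - order:n][::-1]          # most recent sample first
        y = sum(wi * xi for wi, xi in zip(w, tap))
        e = d[n] - y
        w = [wi + mu * e * xi for wi, xi in zip(w, tap)]  # LMS update
        errors.append(e)
    return errors

# Identify an unknown 4-tap system: the error shrinks as w converges.
random.seed(1)
x = [random.uniform(-1, 1) for _ in range(2000)]
true_w = [0.5, -0.3, 0.2, 0.1]
d = [0.0] * len(x)
for n in range(4, len(x)):
    d[n] = sum(a * b for a, b in zip(true_w, x[n - 4:n][::-1]))
err = lms_filter(x, d)
```

The stability condition the paper alludes to constrains the step size mu against the input signal power; too large a mu makes the weight update diverge, too small a mu slows convergence.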
82 semantic obfuscation, watermark, non-linear, calculation, encryption, prime number Yuanyu Wei, Kazuo Ohzeki, pages 775 – 780. Abstract. We propose a new obfuscation method with two techniques: the computational complexity can be controlled, and semantic obfuscation can be achieved. The computational complexity is strictly controlled using encryption, and can be specified arbitrarily, from one second to about one year, through the infeasibility of factoring products of prime numbers of chosen lengths. Semantic obfuscation is achieved by transforming a function into a table function. A nonlinear, arbitrary function can be incorporated, whereas only linear functions were used in conventional methods. Because the explicit form of the function is hidden, analysis is expected to take longer. The computational complexity technique and the semantic technique can be used at the same time, and the effect of integrating both techniques is large.
International Workshop on Real Time Software
- Computationally effective algorithms for 6DoF INS used for miniature UAVs
129 UAV, MEKF, 6DoF INS, Vector Filter Jan Floder, pages 783 – 790. Abstract. This article addresses 6-degree-of-freedom inertial navigation systems for miniature UAVs. It presents a new filter design which replaces the standard solutions represented by EKF/UKF filters. The new filter is designed to significantly reduce filter complexity and processing power requirements, while preserving estimation accuracy, so as to be useful in small embedded systems with minimal processing power. - Supervisory control and real-time constraints
163 distributed control, process control, industrial automation, OPC, system integration Wojciech Grega, pages 791 – 796. Abstract. The OPC (OLE for Process Control) protocol was developed as a solution that fulfills the requirements of an open data integration architecture for industrial control. The OPC standard is not primarily intended for feedback control or for communication with high-bandwidth hard real-time requirements. Adding OPC to a process can influence the dynamics of the control loop and cause problems in controller design and implementation. The experiments presented in this paper show that OPC, if properly configured, is capable of providing a loop time shorter than the time constants of many industrial processes. - Integration of Scheduling Analysis into UML Based Development Processes Through Model Transformation
43 Development Process, UML, MARTE, Scheduling Analysis, Model transformation, model based development Matthias Hagner, Ursula Goltz, pages 797 – 804. Abstract. The complexity of embedded systems and their safety requirements have risen significantly in recent years. Models and the model based development approach help to keep an overview and control of the development. Nevertheless, support for the analysis of non-functional requirements (e.g. scheduling) based on development models, and consequently the integration of these analysis technologies into a development process, exists only sporadically. The problem is that analysis tools use different metamodels than development tools; therefore, remodeling the system in the format of the analysis tool, or a model transformation, is necessary before an analysis can be performed. Here, we introduce a scheduling analysis view as part of the development model: a MARTE-annotated UML model that describes a system from the scheduling behavior point of view. In addition, we present a transformation from this annotated UML model to the scheduling analysis tool SymTA/S, and a treatment of the analysis results, to integrate scheduling analysis into a development process. With this approach it is not necessary to remodel the system in an analysis tool to profit from the analysis and its results. We illustrate our approach in a case study from our Collaborative Research Centre 562. - Laboratory real-time systems to facilitate automatic control education and research
138 real time control, mechatronic systems, education Krzysztof Kołek, Andrzej Turnau, Krystyn Hajduk, Paweł Piątek, Mariusz Pauluk, Dariusz Marchewka, Adam Piłat, Maciej Rosół, Przemysław Gorczyca, pages 805 – 812. Abstract. The paper aims to interest the reader in controlling real-time mechatronic systems under the MS Windows operating system. The authors describe solutions that combine a software part with hardware based on FPGA technology, together forming a comprehensive platform for real-time control. The main emphasis is placed on the authors' own designs and constructions. Lectures and laboratory experiments must be conducted hand in hand; this is the message for facilitating automatic control education. The research was carried out in the Department of Automatics at the University of Science and Technology (AGH). - Methods of Computer-Assisted Manual Control of Wheeled Robots
119 manual control, computer-assisted control, motion control, game control, strategy, mobile robots Viktor Michna, Petr Wagner, Jiri Kotzian, pages 813 – 816. Abstract. This paper deals with the possibilities of manual control of wheeled robots, tested on a robot-soccer application. The manual computer-assisted control module creates an interface between the game controller and the controlled robot. There are several ways of implementing steering; the simplest is differential steering. The module currently uses semi-automatic steering, which provides the capability to move the robot as a point. This functionality requires interaction with the vision system and the use of a Kalman or similar filter and compensators. The next logical step is fully-assisted steering, which interacts with the strategy and motion control modules and provides additional functionality such as tracking the ball and shooting. The objective is the control of many robots by only one player. - Software and hardware in the loop component for an IEC 61850 Co-Simulation platform
103 IEC61850, Co Simulation platform, hardware in the loop, software in the loop, OpNet modeler Haffar Mohamad, Thiriet Jean Marc, pages 817 – 823. Abstract. The deployment of the IEC 61850 standard in the world of substation automation systems leads to the use of specific strategies for architecture testing. To validate an IEC 61850 architecture, the first step is to validate the conformity of the object modeling and the service implementation inside the devices. The second step is to validate the compliance of IEC 61850 applications with the project specifications. A part of the architecture can of course be tested physically; however, in the design phase, or when the actual architecture cannot be checked directly, modeling is helpful. In our research we propose a co-simulation approach based on several components allowing the realization of advanced tests. This paper describes the need for, and the design and implementation of, software- and hardware-in-the-loop components, as well as the object modeling concept of IED models. - Real-time controller design based on NI Compact-RIO
147 real-time control, magnetic levitation, CompactRio, scheduling. Maciej Rosół, Adam Piłat, Andrzej Turnau, pages 825 – 830. Abstract. The paper is focused on the NI Compact-RIO configured as a controller for active magnetic levitation, used here as a benchmark for time-critical systems. Three real-time configurations are considered: soft, soft with IRQ, and hard FPGA. The quality of the real-time control has been tested for each configuration. - Intelligent Car Control and Recognition Embedded System
133 communication system, embedded device, image processing, industrial network, object recognition Vilem Srovnal Jr., Zdenek Machacek, Radim Hercik, Roman Slaby, Vilem Srovnal, pages 831 – 836. Abstract. This paper presents a control system design with autonomous control elements, focused on the automotive industry. The main objective is the description of a control and monitoring system with integrated image processing from a camera. The camera images are used for recognizing the route and the traffic situation. In the system proposal we also focus on the integration of components for car localization using GPS and a navigation system. The implemented embedded system communicates with other car control units using the CAN bus and industrial Ethernet. The communication interface between the driver and the car's integrated system is realized by process visualization on an LCD touch panel.
4th International Workshop on Secure Information Systems
- A Security Model for Personal Information Security Management Based on Partial Approximative Set Theory
127 Approximation of sets, rough set theory, partial approximative set theory, security policies. Zoltán Csajbók, pages 839 – 845. Abstract. Nowadays, computer users typically run their applications in a complex open computing environment which changes permanently at running time. To describe the behavior of such systems, we focus solely on the externally observable execution traces generated by the observed computing system. In these extreme circumstances, the patterns of sequences of primitive actions (execution traces) observed by an external observer cannot be designed or forecast in advance. Our framework also takes into account that security policies are partial in nature. To manage this problem we need tools which are able to approximately discover secure or insecure patterns in execution traces, based on the presuppositions of computer users. Rough set theory may be such a tool. According to it, the vagueness of a subset of a finite universe U is defined by the difference between its lower and upper approximations with respect to a partition of U. Using partitions, however, is a very strict requirement. In this paper, our starting point is an arbitrary family of subsets of U; we assume neither that this family covers the universe nor that the universe is finite. This new approach is called partial approximative set theory. We apply it to build a new security model for distributed software systems, focusing solely on their externally observable executions, in order to find out whether the observed system is secure. - Social Engineering-Based Attacks—Model and New Zealand Perspective
36 Social Engineering Attacks, Threats Lech Janczewski, Lingyan (René) Fu, pages 847 – 853. Abstract. The objective of this research was to present and demonstrate the major aspects and underlying constructs of social engineering. An in-depth literature review was carried out, resulting in the construction of a conceptual model of social engineering attacks. A case study with New Zealand-based IT practitioners was undertaken to understand the phenomenon and to gather insightful opinions. On this basis, an improved model of social engineering-based attacks was formulated.
International Symposium on Technologies for Social Advancement
- Global Mobile Applications For Monitoring Health
156 mobile health calorie intake mhealth technology applications monitor Tapsie Giridher Giridher, Anita Wasliewska, Jennifer Wong, pages 855 – 859. Abstract. The incentive for the mobile applications presented in this paper is the extensive spread of the mobile phone culture during the past decade. The first application is CalorieMeter, a calorie intake monitoring application. Cheer Up, the second mobile application, is based on self-help scientific methodologies for diagnosing the possibility of different kinds of depression. We apply the idea of mobile phone template applications, which supports the easy transformation of a given application into other application domains and allows us to gain language and regional independence, hence the global nature of our approach. Our applications have been developed for and tested on low-end mobile phones. - A Study on the Expectations and Actual Satisfaction about Mobile Handset before and after Purchase
74 Mobile Communication Industry, Customer Satisfaction, Customer Base, Mobile Communication Terminal JIBum Jung, seungpyo Hong, pages 861 – 866. Abstract. This thesis examines the factors that affect customer satisfaction in the domestic mobile communication terminal market, in terms of expectations before purchase and actual satisfaction after purchase. It also examines, theoretically and empirically, how the factors that affect customer satisfaction with mobile phones influence the customer base. The mobile communication terminal industry, which has been the driving force behind the development of Korea as a great power in the information and communication industry, has a great influence on the global market as well as the domestic economy. Nevertheless, research on the existing mobile communication market has focused on the mobile communication service market rather than the terminal market. In addition, research on customer satisfaction with mobile communication terminals, carried out by a few scholars in related fields, has been limited to prices and brands. That is, traditional research on customer satisfaction with mobile communication terminal products has focused on the influence of prices and brands on the purchase of products, rather than on the evaluation of the unique quality attributes of each product. Therefore, this thesis examines the factors expected to enhance customer satisfaction and expand the customer base, beyond external factors such as the prices and brand images of mobile phones, so that customer-oriented mobile phones can be developed and manufactured.
Workshop on Ad-Hoc Wireless Networks
- Wireless Transceiver for Control of Mobile Embedded Devices
99 Wireless Transceiver, Mobile Devices, HW Design Jan Kordas, Petr Wagner, Jiri Kotzian, pages 869 – 872. Abstract. This article deals with the control of mobile embedded devices via a wireless transceiver. The only way to control mobile devices such as robots is to use wireless data transfer. A possible solution using the nRF24L01 transceiver by Nordic Semiconductor, which works in the license-free worldwide 2.4 GHz ISM frequency band, is presented. An overview of this chip is included at the beginning of the article. The main part of the article deals with the design of the communication protocol; optimizations of this protocol to improve its performance and determinism are then discussed. In the last part, measurements of some data transfer characteristics are presented. - Efficient Coloring of Wireless Ad Hoc Networks With Diminished Transmitter Power
108 Channel Assignment, Diminished Transmitter Power, Coloring, Ad Hoc Networks Krzysztof Krzywdziński, pages 873 – 878. Abstract. In this work we present a new approach to the problem of channel assignment in wireless ad hoc networks. We introduce a new algorithm which works in a distributed model of computation on the unit disk graphs modeling the network. The algorithm first modifies the transmitting power of the devices constituting the network (the radii of the vertices of the unit disk graph) and then assigns the frequencies in the network according to our demands. We assume that initially all devices have the same transmission range, and that we are able to reduce the transmission range of some of them in order to decrease the number of necessary frequencies. We can diminish the number of communication links without losing the connectivity of the network, i.e. reduce the number of possible interference threats without losing the possibility of exchanging information. In addition, the reduction of communication range reduces the power consumption of the transmitting devices. - Fast Construction of Broadcast Scheduling and Gossiping in Dynamic Ad Hoc Networks
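The link between transmission range and channel count in the abstract above can be illustrated with a toy unit disk graph and first-fit coloring — a generic centralized sketch, not the paper's distributed algorithm, with illustrative node positions:

```python
import math
from itertools import combinations

def links(points, radius):
    """Unit disk graph: a link exists when two nodes are within range."""
    adj = {i: set() for i in range(len(points))}
    for i, j in combinations(range(len(points)), 2):
        if math.dist(points[i], points[j]) <= radius:
            adj[i].add(j)
            adj[j].add(i)
    return adj

def is_connected(adj):
    seen, stack = {0}, [0]
    while stack:
        for u in adj[stack.pop()] - seen:
            seen.add(u)
            stack.append(u)
    return len(seen) == len(adj)

def channels_used(adj):
    """First-fit (greedy) frequency assignment, high-degree nodes first."""
    color = {}
    for v in sorted(adj, key=lambda v: -len(adj[v])):
        taken = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in range(len(adj)) if c not in taken)
    return max(color.values()) + 1

# Six nodes on a 3x2 grid: full power links everyone to everyone and needs
# six channels; a reduced range keeps the network connected with only two.
pts = [(0, 0), (1, 0), (2, 0), (0, 1), (1, 1), (2, 1)]
```

Shrinking the radius from 2.5 to 1.0 removes interference edges while every node stays reachable, which is exactly the trade-off the paper exploits.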
107 Broadcast Scheduling, Gossiping, Dynamic Ad Hoc Networks, Unit Disk Graph Krzysztof Krzywdziński, pages 879 – 884. Abstract. This paper studies the minimum latency broadcast schedule (MLBS) problem in ad hoc networks represented by unit disk graphs. Our approach uses an algorithm which does not need a BFS tree. We introduce a construction which does not depend on the source, can be found in a constant number of synchronous rounds, uses only short messages, and produces a broadcast schedule with latency at most 258 times the optimum. The advantage of our construction over known algorithms is its ability to adapt quickly to changes in the network, such as adding, moving or deleting vertices (even during the broadcast). We also study the minimum-latency gossiping (all-to-all broadcast) problem in unit disk graphs. Our algorithm is the best result for gossiping in unit disk graphs in the unbounded-size message model. Since our construction of the broadcast schedule does not depend on the source, it may also be used to solve other broadcasting problems in unit disk graphs, such as single-source multiple-message broadcasting and multi-channel broadcast scheduling.
Workshop on Computational Optimization
- ACO with semi-random start applied on MKP
18 ant colony optimization, multiple knapsack problem Stefka Fidanova, Pencho Marinov, Krassimir Atanassov, pages 887 – 891. Abstract. Ant Colony Optimization (ACO) is a stochastic search method that mimics the social behavior of real ant colonies, which manage to establish the shortest route to feeding sources and back. Such algorithms have been developed to arrive at near-optimal solutions to large-scale optimization problems for which traditional mathematical techniques may fail. In this paper a semi-random start is applied: a new kind of estimate of the ants' start nodes is made, and several start strategies are prepared and combined. The idea of the semi-random start is better management of the ants. The new technique is tested on the Multiple Knapsack Problem (MKP), and a benchmark comparison among the strategies is presented in terms of the quality of the results. Based on this comparative analysis, the performance of the algorithm is discussed. The study presents ideas that should be beneficial to both practitioners and researchers involved in solving optimization problems. - On the Probabilistic min spanning tree problem
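The pheromone-guided construction that ACO applies to the knapsack problem can be sketched in miniature. This is a generic toy ACO, not the authors' semi-random-start algorithm; the heuristic, deposit rule and instance below are illustrative assumptions:

```python
import random

def aco_mkp(values, weights, capacities, ants=20, iters=50, rho=0.5, seed=0):
    """Toy Ant Colony Optimization for a multidimensional knapsack:
    weights[i][k] is the amount of resource k consumed by item i."""
    rng = random.Random(seed)
    n, m = len(values), len(capacities)
    tau = [1.0] * n                          # pheromone trail per item
    best_val, best_sol = 0, []
    for _ in range(iters):
        for _ in range(ants):
            remaining = list(capacities)
            chosen = []
            candidates = set(range(n))
            while True:
                feasible = [i for i in candidates
                            if all(weights[i][k] <= remaining[k] for k in range(m))]
                if not feasible:
                    break
                # attractiveness = pheromone times a simple value heuristic
                i = rng.choices(feasible,
                                weights=[tau[j] * values[j] for j in feasible])[0]
                chosen.append(i)
                candidates.remove(i)
                for k in range(m):
                    remaining[k] -= weights[i][k]
            val = sum(values[i] for i in chosen)
            if val > best_val:
                best_val, best_sol = val, chosen
        tau = [(1 - rho) * t for t in tau]   # evaporation
        for i in best_sol:                   # deposit on the best-so-far solution
            tau[i] += 1.0
    return best_val, sorted(best_sol)
```

Each ant builds a feasible item set by biased random choice; evaporation plus deposit gradually concentrates the search around good solutions. The paper's contribution concerns how the ants' start nodes are estimated, which this sketch does not model.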
15 Combinatorial optimization, probabilistic optimization, minimum spanning tree, polynomial approximation Boria Nicolas, Murat Cécile, Paschos Vangelis, pages 893 – 900. Abstract. We study a probabilistic optimization model for min spanning tree, where any vertex vi of the input-graph G(V, E) has some presence probability pi in the final instance G′ ⊂ G that will effectively be optimized. Supposing that when this “real” instance G′ becomes known, a decision maker might have no time to perform computation from scratch, we assume that a spanning tree T , called anticipatory or a priori spanning tree, has already been computed in G and, also, that a decision maker can run a quick algorithm, called modification strategy, that modifies the anticipatory tree T in order to fit G′. The goal is to compute an anticipatory spanning tree of G such that, its modification for any G′ ⊆ G is optimal for G′. This is what we call probabilistic min spanning tree problem. In this paper we study complexity and approximation of probabilistic min spanning tree in complete graphs as well as of two natural subproblems of it, namely, the probabilistic metric min spanning tree and the probabilistic min spanning tree 1,2 that deal with metric complete graphs and complete graphs with edge-weights either 1, or 2, respectively. - Efficient Portfolio Optimization with Conditional Value at Risk
105 risk measures, portfolio optimization, computability, linear programming Wlodzimierz Ogryczak, Tomasz Sliwinski, pages 901 – 908. Abstract. The portfolio optimization problem is modeled as a mean-risk bicriteria optimization problem where the expected return is maximized and some (scalar) risk measure is minimized. In the original Markowitz model the risk is measured by the variance, while several polyhedral risk measures have since been introduced, leading to Linear Programming (LP) computable portfolio optimization models in the case of discrete random variables represented by their realizations under specified scenarios. Among them, the second order quantile risk measures have recently become popular in finance and banking. The simplest such measure, now commonly called the Conditional Value at Risk (CVaR) or Tail VaR, represents the mean shortfall at a specified confidence level. The corresponding portfolio optimization models can be solved with general purpose LP solvers. However, more advanced simulation models employed for scenario generation may yield several thousand scenarios, leading to an LP model with a huge number of variables and constraints and thus decreasing its computational efficiency: the number of constraints (matrix rows) is usually proportional to the number of scenarios, while the number of variables (matrix columns) is proportional to the total of the number of scenarios and the number of instruments. We show that the computational efficiency can be dramatically improved with an alternative model taking advantage of LP duality. In the introduced models the number of structural constraints (matrix rows) is proportional to the number of instruments, so the number of scenarios does not seriously affect the efficiency of the simplex method.
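For equiprobable scenarios, the CVaR discussed in the abstract above reduces to the average of the worst (1 − alpha) fraction of losses. A minimal sketch of that definition — not the authors' LP or dual formulation:

```python
def cvar(losses, alpha=0.95):
    """CVaR of equiprobable loss scenarios: the mean of the worst
    (1 - alpha) fraction of outcomes.  Assumes n * (1 - alpha) is
    (close to) an integer; fractional tails need the Rockafellar-
    Uryasev formulation used in the LP models."""
    s = sorted(losses)
    tail = max(1, round(len(s) * (1 - alpha)))   # number of worst scenarios
    worst = s[-tail:]
    return sum(worst) / len(worst)

# With 100 equiprobable scenarios of losses 1..100 and alpha = 0.95,
# CVaR averages the five worst losses, 96..100.
```

In the LP models the paper compares, this tail average is expressed through an auxiliary threshold variable and one shortfall variable per scenario, which is precisely why the constraint count grows with the number of scenarios in the primal model.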
- Enhanced Competitive Differential Evolution for Constrained Optimization
91 constrained optimization; differential evolution; enhanced search of feasible region Josef Tvrdik, Radka Polakova, pages 909 – 915. Abstract. Constrained optimization with differential evolution (DE) is addressed. A novel variant of competitive differential evolution with a hybridized search of the feasible region is proposed, in which opposition-based optimization and adaptive controlled random search are combined. Several variants of the algorithm are experimentally compared on the benchmark set developed for the special session of the IEEE Congress on Evolutionary Computation (CEC) 2010. The results of the enhanced competitive DE show an effective search for feasible solutions, significantly better on difficult problems than the competitive DE variant presented at CEC 2010.