COMPLEXIS 2018 Abstracts


Area 1 - Complexity in Biology and Biomedical Engineering

Short Papers
Paper Nr: 2
Title:

On Code-Prompting Auto-Catalytic Sets and the Origins of Coded Life

Authors:

I. Fayerverker and T. Mor

Abstract: The genetic code and genetic evolution are at the core of complexity in biology; however, there is as yet no satisfactory explanation for the emergence of the genetic code. We present here a possible scenario accounting for the emergence of “coded life” in nature: we describe the emergence of the genetic code from molecular evolution, prior to genetic evolution, in a chemical era in which all the molecules were still located within (probably non-biological) compartments. Our scenario is obtained by combining the conceptual idea of “code-prompting autocatalytic sets” (Agmon and Mor, 2015) with recent results on non-enzymatic template replication methods (Prywes et al., 2016), possibly relevant to the prebiotic stage preceding the RNA world. In the scenario described here, we often adopt a computer-science viewpoint and abstraction: we consider sets of strings composed of letters, such that each letter represents a molecular building block (mainly nucleotides and amino acids) and each string represents a more complex molecule formed by concatenating the simpler molecules represented by letters; the biochemical rules are described in an abstract language of rules and statistics over letters and strings. We then suggest a novel path, containing several phases, for the emergence of “coded life”.
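
The string abstraction described above can be illustrated with a minimal sketch (the alphabet, function names, and error model are hypothetical illustrations, not the authors' formalism):

```python
import random

# Letters stand for molecular building blocks (hypothetical alphabet):
# lowercase letters for nucleotides, uppercase for amino acids.
NUCLEOTIDES = list("acgu")
AMINO_ACIDS = list("GAVL")

def random_string(alphabet, length, rng):
    """A 'molecule' is a concatenation of building-block letters."""
    return "".join(rng.choice(alphabet) for _ in range(length))

def template_replicate(template, error_rate, rng):
    """Abstract non-enzymatic template replication: copy a string
    letter by letter, with a small per-letter error probability."""
    return "".join(
        rng.choice(NUCLEOTIDES) if rng.random() < error_rate else letter
        for letter in template
    )

rng = random.Random(42)
template = random_string(NUCLEOTIDES, 12, rng)
copy = template_replicate(template, error_rate=0.05, rng=rng)
```

In this picture, the "rules and statistics of letters and strings" become ordinary operations on Python strings, which is the sense in which the scenario can be reasoned about computationally.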

Paper Nr: 26
Title:

Bio-backfill: A Scheduling Policy Enhancing the Performance of Bioinformatics Workflows in Shared Clusters

Authors:

Ferran Badosa, Antonio Espinosa, Gonzalo Vera and Ana Ripoll

Abstract: In this work we present the bio-backfill scheduler, a backfill scheduler for bioinformatics workflow applications running on shared, heterogeneous clusters. Backfill techniques advance low-priority jobs in cluster queues if doing so does not delay higher-priority jobs. They improve the resource utilization and turnaround achieved with classical policies such as First-Come-First-Served and Longest-Job-First. When attempting to apply backfill techniques such as FirstFit or BestFit to bioinformatics workflows, we found several issues. Backfill requires runtime predictions, which are particularly difficult for bioinformatics applications: their performance varies substantially depending on the input datasets and the values of their many configuration parameters. Furthermore, backfill approaches are mainly intended to schedule independent tasks rather than dependent tasks such as those forming workflows. Backfilled jobs are chosen based on their number of processors and their runtime, without considering the slowdown incurred when the degree of multiprogramming of the nodes is greater than 1. To tackle these issues, we developed the bio-backfill scheduler. Based on a predictor that generates performance predictions for each job with multiple resources, and a resource-sharing model that minimizes slowdown, we designed a scheduling algorithm capable of backfilling bioinformatics workflow applications. Our experiments show that our proposal can improve average workflow turnaround by roughly 9% and resource utilization by almost 4%, compared to popular backfill strategies such as FirstFit or BestFit.
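
The baseline the paper improves upon can be sketched as a single pass of EASY-style backfilling (an illustrative simplification, not the authors' bio-backfill algorithm; names and the reservation rule are ours):

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    procs: int        # processors requested
    runtime: float    # predicted runtime (backfill depends on such predictions)

def easy_backfill(queue, free_procs, running):
    """One scheduling pass of simplified EASY backfilling.
    `running` is a list of (finish_time, procs) for executing jobs.
    Returns the jobs started now, in order."""
    started = []
    queue = list(queue)
    while queue and queue[0].procs <= free_procs:
        job = queue.pop(0)            # head of queue runs immediately
        free_procs -= job.procs
        started.append(job)
    if not queue:
        return started
    # Reservation for the blocked head job: the earliest time enough
    # processors become free as running jobs finish.
    head = queue[0]
    avail, shadow = free_procs, 0.0
    for finish, procs in sorted(running):
        avail += procs
        if avail >= head.procs:
            shadow = finish           # head job is guaranteed to start here
            break
    # Backfill: later jobs may jump ahead if they fit in the currently
    # free processors and finish before the reservation, so the head
    # job is never delayed.
    for job in queue[1:]:
        if job.procs <= free_procs and job.runtime <= shadow:
            free_procs -= job.procs
            started.append(job)
    return started
```

The issues the abstract lists show up directly in this sketch: `runtime` must be predicted accurately, the queue is assumed to hold independent jobs, and nothing models the slowdown of co-located jobs, which is what bio-backfill's predictor and resource-sharing model address.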

Paper Nr: 27
Title:

The Fuzzy Mortality Model based on Quaternion Theory

Authors:

Andrzej Szymanski and Agnieszka Rossa

Abstract: Mortality models are of fundamental importance in many areas, such as pension plans, care of the elderly, and the provision of health services. In this paper we propose a new class of mortality models based on a fuzzy version of the well-known Lee–Carter model (1992). The theoretical background is based on an algebraic approach to fuzzy numbers. The essential idea in our approach is to represent the membership function of a fuzzy number as an element of a C*-Banach algebra. If the membership function µ(z) of a fuzzy number is strictly monotonic on two disjoint intervals, then it can be decomposed into strictly decreasing and strictly increasing functions, and the inverse functions f(u) and g(u) can be found. The membership function µ(z) can then be represented by means of the complex-valued function f(u) + ig(u), where i is the imaginary unit. The pair (f, g) is a quaternion, and quaternion-valued square-integrable functions form a separable Hilbert space. We use this Hilbert space of quaternion-valued functions as a tool for constructing the new class of mortality models.
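
The construction described in the abstract can be written out as follows (a sketch of the standard side-inverse representation; the notation is ours, not necessarily the authors'):

```latex
\[
  f(u) = \bigl(\mu\big|_{\text{decreasing branch}}\bigr)^{-1}(u), \qquad
  g(u) = \bigl(\mu\big|_{\text{increasing branch}}\bigr)^{-1}(u), \qquad
  u \in [0,1],
\]
\[
  z(u) = f(u) + i\,g(u), \qquad
  \langle p, q \rangle = \int_0^1 p(u)\,\overline{q(u)}\,\mathrm{d}u,
\]
```

where identifying each pair (f, g) with a quaternion and equipping the square-integrable quaternion-valued functions on [0, 1] with this inner product yields the separable Hilbert space the model is built in.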

Area 2 - Complexity in Informatics, Automation and Networking

Full Papers
Paper Nr: 9
Title:

A Complex Network Analysis Approach for Risk Increase Factor Prediction in Nuclear Power Plants

Authors:

Mouna Rifi, Mohamed Hibti and Rushed Kanawati

Abstract: We explore applying network-based metrics to predict safety metrics of components in Nuclear Power Plants (NPPs). We first show how to model accident sequences as complex networks; then we conduct a statistical study of the main network metrics to show that these are highly correlated with the Risk Increase Factor (RIF), a very popular metric in nuclear safety studies.

Paper Nr: 18
Title:

Bitcoin Currency Fluctuation

Authors:

Marius Kinderis, Marija Bezbradica and Martin Crane

Abstract: Predicting currency prices remains a difficult endeavour. Investors are continually seeking new ways to extract meaningful information about the future direction of price changes. Recently, cryptocurrencies have attracted huge attention due to their unique way of transferring value, as well as their value as a hedge. The method proposed in this project uses data mining techniques: by mining text documents such as news articles and tweets, we try to infer the relationship between the information contained in such items and the direction of cryptocurrency price changes. A Long Short-Term Memory Recurrent Neural Network (LSTM RNN) helps to create a hybrid model which comprises sentiment analysis techniques as well as a predictive machine learning model. The success of the model was evaluated in the context of predicting the direction of Bitcoin price changes. The findings reported here reveal that our system yields more accurate, real-time predictions of Bitcoin price fluctuations when compared to other existing models in the market.

Short Papers
Paper Nr: 1
Title:

PaaS-BDP - A Multi-Cloud Architectural Pattern for Big Data Processing on a Platform-as-a-Service Model

Authors:

Thalita Vergilio and Muthu Ramachandran

Abstract: This paper presents a contribution to the fields of Big Data Analytics and Software Architecture, namely an emerging and unifying architectural pattern for big data processing in the cloud from a cloud consumer’s perspective. PaaS-BDP (Platform-as-a-Service for Big Data Processing) is an architectural pattern based on resource pooling and the use of a unified programming model for building big data processing pipelines capable of processing both batch and stream data. It uses container cluster technology on a PaaS service model to overcome common shortfalls of current big data solutions offered by major cloud providers, such as low portability, lack of interoperability, and the risk of vendor lock-in.

Paper Nr: 3
Title:

Using Tag based Semantic Annotation to Empower Client and REST Service Interaction

Authors:

Cong Peng and Guohua Bai

Abstract: The utilization of Web services is becoming labor-intensive work as the Web grows rapidly. Semantically annotated service descriptions can support more automation in tasks such as service discovery, invocation, and composition. However, the adoption of existing Semantic Web Services solutions is hindered by their complexity and the high level of expertise they demand. In this paper we propose a lightweight, non-intrusive method to enrich REST Web service resources with semantic annotations, in order to support more autonomous Web service utilization and generic client–service interaction. This is achieved by turning the service description into a semantic resource graph represented in RDF, using tag-based semantic annotations and a small vocabulary. The method is implemented with the popular OpenAPI service description format and illustrated by a simple use case example.

Paper Nr: 4
Title:

A Novel Algorithm for Bi-Level Image Coding and Lossless Compression based on Virtual Ant Colonies

Authors:

Matthew Mouring, Khaldoon Dhou and Mirsad Hadzikadic

Abstract: Ant colonies have emerged as a topic of research and are applied in different fields. In this paper, we develop an algorithm based on the concept of ant colonies and utilize it for image coding and compression. To apply the algorithm to images, we represent each image as a virtual world which contains food and routes along which ants walk and search for it. Ants in the algorithm have certain types of movements depending on when and where they find food. When an ant finds food, it releases a pheromone, which allows other ants to follow it to the source of food. This increases the likelihood that food areas are covered. The chemical evaporates after a certain amount of time, which in turn helps ants move on to cover another food area. In addition to the pheromone, ants use proximity awareness to detect other ants in their surroundings, which helps them cover more food areas. When an ant finds food, it moves to that location, and the movement and coordinates are recorded. If there is no food, an ant moves randomly to a neighboring location and starts searching. We ran our algorithm on a set of 8 images, and the empirical results showed that it outperforms many image compression techniques, including JBIG2.
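
The ant dynamics described above can be sketched in a toy form (illustrative only; the parameters, movement rules, and data structures are our assumptions, and the paper's actual coding/compression scheme is more elaborate):

```python
import random

def ant_walk(world, steps, n_ants=4, evaporation=0.9, seed=1):
    """Ants explore a grid, eat 'food' cells (e.g. black pixels of a
    bi-level image), deposit evaporating pheromone, and record every
    move; the recorded move stream is what gets encoded/compressed."""
    rng = random.Random(seed)
    pheromone = {}                       # cell -> pheromone level
    ants = [(0, 0)] * n_ants
    trace = []                           # recorded movement stream
    for _ in range(steps):
        new_positions = []
        for (x, y) in ants:
            nbrs = [(x + dx, y + dy)
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                    if (dx, dy) != (0, 0)]
            food = [c for c in nbrs if c in world]
            if food:
                # Prefer food cells, weighting by pheromone laid by others.
                nxt = max(food, key=lambda c: pheromone.get(c, 0.0))
                pheromone[nxt] = pheromone.get(nxt, 0.0) + 1.0
                world.discard(nxt)       # food is consumed once coded
            else:
                nxt = rng.choice(nbrs)   # random exploration
            trace.append(nxt)
            new_positions.append(nxt)
        ants = new_positions
        # Pheromone evaporates, nudging ants toward uncovered areas.
        pheromone = {c: v * evaporation for c, v in pheromone.items()}
    return trace
```

Note that `world` is mutated as food is consumed, so callers should pass a copy if the original pixel set is still needed.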

Paper Nr: 6
Title:

Proposing a Holistic Framework for the Assessment and Management of Manufacturing Complexity through Data-centric and Human-centric Approaches

Authors:

Dominik Kohr, Mussawar Ahmad, Bugra Alkan, Malarvizhi Kaniappan Chinnathai, Lukas Budde, Daniel Vera, Thomas Friedli and Robert Harrison

Abstract: A multiplicity of factors, including technological innovations, dynamic operating environments, and globalisation, are all believed to contribute to the ever-increasing complexity of manufacturing systems. Although complexity is necessary to meet functional needs, it is important to assess and monitor it to reduce life-cycle costs by simplifying designs and minimising failure modes. This research paper identifies and describes two key industrially relevant methods for assessing complexity, namely a data-centric approach using the information-theoretic method and a human-centric approach based on surveys and questionnaires. The paper goes on to describe the benefits and shortcomings of each, and contributes to the body of knowledge by proposing a holistic framework that combines both assessment methods.

Posters
Paper Nr: 16
Title:

Complexity Evaluation with Business Process Modeling and Simulation

Authors:

Krishan Chand and Muthu Ramachandran

Abstract: To stay competitive and establish a position in the market, companies have to make quick changes. Business Process Modelling (BPM) has made an impact by capturing processes and enabling the changes needed to improve business operations. Modelling and simulation aim to make a process simpler and thereby reduce its complexity; however, modellers and researchers still produce complex models. Modelling and simulation are areas that need to be addressed, yet only a few researchers have worked in them. This paper addresses the complexity of the cloud performance criteria of time and cost. To this end, it evaluates the domain of financial services in the cloud using Business Process Modeling Notation (BPMN) and simulation. Two different scenarios have been created to demonstrate the performance complexity of cloud services. Finally, conclusions are drawn to help and guide further research.

Area 3 - Complexity in Social Sciences

Full Papers
Paper Nr: 8
Title:

Why so Emotional? An Analysis of Emotional Bot-generated Content on Twitter

Authors:

Ema Kušen and Mark Strembeck

Abstract: In this paper, we present a study on the emotions conveyed in bot-generated Twitter messages as compared to emotions conveyed in human-generated messages. Social bots are software programs that automatically produce messages and interact with human users on social media platforms. In recent years, bots have become quite complex and may mimic the behavior of human users. Prior studies have shown that emotional messages may significantly influence their readers. Therefore, it is important to study the effects that emotional bot-generated content has on the reactions of human users and on information diffusion over online social networks (OSNs). For the purposes of this paper, we analyzed 1.3 million Twitter accounts that generated 4.4 million tweets related to 24 systematically chosen real-world events. Our findings show that: 1) bots emotionally polarize during controversial events and even inject polarizing emotions into the Twitter discourse on harmless events such as Thanksgiving, 2) humans generally tend to conform to the base emotion of the respective event, while bots contribute to the higher intensity of shifted emotions (i.e. emotions that do not conform to the base emotion of the respective event), 3) bots tend to shift emotions to receive more attention (in terms of likes and retweets).

Short Papers
Paper Nr: 5
Title:

Management of Co-evolutionary Complexity - Some Methodological Considerations

Authors:

Helena Knyazeva

Abstract: The nature of the co-evolutionary complexity of society and the ways of managing it are under consideration. Co-evolutionary complexity emerges in the process of joint and concordant development of structures and organizations. Management of complexity is possible by ensuring its innovative self-management. The role of external control actions increases under conditions of uncertainty and crisis in society. Co-evolution means the art of living together and co-existing in one and the same tempo-world. It is substantiated that an extended ecological discourse based on the notion of Umwelt is useful in modern management activity.

Paper Nr: 11
Title:

A New Pricing Model for Freelancing Platforms based on Financial and Social Capital

Authors:

Stefan Kambiz Behfar and Qumars Behfar

Abstract: On freelancing platforms, there is usually disagreement over price between project owners and freelancers. Project owners usually do not know what price to offer to have the project done with excellence within the allocated time, and freelancers usually do not know what price to offer in order to win the project in the competition. We propose to calculate and offer a realistic value to project owners based on financial and social capital. In this way, the company would be able to attract more clients with upscale projects, because 1) both project owners and freelancers become satisfied with the offered price, 2) there is less negotiation over how much a project is really worth, and 3) clients become more segmented, so the company can attract high-value customers from its competitors. In our methodology, social capital is calculated via different approaches, such as embedded resources. At the group level, capital represents some aggregation of valued resources, such as financial resources as well as social connections.

Paper Nr: 13
Title:

Elephants, Donkeys, and Colonel Blotto

Authors:

Ivan P. Yamshchikov and Sharwin Rezagholi

Abstract: This paper employs a novel method for the empirical analysis of political discourse and develops a model that demonstrates dynamics comparable with the empirical data. Applying a set of binary text classifiers based on convolutional neural networks, we label statements in the political programs of the Democratic and the Republican Party in the United States. Extending the framework of the Colonel Blotto game by a stochastic activation structure, we show that, under a simple learning rule, the simulated game exhibits dynamics that resemble the empirical data.
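
The game-theoretic extension mentioned above can be sketched as follows (a simplified reading of "stochastic activation": each battlefield is contested only with some probability; the parameter names and payoff convention are our assumptions, not the authors' exact model):

```python
import random

def blotto_payoff(alloc_a, alloc_b, active):
    """Only battlefields in `active` count; the higher allocation wins
    the field. Returns A's net field wins (positive favours A)."""
    score = 0
    for i in active:
        if alloc_a[i] > alloc_b[i]:
            score += 1
        elif alloc_a[i] < alloc_b[i]:
            score -= 1
    return score

def play_round(alloc_a, alloc_b, p_active, rng):
    """One round of Colonel Blotto with stochastic activation: each
    battlefield is drawn as active independently with probability
    p_active before payoffs are computed."""
    n = len(alloc_a)
    active = [i for i in range(n) if rng.random() < p_active]
    return blotto_payoff(alloc_a, alloc_b, active), active

rng = random.Random(0)
a = [4, 3, 3]   # player A spreads troops over 3 battlefields
b = [5, 5, 0]   # player B concentrates on two fields
payoff, active = play_round(a, b, p_active=0.8, rng=rng)
```

A learning rule would then adjust `alloc_a` and `alloc_b` between rounds based on realized payoffs, which is where the dynamics compared with the empirical party-program data arise.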

Paper Nr: 14
Title:

On the Public Perception of Police Forces in Riot Events - The Role of Emotions in Three Major Social Networks During the 2017 G20 Riots

Authors:

Ema Kušen and Mark Strembeck

Abstract: In this paper, we present a study on the impact of emotions on information diffusion during a riot event. In particular, we analyze a data-set consisting of more than 750 thousand social media messages related to the 2017 G20 summit that have been extracted from Facebook, Twitter, and YouTube. Because of the controversies surrounding police operations during violent protests, our analysis especially focuses on emotions conveyed in messages related to the local police. We found that a) negative emotions of high arousal (anger and fear) dominate in messages mentioning the police on all three social networks, b) emotional content was forwarded (retweeted) more often, regardless of the corresponding emotion valence, and c) in contrast to previous studies we found that emotions have a considerably larger impact on the retweeting behavior than the number of hashtags a message contains.

Posters
Paper Nr: 15
Title:

University Student Desertion Analysis using Agent-Based Modeling Approach

Authors:

M. C. Castellanos Rojas, L. D. Alvarado Nieto and J. E. Villamil Puentes

Abstract: Student dropout at universities is a worldwide phenomenon that exceeds 40% of the students admitted to the first semester. In Colombia, it exceeds 45%, an alarming rate that makes studies on the subject very important for governments and universities, considering that it has a huge social impact and affects the resources of the education sector. Traditionally, studies are performed using statistical and mathematical methods, and the Ministry of National Education acknowledges that these have been insufficient, since they fail to explain dropout behavior. Agent-based modeling and simulation (ABMS) has been considered a new way of doing science for managing social problems that are complex systems, making evolutionary simulation a useful approach to studying this phenomenon.

Area 4 - Complexity in Computational Intelligence and Future Information Systems

Short Papers
Paper Nr: 7
Title:

CBR-Mining Approach to Improve Learning System Engineering in a Collaborative E-Learning Platform

Authors:

Fatima Zahra Berriche, Besma Zeddini, Hubert Kadima and Alain Riviere

Abstract: System engineering (SE) is an approach that involves customers and users in the development process, particularly during the definition of requirements and system functionalities. In order to meet the challenges and increasing complexity of system engineering, training engineering students in this field is necessary. It enables learners to acquire sound theoretical and practical knowledge, and to adapt to the majority of system engineering job profiles offered by industrial companies. In this paper, which continues our earlier research (Berriche et al., 2015), we study the feasibility of the CBR-mining (case-based reasoning and process mining) approach in the context of our platform dedicated to the learning of system engineering. First, we apply the CBR-mining approach to monitor student interactions from log files. Second, we propose clusters that bring together the educational processes most often performed by students. We have experimented with this approach using the ProM Framework.

Paper Nr: 10
Title:

Stock Market Prediction based on Deep Long Short Term Memory Neural Network

Authors:

Xiongwen Pang, Yanqiang Zhou, Pan Wang, Weiwei Lin and Victor Chang

Abstract: To study the influence of market characteristics on stock prices, we note that traditional neural network algorithms may fail to predict the stock market precisely, since the random selection of initial weights can easily lead to incorrect predictions. Based on the idea of word vectors in deep learning, we introduce the concept of the stock vector: the input is no longer a single index or a single stock's data, but high-dimensional historical data for multiple stocks. We propose a deep long short-term memory neural network (LSMN) with an embedded layer to predict the stock market. In this model, we use the embedded layer to vectorize the data and then forecast stocks via the long short-term memory neural network. The experimental results show that the deep long short-term memory neural network with an embedded layer achieves state-of-the-art accuracy for markets in developing countries. Specifically, the accuracy of this model is 57.2% for the Shanghai A-shares composite index and 52.4% for individual stocks.
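
The role of the embedded layer can be illustrated with a minimal sketch (a toy lookup-table embedding with our own names and random initialisation; in the paper the table would be trained jointly with the LSTM rather than fixed):

```python
import random

def build_embedding(vocab_size, dim, seed=0):
    """An embedding layer is essentially a learned lookup table: each
    discrete input index maps to a dense vector. Here the vectors are
    randomly initialised for illustration."""
    rng = random.Random(seed)
    return [[rng.uniform(-0.1, 0.1) for _ in range(dim)]
            for _ in range(vocab_size)]

def embed_sequence(table, indices):
    """Turn a sequence of discretised multi-stock observations (already
    mapped to vocabulary indices) into the dense 'stock vectors' that
    would be fed to the LSTM."""
    return [table[i] for i in indices]

# Hypothetical usage: 3 days of market state, vocabulary of 1000 states.
table = build_embedding(vocab_size=1000, dim=8)
vectors = embed_sequence(table, [17, 402, 17])
```

The key point the abstract makes is that this vectorisation replaces a single raw index with a dense multi-stock representation, so identical market states map to identical vectors regardless of when they occur.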

Paper Nr: 17
Title:

Modeling and Implementation of a Ludic Application using Simple Reactive Agents - Hydrological Impact of High Andean Ecosystems

Authors:

J. A. Villarraga Morales and L. D. Alvarado Nieto

Abstract: Intense human activity is causing drastic changes in Colombian ecosystems. Therefore, a ludic mobile app that uses simple reactive agents was implemented to teach children about some of the country's ecosystems and the role they play in the balance of the environment. A test to determine the app's efficacy was conducted, and the results indicated that the group of children who used the app showed a better learning curve than the group that was only taught in class.