## Biography

Luke Mathieson is a theoretical computer scientist and Scholarly Teaching Fellow at UTS. His research is centred on parameterised complexity and its applications, but extends to a variety of areas of complexity theory, algorithmics and mathematics.

He has taught an extensive range of computer science subjects, particularly focussing on the theory of computation, computational complexity and algorithmics.

He is also an epeeist, a science fiction fan and a terrible guitarist.

#### Can supervise: YES

#### Research Interests

Luke's main research interests are in parameterized complexity and the fundamentals of computational complexity; however, his research spans a number of areas:

- Computational complexity
- Parameterized complexity
- Finite Model Theory/Logic
- Algorithmics
- Approximation
- Memetic algorithms

A major theme of his research is the complexity of graph editing problems, a topic in which he specialises.

#### Teaching Areas

Luke has taught subjects in most areas of computer science, but specialises in core theory topics such as:

- Computational complexity
- Algorithms
- Data structures
- Formal languages and automata
- Computability
- Graph theory and discrete mathematics

At UTS he teaches the following subjects:

- 31251 Data structures and algorithms
- 48024 Applications Programming (Spring)
- 41078 Computing Science Studio 1

#### Publications

Lancia, G., Mathieson, L. & Moscato, P. 2017, 'Separating sets of strings by finding matching patterns is almost always hard', *Theoretical Computer Science*, vol. 665, pp. 73-86.

#### View description

© 2017 Elsevier B.V. We study the complexity of the problem of searching for a set of patterns that separate two given sets of strings. This problem has applications in a wide variety of areas, most notably in data mining, computational biology, and in understanding the complexity of genetic algorithms. We show that the basic problem of finding a small set of patterns that match one set of strings but do not match any string in a second set is difficult (NP-complete, W[2]-hard when parameterized by the size of the pattern set, and APX-hard). We then perform a detailed parameterized analysis of the problem, separating tractable and intractable variants. In particular, we show that parameterizing by the size of the pattern set together with the number of strings, or by the size of the alphabet together with the number of strings, gives FPT results, amongst others.
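To make the separation problem concrete, here is a minimal sketch (illustrative only, not the paper's construction), under the assumption that a pattern is a string over the alphabet plus a single-character wildcard `*`:

```python
def matches(pattern, s):
    """A pattern matches a string of the same length if every
    non-wildcard position agrees ('*' matches any character)."""
    return len(pattern) == len(s) and all(
        p == '*' or p == c for p, c in zip(pattern, s))

def separates(patterns, good, bad):
    """True if every string in `good` matches some pattern and
    no string in `bad` matches any pattern."""
    return (all(any(matches(p, s) for p in patterns) for s in good)
            and not any(matches(p, s) for p in patterns for s in bad))

# One wildcard pattern separating two tiny sets of strings.
print(separates(['a*a'], ['aba', 'aca'], ['abb', 'bab']))  # True
```

The hardness results above concern finding a *small* such pattern set; this brute-force check only verifies a candidate.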

Mans, B. & Mathieson, L. 2017, 'Incremental Problems in the Parameterized Complexity Setting', *Theory of Computing Systems*, vol. 60, no. 1, pp. 3-19.

#### View description

© 2016, Springer Science+Business Media New York. Dynamic systems are becoming steadily more important with the profusion of mobile and distributed computing devices. Coincidentally, incremental computation is a natural approach to deal with ongoing changes. We explore incremental computation in the parameterized complexity setting and show that incrementalization leads to non-trivial complexity classifications. Interestingly, some incremental versions of hard problems become tractable, while others remain hard. Moreover, tractability or intractability is not a simple function of the problem's static complexity: every level of the W-hierarchy exhibits complete problems with both tractable and intractable incrementalizations. For problems that are already tractable in their static form, we also show that incrementalization can lead to interesting algorithms, improving upon the trivial approach of using the static algorithm at each step.

Mathieson, L. 2017, 'Graph editing problems with extended regularity constraints', *Theoretical Computer Science*, vol. 677, pp. 56-68.

#### View description

© 2017 Graph editing problems offer an interesting perspective on sub- and supergraph identification problems for a large variety of target properties. They have also attracted significant attention in recent years, particularly in the area of parameterized complexity as the problems have rich parameter ecologies. In this paper we examine generalisations of the notion of editing a graph to obtain a regular subgraph. In particular we extend the notion of regularity to include two variants of edge-regularity along with the unifying constraint of strong regularity. We present a number of results, with the central observation that these problems retain the general complexity profile of their regularity-based inspiration: when the number of edits k and the maximum degree r are taken together as a combined parameter, the problems are tractable (i.e. in FPT), but are otherwise intractable. We also examine variants of the basic editing to obtain a regular subgraph problem from the perspective of parameterizing by the treewidth of the input graph. In this case the treewidth of the input graph essentially becomes a limiting parameter on the natural k+r parameterization.

Konyagin, S.V., Luca, F., Mans, B., Mathieson, L., Sha, M. & Shparlinski, I.E. 2016, 'Functional graphs of polynomials over finite fields', *Journal of Combinatorial Theory. Series B*, vol. 116, pp. 87-122.

#### View description

© 2015 Elsevier Inc. Given a function f in a finite field F_q of q elements, we define the functional graph of f as a directed graph on q nodes labelled by the elements of F_q, where there is an edge from u to v if and only if f(u)=v. We obtain some theoretical estimates on the number of non-isomorphic graphs generated by all polynomials of a given degree. We then develop a simple and practical algorithm to test the isomorphism of quadratic polynomials that has linear memory and time complexities. Furthermore, we extend this isomorphism testing algorithm to the general case of functional graphs, and prove that, while its time complexity deviates from linear by a (usually small) multiplier dependent on graph parameters, its memory complexity remains linear. We exploit this algorithm to provide an upper bound on the number of functional graphs corresponding to polynomials of degree d over F_q. Finally, we present some numerical results and compare function graphs of quadratic polynomials with those generated by random maps and pose interesting new problems.
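For illustration, the object being studied is easy to construct directly for a prime field (a sketch of the definition above, not the paper's isomorphism-testing algorithm):

```python
def functional_graph(coeffs, p):
    """Edges u -> f(u) mod p of the polynomial with the given
    coefficients (highest degree first) over the prime field F_p."""
    def f(u):
        # Horner's rule evaluation modulo p.
        v = 0
        for c in coeffs:
            v = (v * u + c) % p
        return v
    return {u: f(u) for u in range(p)}

# f(x) = x^2 + 1 over F_5: 0->1, 1->2, 2->0, 3->0, 4->2
print(functional_graph([1, 0, 1], 5))
```

Each vertex has out-degree exactly one, so the graph decomposes into cycles with trees hanging off them; the paper's counting and isomorphism questions concern exactly this shape.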

Luccio, F., Mans, B., Mathieson, L. & Pagli, L. 2016, 'Complete Balancing via Rotation', *Computer Journal*, vol. 59, no. 8, pp. 1252-1263.

#### View description

© 2016 The British Computer Society. All rights reserved. Trees are a fundamental structure in algorithmics. In this paper, we study the transformation of an arbitrary binary tree S with n vertices into a completely balanced tree T via rotations, a widely studied elementary tree operation. Combining concepts on rotation distance and data structures, we give a basic algorithm that performs the transformation in O(n) time and O(1) space, making at most 2n - 2 log₂ n rotations and improving on known previous results. The algorithm is then improved, exploiting particular properties of S. Finally, we show tighter upper bounds and obtain a close lower bound on the rotation distance between a zig-zag tree and a completely balanced tree. We also find the exact rotation distance of a particular almost balanced tree to a completely balanced tree, and thus show that their distance is quite large despite the similarity of the two trees.
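The elementary operation the paper counts is the standard tree rotation, sketched here on a minimal node class (for illustration; the transformation algorithm itself is more involved):

```python
class Node:
    """Minimal binary tree node for demonstrating rotations."""
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def rotate_left(x):
    """Left rotation at x: its right child y becomes the subtree
    root, and x adopts y's former left subtree."""
    y = x.right
    x.right, y.left = y.left, x
    return y

# A right-leaning chain 1-2-3 becomes balanced after one rotation.
root = rotate_left(Node(1, right=Node(2, right=Node(3))))
print(root.key, root.left.key, root.right.key)  # 2 1 3
```

A rotation preserves the in-order sequence of keys, which is why counting the minimum number of rotations between two trees is a meaningful distance.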

Mathieson, L. 2016, 'Synergies in critical reflective practice and science: Science as reflection and reflection as science', *Journal of University Teaching and Learning Practice*, vol. 13, no. 2, pp. 1-13.

#### View description

© 2016, University of Wollongong. All rights reserved. The conceptions of reflective practice in education have their roots at least partly in the work of Dewey, who describes reflection as 'the active, persistent, and careful consideration of any belief or supposed form of knowledge in the light of the grounds that support it and the further conclusions to which it tends' (Dewey 1933, p. 9). This conception of reflection has carried on into more-focused efforts to describe critical reflection as a tool for improving professional practice (where academic and educational practice is the particular interest of this study): '… some puzzling or troubling or interesting phenomenon allows the practitioner to access "the understandings which have been implicit in his action, understandings which he surfaces, criticizes, restructures, and embodies in further action"' (Schön 1983, p. 50). Both of these descriptions embody a central idea of critical reflective practice: that the examination of practice involves the divination (in a rational, critical sense) of order and perhaps meaning from the facts at hand (which, in turn, are brought to light by the events that occur as the results of implementation of theory). As part of a lecture series, Gottlieb defined science as 'an intellectual activity carried out by humans to understand the structure and functions of the world in which they live' (Gottlieb 1997). While science and critical reflective practice attempt to build models about different parts of our world – the natural world and the world of professional (educational) practice respectively – both embody certain underlying aims and methodologies. Indeed, it is striking that in these definitions the simple replacement of the terminology of reflective practice with the terminology of science (or vice versa) leads to a perfectly comprehensible definition of either. It is this confluence that this paper studies, building from two separate foundations, critical reflective pr...

Frati, F., Gaspers, S., Gudmundsson, J. & Mathieson, L. 2015, 'Augmenting Graphs to Minimize the Diameter', *Algorithmica*, vol. 72, no. 4, pp. 995-1010.

#### View description

© 2014, Springer Science+Business Media New York. We study the problem of augmenting a weighted graph by inserting edges of bounded total cost while minimizing the diameter of the augmented graph. Our main result is an FPT 4-approximation algorithm for the problem.

Mans, B. & Mathieson, L. 2014, 'On the treewidth of dynamic graphs', *Theoretical Computer Science*, vol. 554, no. C, pp. 217-228.

#### View description

© 2014 Elsevier B.V. Dynamic graph theory is a novel, growing area that deals with graphs that change over time and is of great utility in modeling modern wireless, mobile and dynamic environments. As a graph evolves, possibly arbitrarily, it is challenging to identify the graph properties that can be preserved over time and understand their respective computability. In this paper we are concerned with the treewidth of dynamic graphs. We focus on metatheorems, which allow the generation of a series of results based on general properties of classes of structures. In graph theory two major metatheorems on treewidth provide complexity classifications by employing structural graph measures and finite model theory. Courcelle's Theorem gives a general tractability result for problems expressible in monadic second order logic on graphs of bounded treewidth, and Frick and Grohe demonstrate a similar result for first order logic and graphs of bounded local treewidth. We extend these theorems by showing that dynamic graphs of bounded (local) treewidth where the length of time over which the graph evolves and is observed is finite and bounded can be modeled in such a way that the (local) treewidth of the underlying graph is maintained. We show the application of these results to problems in dynamic graph theory and dynamic extensions to static problems. In addition we demonstrate that certain widely used dynamic graph classes have bounded local treewidth.

Frati, F., Gaspers, S., Gudmundsson, J. & Mathieson, L. 2013, 'Augmenting graphs to minimize the diameter', *Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)*, vol. 8283 LNCS, pp. 383-393.

#### View description

We study the problem of augmenting a weighted graph by inserting edges of bounded total cost while minimizing the diameter of the augmented graph. Our main result is an FPT 4-approximation algorithm for the problem. © 2013 Springer-Verlag.

Arefin, A.S., Mathieson, L., Johnstone, D., Berretta, R. & Moscato, P. 2012, 'Unveiling clusters of RNA transcript pairs associated with markers of Alzheimer's disease progression', *PloS one*, vol. 7, no. 9, p. e45535.

#### View description

BACKGROUND: One primary goal of transcriptomic studies is identifying gene expression patterns correlating with disease progression. This is usually achieved by considering transcripts that independently pass an arbitrary threshold (e.g. p<0.05). In diseases involving severe perturbations of multiple molecular systems, such as Alzheimer's disease (AD), this univariate approach often results in a large list of seemingly unrelated transcripts. We utilised a powerful multivariate clustering approach to identify clusters of RNA biomarkers strongly associated with markers of AD progression. We discuss the value of considering pairs of transcripts which, in contrast to individual transcripts, helps avoid natural human transcriptome variation that can overshadow disease-related changes. METHODOLOGY/PRINCIPAL FINDINGS: We re-analysed a dataset of hippocampal transcript levels in nine controls and 22 patients with varying degrees of AD. A large-scale clustering approach determined groups of transcript probe sets that correlate strongly with measures of AD progression, including both clinical and neuropathological measures and quantifiers of the characteristic transcriptome shift from control to severe AD. This enabled identification of restricted groups of highly correlated probe sets from an initial list of 1,372 previously published by our group. We repeated this analysis on an expanded dataset that included all pair-wise combinations of the 1,372 probe sets. As clustering of this massive dataset is unfeasible using standard computational tools, we adapted and re-implemented a clustering algorithm that uses an external memory algorithmic approach. This identified various pairs that strongly correlated with markers of AD progression and highlighted important biological pathways potentially involved in AD pathogenesis. CONCLUSIONS/SIGNIFICANCE: Our analyses demonstrate that, although there exists a relatively large molecular signature of AD progression, only a small number o...

Mathieson, L. & Szeider, S. 2012, 'Editing graphs to satisfy degree constraints: A parameterized approach', *Journal of Computer and System Sciences*, vol. 78, no. 1, pp. 179-191.

#### View description

We study a wide class of graph editing problems that ask whether a given graph can be modified to satisfy certain degree constraints, using a limited number of vertex deletions, edge deletions, or edge additions. The problems generalize several well-studied problems such as the General Factor Problem and the Regular Subgraph Problem. We classify the parameterized complexity of the considered problems taking upper bounds on the number of editing steps and the maximum degree of the resulting graph as parameters. © 2011 Elsevier Inc. All rights reserved.

Mathieson, L. 2010, 'The parameterized complexity of editing graphs for bounded degeneracy', *Theoretical Computer Science*, vol. 411, no. 34-36, pp. 3181-3187.

#### View description

We examine the parameterized complexity of the problem of editing a graph to obtain an r-degenerate graph. We show that for the editing operations vertex deletion and edge deletion, both separately and combined, the problem is W[P]-complete, and remains W[P]-complete even if the input graph is already (r+1)-degenerate, or has maximum degree 2r+1 for all r<2. We also demonstrate fixed-parameter tractability for several Clique-based problems when the input graph has bounded degeneracy. © 2010 Elsevier B.V.
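For context, a graph is r-degenerate if every subgraph has a vertex of degree at most r. The degeneracy of a graph can be computed with the standard min-degree peeling routine sketched below (background only, not the editing algorithm analysed in the paper):

```python
def degeneracy(adj):
    """Degeneracy of an undirected graph given as {v: set(neighbours)}:
    the largest minimum degree seen while repeatedly removing a
    minimum-degree vertex."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    d = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))
        d = max(d, len(adj[v]))
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return d

# A triangle is 2-degenerate; a single edge is 1-degenerate.
print(degeneracy({0: {1, 2}, 1: {0, 2}, 2: {0, 1}}))  # 2
```

Trees are exactly the connected 1-degenerate graphs, which gives a quick sanity check for the routine.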

Mellor, D., Prieto, E., Mathieson, L. & Moscato, P. 2010, 'A kernelisation approach for multiple d-Hitting Set and its application in optimal multi-drug therapeutic combinations', *PloS one*, vol. 5, no. 10, p. e13055.

#### View description

Therapies consisting of a combination of agents are an attractive proposition, especially in the context of diseases such as cancer, which can manifest with a variety of tumor types in a single case. However, uncovering usable drug combinations is expensive both financially and temporally. By employing computational methods to identify candidate combinations with a greater likelihood of success we can avoid these problems, even when the amount of data is prohibitively large. Hitting Set is a combinatorial problem that has useful applications across many fields; however, as it is NP-complete it is traditionally considered hard to solve exactly. We introduce a more general version of the problem, (,,d)-Hitting Set, which allows more precise control over how and what the hitting set targets. Employing the framework of Parameterized Complexity, we show that despite being NP-complete, the (,,d)-Hitting Set problem is fixed-parameter tractable with a kernel of size O(dk(d)) when we parameterize by the size k of the hitting set and the maximum number of the minimum number of hits, and taking the maximum degree d of the target sets as a constant. We demonstrate the application of this problem to multiple drug selection for cancer therapy, showing the flexibility of the problem in tailoring such drug sets. The fixed-parameter tractability result indicates that for low values of the parameters the problem can be solved quickly using exact methods. We also demonstrate that the problem is indeed practical, with computation times on the order of 5 seconds, as compared to previous Hitting Set applications using the same dataset, which exhibited times on the order of 1 day, even with relatively relaxed notions of what constitutes a low value for the parameters. Furthermore, the existence of a kernelization for (,,d)-Hitting Set indicates that the problem is readily scalable to large datasets.

Rizzi, R., Mahata, P., Mathieson, L. & Moscato, P. 2010, 'Hierarchical clustering using the arithmetic-harmonic cut: complexity and experiments', *PloS one*, vol. 5, no. 12, p. e14067.

#### View description

Clustering, particularly hierarchical clustering, is an important method for understanding and analysing data across a wide variety of knowledge domains, with notable utility in systems where the data can be classified in an evolutionary context. This paper introduces a new hierarchical clustering problem defined by a novel objective function we call the arithmetic-harmonic cut. We show that the problem of finding such a cut is NP-hard and APX-hard but is fixed-parameter tractable, which indicates that although the problem is unlikely to have a polynomial time algorithm (even for approximation), exact parameterized and local search based techniques may produce workable algorithms. To this end, we implement a memetic algorithm for the problem and demonstrate the effectiveness of the arithmetic-harmonic cut on a number of datasets including a cancer type dataset and a corona virus dataset. We show favorable performance compared to currently used hierarchical clustering techniques such as k-Means, Graclus and Normalized-Cut. The arithmetic-harmonic cut metric overcomes difficulties that other hierarchical methods have in representing both intercluster differences and intracluster similarities.

Mathieson, L., Cotta, C. & Moscato, P. 2018, 'Memetic Algorithms' in Resende, M.G.C., Marti, R. & Pardalos, P.M. (eds), *Handbook of Heuristics*, Springer.

Mathieson, L., Mendes, A., Marsden, J., Pond, J. & Moscato, P. 2017, 'Computer-Aided Breast Cancer Diagnosis with Optimal Feature Sets: Reduction Rules and Optimization Techniques' in *Bioinformatics*, Springer, Germany, pp. 299-325.

#### View description

This chapter introduces a new method for knowledge extraction from databases for the purpose of finding a discriminative set of features that is also a robust set for within-class classification. Our method is generic and we introduce it here in the field of breast cancer diagnosis from digital mammography data. The mathematical formalism is based on a generalization of the k-Feature Set problem called the (α, β)-k-Feature Set problem, introduced by Cotta and Moscato (J Comput Syst Sci 67(4):686-690, 2003). This method proceeds in two steps: first, an optimal (α, β)-k-feature set of minimum cardinality is identified and then, a set of classification rules using these features is obtained. We obtain the (α, β)-k-feature set in two phases: first, a series of extremely powerful reduction techniques, which do not lose the optimal solution, are employed; and second, a metaheuristic search to identify the remaining features to be considered or disregarded. Two algorithms were tested with a public domain digital mammography dataset composed of 71 malignant and 75 benign cases. Based on the results provided by the algorithms, we obtain classification rules that employ only a subset of these features.

Mathieson, L., Gallardo, J.E., Cotta, C. & Moscato, P. 2016, 'Memetic Algorithms: A Contemporary Introduction' in Webster, J.G. (ed), *Wiley Encyclopedia of Electrical and Electronics Engineering*, John Wiley & Sons, USA.

#### View description

Memetic algorithms (MAs) constitute a search and optimization paradigm based on the orchestrated interplay between global and local search components, and have the exploitation of specific problem knowledge as one of their central tenets. MAs can take different forms although a classical incarnation involves the integration of independent search processes within a population-based optimization approach. We discuss this basic structure as well as several of the issues arising in the design process. This paves the way for providing a glimpse of some algorithmic extensions of this basic scheme. After providing an overview of the numerous practical applications of MAs, we close this article with some current perspectives of these techniques.

Haque, M.N., Mathieson, L. & Moscato, P. 2017, 'A Memetic Algorithm for community detection by maximising the Connected Cohesion', *Proceedings of the 2017 IEEE Symposium Series on Computational Intelligence*, IEEE Symposium Series on Computational Intelligence, IEEE, Honolulu, Hawaii, USA, pp. 1-8.

#### View description

Community detection is an exciting field of research which has attracted the interest of many researchers during the last decade. While many algorithms and heuristics have been proposed to scale existing approaches, a relatively smaller number of studies have looked at exploring different measures of quality of the detected community.

Recently, a new score called 'cohesion' was introduced in the computing literature. The cohesion score is based on comparing the number of triangles in a given group of vertices to the number of triangles only partly in that group. In this contribution, we propose a memetic algorithm that aims to find a subset of the vertices of an undirected graph that maximizes the cohesion score. The associated combinatorial optimisation problem is known to be NP-hard and we also prove it to be W[1]-hard when parameterized by the score. We used a local search individual improvement heuristic to expand the putative solution. Then we removed all vertices from the group which are not a part of any triangle and expanded the neighbourhood by adding triangles which have at least two nodes already in the group. Finally we computed the maximum connected component of this group.

The highest quality solutions of the memetic algorithm have been obtained for four real-world network scenarios and we compare our results with ground-truth information about the graphs. We also compare the results to those obtained with eight other community detection algorithms via interrater agreement measures.

Our results give a new lower bound on the parameterized complexity of this problem and give novel insights on its poten...
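Reading the description of the score literally, one plausible form of it can be sketched as follows (an assumption for illustration: the ratio of triangles entirely inside the group to all triangles touching it; the exact normalisation in the cited paper may differ):

```python
from itertools import combinations

def cohesion(adj, group):
    """Illustrative cohesion ratio for `group` in an undirected graph
    given as {v: set(neighbours)}: triangles entirely inside the group
    over triangles having at least one, but not all, vertices in it.
    (A simplified reading of the score, not the paper's definition.)"""
    def is_triangle(a, b, c):
        return b in adj[a] and c in adj[a] and c in adj[b]
    inside = partial = 0
    for a, b, c in combinations(list(adj), 3):
        if is_triangle(a, b, c):
            k = sum(v in group for v in (a, b, c))
            if k == 3:
                inside += 1
            elif k > 0:
                partial += 1
    total = inside + partial
    return inside / total if total else 0.0

# Triangle {0,1,2} with a pendant triangle {1,2,3} straddling the group.
adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2}}
print(cohesion(adj, {0, 1, 2}))  # 0.5
```

The memetic algorithm described above searches over groups to maximise a score of this kind rather than evaluating a single fixed group.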

de Vries, N.J., Arefin, A.S., Mathieson, L., Lucas, B. & Moscato, P. 2016, 'Relative neighborhood graphs uncover the dynamics of social media engagement', *Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)*, International Conference on Advanced Data Mining and Applications, Springer, Gold Coast, Queensland, Australia, pp. 283-297.

#### View description

In this paper, we examine if the Relative Neighborhood Graph (RNG) can reveal related dynamics of page-level social media metrics. A statistical analysis is also provided to illustrate the application of the method in two other datasets (the Indo-European Language dataset and the Shakespearean Era Text dataset). Using social media metrics on the world's 'top check-in locations' Facebook pages dataset, the statistical analysis reveals coherent dynamical patterns. In the largest cluster, the categories 'Gym', 'Fitness Center', and 'Sports and Recreation' appear closely linked together in the RNG. Taken together, our study validates our expectation that RNGs can provide a 'parameter-free' mathematical formalization of proximity. Our approach gives useful insights on user behaviour in social media page-level metrics as well as other applications.


Mans, B. & Mathieson, L. 2013, 'On the treewidth of dynamic graphs'.

#### View description

Dynamic graph theory is a novel, growing area that deals with graphs that change over time and is of great utility in modelling modern wireless, mobile and dynamic environments. As a graph evolves, possibly arbitrarily, it is challenging to identify the graph properties that can be preserved over time and understand their respective computability. In this paper we are concerned with the treewidth of dynamic graphs. We focus on metatheorems, which allow the generation of a series of results based on general properties of classes of structures. In graph theory two major metatheorems on treewidth provide complexity classifications by employing structural graph measures and finite model theory. Courcelle's Theorem gives a general tractability result for problems expressible in monadic second order logic on graphs of bounded treewidth, and Frick & Grohe demonstrate a similar result for first order logic and graphs of bounded local treewidth. We extend these theorems by showing that dynamic graphs of bounded (local) treewidth where the length of time over which the graph evolves and is observed is finite and bounded can be modelled in such a way that the (local) treewidth of the underlying graph is maintained. We show the application of these results to problems in dynamic graph theory and dynamic extensions to static problems. In addition we demonstrate that certain widely used dynamic graph classes naturally have bounded local treewidth. © 2013 Springer-Verlag Berlin Heidelberg.

Arefin, A.S., Inostroza-Ponta, M., Mathieson, L., Berretta, R. & Moscato, P. 2011, 'Clustering nodes in large-scale biological networks using external memory algorithms'.

#### View description

Novel analytical techniques have dramatically enhanced our understanding of many application domains including biological networks inferred from gene expression studies. However, there are clear computational challenges associated to the large datasets generated from these studies. The algorithmic solution of some NP-hard combinatorial optimization problems that naturally arise on the analysis of large networks is difficult without specialized computer facilities (i.e. supercomputers). In this work, we address the data clustering problem of large-scale biological networks with a polynomial-time algorithm that uses reasonable computing resources and is limited by the available memory. We have adapted and improved the MSTkNN graph partitioning algorithm and redesigned it to take advantage of external memory (EM) algorithms. We evaluate the scalability and performance of our proposed algorithm on a well-known breast cancer microarray study and its associated dataset. © 2011 Springer-Verlag.

Mathieson, L. & Szeider, S. 2008, 'Parameterized graph editing with chosen vertex degrees'.

#### View description

We study the parameterized complexity of the following problem: is it possible to make a given graph r-regular by applying at most k elementary editing operations; the operations are vertex deletion, edge deletion, and edge addition. We also consider more general annotated variants of this problem, where vertices and edges are assigned an integer cost and each vertex v has assigned its own desired degree δ(v) ∈ {0,...,r}. We show that both problems are fixed-parameter tractable when parameterized by (k,r), but W[1]-hard when parameterized by k alone. These results extend our earlier results on problems that are defined similarly but where edge addition is not available. We also show that if edge addition and/or deletion are the only available operations, then the problems are solvable in polynomial time. This completes the classification for all combinations of the three considered editing operations. © Springer-Verlag Berlin Heidelberg 2008.

Mathieson, L. & Szeider, S. 2008, 'The parameterized complexity of regular subgraph problems and generalizations', *Conferences in Research and Practice in Information Technology Series*.

#### View description

We study variants and generalizations of the problem of finding an r-regular subgraph (where r ≥ 3) in a given graph by deleting at most k vertices. Moser and Thilikos (2006) have shown that the problem is fixed-parameter tractable (FPT) if parameterized by (k, r). They asked whether the problem remains fixed-parameter tractable if parameterized by k alone. We answer this question negatively: we show that if parameterized by k alone the problem is W[1]-hard and therefore very unlikely to be fixed-parameter tractable. We also give W[1]-hardness results for variants of the problem where the parameter is the number of vertex and edge deletions allowed, and for a new generalized form of the problem where the obtained subgraph is not necessarily regular but its vertices have certain prescribed degrees. Following this we demonstrate fixed-parameter tractability for the considered problems if the parameter includes the regularity r or an upper bound on the prescribed degrees in the generalized form of the problem. These FPT results are obtained via kernelization, so also provide a practical approach to the problems presented. © 2008, Australian Computer Society, Inc.

Moscato, P., Mathieson, L., Mendes, A. & Berretta, R. 2005, 'The electronic primaries: Predicting the U.S. Presidency using feature selection with safe data reduction', *Conferences in Research and Practice in Information Technology Series*, pp. 371-380.

#### View description

The data mining inspired problem of finding the critical and most useful features with which to classify a data set, and constructing rules to predict the class of future examples, is an interesting and important problem. It is also one of the most useful problems, with applications in many areas such as microarray analysis, genomics, proteomics, pattern recognition, data compression and knowledge discovery. Expressed as k-Feature Set it is also a formally hard problem. In this paper we present a method for coping with this hardness using the combinatorial optimisation and parameterized complexity inspired technique of sound reduction rules. We apply our method to an interesting data set which is used to predict the winner of the popular vote in the U.S. presidential elections. We demonstrate the power and flexibility of the reductions, especially when used in the context of the (α, β)-k-Feature Set variant problem. Copyright © 2005, Australian Computer Society, Inc.
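As a sketch of what sound reduction rules look like in this setting, the following is my own minimal formulation, not the paper's: it assumes each feature is described by the set of example pairs it distinguishes (the `pairs_by_feature` input format is a hypothetical convenience), and applies two classical rules until neither fires.

```python
def reduce_feature_set(pairs_by_feature):
    """Apply two sound reduction rules for the k-Feature Set problem.
    Rule 1: a pair distinguished by exactly one feature forces that feature
            into any solution.
    Rule 2: a feature whose pair set is contained in another's is never
            needed (dominated feature).
    Returns (forced features, remaining reduced instance)."""
    feats = {f: set(ps) for f, ps in pairs_by_feature.items()}
    forced = set()
    changed = True
    while changed:
        changed = False
        # Rule 1: forced features.
        all_pairs = set().union(*feats.values()) if feats else set()
        for pair in all_pairs:
            covering = [f for f, ps in feats.items() if pair in ps]
            if len(covering) == 1:
                f = covering[0]
                forced.add(f)
                done = feats.pop(f)
                # Pairs handled by the forced feature disappear from the instance.
                feats = {g: ps - done for g, ps in feats.items() if ps - done}
                changed = True
                break
        if changed:
            continue
        # Rule 2: dominated features (tie-break on equal sets keeps one copy).
        for f in list(feats):
            for g in feats:
                if g != f and feats[f] <= feats[g] and (feats[f] != feats[g] or f < g):
                    del feats[f]
                    changed = True
                    break
            if changed:
                break
    return forced, feats
```

On the toy instance where feature A distinguishes pairs {1, 2}, B distinguishes {1} and C distinguishes {3}, the rules alone force {A, C} and empty the instance, which is the optimal feature set here.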

Mathieson, L., King, T. & Brankovic, L. 2004, '2-Compromise: usability in 1-dimensional statistical database', Fifteenth Australasian Workshop on Combinatorial Algorithms, pp. 5-15.

Mathieson, L., Prieto, E. & Shaw, P. 2004, 'Packing edge disjoint triangles: A parameterized view'.

#### View description

The problem of packing k edge-disjoint triangles in a graph has been thoroughly studied both in the classical complexity and the approximation fields and it has a wide range of applications in many areas, especially computational biology [BP96]. In this paper we present an analysis of the problem from a parameterized complexity viewpoint. We describe a fixed-parameter tractable algorithm for the problem by means of kernelization and crown rule reductions, two of the newest techniques for fixed-parameter algorithm design. We achieve a kernel size bounded by 4k, where k is the number of triangles in the packing. © Springer-Verlag 2004.
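A first ingredient in kernelization arguments for this problem is a greedily computed maximal packing. The sketch below is my own and is not the paper's crown-rule kernel: a maximal packing of size t certifies that no edge-disjoint packing exceeds 3t, since every triangle of an optimal packing must reuse one of the packing's 3t edges, and each edge blocks at most one edge-disjoint triangle.

```python
from itertools import combinations

def greedy_triangle_packing(vertices, edges):
    """Greedily pack edge-disjoint triangles: scan vertex triples and take
    any triangle all of whose edges are still unused.  Returns a maximal
    (not necessarily maximum) packing."""
    free = {frozenset(e) for e in edges}  # edges not yet used by the packing
    packing = []
    for u, v, w in combinations(sorted(vertices), 3):
        tri = [frozenset((u, v)), frozenset((v, w)), frozenset((u, w))]
        if all(e in free for e in tri):
            packing.append((u, v, w))
            free -= set(tri)
    return packing
```

For example, K4 admits only one edge-disjoint triangle (its second triangle would have to reuse an edge), while two vertex-disjoint triangles give a packing of size two.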

Casteigts, A., Mans, B. & Mathieson, L. 2012, 'On the Feasibility of Maintenance Algorithms in Dynamic Graphs'.View/Download from: UTS OPUS

#### View description

Near ubiquitous mobile computing has led to intense interest in dynamic graph theory. This provides a new and challenging setting for algorithmics and complexity theory. For any graph-based problem, the rapid evolution of a (possibly disconnected) graph over time naturally leads to the important complexity question: is it better to calculate a new solution from scratch or to adapt the known solution on the prior graph to quickly provide a solution of guaranteed quality for the changed graph?

In this paper, we demonstrate that the former is the best approach in some cases, but that there are cases where the latter is feasible. We prove that, under certain conditions, hard problems cannot even be approximated in any reasonable complexity bound; that is, even with a large amount of time, having a solution to a very similar graph does not help in computing a solution to the current graph. To achieve this, we formalize the idea as a maintenance algorithm. Using r-Regular Subgraph as the primary example, we show that W[1]-hardness for the parameterized approximation problem implies the non-existence of a maintenance algorithm for the given approximation ratio. Conversely, we show that Vertex Cover, which is fixed-parameter tractable, has a 2-approximate maintenance algorithm. The implications of NP-hardness and NPO-hardness are also explored.
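The positive result for Vertex Cover rests on the textbook maximal-matching 2-approximation. The sketch below shows only that underlying approximation, recomputed from scratch; the paper's maintenance algorithm instead adapts a solution from the prior graph, which this sketch does not attempt.

```python
def two_approx_vertex_cover(edges):
    """Classic 2-approximation for Vertex Cover: greedily build a maximal
    matching and take both endpoints of every matched edge.  Any optimal
    cover must contain at least one endpoint per matched edge, so the
    result is at most twice optimal."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:  # edge is unmatched so far
            cover.update((u, v))
    return cover
```

On the path 1-2-3-4 this returns a cover of size four (both endpoints of two matched edges) against an optimum of two, meeting the factor-2 guarantee exactly.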

Mathieson, L. 2012, 'A Proof Checking View of Parameterized Complexity'.View/Download from: UTS OPUS

#### View description

The PCP Theorem is one of the most stunning results in computational complexity theory. A culmination of a series of results regarding proof checking, it exposes some deep structure of computational problems. As a surprising side-effect, it also gives strong non-approximability results. In this paper we initiate the study of proof checking within the scope of Parameterized Complexity. In particular we adapt and extend the PCP[log n log log n, log n log log n] result of Feige et al. to several parameterized classes, and discuss some corollaries.