
John Leaney

Biography

Over his career, John Leaney has practised, researched and taught in the fields of control, systems/software and telecommunications network management engineering.

His focus is on the analysis and design of large systems.

He retired as an associate professor and is now an adjunct professor, supervising and conducting research in systems/software/telecommunications management design.

Career highlights include the first computer-based systems engineering degree in 1986, his research in architecture (especially measures, evolution and design optimisation) from the 1990s to the present, and the founding of the company Avolution in 2001.

Professional

John Leaney works as a member of the IEEE Engineering of Computer-Based Systems Technical Committee to further the focus on computer-based systems in research and industry.

Casual Academic, University Casual Academics
Core Member, HCTD - Human Centred Technology Design
BE (UNSW), ME (UNSW)
 

Research Interests

John Leaney has, for the past twenty-five years, been researching and developing techniques for the management, design, evolution and measurement of architecture-focussed, complex computer systems.

In the last ten years, he has developed expertise in applying qualitative techniques, such as action research and ethnography, to these complex problems.

These approaches are merged with quantitative approaches to software/systems architecture (such as graphs and typing) to provide an effective method for understanding and designing complex systems.

Systems/software architecture research topics include: understanding via 3D visualisation and immersive environments; analysis, via architecture calculation of non-functional properties; refinement, via rewriting logic and graph rewriting; design, via action research and refinement; optimisation, via control engineering, refinement and analysis techniques; and evolution, via an ontology of change.

Can supervise: Yes
John Leaney Registered at Level 1

John Leaney has developed degrees and subjects, and taught in the area of software and systems engineering, with a focus on design.

Chapters

O'Neill, T., Denford, M., Leaney, J.R. & Dunsire, K. 2007, 'Managing Enterprise Architecture Change' in Pallab Saha (ed), Handbook of Enterprise Systems Architecture in Practice, IGI Global, USA, pp. 192-206.
View/Download from: UTS OPUS

Conferences

Prior, J., Ferguson, S. & Leaney, J. 2016, 'Reflection is hard: teaching and learning reflective practice in a software studio', http://dl.acm.org/citation.cfm?id=2843346, Australasian Computing Education Conference, ACM, Canberra, Australia.
We have observed that it is a non-trivial exercise for undergraduate students to learn how to reflect. Reflective practice is now recognised as important for software developers and has become a key part of software studios in universities, but there is limited empirical investigation into how best to teach and learn reflection. In the literature on reflection in software studios, there are many papers that claim that reflection in the studio is mandatory. However, there is inadequate guidance about teaching early stage students to reflect in that literature. The essence of the work presented in this paper is a beginning to the consideration of how the teaching of software development can best be combined with teaching reflective practice for early stage software development students. We started on a research programme to understand how to encourage students to learn to reflect. As we were unsure about teaching reflection, and we wished to change our teaching as we progressively understood better what to do, we chose action research as the most suitable approach. Within the action research cycles we used ethnography to understand what was happening with the students when they attempted to reflect. This paper reports on the first 4 semesters of research. We have developed and tested a reflection model and process that provide scaffolding for students beginning to reflect. We have observed three patterns in how our students applied this process in writing their reflections, which we will use to further understand what will help them learn to reflect. We have also identified two themes, namely, motivation and intervention, which highlight where the challenges lie in teaching and learning reflection.
Prior, J.R., Arjpru, S. & Leaney, J.R. 2014, 'Towards an industry-collaborative, reflective software learning and development environment', Proceedings of the 23rd Australasian Software Engineering Conference ASWEC 2014, 23rd Australasian Software Engineering Conference ASWEC 2014, IEEE, Sydney, Australia.
View/Download from: UTS OPUS
A significant mismatch (88%) has been found between what employers and graduates perceived as important abilities and how universities had prepared graduates for employment. Conventional teaching and learning approaches fall short of providing the kind of learning experiences needed to prepare graduates for the realities of professional practice in industry. On the other hand, current students have very different learning styles from their forebears. Their learning preferences are experiential, working in teams, and using technology for learning. One solution to address this mismatch issue is the software development studio. Our aim is to provide an industry-collaborative, reflective learning environment that will effect the students' development of holistic skills, such as teamwork, collaboration and communication, together with technical skills, in a discipline context. This paper further describes the design and validation via prototyping for our software development studio, the progress that we have made so far, and presents the preliminary insights gleaned from our studio prototyping. The prototypes raised issues of attitudinal change, communication, reflection, sharing, mentoring, use of process, 'doing time', relationships and innovation.
Prior, J.R., Connor, A.L. & Leaney, J.R. 2014, 'Things Coming Together: Learning Experiences in a Software Studio', Proceedings of the 19th Annual SIGCSE Conference on Innovation and Technology in Computer Science Education 2014, 19th Annual SIGCSE Conference on Innovation and Technology in Computer Science Education 2014, ACM, Uppsala, Sweden.
View/Download from: UTS OPUS or Publisher's site
We have evidence that the software studio provides learning that genuinely prepares students for professional practice. Learning that entails dealing with complex technical problems and tools. Learning that involves working effectively in groups. Learning that results in the building of students' self-confidence and the conviction that they can successfully deal with the challenges of modern software system development. Learning that allows the accomplishment of the more elusive professional competencies. In order for students to achieve this type of deep learning, they need time to immerse themselves in complex problems within a rich environment such as the software studio. The studio also enables each student group to develop and succeed according to their needs, and in different ways. The conclusions above arise from an ethnographic study in an undergraduate software studio prototype with two student groups and their mentors.
Mearns, H.K. & Leaney, J.R. 2013, 'The Use of Autonomic Management in Multi-provider Telecommunication Services', 2013 20th IEEE International Conference and Workshops on the Engineering of Computer Based Systems, IEEE International Conference and Workshops on the Engineering of Computer Based Systems, IEEE, Scottsdale, USA, pp. 129-138.
View/Download from: UTS OPUS or Publisher's site
The continuing expansion of telecommunication service domains, from Quality of Service guaranteed connectivity to ubiquitous cloud environments, has introduced an ever increasing level of complexity in the field of service management. This complexity arises not only from the sheer variability in service requirements but also through the required but ill-defined interaction of multiple organisations and providers. As a result of this complexity and variability, the provisioning and performance of current services is adversely affected, often with little or no accountability to the users of the service. This exposes a need for total coverage in the management of such complex services, a system which provides for service responsibility. Service responsibility is defined as the provisioning of service resilience and the judgement of service risk across all the service components. To be effective in responsible management for current complex services, any framework must be able to interact with multiple providers and management systems. The CARMA framework, upon which we are working, aims to fulfil these requirements through a multi-agent system that is based in a global market, and can negotiate and be responsible for multiple complex services.
Mearns, H.K., Leaney, J.R., Parakhine, A., Debenham, J.K. & Verchere, D.G. 2012, 'CARMA: Complete autonomous responsible management agents for telecommunications and inter-cloud services', 2012 IEEE Network Operations and Management Symposium (NOMS), IEEE Network Operations and Management Symposium, IEEE, Maui, USA, pp. 1089-1095.
View/Download from: UTS OPUS or Publisher's site
The continuing rise in telecommunication and cloud services usage is matched by an increased complexity in maintaining adequate performance management. To combat this complexity, researchers and telecommunication companies are exploring a variety of management strategies to leverage their individual infrastructures to provide better performance and increased utilisation. We extend these strategies by addressing the complexities that arise through the interaction of multiple telecommunication and cloud providers when providing a modern complex service. Our overall aim is for the management to accept responsibility for the complex service in an open marketplace. Responsibility is, firstly, defined by aiming to cover the totality of modern complex services, managing both the connectivity and virtual infrastructure. Secondly, responsibility is defined as managing risk and resilience in the provisioning and operation of the complex service. With these aims, we are working towards a bundled service provider agent architecture, which can negotiate on the open service market. This approach aims to also optimise the utilisation of the providers' infrastructure while reducing the risk of failure to users through total service management. We present the specification, design and simulation of the Complete Autonomous Responsible Management Agents (CARMA) in a marketplace environment.
Kennard, R. & Leaney, J. 2012, 'An Introduction to Software Mining', New Trends in Software Methodologies, Tools and Techniques, pp. 312-323.
View/Download from: Publisher's site
Mearns, H.K., Leaney, J.R., Parakhine, A., Debenham, J.K. & Verchere, D.G. 2011, 'An autonomic open marketplace for service management and resilience', Conference on Network & service Management (CNSM 2011), Conference on Network & service Management (CNSM 2011), IEEE, Paris, France, pp. 1-5.
View/Download from: UTS OPUS
Expansion in telecommunications services, such as triple play and unified communications, introduces complexity that adversely affects service and network provisioning, especially in terms of provisioning times and the risk of delivery (failure) of new services. We envision a marketplace in which all manner of complex services will be provisioned, and their performance managed, especially against poor performance. The first phase of our work is a focus on the architecture, negotiation and management, which will lead to effective specification of network management requirements. We are working towards a bundled service agent architecture, which can negotiate on an open single service market, and which will eventually help to optimise the utilisation of the providers' networks while reducing the risk of failure to users. Our work to date has been on the specification, behaviour, service definition and simulation of service agents for bundled service delivery.
Mearns, H.K., Leaney, J.R., Parakhine, A., Debenham, J.K. & Verchere, D.G. 2011, 'An Autonomic Open Marketplace for Inter-Cloud Service Management', 2011 Fourth IEEE International Conference on Utility and Cloud Computing, IEEE/ACM International Conference on Utility and Cloud Computing, IEEE Computer Society, Melbourne, Australia, pp. 186-193.
View/Download from: UTS OPUS or Publisher's site
The rise of utility in cloud computing and telecommunications has introduced greater complexity in the provisioning and performance management of remote services. We propose extended management strategies for this complexity. Our overall aim is for the management to accept responsibility for the complex service in an open marketplace. Responsibility is, firstly, defined by aiming to cover the totality of modern complex services, managing both the connectivity and virtual infrastructure. Secondly, responsibility is further defined as managing risk and resilience in the provisioning and operation of the complex service. With these aims, we are working towards a bundled service provider agent architecture, which can negotiate on the open service market. This approach aims to also optimise the utilisation of the providers' infrastructure while reducing the risk of failure to users through total service management. We present the specification, design and simulation of the bundled service agents in a marketplace environment.
Mearns, H.K., Leaney, J.R. & Verchere, D.G. 2010, 'Critique of network management systems and their practicality', Proceedings of the 7th IEEE International Conference and Workshop on Engineering of Autonomic and Autonomous Systems, EASe 2010, IEEE International Conference and Workshop on the Engineering of Autonomic and Autonomous Systems, IEEE, Oxford, UK, pp. 51-59.
View/Download from: UTS OPUS or Publisher's site
Networks have become an integral part of the computing landscape, forming a global interconnection of a staggering number of heterogeneous systems and services. Current research focuses on policy based management and autonomous systems and involves the u
Mearns, H.K., Leaney, J.R. & Verchere, D.G. 2010, 'The architectural evolution of telecommunications network management systems', 17th IEEE International Conference and Workshops on the Engineering of Computer-Based Systems, ECBS 2010, International Conference and Workshops on the Engineering of Computer-Based Systems, IEEE, Oxford, England, pp. 281-285.
View/Download from: UTS OPUS or Publisher's site
Telecommunications Network Management Systems (TNMSs) have had to respond to enormous change, as telecommunication networks have changed from the early (digital) timeframe of being ISDN based, (devoted dominantly to voice), to the current timeframe, pro
Kennard, R., Edmonds, E.A. & Leaney, J.R. 2009, 'Separation anxiety: Stresses of developing a modern day separable User Interface', Human System Interactions, 2009. HSI '09. 2nd Conference on, Human System Interactions, IEEE Xplore, Catania, Italy, pp. 228-235.
View/Download from: UTS OPUS or Publisher's site
The evolution of user interface (UI) tools has generally regarded the UI as separable from the underlying application it represents. This viewpoint leaves the UI having to restate invariants already specified in other subsystems of an application, and any discrepancy between the versions in the UI and those in the subsystems leads to errors. This paper explores a sample of real world subsystems in use by enterprise applications today, and underscores the problem of duplication between them and the UI. It then surveys the prevalence of this issue within mainstream software development.
Lee, S., Leaney, J.R., O'Neill, T. & Hunter, M. 2008, 'Evaluating Open Service Access with an Abstract Model of NGN Functions', 11th Asia-Pacific Network Operations and Management Symposium, APNOMS 2008, Asia-Pacific Network Operations and Management Symposium, Springer Berlin / Heidelberg, Beijing, China, pp. 487-490.
View/Download from: UTS OPUS or Publisher's site
As new business models and market opportunities are rapidly emerging from the 'opening up' of telecommunications networks, we required a better understanding of the effectiveness of using open standards to provide access to functions in NGNs. In this paper we reason about the coverage of openly accessible functions using an abstract model of NGN functionality. Defining and using an abstract model allows us to evaluate the effectiveness of open standards from a perspective where a wide range of NGN functionality can be generalised and conveniently categorised. Subsequently, it will be possible to identify the gaps, which are a subset of functionality that we are specifically interested in for our project.
Prior, J.R., Robertson, T.J. & Leaney, J.R. 2008, 'Situated Software Development: Work Practice and Infrastructure are Mutually Constitutive', Proceedings of 19th Australian Software Engineering Conference, Australian Software Engineering Conference, IEEE Computer Society, Perth, Western Australia, pp. 160-169.
View/Download from: UTS OPUS or Publisher's site
Software developers' work is much more interesting and multifarious in practice than formal definitions of software development processes imply. Rational models of work are often representations of processes defined as they should be performed, rather than portrayals of what people actually do in practice. These models offer a simplified picture of the phenomena involved, and are frequently confused with how the work is carried out in reality, or they are advocated as the ideal way to accomplish the work. A longitudinal ethnographic study (45 days of fieldwork over 20 months) of a group of professional software developers revealed the importance of including their observed practice, and the infrastructure that supports and shapes this practice, in an authentic account of their work. Moreover, this research revealed that software development work practice and the infrastructure used to produce software are inextricably entwined and mutually constitutive over time.
Debenham, J.K., Simoff, S.J., Leaney, J.R. & Mirchandani, V.R. 2008, 'Smart Communications Network Management Through a Synthesis of Distributed Intelligence and Information', Artificial Intelligence in Theory and Practice II, World Computer Congress, Springer Verlag, Milano, Italy, pp. 415-420.
View/Download from: UTS OPUS or Publisher's site
Demands on communications networks to support bundled, interdependent communications services (data, voice, video) are increasing in complexity. Smart network management techniques are required to meet this demand. Such management techniques are envisioned to be based on two main technologies: (i) embedded intelligence; and (ii) up-to-the-millisecond delivery of performance information. This paper explores the idea of delivery of intelligent network management as a synthesis of distributed intelligence and information, obtained through information mining of network performance.
Parakhine, A., Leaney, J.R. & O'Neill, T. 2008, 'Design Guidance Using Simulation-Based Bayesian Belief Networks', Design Guidance Using Simulation-Based Bayesian Belief Networks, IEEE International Conference and Workshop on the Engineering of Computer Based Systems, IEEE, Belfast, Northern Ireland, pp. 76-84.
View/Download from: UTS OPUS or Publisher's site
In this work, the task of complex computer-based system design optimisation involves exploration of a number of possible candidate designs matching the optimisation criteria. However, the process by which the possible candidate designs are generated and rated is fundamental to an optimal outcome. It is dependent upon the set of system characteristics deemed relevant by the designer given the system's requirements. We propose a method which is aimed at providing the designer with guidance based upon description of the possible causal relationships between various system characteristics and qualities. This guidance information is obtained by employing principles of multiparadigm simulation to generate a set of data which is then processed by an algorithm to generate a Bayesian Belief Network representation of causalities present in the source system. Furthermore, we address the issues and tools associated with application of the proposed method by presenting a detailed simulation and network generation effort undertaken as part of a significant industrial case study.
Lee, S., Leaney, J., O'Neill, T. & Hunter, M. 2008, 'Evaluating open service access with an abstract model of NGN functions', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), pp. 487-490.
View/Download from: Publisher's site
As new business models and market opportunities are rapidly emerging from the 'opening up' of telecommunications networks, we required a better understanding of the effectiveness of using open standards to provide access to functions in NGNs. In this paper we reason about the coverage of openly accessible functions using an abstract model of NGN functionality. Defining and using an abstract model allows us to evaluate the effectiveness of open standards from a perspective where a wide range of NGN functionality can be generalised and conveniently categorised. Subsequently, it will be possible to identify the gaps, which are a subset of functionality that we are specifically interested in for our project. © 2008 Springer Berlin Heidelberg.
Prior, J., Robertson, T. & Leaney, J. 2008, 'Situated software development: Work practice and infrastructure are mutually constitutive', Proceedings of the Australian Software Engineering Conference, ASWEC, pp. 160-169.
View/Download from: Publisher's site
Software developers' work is much more interesting and multifarious in practice than formal definitions of software development processes imply. Rational models of work are often representations of processes defined as they should be performed, rather than portrayals of what people actually do in practice. These models offer a simplified picture of the phenomena involved, and are frequently confused with how the work is carried out in reality, or they are advocated as the ideal way to accomplish the work. A longitudinal ethnographic study (45 days of fieldwork over 20 months) of a group of professional software developers revealed the importance of including their observed practice, and the "infrastructure" that supports and shapes this practice, in an authentic account of their work. Moreover, this research revealed that software development work practice and the infrastructure used to produce software are inextricably entwined and mutually constitutive over time. © 2008 IEEE.
Maxwell, C., Leaney, J. & O'Neill, T. 2008, 'Utilising abstract matching to preserve the nature of heuristics in design optimisation', Proceedings - Fifteenth IEEE International Conference and Workshops on the Engineering of Computer-Based Systems, ECBS 2008, pp. 287-296.
View/Download from: Publisher's site
Design space exploration, the generation of alternate designs to identify working designs with varying system properties, has the potential to provide a basis for the optimisation of computer-based system architectures. To utilise design space exploration for this purpose requires that an effective mechanism exist for the storage and application of potential design changes. Heuristics have shown some promise in this area due to their ability to capture expert design knowledge and their flexibility across multiple domains. Heuristics are also especially attractive as change descriptions as they can capture changes that operate across a large spectrum of change detail, from the very detailed to the very abstract. Heuristics are, however, at their most powerful, and their most useful, when they are specified in an abstract manner. This presents a challenge in the formal application of heuristics in capturing design knowledge. Formally describing heuristics inclines their specification of change to be in a more detailed, more concrete, state than an abstract one. This occurs because architectural models tend to be both domain specific and are often described at a more concrete level than the level at which the heuristic is described. This has the potential to greatly reduce the effectiveness of heuristics. In this paper we propose that by providing an abstract match method heuristics may be specified in an abstract manner and still be applied to a detailed formal model, thereby eliminating this problem. © 2008 IEEE.
Colquitt, D. & Leaney, J.R. 2007, 'Expanding the view on Complexity within the Architecture Trade-off Analysis Method', Proceedings of the 20th IEEE International Conference on Engineering of Computer Based Systems (ECBS), IEEE International Conference and Workshop on the Engineering of Computer Based Systems, IEEE, Tucson, Arizona, pp. 1-10.
View/Download from: UTS OPUS
Parakhine, A., O'Neill, T. & Leaney, J.R. 2007, 'Application of Bayesian Networks to Architectural Optimisation', Proceedings of the 20th IEEE International Conference on Engineering of Computer Based Systems (ECBS), IEEE International Conference and Workshop on the Engineering of Computer Based Systems, IEEE, Tucson, Arizona, pp. 37-44.
View/Download from: UTS OPUS or Publisher's site
The field of optimisation covers a great multitude of principles, methods and frameworks aimed at maximisation of an objective under constraints. However, classical optimisation cannot be easily applied in the context of computer-based systems architecture as there is not enough knowledge concerning the dependencies between non-functional qualities of the system. Our approach is based on the simulation optimisation methodology, where the system simulation is first created to assess the current state of the design with respect to the objectives. The results of the simulation are used to construct a Bayesian belief network which effectively becomes a base for an objective function and serves as the main source of the decision support pertaining to the guidance of the optimisation process. The potential effects of each proposed change or combination of changes are then examined by updating and re-evaluating the system simulation.
Maxwell, C.I., O'Neill, T. & Leaney, J.R. 2007, 'Formal architecture transformation using heuristics', Proceedings of the 20th IEEE International Conference on Engineering of Computer Based Systems (ECBS), IEEE International Conference and Workshop on the Engineering of Computer Based Systems, IEEE, Tucson, Arizona, pp. 15-24.
View/Download from: UTS OPUS or Publisher's site
Heuristics have long been a popular and effective mechanism for capturing the knowledge of experts. In recent times, however, the more common use of heuristics has been as a means for communicating ideas at an abstract level, with little consideration to their potential as a structured approach to design improvement. With this paper we present the issues surrounding, and a structured method for, formally capturing architectural change embodied within heuristics. We demonstrate how through the application of graph theory, category theory and predicate calculus we can capture change within a heuristic and then use it to achieve formal heuristic-based transformation of a real-world system. By capturing heuristics in the structured and formal manner discussed in this paper we present ourselves with the opportunity to create a practical and reliable heuristic-based architecture transformation system. This is done within the wider context of achieving a process for optimising the non-functional qualities of a system architecture through design transformation.
Leaney, J., Rozenblit, J.W. & Jianfeng, P. 2007, 'Proceedings - 14th Annual IEEE International Conference and Workshops on the Engineering of Computer-Based Systems, ECBS 2007: Raising Expectations of Computer-Based Systems: Foreword', Proceedings of the International Symposium and Workshop on Engineering of Computer Based Systems.
View/Download from: Publisher's site
Colquitt, D. & Leaney, J. 2007, 'Expanding the view on complexity within the architecture trade-off analysis method', Proceedings of the International Symposium and Workshop on Engineering of Computer Based Systems, pp. 45-54.
View/Download from: Publisher's site
The following paper presents the learning outcomes from an investigation into the aspects of complexity involved in architecture-based analysis. Using a framework of situational complexity as provocation, the manifestations of complexity observed in the Architecture Tradeoff Analysis Method (ATAM) process are presented in terms of a people and systems dimension. These aspects of complexity are shown to impact upon some of the most important ATAM objectives. The change in ATAM complexity is also presented with respect to the design lifecycle. Some resolution to the complexity suffered by the process is suggested in terms of splitting out the analysis objectives and maintaining two types of analysis, as well as paying attention to the content aspects of the process that drive its direction from within. © 2007 IEEE.
Livolsi, D., O'Neill, T., Leaney, J.R., Denford, M. & Dunsire, K. 2006, 'Guided architecture-based design optimisation of CBSs', 13th Annual IEEE International Symposium and Workshop on Engineering of Computer Based Systems, Proceedings - Mastering the Complexity of Computer-Based Systems, IEEE International Conference and Workshop on the Engineering of Computer Based Systems, IEEE Computer Soc, Potsdam, Germany, pp. 247-256.
View/Download from: UTS OPUS
Computer-Based Systems (CBS) are becoming increasingly pervasive throughout society, continually increasing in complexity and cost as they are called upon to fulfil more and more complicated tasks. Unfortunately, multi-million dollar projects often fail
Prior, J.R., Robertson, T.J. & Leaney, J.R. 2006, 'Technology Designers as Technology Users: The Intertwining Of Infrastructure & Product', OZCHI 2006 Conference Proceedings Design: activities artefacts & environment, Australian Computer Human Interaction Conference, ACM, Sydney, Australia, pp. 353-356.
View/Download from: UTS OPUS or Publisher's site
This paper is about the developer as technical user interacting with computer technology as part of the infrastructure that makes possible their 'real work' of developing a large and complex software product. A longitudinal ethnographic study of work practice in a software development company that uses an Agile development approach found that the developers spend a large part of their working time designing, creating, modifying and interacting with infrastructure to enable and support their software development work. This empirical work-in-progress shows that an understanding of situated technology design may have implications for the future development of HCI methods, tools and approaches.
Prior, J.R., Robertson, T.J. & Leaney, J.R. 2006, 'Programming Infrastructure and Code Production: An Ethnographic Study', Team Ethno-Online Journal, Issue 2 June 2006, Ethnographies of Code: Computer Programs as Lived Work of Computer Programming, TeamEthno-Online, Lancashire, UK, pp. 112-120.
View/Download from: UTS OPUS
Maxwell, C.I., Leaney, J.R. & O'Neill, T. 2006, 'A framework for understanding heuristics in architectural optimisation', 13th Annual IEEE International Symposium and Workshop on Engineering Computer Based Systems, 2006 (ECBS2006), IEEE International Conference and Workshop on the Engineering of Computer Based Systems, IEEE, Potsdam, Germany, pp. 65-72.
View/Download from: UTS OPUS
Sheridan-Smith, N.B., O'Neill, T., Leaney, J.R. & Hunter, M. 2006, 'A Policy-based Service Definition Language for Service Management', Proceedings of the 2006 IEEE/IFIP Network Operations & Management Symposium (NOMS), IEEE Network Operations and Management Symposium, IEEE, Vancouver, Canada, pp. 282-293.
View/Download from: UTS OPUS or Publisher's site
In a competitive environment, Service Providers wish to deliver services in a lean and agile manner, despite the rising complexity and heterogeneity within the network. The desire to support personalised customer experiences and differentiated services requires that management systems be increasingly flexible, adaptable and dynamic. Policy-based Management (PBM) systems can be helpful in reducing complexity and enhancing flexibility, but they have not typically been involved in end-to-end management of the services, leading to only the partial management of different network functions.
Lee, S., Leaney, J.R., O'Neill, T. & Hunter, M. 2005, 'Open service access for QoS control in next generation networks - Improving the OSA/Parlay Connectivity Manager', Lecture Notes In computer Science: Operations And Management In Ip-Based Networks, Proceedings, IPOM, Springer-Verlag Berlin, Heidelberg, Germany, pp. 29-38.
View/Download from: UTS OPUS
The need for providing applications with practical, manageable access to feature-rich capabilities of telecommunications networks has resulted in standardization of the OSA/Parlay APIs and more recently the Parlay X Web Services. Connectivity Manager is
Maxwell, C.I., Parakhine, A., Leaney, J.R., O'Neill, T. & Denford, M. 2005, 'Heuristic-based architecture generation for complex computer system optimisation', Proceedings of 12th IEEE International Conference And Workshops On the Engineering Of Computer-Based Systems, IEEE International Conference and Workshop on the Engineering of Computer Based Systems, IEEE, Greenbelt, USA, pp. 70-78.
View/Download from: UTS OPUS
Having come of age in the last decade, the use of architecture to describe complex systems, especially in software, is now maturing. With our current ability to describe, represent, analyse and evaluate architectures comes the next logical step in our application of architecture to system design and optimisation. Driven by the increasing scale and complexity of modern systems, the designers have been forced to find new ways of managing the difficult and complex task of balancing the quality trade-offs inherent in all architectures. Architecture-based optimisation has the potential to not only provide designers with a practical method for approaching this task, but also to provide a generic mechanism for increasing the overall quality of system design. In this paper we explore the issues that surround the development of architectural optimisation and present an example of heuristic-based optimisation of a system with respect to concrete goals.
Lin, P., Macarthur, A. & Leaney, J.R. 2005, 'Defining Autonomic Computing: A software engineering perspective', Proceedings of 2005 Australian Software Engineering Conference, Australian Software Engineering Conference, IEEE, Brisbane, Australia, pp. 88-97.
View/Download from: UTS OPUS
As a rapidly growing field, Autonomic Computing is a promising new approach for developing large scale distributed systems. However, while the vision of achieving self-management in computing systems is well established, the field still lacks a commonly accepted definition of what an Autonomic Computing system is. Without a common definition to dictate the direction of development, it is not possible to know whether a system or technology is a part of Autonomic Computing, or if in fact an Autonomic Computing system has already been built. The purpose of this paper is to establish a standardised and quantitative definition of Autonomic Computing through the application of the Quality Metrics Framework described in IEEE Std 1061-1998 [1]. Through the application of this methodology, stakeholders were systematically analysed and evaluated to obtain a balanced and structured definition of Autonomic Computing. This definition allows for further development and implementation of quality metrics, which are project-specific, quantitative measurements that can be used to validate the success of future Autonomic Computing projects.
Maxwell, C.I., Parakhine, A. & Leaney, J.R. 2005, 'Practical application of formal methods for specification and analysis of software architecture', Proceedings of 2005 Australian Software Engineering Conference, Australian Software Engineering Conference, IEEE, Brisbane, Australia, pp. 302-311.
View/Download from: UTS OPUS
With the ever-growing pace of technological advancement, computer software is required to become increasingly complex to meet the demands of today's leading edge technologies, and their applications. However, fulfilling this requirement creates new, previously unknown, problems pertaining to non-functional properties of software. Specifically, as the software complexity escalates, it becomes increasingly difficult to scale the software in order to cope with the sometimes overwhelming demand created by system growth. It is therefore essential to have processes for addressing the issues associated with scalability that arise due to the complexity in software systems. In this paper we describe an approach aimed at fulfilling the need for such processes. A combination of Object-Z and temporal logic is used to create an architectural description open to further analysis. We also demonstrate the practicality of this methodology within the context of the coordinated adaptive traffic system (CATS).
Sheridan-Smith, N.B., O'Neill, T., Leaney, J.R. & Hunter, M. 2005, 'Enhancements to Policy Distribution for Control Flow and Looping', Lecture Notes in Computer Science Vol 3775/2005, IFIP/IEEE International Workshop on Distributed Systems Operations and Management, Springer Berlin/Heidelberg, Barcelona, Spain, pp. 269-280.
View/Download from: UTS OPUS or Publisher's site
Our previous work proposed a simple algorithm for the distribution and coordination of network management policies across a number of autonomous management nodes by partitioning an Abstract Syntax Tree into different branches and specifying coordination points based on data and control flow dependencies. We now extend this work to support more complex policies containing control flow logic and looping, which are part of the PRONTO policy language. Early simulation results demonstrate the potential performance and scalability characteristics of this framework.
Sheridan-Smith, N.B., Leaney, J.R., O'Neill, T. & Hunter, M. 2005, 'A Policy-Driven Autonomous System for Evolutive and Adaptive Management of Complex Services and Networks', Proceedings of 12th IEEE International Conference And Workshops On the Engineering Of Computer-Based Systems, IEEE International Conference and Workshop on the Engineering of Computer Based Systems, IEEE Computer Society, Greenbelt, Maryland, USA, pp. 389-397.
View/Download from: UTS OPUS
Many existing management systems are not evolutive or adaptive, leading to multiplicity over time and increasing the management burden. Policy-based management approaches may assist in making networks less complex and more automated, but to date they have not yet been able to evolve to support new service sets or provide the capacity for differentiation. We present the architecture for a policy-based system named Pronto that helps to deal with these issues. Layered network and service models are built above an extensible virtual device model that supports heterogeneous management interfaces. Interchangeable management components provide the basic building blocks to construct logical services. The integrated policy-driven service definition language automates the management of the services in a manner that is adaptive, dynamic and reactive to improve the user's overall service experience.
Lee, S., Leaney, J.R., O'Neill, T. & Hunter, M. 2005, 'Open API of QoS control in Next Generation Networks', Toward Managed Ubiquitous Information Society, Asia-Pacific Network Operations and Management Symposium, IEICE TM, KICS KNOM, IEEE CNOM, IEEE APB, IEEE COMSOC Japan Chapter and TMF, Okinawa, Japan, pp. 295-306.
Lee, S., Leaney, J.R., O'Neill, T. & Hunter, M. 2005, 'Performance benchmark of a parallel and distributed network simulator', Proceedings of Workshop On Principles Of Advanced And Distributed Simulation, ACM/IEEE/SCS Workshop on Parallel and Distributed Simulation, ACM/IEEE/SCS, Monterey, USA, pp. 101-108.
View/Download from: UTS OPUS or Publisher's site
Simulation of large-scale networks requires enormous amounts of memory and processing time. One way of speeding up these simulations is to distribute the model over a number of connected workstations. However, this introduces inefficiencies caused by the
Dunsire, K., O'Neill, T., Denford, M. & Leaney, J.R. 2005, 'The ABACUS Architectural Approach to Computer-Based System and Enterprise Evolution', Proceedings of the 12th IEEE International Conference on Engineering of Computer Based Systems (ECBS), IEEE International Conference and Workshop on the Engineering of Computer Based Systems, IEEE, Maryland, USA, pp. 62-69.
View/Download from: UTS OPUS or Publisher's site
The enterprise computer-based systems employed by the organisations of today can be extremely complex. Not only do they consist of countless hardware and software products from many varied sources, but they often span continents, piggybacking on public networks. These systems are essential for undertaking business and general operations in the modern environment, and yet the ability of organisations to control their evolution is questionable. The emerging practice of enterprise architecture seeks to control that complexity through the use of a holistic and top-down perspective. However, the toolsets already in use are very much bottom-up in nature. To overcome the limitations of current enterprise architecture practices, the authors propose the use of the ABACUS methodology and toolset. The authors conclude that by using ABACUS to analyse software and enterprise systems, architects can guide the design and evolution of architectures based on quantifiable non-functional requirements. Furthermore, hierarchical 3D visualisation provides a meaningful and intuitive means for conceiving and communicating complex architectures.
Sheridan-Smith, N., O'Neill, T., Leaney, J. & Hunter, M. 2005, 'Distribution and coordination of policies for large-scale service management', LANOMS 2005 - 4th Latin American Network Operations and Management Symposium, Proceedings, pp. 257-262.
The distribution and coordination of policies is often overlooked but is crucial to the scalability of dynamic, personalised services. In this work we partition an Abstract Syntax Tree of the policies to determine the responsibility of different management nodes in a geographically segregated network (i.e. management by delegation). This partitioning is combined with IN/OUT set analysis to determine the required coordination for policy enforcement of complex policies with inter-dependencies. Our simulation results show that this approach is promising, as higher decision loads can be readily handled by further sub-division of the network.
Hinchey, M., Rozenblit, J., Leaney, J. & O'Neill, T. 2005, 'Proceedings - 12th IEEE International Conference and Workshop on the Engineering of Computer-Based Systems, ECBS: Foreword', Proceedings - 12th IEEE International Conference and Workshops on the Engineering of Computer-Based Systems, ECBS 2005.
Leaney, J.R., Denford, M. & O'Neill, T. 2004, 'Enabling Optimisation in the design of Complex computer based systems', Proceedings of the 11th IEEE International Conference and Workshop on the Engineering of Computer-Based Systems, IEEE International Conference and Workshop on the Engineering of Computer Based Systems, IEEE, Brno, Czech Republic, pp. 69-74.
View/Download from: UTS OPUS
Denford, M., Leaney, J.R. & O'Neill, T. 2004, 'Non-Functional Refinement of Computer Based Systems Architecture', Proceedings of the 11th IEEE International Conference and Workshop on the Engineering of Computer-Based Systems, IEEE International Conference and Workshop on the Engineering of Computer Based Systems, IEEE, Brno, Czech Republic, pp. 168-177.
View/Download from: UTS OPUS
Denford, M., Solomon, A.I. & Leaney, J.R. 2004, 'Modelling Architectural Abstraction with a Category of Poset Labelled Graphs to Aid The Practice of Design via Refinement', Proceedings of Fifth IEEE Joint Workshop on the Formal Specifications of Computer-Based Systems, IEEE International Conference and Workshop on the Engineering of Computer Based Systems, IEEE, Brno, Czech Republic, pp. 17-22.
Sheridan-Smith, N.B., Colquitt, D., Soliman, J.A., Leaney, J.R., O'Neill, T. & Hunter, M. 2004, 'Improving The User Experience Through Adaptive and Dynamic Service Management', Proceedings of the Australian Telecommunication Networks and Application Conference 2004, Australian Telecommunication Networks and Applications Conference, ATNAC 2004, Sydney, Australia, pp. 212-215.
View/Download from: UTS OPUS
Lee, S., Leaney, J.R., O'Neill, T. & Hunter, M. 2004, 'Measuring the Effect of Cross-Traffic on Execution Time in a Parallel and Distributed Network Simulator', Proceedings of the Australian Telecommunication Networks and Application Conference 2004, Australian Telecommunication Networks and Applications Conference, ATNAC 2004, Sydney, Australia, pp. 232-235.
View/Download from: UTS OPUS
Colquitt, D., Leaney, J.R. & O'Neill, T. 2004, 'The Case For Understanding Social Complexity In the Architecture-Based Analysis Process', Proceedings of International Conference on Qualitative Research in IT 2004, International Conference on Qualitative Research in IT & IT in Qualitative Research, QualIT2004, Brisbane, Australia, pp. 1-11.
View/Download from: UTS OPUS
Lee, S., Sheridan-Smith, N.B., O'Neill, T., Leaney, J.R., Sandrasegaran, K. & Markovits, S. 2003, 'Managing the Enriched Experience Network - Learning-Outcome Approach to the Experimental Design Life-Cycle', Proceedings of the Australian Telecommunication Networks and Applications Conference (ATNAC'03), Australian Telecommunication Networks and Applications Conference, Australian Telecommunications CRC, Melbourne, Australia, pp. 1-5.
View/Download from: UTS OPUS
Denford, M., O'Neill, T. & Leaney, J.R. 2003, 'Architecture-Based Design of Computer Based Systems', Proceedings of 10th IEEE International Conference and Workshop on the Engineering of Computer-Based Systems, IEEE International Conference and Workshop on the Engineering of Computer Based Systems, IEEE Computer Society, Huntsville, Alabama, USA, pp. 39-46.
View/Download from: UTS OPUS
Lister, R.F. & Leaney, J.R. 2003, 'Bad Theory Versus Bad Teachers: Towards a Pragmatic Synthesis of Constructivism and Objectivism', Learning for an Unknown Future - Proceedings of the 2003 Annual International Conference of the Higher Educational Research and Development Society of Australasia Volume 26, Higher Education Research and Development Society of Australasia Annual Conference, Higher Educational Research and Development Society of Australasia, Christchurch, New Zealand, pp. 429-436.
View/Download from: UTS OPUS
Lister, R.F. & Leaney, J.R. 2003, 'Introductory Programming, Criterion-Referencing and Bloom', Proceedings of the 34th SIGCSE Technical Symposium on Computer Science Education, ACM Special Interest Group on Computer Science Education Conference, The Association for Computing Machinery, Reno, Nevada, USA, pp. 143-147.
View/Download from: Publisher's site
In the traditional norm-referencing approach to grading, all students in a CS1 class attempt the same programming tasks, and those attempts are graded "to a curve". The danger is that such tasks are aimed at a hypothetical average student. Weaker students can do little of these tasks, and learn little. Meanwhile, these tasks do not stretch the stronger students, so they too are denied an opportunity to learn. Our solution is two-fold. First, we use a criterion-referenced approach, where fundamentally different tasks are set, according to the ability of the students. Second, the differences in the nature of the tasks reflect the differing levels of Bloom's taxonomy. Weaker CS1 students are simply required to demonstrate knowledge and comprehension; the ability to read and understand programs. Middling students attempt traditional tasks, while the stronger students are set open-ended tasks at the synthesis and evaluation levels.
Lister, R.F. & Leaney, J.R. 2003, 'First Year Programming: Let All the Flowers Bloom', Computing Education 2003. Fifth Australasian Computing Educational Conference Volume 20, Australasian Computing Education Conference, Australian Computer Society Inc., Adelaide, Australia, pp. 221-230.
Denford, M., O'Neill, T. & Leaney, J.R. 2002, 'Architecture-Based Visualisation of Computer Based Systems', Proceedings of Ninth Annual IEEE International Conference and Workshop on the Engineering of Computer-Based Systems, IEEE International Conference and Workshop on the Engineering of Computer Based Systems, IEEE, Lund, Sweden, pp. 139-146.
View/Download from: UTS OPUS or Publisher's site
Architecture is a central concept in the engineering of computer based systems. Given a standard architectural representation, the architecture of systems can be discussed, drawn, reasoned about and classified. Complex architectures may benefit from visualisation. Currently, tools that visualise architectures do so in two dimensions. Above and beyond visualising the form (or structure) of the architecture in three dimensions, other characteristics of the architecture (e.g. Modularity, Performance, Evolvability, and Openness) can be shown through the visualisation. This paper focuses on the "drawing" of architectures, or what is referred to as Architectural Visualisation.
Leaney, J.R., Rowe, D. & O'Neill, T. 2002, 'Issues in the Construction of new Measures within the Discipline of Open Systems', Proceeding of The Asia Pacific Software Engineering Conference, Asia-Pacific Software Engineering Conference, IEEE Computer Society, Gold Coast, QLD, Australia, pp. 527-536.
View/Download from: UTS OPUS or Publisher's site
Formalising a measurement is often preceded by rich discussion of the ideas, and the development of general understanding of the meaning of concepts, often over decades. As a consequence, data required to fit into a representational measurement system is usually readily available and there is a general acceptance of the intention of the measure. This paper reports on a research project which has formalised the measurement of a relatively new body of knowledge, open systems. Open systems rely on standards to guarantee interoperability, portability, scalability and user portability. The Internet is the most successful of the open systems in existence, in terms of interoperability and scalability. The first of four issues was that since the project was a research contract, and because there were very few generally understood notions of measurement foundations (or relations) within open systems, the aims and the requirements of the measurement were formalised into a measurement requirements specification (MRS). A second issue concerns the building of a relational model. Building relations in a representational measurement model is relatively straightforward in the case where the measurement entity has been established for some time. A third issue concerned the use of the measures when the measures were (not surprisingly) complex, and had to be combined by biased combiners, on whose values not all parties would naturally agree. A fourth issue raised in the paper is the extent of the validation of the measures which was required because of the contract.
Leaney, J.R., Rowe, D., O'Neill, T., Hoye, S. & Gionis, P. 2001, 'Measuring the effectiveness of computer based systems : an open system measurement example', Proceedings of ECBS '01, IEEE International Conference and Workshop on the Engineering of Computer Based Systems, IEEE, Washington DC USA.
View/Download from: UTS OPUS
O'Neill, T. & Leaney, J.R. 2001, 'Risk Management of an Open CBS Project', Proceedings of ECBS '01, IEEE International Conference and Workshop on the Engineering of Computer Based Systems, IEEE, Washington DC.
View/Download from: UTS OPUS
Leaney, J. 2001, 'System composition strategies, a position paper for an ECBS panel - My position: Analogy questionable, intentions estimable', Proceedings of the International Symposium and Workshop on Engineering of Computer Based Systems, pp. 30-31.
There can surely be no doubt that appropriate, automated design techniques are essential for the success of any complex computer based systems. No computer system can succeed unless the complexity is well managed. The techniques cited in the brief are fundamental for the management of complexity and the development of computer based systems. My position is that automated system composition is inevitable if computer based systems are to be well engineered. However, I question the analogy of software as glue, and the goal of a single integrated system. Software is more than glue; it provides function. The work of other branches of engineering demonstrates the importance of automated design techniques to gain success in complex systems. The architecture of separate autonomous components, managed over a reliable infrastructure, is proposed in contrast to a single integrated system.
O'Neill, T., Leaney, J.R. & Martyn, P. 2000, 'Architecture-based Performance Analysis of the COLLINS class submarine Open System Extension (COSE) Concept Demonstrator', 7th IEEE International Conference and workshop on the Engineering of computer based systems, IEEE International Conference and Workshop on the Engineering of Computer Based Systems, IEEE Computer Society, Edinburgh, UK, pp. 26-35.
O'Neill, T., Leaney, J., Rowe, D., Simpson, H., Rangarajan, M., Weiss, J., Papp, Z., Bapty, T., Purves, B., Horvath, G. & de Jong, E. 2000, 'IEEE ECBS'99 TC Architecture Working Group (AWG) report', Proceedings of the International Symposium and Workshop on Engineering of Computer Based Systems, pp. 383-389.
The guiding document for the AWG was produced by David Rowe of UTS with collaboration from Hugo Simpson, John Leaney, Willi Rossak and Christoph Schaffer. The document was entitled 'IEEE ECBS TC Architecture Focus Group Discussion Paper' and is available from www.eng.uts.edu.au/~drowe. The discussion paper was tabled and distributed to the AWG participants and served as a guide for the WG discussions. The AWG primarily traversed this document and therefore the section headings of this report correspond directly to those of the discussion paper with a few additions for clarification.
Rowe, D. & Leaney, J. 1997, 'Evaluating evolvability of computer based systems architectures - an ontological approach', Proceedings of the International Symposium and Workshop on Engineering of Computer Based Systems, pp. 360-367.
System evolvability is a system's ability to withstand changes in its requirements, environment and implementation technologies. The need for greater systems evolvability is becoming recognised, especially in the engineering of computer based systems, where the development, commissioning and replacement of large systems is highly resource intensive. Despite this need, there are no formal means for evaluating the evolvability of a system and thus no means of proving that one system is more evolvable than another. Recognising this, we review the nature of change and evolution with respect to computer based systems. We contend that a system's architecture is the best level of abstraction at which to evaluate its evolvability. An ontological basis which allows for the formal definition of a system and its change at the architectural level is presented and applied to the domain of computer based systems engineering. Utilising this definition of change we draw on the deeper ontological theories in order to establish a model of systems architecture evolution. This model is then applied to a small CBS for concept validation.
Rowe, D., Leaney, J. & Lowe, D. 1996, 'Development of a systems architecting process for computer based systems', Proceedings of the IEEE International Conference on Engineering of Complex Computer Systems, ICECCS, pp. 200-203.
The need to address system architecture within the computer systems development process is well accepted. Despite this, the process of architecting is still not well understood. In this paper we discuss the need for an architecting process which goes beyond specific architecting methods in order to address issues such as architecture evaluation and tailoring of architectures to specific system requirements. Based on these needs we propose an architecting process which selects from existing methods, bases and performance indices to produce a verifiable architecture. This process provides a framework within which we can develop an architecture which has the maximum (measurable) likelihood of satisfying both functional and nonfunctional system requirements upon implementation.
Leaney, J., Peterson, C. & Drane, C. 1996, 'Computer systems engineering in large groups', Proceedings - Frontiers in Education Conference, pp. 1491-1494.
The subjects Computer Systems Analysis and Computer Systems Design within the computer systems engineering degree at UTS concern themselves with the specification, architecture, design and implementation of a computer based system of moderate complexity, covering electrical and mechanical hardware, computer hardware and software. Students are expected to develop the system to appropriate standards, using suitable techniques, within a defined process and operating within a team. The computer based system is concerned with the problem of the automatic assembly of (pseudo) chocolates into (pseudo) chocolate boxes. There are a variety of boxes and a variety of chocolates, which have to be assembled to (operator entered) orders. The class is divided into teams. A team comprises five groups. Each of four groups is responsible for one of the major subassemblies, and the fifth group is responsible for the systems engineering and telecommunications. The major subassemblies are the assembly robot, the box conveyor and (Vision) recognition system, the chocolate recognition system, and the supervisory control system. The project has been running for five years and this paper summarizes the history, reports on the development and analyses educational aspects. Student appreciation of the subjects has been entirely positive, with the most often made comment that finally they have understood why they have studied engineering for the previous four to five years.

Journal articles

Kennard, R. & Leaney, J.R. 2011, 'Is There Convergence in the Field of UI Generation?', Journal of Systems and Software, vol. 84, no. 12, pp. 2079-2087.
View/Download from: UTS OPUS or Publisher's site
For many software projects, the construction of the User Interface (UI) consumes a significant proportion of their development time. Any degree of automation in this area therefore has clear benefits. But it is difficult to achieve such automation in a way that will be widely adopted by industry because of the diversity of UIs, software architectures, platforms and development environments. In a previous article, the authors identified five key characteristics any UI generator would need in order to address this diversity. We asserted that, without these characteristics, a UI generator should not expect wide industry adoption or standardisation. We supported this assertion with evidence from industry adoption studies. A further source of validation would be to see if other research teams, who were also conducting industry field trials, were independently converging on this same set of characteristics. Conversely, it would be instructive if they were found to be converging on a different set of characteristics. In this article, the authors look for such evidence of convergence by interviewing the team behind one of the research community's most significant UI generators: Naked Objects. We observe strong signs of convergence, which we believe signal the beginning of a general purpose architecture for UI generation, one that both industry and the research community could standardise upon.
Kennard, R. & Leaney, J.R. 2010, 'Towards a general purpose architecture for UI generation', Journal of Systems and Software, vol. 83, no. 10, pp. 1896-1906.
View/Download from: UTS OPUS or Publisher's site
Many software projects spend a significant proportion of their time developing the User Interface (UI), therefore any degree of automation in this area has clear benefits. Such automation is difficult due principally to the diversity of architectures, platforms and development environments. Attempts to automate UI generation to date have contained restrictions which did not accommodate this diversity, leading to a lack of wide industry adoption or standardisation. The authors set out to understand and address these restrictions. We studied the issues of UI generation (especially duplication) in practice, using action research cycles guided by interviews, adoption studies and close collaboration with industry practitioners. In addressing the issues raised in our investigation, we identified five key characteristics any UI generation technique would need before it should expect wide adoption or standardisation. These can be summarised as: inspecting existing, heterogeneous back-end architectures; appreciating different practices in applying inspection results; recognising multiple, and mixtures of, UI widget libraries; supporting multiple, and mixtures of, UI adornments; applying multiple, and mixtures of, UI layouts. Many of these characteristics seem ignored by current approaches. In addition, we discovered an emergent feature of these characteristics that opens the possibility of a previously unattempted goal: namely, retrofitting UI generation to an existing application.
Denford, M., Solomon, A.I., Leaney, J.R. & O'Neill, T. 2004, 'Architectural Abstraction As Transformation Of Poset Labelled Graphs', Journal of Universal Computer Science, vol. 10, no. 10, pp. 1408-1428.
View/Download from: UTS OPUS or Publisher's site
The design of large, complex computer based systems, based on their architecture, will benefit from a formal system that is intuitive, scalable and accessible to practitioners. The work herein is based in graphs which are an efficient and intuitive way of encoding structure, the essence of architecture. A model of system architectures and architectural abstraction is proposed, using poset labelled graphs and their transformations. The poset labelled graph formalism closely models several important aspects of architectures, namely topology, type and levels of abstraction. The technical merits of the formalism are discussed in terms of the ability to express and use domain knowledge to ensure sensible refinements. An abstraction / refinement calculus is introduced and illustrated with a detailed usage scenario. The paper concludes with an evaluation of the formalism in terms of its rigour, expressiveness, simplicity and practicality.
Lister, R. & Leaney, J. 2003, 'Introductory programming, criterion-referencing, and bloom', ACM SIGCSE Bulletin, vol. 35, no. 1, pp. 143-143.
View/Download from: Publisher's site
Pfeiffer, M. & Leaney, J. 1995, 'Simple reliable monitor: a formalisation of the concept of a safe software monitor', Australian Computer Journal, vol. 27, no. 1, pp. 9-15.
The (safety) monitor concept is extended to the simple, reliable (SR) monitor, which may be considered an alternative to n-version programming. Using subtype and inheritance concepts, together with pre- and post-condition refinement, it is shown how classes of monitors can be constructed for particular problems.