Dr Nagesh Shukla works in the area of business data analytics and optimisation, with applications in supply chain and healthcare. He joined the School of Information, Systems and Modelling in July 2017. He received his PhD from the University of Warwick (UK) and his Bachelor's degree in Manufacturing Engineering from the National Institute of Foundry and Forge Technology (India).
The successful operation of complex systems often requires analytical approaches based on operational research and industrial engineering principles. Such complex systems are found across a wide range of industries, including manufacturing, supply chain, logistics, transportation, and healthcare. Dr Shukla's research focuses on the development of: (i) models that make complex business processes efficient and effective; (ii) analytical models for system-level optimisation and decision making; and (iii) data-driven algorithms for decision making.
Dr Shukla has published in the areas of healthcare modelling, business analytics and optimisation, supply chain management and logistics, and stochastic simulation modelling techniques. He has contributed to more than 50 research publications across journals, conferences, patents, book chapters, and research reports. Dr Shukla is an associate editor of the International Journal of Systems Science and is on the editorial board of the Journal of Advances in Radiology and Medical Imaging. He guest edited a special issue of Transportation Research Part E (Logistics and Transportation Review) on "Big Data in Logistics and Supply Chain Management" (2016). He also holds a US patent on healthcare systems data analysis, based on his PhD research.
Dr Shukla previously led the Smart Data Analytics research team at the SMART Infrastructure Facility (University of Wollongong) from 2012 to 2017. Over the last five years, he has played a major role in: (i) successful grant applications (NHMRC, industry-funded, and seed grants); (ii) research project delivery; (iii) high-quality publications (ERA A/A*); and (iv) supervision of project staff and students.
NHMRC Research Translation Faculty Member (2014 - ongoing)
Session Chair of e-Business and Operations in EUROMA (2017)
Member EUROMA (2016-ongoing)
Member, Society for Modeling and Simulation International (SCS) (2013-ongoing)
Member, Modelling and Simulation Society of Australia and New Zealand (MSSANZ) (2013-ongoing)
Guest Editor - Big data analytics and application for logistics and supply chain management in Transportation Research Part E: Logistics and Transportation Review (2016)
Guest Editor - Modelling and Simulation in Healthcare Systems in International Journal of Systems Science: Operations & Logistics, 2015
Technical Program Committee member for IEEE International Conference on Industrial Engineering and Engineering Management (IEEM) 2017
International Advisory Committee of International Conference on Evolution in Manufacturing: Technologies and Business Strategies for Global Competitiveness: ICEM 2016
Recognised reviewer for Elsevier for: International Journal of Production Economics, Computers and Operations Research, Computers and Industrial Engineering (2016)
Can supervise: YES
Supply Chain & Logistics Management for Food Sector
- Modelling for logistics, supply chain configuration design
Big Data Analytics
- Social media data analytics for food products supply chain
- Data analytics for healthcare applications
Healthcare Systems Modelling and Simulation
- Process mapping and simulation models for efficient operations management
- Healthcare facility location and management
I am actively recruiting PhD students (on the topics mentioned under research interests). Various scholarships are available for domestic and international students. Please refer to the UTS Domestic and International Research Scholarship page.
- Value Chain Engineering Systems
- Quality Planning and Analysis
Jayanthakumaran, K, 'Introduction: Trade Logistics in Asian Countries That Are Landlocked and Resource Cursed', Chapter 1, p. 1.
The purpose of this book is to provide a comprehensive picture of trade facilitation in ...
Shukla, N, Dunbar, M, Belieres, S, Amirghasemi, M, Perez, P & Mishra, N 2020, 'A genetic column generation algorithm for sustainable spare part delivery: application to the Sydney DropPoint network', pp. 1-19.
Choudhury, TT, Paul, SK, Rahman, HF, Jia, Z & Shukla, N 2020, 'A systematic literature review on the service supply chain: research agenda and future research directions', Production Planning & Control, pp. 1-22.
Shukla, N, Merigó, JM, Lammers, T & Miranda, L 2020, 'Half a century of computer methods and programs in biomedicine: A bibliometric analysis from 1970 to 2017', Computer Methods and Programs in Biomedicine, vol. 183.
Background and Objective: Computer Methods and Programs in Biomedicine (CMPB) is a leading international journal that presents developments in computing methods and their application in biomedical research. The journal published its first issue in 1970 and celebrates its 50th anniversary in 2020. Motivated by this event, this article presents a bibliometric analysis of the publications of the journal during this period (1970-2017). Methods: The objective is to identify the leading trends occurring in the journal by analysing the most cited papers, keywords, authors, institutions and countries. To do so, the study uses the Web of Science Core Collection database. Additionally, the work presents a graphical mapping of the bibliographic information using the visualization of similarities (VOS) viewer software. This is done to analyse bibliographic coupling, co-citation and co-occurrence of keywords. Results: CMPB is identified as a leading and core journal for biomedical researchers. The journal is strongly connected to IEEE Transactions on Biomedical Engineering and IEEE Transactions on Medical Imaging. The paper by Wang, Jacques and Zheng (published in 1995) is its most cited document. The top author in this journal is James Geoffrey Chase and the top contributing institution is Uppsala University (Sweden). Most of the papers in CMPB are from the USA, followed by the UK and Italy. China and Taiwan are the only Asian countries to appear in the top 10 publishing in CMPB. A keyword co-occurrence analysis revealed strong co-occurrences for classification, picture archiving and communication system (PACS), heart rate variability, survival analysis and simulation. Keyword analysis for the last decade revealed that machine learning for a variety of healthcare problems (including image processing and analysis) dominated other research fields in CMPB. Conclusions: It can be concluded that CMPB is a world-renowned publication outlet for biomedical researchers ...
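As a rough illustration of the keyword co-occurrence counting step used in bibliometric studies like the one above, here is a minimal sketch; the example paper keyword lists are invented for demonstration and are not drawn from the CMPB dataset:

```python
from itertools import combinations
from collections import Counter

def keyword_cooccurrence(records):
    """Count how often each pair of keywords appears together in a record.

    `records` is a list of keyword lists (one per paper); pairs are stored
    in sorted order so (a, b) and (b, a) are counted as the same pair.
    """
    counts = Counter()
    for keywords in records:
        for a, b in combinations(sorted(set(keywords)), 2):
            counts[(a, b)] += 1
    return counts

# Hypothetical example records, not from the actual CMPB data.
papers = [
    ["machine learning", "image processing", "classification"],
    ["classification", "machine learning"],
    ["survival analysis", "simulation"],
]
pairs = keyword_cooccurrence(papers)
```

Tools such as the VOSviewer software mentioned above then visualise these pair counts as a network map.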
Shukla, N, Tiwari, MK & Beydoun, G 2019, 'Next generation smart manufacturing and service systems using big data analytics', Computers and Industrial Engineering.
Abdollahi, A, Pradhan, B & Shukla, N 2019, 'Extraction of road features from UAV images using a novel level set segmentation approach', International Journal of Urban Sciences, vol. 23, no. 3, pp. 391-405.
Dharmapriya, S, Kiridena, S & Shukla, N 2019, 'Multiagent Optimization Approach to Supply Network Configuration Problems With Varied Product-Market Profiles', IEEE Transactions on Engineering Management.
This article demonstrates the application of a novel multiagent modeling approach to support supply network configuration (SNC) decisions toward addressing several challenges reported in the literature. These challenges include: enhancing supply network (SN)-level performance in alignment with the goals of individual SN entities; addressing the issue of limited information sharing between SN entities; and sustaining competitiveness of SNs in dynamic business environments. To this end, a multistage, multiechelon SN consisting of geographically dispersed SN entities catering to distinct product-market profiles was modeled. In modeling the SNC decision problem, two types of agents, each having distinct attributes and functions, were used. The modeling approach incorporated a reverse-auctioning process to simulate the behavior of SN entities with differing individual goals collectively contributing to enhance SN-level performance, by means of setting reserve values generated through the application of a genetic algorithm. A set of Pareto-optimal SNCs catering to distinct product-market profiles was generated using Nondominated Sorting Genetic Algorithm II. Further evaluation of these SNCs against additional criteria, using a rule-based approach, allowed the selection of the most appropriate SNC to meet a broader set of conditions. The model was tested using a refrigerator SN case study drawn from the literature. The results reveal that a number of SNC decisions can be supported by the proposed model, in particular, identifying and evaluating robust SNs to suit varied product-market profiles, enhancing SC capabilities to withstand disruptions and developing contingencies to recover from disruptions.
Moktadir, MA, Ali, SM, Paul, SK & Shukla, N 2019, 'Barriers to big data analytics in manufacturing supply chains: A case study from Bangladesh', Computers and Industrial Engineering, vol. 128, pp. 1063-1075.
Recently, big data (BD) has attracted researchers and practitioners due to its potential usefulness in decision-making processes. Big data analytics (BDA) is becoming increasingly popular among manufacturing companies as it helps gain insights and make decisions based on BD. However, there are many barriers to the adoption of BDA in manufacturing supply chains. It is therefore necessary for manufacturing companies to identify and examine the nature of each barrier. Previous studies have mostly built conceptual frameworks for BDA in a given situation and have not examined the nature of the barriers to BDA. Given the significance of both BD and BDA, this research aims to identify and examine the critical barriers to the adoption of BDA in manufacturing supply chains in the context of Bangladesh. It examines these barriers using a Delphi-based analytic hierarchy process (AHP). Data were obtained from five Bangladeshi manufacturing companies. The findings are as follows: (i) data-related barriers are the most important, (ii) technology-related barriers are second, and (iii) the five most important components of these barriers are (a) lack of infrastructure, (b) complexity of data integration, (c) data privacy, (d) lack of availability of BDA tools and (e) high cost of investment. These findings can assist industrial managers in understanding the actual nature of the barriers and the potential benefits of using BDA, and in making policy regarding BDA adoption in manufacturing supply chains. A sensitivity analysis was carried out to confirm the robustness of the barrier rankings.
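As a hedged sketch of the AHP prioritisation step that underpins the barrier ranking described above, here is the geometric-mean approximation of the priority vector; the pairwise comparison values and barrier categories below are invented for illustration, not the paper's data:

```python
import math

def ahp_weights(matrix):
    """Approximate AHP priority weights from a pairwise comparison matrix
    using the geometric-mean (logarithmic least squares) method: take the
    geometric mean of each row, then normalise so the weights sum to 1."""
    n = len(matrix)
    gmeans = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gmeans)
    return [g / total for g in gmeans]

# Hypothetical 3x3 comparison of barrier categories (illustrative only):
# data-related vs technology-related vs cost-related barriers.
m = [
    [1,     3,     5],
    [1 / 3, 1,     3],
    [1 / 5, 1 / 3, 1],
]
w = ahp_weights(m)  # weights sum to 1; the data-related barrier ranks first
```

A full Delphi-AHP study would also check the consistency ratio of each expert's matrix before aggregating judgements.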
Azeez, OS, Pradhan, B, Shafri, HZM, Shukla, N, Lee, C & Rizeei, HM 2019, 'Modeling of CO Emissions from Traffic Vehicles Using Artificial Neural Networks', Applied Sciences (Bucureşti), vol. 9, no. 2.
Traffic emissions are considered one of the leading causes of environmental impact in megacities, with dangerous effects on human health. This paper presents a hybrid model based on data mining and GIS models, designed to predict vehicular carbon monoxide (CO) emitted from traffic on the New Klang Valley Expressway, Malaysia. The hybrid model was developed by integrating GIS with an optimised artificial neural network algorithm combined with the correlation-based feature selection (CFS) algorithm, to predict daily vehicular CO emissions and generate prediction maps at the microscale in a small urban area using a field survey and open-source data; these are the main contributions of this paper. A further contribution relates to the case study, which captures the spatial and quantitative variations in vehicular CO emissions between toll plaza areas and road networks. The proposed hybrid model consists of three steps: first, the correlation-based feature selection model is applied to select the best predictors; second, vehicular CO is predicted using a multilayer perceptron neural network model; and third, microscale prediction maps are created. The model was developed using six traffic CO predictors: number of vehicles, number of heavy vehicles, number of motorbikes, temperature, wind speed and a digital surface model. The network architecture and its hyperparameters were optimised through a grid search approach. Traffic CO concentrations were observed at 15-min intervals on weekends and weekdays, four times per day. The results showed that the developed model achieved a validation accuracy of 80.6%. Overall, the developed models are found to be promising tools for vehicular CO simulation in highly congested areas.
Govindan, K, Cheng, TCE, Mishra, N & Shukla, N 2018, 'Big data analytics and application for logistics and supply chain management', Transportation Research Part E: Logistics and Transportation Review, vol. 114, pp. 343-349.
Keshari, A, Mishra, N, Shukla, N, McGuire, S & Khorana, S 2018, 'Multiple order-up-to policy for mitigating bullwhip effect in supply chain network', Annals of Operations Research, vol. 269, no. 1-2, pp. 361-386.
This paper proposes a multiple order-up-to policy based inventory replenishment scheme to mitigate the bullwhip effect in a multi-stage supply chain scenario where various transportation modes are available between the supply chain (SC) participants. The proposed policy is similar to the fixed order-up-to policy approach, where the replenishment decision "how much to order" is made periodically on the basis of a pre-decided order-up-to inventory level. In the proposed policy, optimal multiple order-up-to levels are assigned to each SC participant, providing a decision-making reference point for deciding the transportation-related order quantity. Subsequently, a mathematical model is established to define the optimal multiple order-up-to levels for each SC participant, with the aim of maximising overall profit from the SC network. In parallel, the model ensures control over supply chain pipeline inventory, high satisfaction of customer demand and timely utilisation of available transportation modes. Findings from various numerical datasets, including stochastic customer demand and lead times, validate that the proposed optimal multiple order-up-to policy based inventory replenishment scheme can be a viable alternative for mitigating the bullwhip effect and achieving a well-coordinated SC. Moreover, determining the multiple order-up-to levels is an NP-hard combinatorial optimisation problem. It is found that the implementation of an emerging optimisation algorithm, the bacterial foraging algorithm (BFA), delivers superior optimisation performance. The robustness and applicability of the BFA are further validated statistically by employing the percentage heuristic gap and two-way ANOVA analysis.
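The basic order-up-to replenishment logic discussed in this abstract can be sketched in a few lines; this toy version assumes a single participant, zero lead time and a single transport mode, unlike the richer multi-mode, multi-stage setting of the paper:

```python
def simulate_order_up_to(S, demands, initial_inventory):
    """Simulate a periodic-review order-up-to policy: each period, after
    demand is drawn down, order enough to raise inventory back to level S.

    Returns the sequence of order quantities (zero lead time assumed)."""
    inventory = initial_inventory
    orders = []
    for d in demands:
        inventory -= d                 # demand is served first
        order = max(0, S - inventory)  # replenish up to the level S
        inventory += order
        orders.append(order)
    return orders

# Illustrative numbers: starting at the order-up-to level S = 100,
# each order simply matches the previous period's demand.
orders = simulate_order_up_to(S=100, demands=[30, 45, 20], initial_inventory=100)
```

With positive lead times and demand variability, this chasing of demand is exactly what amplifies order variance upstream, i.e. the bullwhip effect the paper seeks to mitigate.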
Patne, K, Shukla, N, Kiridena, S & Tiwari, MK 2018, 'Solving closed-loop supply chain problems using game theoretic particle swarm optimisation', International Journal of Production Research, vol. 56, no. 17, pp. 5836-5853.
In this paper, we propose a closed-loop supply chain network configuration model and a solution methodology that aim to address several research gaps in the literature. The proposed solution methodology employs a novel metaheuristic algorithm, along with the popular gradient descent search method, to aid location-allocation and pricing-inventory decisions in a two-stage process. In the first stage, we use an improved version of the particle swarm optimisation (PSO) algorithm, which we call improved PSO (IPSO), to solve the location-allocation problem (LAP). The IPSO algorithm is developed by introducing mutation to avoid premature convergence and embedding an evolutionary game-based procedure known as replicator dynamics to increase the rate of convergence. The results obtained through the application of IPSO are used as input in the second stage to solve the inventory-pricing problem. In this stage, we use the gradient descent search method to determine the selling price of new products and the buy-back price of returned products, as well as inventory cycle times for both product types. Numerical evaluations undertaken using problem instances of different scales confirm that the proposed IPSO algorithm performs better than the comparable traditional PSO, simulated annealing (SA) and genetic algorithm (GA) methods.
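For readers unfamiliar with PSO, a minimal one-dimensional version of the plain algorithm (not the paper's IPSO variant, which adds mutation and replicator dynamics) might look like this:

```python
import random

def pso(f, lo, hi, n_particles=10, iters=100, w=0.7, c1=1.5, c2=1.5, seed=42):
    """Minimal 1-D particle swarm optimisation: each particle is pulled
    toward its personal best and the swarm's global best position."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]  # positions
    vs = [0.0] * n_particles                                # velocities
    pbest = xs[:]                                           # personal bests
    gbest = min(pbest, key=f)                               # global best
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vs[i] = (w * vs[i]
                     + c1 * r1 * (pbest[i] - xs[i])
                     + c2 * r2 * (gbest - xs[i]))
            xs[i] += vs[i]
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
                if f(xs[i]) < f(gbest):
                    gbest = xs[i]
    return gbest

best = pso(lambda x: (x - 3) ** 2, -10, 10)  # toy objective; minimum at x = 3
```

Premature convergence of this plain scheme on multimodal problems is precisely what motivates the mutation step in the paper's IPSO.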
Shukla, N, Hagenbuchner, M, Win, KT & Yang, J 2018, 'Breast cancer data analysis for survivability studies and prediction', Computer Methods and Programs in Biomedicine, vol. 155, pp. 199-208.
Singh, A, Shukla, N & Mishra, N 2018, 'Social media data analytics to improve supply chain management in food industries', Transportation Research Part E: Logistics and Transportation Review.
This paper proposes a big-data analytics-based approach that considers social media (Twitter) data for the identification of supply chain management issues in food industries. In particular, the proposed approach includes text analysis using a support vector machine (SVM) and hierarchical clustering with multiscale bootstrap resampling. The result of this approach included a cluster of words which could inform supply-chain (SC) decision makers about customer feedback and issues in the flow/quality of food products. A case study in the beef supply chain was analysed using the proposed approach, where three weeks of data from Twitter were used.
Ogie, RI, Shukla, N, Sedlar, F & Holderness, T 2017, 'Optimal placement of water-level sensors to facilitate data-driven management of hydrological infrastructure assets in coastal mega-cities of developing nations', Sustainable Cities and Society, vol. 35, pp. 385-395.
Management decisions, including real-time control of hydrological infrastructure assets such as drainage channels or waterways, floodgates and pumping stations, are crucial for the sustainability of flood-prone coastal mega-cities. The veracity of such crucial flood control decisions depends heavily on the availability of city-wide, real-time water-level data, which is often lacking in developing countries. Smart sensors can reliably provide the required data, but installing one of these devices at every single point in the hydrological network is not economically feasible. This study proposes a methodology for finding optimal locations for the placement of a limited number of water-level sensors, such that the acquired data are most relevant for facilitating informed decisions about management of the flood control infrastructure in different parts of a coastal city. The proposed methodology entails defining a set of optimisation objectives and constraints, which are then assessed computationally at each potential sensor location through topological/connectivity analysis of a city-wide, graph-based hydrological infrastructure network. The computed values are then utilised in an optimisation algorithm (NSGA-II) to determine the optimal locations for the placement of a limited number of sensors. The usefulness of the proposed methodology is demonstrated by deploying water-level sensors in the city of Jakarta.
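The core of NSGA-II-style multi-objective optimisation is non-dominated sorting; a minimal sketch of extracting the Pareto front (assuming minimisation of both objectives; the candidate values are invented for illustration) is:

```python
def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors
    (minimisation): the first sorting step used inside NSGA-II."""
    def dominates(a, b):
        # a dominates b if it is no worse in every objective and
        # strictly better in at least one.
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (cost, coverage-loss) values for candidate sensor layouts.
candidates = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
front = pareto_front(candidates)
```

NSGA-II repeats this sorting to assign ranks to the whole population and adds crowding-distance selection; the sketch shows only the rank-1 front.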
Shukla, N, Keast, JE & Ceglarek, D 2017, 'Role activity diagram-based discrete event simulation model for healthcare service delivery processes', International Journal of Systems Science: Operations and Logistics, vol. 4, no. 1, pp. 68-83.
In the case of health care systems, discrete event simulations (DESs) are useful techniques for identifying problematic process issues. However, currently available simulation models often use a simplified flow chart as input, representing patient flow obtained from on-site observations and interviews complemented with historic patient data. This is insufficient for modelling important interactions between clinical staff, equipment and patients, causing the resultant models to be incomplete and unrealistic, which in turn leads to oversimplified simulation outputs. This paper presents a systematic methodology for developing a DES model from a process mapping model based on role activity diagram (RAD) notation. RADs allow complex collaborative health care service delivery processes to be modelled as roles, interactions, actions and decision questions. The workflow simulation modelling methodology based on RADs includes: (1) developing a RAD model of the service delivery process; (2) building a data model for the RAD-based service delivery process; (3) developing a DES model based on the RAD; and (4) adding dynamic attributes and validating the DES model. The methodology is demonstrated through a case study of the magnetic resonance (MR) scanning process of the radiology department in a large hospital.
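A discrete event simulation of the kind described above is, at its core, an event list processed in time order; a toy single-server sketch (one scanner, fixed service time, illustrative arrival times, far simpler than a RAD-derived model) is:

```python
import heapq

def simulate_single_server(arrivals, service_time):
    """Tiny discrete-event sketch of patients queuing for one scanner:
    process arrivals in time order, start service when the scanner frees
    up, and record each patient's completion time."""
    free_at = 0.0
    completions = []
    events = list(arrivals)
    heapq.heapify(events)            # event list ordered by arrival time
    while events:
        t = heapq.heappop(events)
        start = max(t, free_at)      # wait if the scanner is still busy
        free_at = start + service_time
        completions.append(free_at)
    return completions

# Illustrative arrival times (hours) and a 2-hour scan time.
done = simulate_single_server([0.0, 1.0, 1.5], service_time=2.0)
```

A RAD-based model extends this skeleton with multiple roles (staff, equipment, patients) and the interactions and decision questions between them.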
Shukla, N, Perez, P, Tiwari, MK, Ceglarek, D & Dias, JM 2017, 'Modelling and simulation in health care systems', International Journal of Systems Science: Operations and Logistics, vol. 4, no. 1, pp. 1-3.
Hoang, VP, Shanahan, M, Shukla, N, Perez, P, Farrell, M & Ritter, A 2016, 'A systematic review of modelling approaches in economic evaluations of health interventions for drug and alcohol problems', BMC Health Services Research, vol. 16, no. 127, pp. 1-14.
Background: The overarching goal of health policies is to maximize health and societal benefits. Economic evaluations can play a vital role in assessing whether or not such benefits occur. This paper reviews the application of modelling techniques in economic evaluations of drug and alcohol interventions with regard to (i) modelling paradigms themselves; (ii) perspectives of costs and benefits; and (iii) time frame. Methods: Papers that use modelling approaches for economic evaluations of drug and alcohol interventions were identified by carrying out searches of major databases. Results: Thirty-eight papers met the inclusion criteria. Overall, cohort Markov models remain the most popular approach, followed by decision trees, individual-based models and system dynamics (SD) models. Most of the papers adopted a long-term time frame to reflect the long-term costs and benefits of health interventions. However, it was fairly common among the reviewed papers to adopt a narrow perspective that only takes into account costs and benefits borne by the health care sector. Conclusions: This review paper informs policy makers about the availability of modelling techniques that can be used to enhance the quality of economic evaluations for drug and alcohol treatment interventions.
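A cohort Markov model of the kind this review found most popular advances a population distribution one cycle at a time; a minimal sketch (the states and transition probabilities below are illustrative, not taken from any reviewed study) is:

```python
def markov_step(distribution, transition):
    """One cycle of a cohort Markov model: redistribute the cohort across
    health states using a row-stochastic transition matrix."""
    n = len(distribution)
    return [sum(distribution[i] * transition[i][j] for i in range(n))
            for j in range(n)]

# Hypothetical 3-state model (using, in-treatment, abstinent); the
# transition probabilities are invented for illustration only.
P = [
    [0.7, 0.2, 0.1],
    [0.3, 0.5, 0.2],
    [0.1, 0.1, 0.8],
]
cohort = [1.0, 0.0, 0.0]   # everyone starts in the "using" state
for _ in range(2):         # run two cycles
    cohort = markov_step(cohort, P)
```

An economic evaluation would attach costs and utilities to each state and accumulate them, discounted, over many cycles.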
Namazi-Rad, MR, Mokhtarian, P, Shukla, N & Munoz, A 2016, 'A data-driven predictive model for residential mobility in Australia – A generalised linear mixed model for repeated measured binary data', Journal of Choice Modelling, vol. 20, pp. 49-60.
Household relocation modelling is an integral part of the government planning process, as residential movements influence the demand for community facilities and services. This study addresses the problem of modelling residential relocation choice by estimating a logit-link class model. The proposed model estimates the probability of an event that triggers household relocation. The attributes considered in this study are: requirement for bedrooms, employment status, income status, household characteristics, and tenure (i.e. duration living at the current location). Accurate prediction of household relocations for population units should rely on real-world observations. In this study, longitudinal survey data gathered in the Household, Income and Labour Dynamics in Australia (HILDA) program are used for modelling purposes. The HILDA dataset includes socio-demographic information such as general health and well-being, lifestyle changes, residential mobility, income and welfare dynamics, and labour market dynamics collected from the sampled individuals and households. The technique presented in this paper links possible changes in households' socio-demographic characteristics to the probability of residential relocation by developing a mixed effects discrete-choice logit model (MEDCLM) for longitudinal binary data using the HILDA dataset. The proposed model captures the effect of repeated measurements together with area-specific random effects.
Ritter, A, Shukla, N, Shanahan, M, Phuong, VH, Cao, VL, Perez, P & Farrell, M 2016, 'Building a microsimulation model of heroin use careers in Australia', International Journal of Microsimulation, vol. 9, no. 3, pp. 140-176.
Illicit heroin use is a worldwide problem, with significant health and social costs. Treatment is known to be effective in changing heroin use habits, but it often needs to be provided over a lifetime, with people cycling in and out of treatment. It is therefore important to capture a long-term perspective on heroin use careers. The aim of this project was to build a lifetime microsimulation model of heroin use careers. This paper describes the conceptual logic of the model, the input parameters and the verification and validation results. A microsimulation model was chosen as the most appropriate simulation platform, with 9 states and 111,400 individuals (aged between 18 and 60), each with gender, HIV (human immunodeficiency virus) and HCV (hepatitis C) status, and treatment history. Probabilities associated with crime commission and individually calculated lengths of stay in each state were determined from multiple datasets. The model included costs associated with treatment provision, healthcare services, criminal activity, life years lost, and the family benefit of treatment. The final model represented 42 years of a heroin use career for a cohort based on Australian (New South Wales) data. Individuals cycle into and out of heroin-using states (including abstinence), as well as treatment and prison states. We were able to build a stable, tractable model and verified all parameters. Validation against external data sources revealed high validity. While there are limitations associated with any model, the heroin career model now has the potential to be used for simulations of alternative policy scenarios.
Shukla, N & Kiridena, S 2016, 'A fuzzy rough sets-based multi-agent analytics framework for dynamic supply chain configuration', International Journal of Production Research, vol. Online First, pp. 1-17.
Considering the need for more effective decision support in the context of distributed manufacturing, this paper develops an advanced analytics framework for configuring supply chain networks. The proposed framework utilizes a distributed multi-agent system architecture to deploy fuzzy rough sets-based algorithms for knowledge elicitation and representation. A set of historical sales data, including network node-related information, is used together with the relevant details of product families to predict supply chain configurations capable of fulfilling desired customer orders. Multiple agents such as data retrieval agent, knowledge acquisition agent, knowledge representation agent, configuration predictor agent, evaluator agent and dispatching agent are used to help execute a broad spectrum of supply chain configuration decisions. The proposed framework considers multiple product variants and sourcing options at each network node, as well as multiple performance objectives. It also captures decisions that span the entire supply chain simultaneously and, by implication, represents multiple network links. Using an industry test case, the paper demonstrates the effectiveness of the proposed framework in terms of fulfilling customer orders with lower production and emissions costs, compared to the results generated using existing tools.
Shukla, N, Perez, P, Tiwari, MK, Ceglarek, D & Dias, JM 2016, 'Editorial: Modelling and simulation in health care systems', International Journal of Systems Science: Operations & Logistics, vol. Online First, pp. 1-2.
Increasingly, changes in population demography, technological and medical advancements, and other factors have affected the paradigm of health and social care systems worldwide. These changes have a direct effect on the organisation and operation of health care systems, whether they are hospitals, general practices or long-term care providers. An efficient and effective health care system is crucial for a high quality of life in society. In recent times, the major challenges faced by health care systems have been accurate diagnosis, operational issues (such as bottlenecks, low throughput and low resource utilisation), hospital redesign, workforce planning and scheduling, streamlining of patient flow, performance management, disease monitoring, and health care technology assessment. Over the last 10 years, operations research and management science scholars have applied their innovative techniques and knowledge to improve health care systems. However, the field still has many untouched and unresolved issues requiring attention. In addition, many techniques that have been successfully implemented and tested in other sectors can be employed in this area for major improvements.
Tyagi, S, Shukla, N & Kulkarni, S 2016, 'Optimal design of fixture layout in a multi-station assembly using highly optimized tolerance inspired heuristic', Applied Mathematical Modelling: simulation and computation for engineering and environmental systems, vol. 40, no. 11-12, pp. 6134-6147.
The multi-station assembly (MSA) process requires auxiliary devices such as fixtures and clamps to accurately locate and firmly hold the workpiece in a desired position. Improper positioning of these fixtures and clamps affects the dimensional integrity of the final product. This study determines the optimal design of a fixture layout that minimizes the product dimensional variations caused by the manhandling and aging of auxiliaries. In order to model variation propagation from one assembly station to another in the MSA, a state space model is employed. Further, an E-optimality-based sensitivity criterion is proposed to mathematically formulate and measure the quality of the fixture layout design. In order to solve the mathematical formulation, a highly optimized tolerance inspired heuristic is proposed. The proposed approach takes its governing traits from the local incremental algorithm (LIA), which was initially exploited to maximize the design parameter (yield) in the percolation model. The LIA, analogous to the evolution-by-natural-selection schema, assists in suitably exploring the search space of the underlying problem. The assembly of a Sports Utility Vehicle side frame has been used to illustrate the concepts and test the performance of the proposed solution methodology. Further, the robustness of the proposed heuristic is demonstrated by comparing its results with those obtained from the Basic Exchange Algorithm used in the literature.
Ulutas, A, Shukla, N, Kiridena, S & Gibson, P 2016, 'A utility-driven approach to supplier evaluation and selection: Empirical validation of an integrated solution framework', International Journal of Production Research, vol. 54, no. 5, pp. 1554-1567.
Supplier evaluation and selection (SES) problems have long been studied, leading to the development of a wide range of individual and hybrid models for solving them. However, the lack of widespread diffusion of existing SES models in industry points to a need for simpler models that can systematically evaluate both qualitative and quantitative attributes of potential suppliers while enhancing the flexibility decision-makers need to account for relevant situational factors. Furthermore, empirical validations of existing SES models have been few and far between. With a view to addressing these issues, this paper proposes an integrated solution framework that can be used to evaluate both tangible and intangible attributes of potential suppliers. The proposed framework combines three individual methods, namely the fuzzy analytic hierarchy process, fuzzy complex proportional assessment and fuzzy linear programming. The framework is validated through application in a Turkish textile company. The results generated using the proposed framework are compared with actual historical data collected from the company. Additionally, a feasibility assessment is conducted on the sample supplier selection criteria employed, as well as an assessment of the results generated using the proposed model.
Sangi, M, Win, KT, Shirvani, F, Namazi-Rad, M & Shukla, N 2015, 'Applying a novel combination of techniques to develop a predictive model for diabetes complications', PLoS One, vol. 10, no. 4, pp. e0121569-1-e0121569-22.
Shukla, N, Keast, JE & Ceglarek, D 2015, 'Role activity diagram-based discrete event simulation model for healthcare service delivery processes', International Journal of Systems Science, vol. Online First, pp. 1-16.
In the case of healthcare systems, discrete event simulation is a useful technique for identifying problematic process issues. However, currently available simulation models often use a simplified flow chart as an input, representing patient flow obtained from on-site observations and interviews complemented with historic patient data. This is insufficient for modelling important interactions between clinical staff, equipment and patients, causing the resultant models to be incomplete and unrealistic, which in turn leads to oversimplified outputs from any simulations. This paper presents a systematic methodology for the development of a discrete event simulation (DES) model from a process mapping model based on Role Activity Diagram (RAD) notation. RADs allow complex collaborative healthcare service delivery processes to be modelled as roles, interactions, actions, and decision questions. The workflow simulation modelling methodology based on RADs includes: (i) development of a RAD model of the service delivery process; (ii) a data model for the RAD-based service delivery process; (iii) development of a DES model based on the RAD; and (iv) adding dynamic attributes and validating the DES model. The methodology is demonstrated through a case study of the magnetic resonance (MR) scanning process of the radiology department in a large hospital.
Shukla, N, Lahiri, S & Ceglarek, D 2015, 'Pathway variation analysis (PVA): modelling and simulations', Operations Research for Health Care, vol. In press.
Maintaining a care pathway within a hospital to provide complex care to patients is associated with challenges related to variations from the pathway. These occur due to ineffective decision-making processes, unclear process steps and interactions, conflicting performance measures for speciality units, and the availability of resources. Such variations from the care pathway or standard care delivery processes lead to longer patient waiting times and lower patient throughput. Traditional approaches to improving the pathway focus primarily on reducing variations within the care pathway, such as bottlenecks or throughput, rather than examining variations from the care pathway. In this study, we propose a novel methodology, called pathway variation analysis (PVA), to identify, simulate and analyse variations from patient care pathways. The PVA method combines a patient ward-level journey dataset with qualitative staff interviews to simulate patient diversions from the care pathway, modelling hospital operational parameters, the accuracy of clinical decisions and the performance measures of the speciality units involved. The proposed methodology has been applied to the stroke care services of a hospital, which increased their key performance indicator from 73% to 84.97%. The proposed methodology can be applied to other care pathway settings to reduce patient diversion from the care pathway.
Shukla, N, Wickramasuriya, R, Miller, A & Perez, P 2015, 'An approach to plan and evaluate the location of radiotherapy services and its application in the New South Wales, Australia', Computer Methods and Programs in Biomedicine, vol. 122, no. 2, pp. 245-256.
This paper proposes an integrated modelling approach for location planning of radiotherapy treatment services based on cancer incidence and road network-based accessibility. Previous research efforts have established travel distance/time barriers as a key factor affecting access to cancer treatment services, and epidemiological studies have shown that cancer incidence rates vary with population demography. Our study is built on the evidence that the travel distances to treatment centres and the demographic profiles of the accessible regions greatly influence the uptake of cancer radiotherapy (RT) services. An integrated service planning approach that combines spatially explicit cancer incidence projections with the placement of new RT services based on road network-based accessibility measures has never been attempted. This research presents a novel approach for the location planning of RT services, and demonstrates its viability by modelling cancer incidence rates for different age-sex groups in New South Wales, Australia based on observed cancer incidence trends, and by estimating road network-based access to current NSW treatment centres. Using three indices (General Efficiency, Service Availability and Equity), we show how the best location for a new RT centre may be chosen when there are multiple competing locations.
Singh, A, Mishra, N, Ali, SI, Shukla, N & Shankar, R 2015, 'Cloud computing technology: reducing carbon footprint in beef supply chain', International Journal of Production Economics, vol. 164, pp. 462-471.
Global warming is an alarming issue for the whole of humanity. The manufacturing and food supply chains are contributing significantly to large-scale carbon emissions. The beef supply chain is one of the segments of the food industry having a considerable carbon footprint throughout its supply chain. The major emissions occur at beef farms in the form of methane and nitrous oxide gases. The other carbon hotspots in the beef supply chain are the abattoir, processor, logistics and retailer. There is a huge amount of pressure from government authorities on all business firms to cut down carbon emissions. The different stakeholders of the beef supply chain, especially small and medium-sized stakeholders, lack the technical and financial resources to optimize and measure carbon emissions at their end. There is no integrated system which can address this issue for the entire beef supply chain. Keeping the same in mind, in this paper an integrated system is proposed using Cloud Computing Technology (CCT) where all stakeholders of the beef supply chain can minimize and measure carbon emission at their end within reasonable expenses and infrastructure. The integrated approach of mapping the entire beef supply chain by a single cloud will also improve the coordination among its stakeholders. The system boundary of this study runs from beef farms to the retailer, involving logistics, abattoir and processor in between. The efficacy of the proposed system is demonstrated in a simulated case study.
Shukla, N, Keast, J & Ceglarek, D 2014, 'Improved workflow modelling using role activity diagram-based modelling with application to a radiology service case study', Computer Methods and Programs in Biomedicine, vol. 116, no. 3, pp. 274-298.
The modelling of complex workflows is an important problem-solving technique within healthcare settings. However, most current workflow models use a simplified flow chart of patient flow obtained using on-site observations, group-based debates and brainstorming sessions, together with historic patient data. This paper presents a systematic and semi-automatic methodology for knowledge acquisition with detailed process representation, using sequential interviews of people in the key roles involved in the service delivery process. The proposed methodology allows the modelling of the roles, interactions, actions, and decisions involved in the service delivery process. This approach is based on protocol generation and analysis techniques such as: (i) initial protocol generation based on qualitative interviews of radiology staff, (ii) extraction of key features of the service delivery process, (iii) discovery of the relationships among the key features extracted, and (iv) a graphical representation of the final structured model of the service delivery process. The methodology is demonstrated through a case study of a magnetic resonance (MR) scanning service-delivery process in the radiology department of a large hospital. A set of guidelines is also presented to visually analyse the resulting process model for identifying process vulnerabilities. A comparative analysis of different workflow models is also conducted.
Shukla, N, Keast, J & Ceglarek, D 2014, 'Modelling variations in hospital service delivery based on real time locating information', Applied Mathematical Modelling: simulation and computation for engineering and environmental systems, vol. 38, no. 3, pp. 878-893.
Variations in service delivery have been identified as a major challenge to the success of process improvement studies in hospital service departments such as radiology. Largely, these variations are due to inherent system-level factors, i.e., system variations such as the unavailability of resources (nurses, beds, doctors, and equipment). These system variations are largely unnecessary or unwarranted and mostly lead to longer waiting times, delays, and lowered productivity of the service units. There is limited research on identifying system variations and modelling them for service improvement within hospitals. Therefore, this paper proposes a modelling methodology for system variations in radiology based on real time locating system (RTLS) tracking data. The methodology employs concepts from graph theory to identify and represent system variations. In particular, edge coloured directed multi-graphs (ECDMs) are used to model system variations reflected in the paths adopted by staff, i.e., the sequence of rooms/areas traversed while delivering services. The main steps of the methodology are: (i) identifying the most standard path followed by staff for service delivery; (ii) filtering redundant events in the RTLS tracking database for analysis; (iii) identifying the rooms/areas of the hospital site involved in the service delivery; (iv) determining the patterns of paths adopted by staff from the filtered tracking database; and (v) representing these patterns in the ECDM of a role. A case study of an MR scanning process is used to illustrate the implementation of the proposed methodology for modelling system variations reflected in the paths adopted by staff.
Verma, A, Shukla, N, Tyagi, S & Mishra, N 2014, 'Stochastic modeling and optimization of multi-plant capacity planning problem', International Journal of Intelligent Engineering Informatics, vol. 2, no. 2 & 3, pp. 139-165.
In this paper the problem of capacity planning under risk from demand and price/cost uncertainty of the finished products is addressed. The deterministic model is extended into a two-stage stochastic model with fixed recourse, with various expected levels of demand treated as random. A recourse penalty is also included in the objective for both shortage and surplus in the finished products. The model is analyzed to quantify the risk using the Markowitz mean-variance model.
Shukla, N, Ceglarek, D & Tiwari, MK 2013, 'Key characteristics-based sensor distribution in multi-station assembly processes', Journal of Intelligent Manufacturing, vol. 26, no. 1, pp. 43-58.
This paper presents a novel approach for optimal key characteristics-based sensor distribution in a multi-station assembly process, for the purpose of diagnosing variation sources responsible for product quality defects in a timely manner. Current approaches for sensor distribution are based on the assumption that measurement points can be allocated at arbitrary locations on the part or subassembly. This not only presents challenges in the implementation of these approaches but also prevents the required product assurance and quality control standards from being integrated with them, due to the lack of explicit relations between measured features and geometric dimensioning and tolerancing (GD&T). Furthermore, it causes difficulty in the calibration of the measurement system and increases the likelihood of measurement error due to the introduction of measurement points not defined in GD&T. In the proposed approach, we develop a methodology for optimal sensor allocation for 6-sigma root cause analysis that maximizes the number of measurement points placed at critical design features called Key Characteristics (KCs), which are classified into Key Product Characteristics and Key Control Characteristics and represent critical product and process design features, respectively. In particular, KCs have defined dimensional and geometric tolerances, which provide the necessary design reference model for process control and the diagnosis of product 6-sigma variation faults. The proposed approach allows the minimum required production system 6-sigma diagnosability to be obtained. A feature-based procedure is proposed which combines a Genetic Algorithm-based approach (allowing pre-defined KCs as the measurement points) with state-of-the-art approaches (unrestricted location of measurement points) to iteratively include arbitrary measurement points together with KCs in the final sensor layout. A case study of automotive assembly processes is used to illustrate the proposed feature-based approach.
Shukla, N, Choudhary, AK, Prakash, P, Fernandes, KJ & Tiwari, MK 2013, 'Algorithm portfolios for logistics optimization considering stochastic demands and mobility allowance', International Journal of Production Economics, vol. 141, no. 1, pp. 146-166.
The vehicle routing problem with stochastic demand (VRPSD) is a well known NP-hard problem. The uncharacteristic behaviour associated with the problem increases the computational effort required to obtain a feasible and near-optimal solution. This paper proposes an algorithm portfolio methodology based on evolutionary algorithms, which takes into account the stochastic nature of customer demand to solve this computationally complex problem. Such problems are well known to have computationally complex objective functions, which make their solutions hard to find, particularly when problem instances of large dimensions are considered. Of particular importance in such situations is the timeliness of the solution. For example, Apple was forced to delay its international shipments of iPads due to unprecedented demand and issues with the delivery systems of Samsung Electronics and Seiko Epson. Such examples illustrate the importance of stochastic customer demands and the timing of delivery. Moreover, most evolutionary algorithms, known for providing computationally efficient solutions, are unable to always provide optimal or near-optimal solutions to all VRPSD instances within the allocated time interval. This is due to the characteristic variations in the computational time taken by evolutionary algorithms for the same or varying sizes of VRPSD instances. Therefore, this paper presents portfolios of different evolutionary algorithms to reduce the computational time taken to solve the VRPSD. Moreover, an innovative concept of the mobility allowance (MA) in land moves based on the Lévy distribution function has been introduced to cope with real situations existing in vehicle routing problems. The proposed portfolio approach has been evaluated on varying instances of the VRPSD. Four existing metaheuristics, including Genetic Algorithm (GA), Simulated Annealing (SA), Artificial Immune System (AIS) and TABU Search (TS), along with new neighbourho...
Shukla, N, Dashora, Y, Tiwari, MK & Shankar, R 2013, 'Design of Computer Network Topologies: A Vroom Inspired Psychoclonal Algorithm', Applied Mathematical Modelling: simulation and computation for engineering and environmental systems, vol. 37, no. 3, pp. 888-902.
In the prevailing era of network and communication technology, the problem of determining the most economic way to interconnect nodes while satisfying reliability and quality-of-service constraints has been recognised as one of the most intricate and challenging problems for modern-day researchers and practitioners in the communication and networking community. Motivated by the improved performance of concepts such as proliferation, affinity maturation and receptor editing over the more prevalent generalized crossover and mutation, by the application and effectiveness of Maslow's need hierarchy in combinatorial optimization, and by the more logical motivational concepts provided by Vroom's Valence-Expectancy theory, the authors have proposed and investigated their application to the topological design of distributed packet-switched networks. Extensive computations over problems of varying complexity and dimension demonstrate the superiority of the proposed methodology. It has been observed that the proposed Vroom Inspired Psychoclonal Algorithm (VIPA) outperforms traditional, well-established random search algorithms (i.e. Genetic Algorithm, Simulated Annealing and Artificial Immune Systems) in the context of the underlying problem, with the performance gap increasing as problem complexity increases.
Shukla, N, Tiwari, MK & Ceglarek, D 2013, 'Genetic-algorithms-based algorithm portfolio for inventory routing problem with stochastic demand', International Journal of Production Research, vol. 51, no. 1, pp. 118-137.
This paper presents an algorithm portfolio methodology based on evolutionary algorithms to solve complex dynamic optimization problems. These problems are known to have computationally complex objective functions which make their solutions computationally hard to find when problem instances of large dimensions are considered, due to the inability of individual algorithms to provide optimal or near-optimal solutions within the allocated time interval. Therefore, this paper employs a bundle of evolutionary algorithms (EAs), tied together with several processors and known as an algorithm portfolio, to solve a complex optimization problem: the inventory routing problem (IRP) with stochastic demands. The EAs considered for the algorithm portfolios are the genetic algorithm (GA) and four of its variants: the memetic algorithm (MA), genetic algorithm with chromosome differentiation (GACD), age genetic algorithm (AGA), and gender-specific genetic algorithm (SGA). In order to illustrate the applicability of the proposed methodology, a generic method for algorithm portfolio design, evaluation, and analysis is discussed in detail. Experimentation has been performed on varying dimensions of IRP instances to validate different properties of the algorithm portfolio. A case study was conducted to illustrate that the set of EAs allocated to a certain number of processors performed better than their individual counterparts.
Ulutas, A, Kiridena, S, Gibson, P & Shukla, N 2012, 'A novel integrated model to measure supplier performance considering qualitative and quantitative criteria used in the supplier selection process', International Journal of Logistics and SCM Systems, vol. 6, no. 1, pp. 57-70.
Supplier evaluation has become a significant topic over the past few decades, as companies have become more outsourcing-oriented. However, previous research on this topic has not paid adequate attention to the limitations associated with the availability of accurate and reliable data relating to the performance of potential suppliers. In an attempt to address this issue, this paper proposes a novel supplier evaluation model that can handle imprecise quantitative and qualitative data. Additionally, the decision maker's judgements regarding both qualitative and quantitative criteria are incorporated into this model so that a more comprehensive and realistic assessment of supplier performance can be achieved. The model combines five separate methods that have specific capabilities to handle multiple limitations in the existing methods: first, the Fuzzy Analytical Hierarchy Process and the Fuzzy TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) method are used to analyse qualitative criteria/data; second, the Analytical Hierarchy Process and Axiomatic Design are used to analyse quantitative criteria/data, with a particular focus on handling variability in performance data; and third, Data Envelopment Analysis is used to integrate the results of the two approaches above to arrive at a comparative assessment of supplier performance. The proposed integrated model is verified using a numerical example.
Nagalakshmi, MR, Tripathi, M, Shukla, N & Tiwari, M 2009, 'Vehicle routing problem with stochastic demand (VRPSD): optimisation by neighbourhood search embedded adaptive ant algorithm (ns-AAA)', International Journal of Computer Aided Engineering and Technology, vol. 1, no. 3, pp. 300-321.
Taking into account real world applications, this paper considers a vehicle routing problem with stochastic demand (VRPSD) in which the customer demand is modelled as a stochastic variable. Considering the computational complexity of the problem, and to enhance algorithm performance, a neighbourhood search embedded adaptive ant algorithm (ns-AAA) is proposed as an improvement to the existing ant colony optimisation. The proposed metaheuristic adapts itself to maintain an adequate balance between exploitation and exploration throughout the run of the algorithm. The performance of the proposed methodology is benchmarked against a set of test instances that were generated using design of experiment (DOE) techniques. In addition, analysis of variance (ANOVA) is performed to determine the impact of various factors on the objective function value. The robustness of the proposed algorithm is validated against ant colony optimisation and a genetic algorithm, over which it consistently demonstrated better results on the concerned problem.
Shukla, M, Shukla, N, Tiwari, MK & Chan, FTS 2009, 'Integrated model for the batch sequencing problem in a multi-stage supply chain: an artificial immune system based approach', International Journal of Production Research, vol. 47, no. 4, pp. 1015-1037.
In this paper a mathematical model for the batch sequencing problem in a multistage supply chain is developed by taking into account three practically important objectives, viz. minimization of lead time, blocking time and due date violation. Attribute dependent operation time, sequence dependent setup time, different due dates, different lot sizes for batches and variable time losses due to interaction among several stages like waiting, idling, and blocking are also considered in the model. The problem is combinatorial in nature and complete enumeration of all its possibilities is computationally prohibitive. Therefore, a metaheuristic, artificial immune system (AIS) is employed to find an optimal/near optimal solution. In order to test the efficacy of AIS in solving the problem, its implementation on four different problems has been studied. Further, the comparative analysis of the results obtained by implementing AIS, genetic algorithm (GA) and simulated annealing (SA) on the proposed model reveals that AIS outperforms GA and SA in solving the underlying problem.
Dashora, Y, Kumar, S, Shukla, N & Tiwari, MK 2008, 'Improved and generalized learning strategies for dynamically fast and statistically robust evolutionary algorithms', Engineering Applications of Artificial Intelligence, vol. 21, no. 4, pp. 525-547.
This paper characterizes general optimization problems into four categories based on the solution representation schemes, as they have been the key to the design of various evolutionary algorithms (EAs). Four EAs have been designed for different formulations with the aim of utilizing similar and generalized strategies for all of them. Several modifications to the existing EAs have been proposed and studied. First, a new tradeoff function-based mutation has been proposed that takes advantages of Cauchy, Gaussian, random as well as chaotic mutations. In addition, a generalized learning rule has also been proposed to ensure more thorough and explorative search. A theoretical analysis has been performed to establish the convergence of the learning rule. A theoretical study has also been performed in order to investigate the various aspects of the search strategy employed by the new tradeoff-based mutations. A more logical parameter tuning has been done by introducing the concept of orthogonal arrays in the EA experimentation. The use of noise-based tuning ensures the robust parameter tuning that enables the EAs to perform remarkably well in the further experimentations. The performance of the proposed EAs has been analyzed for different problems of varying complexities. The results prove the supremacy of the proposed EAs over other well-established strategies given in the literature.
Shukla, N, Tiwari, MK & Shankar, R 2008, 'Optimal sensor distribution for multi-station assembly process using chaos-embedded fast-simulated annealing', International Journal of Production Research, vol. 47, no. 1, pp. 187-211.
This paper presents a novel methodology for the allocation of sensors in multi-station assembly processes. It resolves two core issues: the determination of the optimal number of sensors to be employed and their best locations. To make the traditional approach more effective, the effect of noise on sensor placement is minimized by maximizing the determinant of the Fisher information matrix. A state-space approach is adopted to model the variation propagation pertaining to the transfer of parts in a given multi-station assembly process. Further, the objective function conceived improves on other contributions by incorporating the effect of noise coupled with the sensors. Moreover, a new algorithm is developed to optimize the newly formulated objective function. The proposed algorithm combines chaotic sequences with traditional evolutionary fast simulated annealing (EFSA), hence it is termed chaos-embedded fast-simulated annealing (CEFSA). It can find the optimal sensor distribution with the minimum effect of noise in the sensor data. This paper reports on the conceptual work which underpins the research, and also presents details of a numerical example carried out in an industrial context to test the efficacy of the proposed algorithm. Further analysis reveals that the proposed approach obtains an optimal distribution of sensors and offers more generic results compared with previous analyses.
Bachlaus, M, Shukla, N, Tiwari, MK & Shankar, R 2006, 'Optimization of system reliability using chaos-embedded self-organizing hierarchical particle swarm optimization', Institution of Mechanical Engineers. Proceedings. Part O: Journal of Risk and Reliability, vol. 220, no. 2, pp. 77-91.
This paper addresses a reliability optimization problem whose objective is to select the best components for series and series–parallel systems such that system reliability is maximized while the cost, weight, and volume are simultaneously minimized. Previous formulations of the problem carry implicit restrictions: they either maximize system reliability or minimize cost. Thus, to give the model a more realistic view, a comprehensive objective function has been formulated by combining the normalized values of reliability, cost, weight, and volume. In this paper, a chaos-embedded hierarchical particle swarm optimization (CE-HPSO) algorithm is proposed to solve the problems arising in the optimization of system reliability using redundancy. The salient features of the proposed algorithm are the use of chaotic sequences and time-varying acceleration coefficients, which are responsible for diversifying the search. Moreover, to prevent premature convergence, a hierarchical particle swarm optimizer has been used in the proposed algorithm. The performance of the CE-HPSO algorithm has been tested on three benchmark problems, with comparisons made against genetic algorithm results. To check the scalability of the proposed solution methodology, both small and large problems are considered. The results demonstrate the benefits of the proposed algorithm for solving this type of problem.
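The time-varying acceleration coefficients (TVAC) mentioned in the abstract can be sketched in a minimal particle swarm loop: the cognitive coefficient decays while the social coefficient grows, shifting the swarm from exploration to exploitation. The hierarchical swarm structure and chaotic sequences of CE-HPSO are omitted here, and all parameter values are illustrative assumptions.

```python
import random

def pso_tvac_sketch(objective, dim=2, n=20, iters=200):
    """Minimal particle swarm with time-varying acceleration
    coefficients (TVAC) on a continuous minimisation problem."""
    random.seed(1)
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=objective)[:]
    for t in range(iters):
        # Cognitive coefficient decays 2.5 -> 0.5, social grows 0.5 -> 2.5.
        c1 = 2.5 - 2.0 * t / iters
        c2 = 0.5 + 2.0 * t / iters
        w = 0.9 - 0.5 * t / iters  # linearly decreasing inertia weight
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
                if objective(pbest[i]) < objective(gbest):
                    gbest = pbest[i][:]
    return gbest

# Usage: sphere function, minimum at the origin.
sol = pso_tvac_sketch(lambda x: sum(v * v for v in x))
```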
Most landlocked developing countries (LLDCs), such as Mongolia, suffer economically due to their geographical location, lack of access to seaports, and underdeveloped infrastructure. Political influences and cross-border delays add to the challenges under which Mongolian firms involved in trade operate. However, recent changes in the political atmosphere of the Northeast Asian region have encouraged firms to conduct trade through advanced logistics designs. This chapter discusses a multi-method simulation approach using Anylogic software as one of the few approaches that can model end-to-end cross-border trade logistics in Mongolia with a view to improving trade opportunities and operations. Successful implementation of this method could significantly improve the effectiveness of the supply chain networks and trade logistics of LLDCs with similar geographical and political attributes.
Shukla, N, Ma, J, Wickramasuriya, R, Huynh, N & Perez, P 2016, 'Modelling mode choice of individual in linked trips with artificial neural networks and fuzzy representation' in Artificial Neural Network Modelling, Switzerland, pp. 405-422.
Traditional mode choice models consider the travel modes of an individual's consecutive trips to be independent. However, a person's choice of travel mode for a trip is likely to be affected by the mode choices of previous trips, particularly when it comes to car driving. Furthermore, traditional travel mode choice models involve discrete choice models, largely derived from expert knowledge, to build rules or heuristics. These approaches rely heavily on a predefined model structure (a utility model) and constrain it to hold across an entire series of historical observations. These studies also assume that the travel diaries of individuals in travel survey data are complete, which seldom occurs. Therefore, in this chapter, we propose a data-driven methodology combining artificial neural networks (ANNs) and fuzzy sets (to represent historical knowledge in a more intuitive way) to model travel mode choices. The proposed methodology models and analyses the travel mode choice of an individual trip and its influence on an individual's consecutive trips. The methodology is tested using Household Travel Survey (HTS) data for the Sydney metropolitan area, and its performance is compared with state-of-the-art approaches such as decision trees. Experimental results indicate that the proposed methodology with ANNs and fuzzy sets can effectively improve the accuracy of travel mode choice prediction.
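The fuzzy-set side of such a methodology can be sketched briefly: a crisp feature such as departure time is converted into degrees of membership in overlapping linguistic categories before being fed to a classifier. The category names and boundaries below are illustrative assumptions, not those used in the chapter.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: rises over [a, b], falls over [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify_departure_hour(hour):
    """Encode a departure hour as memberships in overlapping
    linguistic categories (hypothetical boundaries)."""
    return {
        "morning_peak": triangular(hour, 6, 8, 10),
        "midday": triangular(hour, 9, 13, 17),
        "evening_peak": triangular(hour, 16, 18, 20),
    }

# A 7:30 am departure belongs mostly to the morning peak.
m = fuzzify_departure_hour(7.5)
```

The resulting membership vector, rather than the raw hour, would form part of the ANN's input, which is what lets the model treat "just before the peak" and "at the peak" as similar rather than categorically different.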
Shukla, N & Prakash, PKS 2011, 'Multiple fault diagnosis using psycho‐clonal algorithms' in Evolutionary computing in advanced manufacturing, Hoboken, N.J, pp. 235-258.
Multiple Fault Diagnosis (MFD) is an effective way to tackle the problems of a real shop floor environment in order to reduce the total lifetime maintenance costs of the system. It is a well-known computationally complex problem, where computational complexity increases exponentially with the number of faults; thus, it warrants the application of heuristic techniques or artificial intelligence (AI) based optimization tools to diagnose the exact faults in real time. In this chapter, a methodology based on a probabilistic causal model is illustrated to resolve graph-based multiple fault diagnosis problems. This methodology involves a new nature-inspired algorithm known as the psycho-clonal algorithm for fault diagnosis. In the proposed methodology, we collect the faults corresponding to each observed manifestation that can give the best possible result instead of finding all possible combinations of faults. Intensive computational experiments on well-known data sets demonstrate the superiority of the proposed psycho-clonal algorithm over earlier approaches in the literature. From the experimental results, it is observed that the proposed methodology can diagnose the exact fault in the minimum fault isolation time compared to other approaches.
Prakash, A, Shukla, N, Tiwari, M & Shankar, R 2008, 'Solving machine loading problem of FMS: an artificial intelligence (AI) based random search optimization approach' in Handbook of Computational Intelligence in Manufacturing and Production Management., US, pp. 19-43.
Jayanthakumaran, M, Shukla, N & Beydoun, G 2019, 'Impact of disruption in currency supply chain on farming operations in India: A social media analytics approach', 26th International European Operations Management Conference, Finland.
Dharmapriya, S, Kiridena, S & Shukla, N 2018, 'Modelling sustainable supply networks with adaptive agents', International Conference on Production and Operations Management Society, Peradeniya, Sri Lanka.
Dharmapriya, S, Kiridena, S & Shukla, N 2018, 'Modeling supply network configuration problems with varying demand profiles', 2018 IEEE Technology and Engineering Management Conference (TEMSCON 2018), IEEE, Evanston, IL, USA.
In this paper, we develop a novel multi-objective modeling approach to support supply network configuration decisions while considering varying demand profiles. In so doing, we illustrate how such an approach could contribute to building supply network robustness and resilience. The proposed model entails two key objectives: minimizing lead time and minimizing cost across the supply network. The solution approach first employs a bidding mechanism to select, from a candidate pool, a set of supply network entities that match a given demand profile. It then applies the well-known Non-dominated Sorting Genetic Algorithm-II to generate a set of Pareto-optimal solutions representing alternative supply network configurations. The proposed model is tested on a case study of a refrigerator supply network to draw delivery time and cost comparisons under static and dynamic demand profiles.
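The Pareto-ranking step at the heart of NSGA-II can be sketched in a few lines for the two objectives named in the abstract. A full NSGA-II adds non-dominated sorting into successive fronts, crowding-distance ranking, and genetic operators; the (lead time, cost) pairs below are hypothetical.

```python
def dominates(a, b):
    """a dominates b if it is no worse on every objective
    (lead time, cost; both minimised) and strictly better on one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(solutions):
    """Return the non-dominated set: the first front of NSGA-II's
    ranking step."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o != s)]

# Hypothetical (lead_time_days, cost) pairs for candidate configurations.
configs = [(10, 120), (8, 150), (12, 100), (9, 140), (11, 130)]
front = pareto_front(configs)
# (11, 130) is dominated by (10, 120): slower AND more expensive.
```

Each point on the front is an alternative supply network configuration; the decision maker, not the algorithm, chooses the final trade-off between delivery time and cost.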
Michal, G, Huynh, N, Shukla, N, Munoz, A & Barthelemy, J 2017, 'RailNet: a simulation model for operational planning of rail freight', Transportation Research Procedia, Elsevier, Netherlands, pp. 461-473.
In many rail networks, infrastructure constraints force the shared usage of lines between passenger and freight movements. Scheduling additional freight movements around existing passenger services and peak traffic based curfews presents significant challenges to commodity industries eager to increase export volumes. This paper addresses the problem of inserting additional freight movements in a constrained railway network. To this end, a railway operations planning model was developed to simulate and insert feasible rail movements in a non-periodic timetable. The simulation modelling platform developed in this paper is called RailNet, which simulates the existing railway network constraints and is capable of adding freight paths for planning and scheduling. The timetable for passenger trains is kept unchanged. The paper also reports a real case study in which RailNet was used to quantify the capacity of the track network at the Port Kembla Coal Terminal in New South Wales, Australia under different scenarios of infrastructure upgrades.
Shukla, N, Mishra, N & Singh, A 2017, 'Understanding the Food Supply Chain using Social Media Data Analysis', 7th International Conference on Advances in Information Mining and Management (IMMM), IARIA, Venice, Italy.
This paper proposes a big data analytics approach that uses social media (Twitter) data to identify supply chain management issues in food industries. In particular, the proposed approach includes: (i) capturing relevant tweets based on keywords; (ii) pre-processing the raw tweets; and (iii) text analysis using a support vector machine (SVM) and hierarchical clustering with multiscale bootstrap resampling. The results of this approach include clusters of words that can inform supply chain (SC) decision makers about customer feedback and issues in the flow and quality of food products. A case study of the beef supply chain was analysed using the proposed approach on three weeks of Twitter data. The results indicate that the proposed text analytics approach can efficiently identify and summarise crucial customer feedback for supply chain management.
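Steps (ii) and (iii) can be sketched with a minimal tweet pre-processing routine and the cosine similarity that typically underlies word or document clustering. The stopword list, example tweets, and function names are illustrative assumptions; the paper's pipeline (SVM classification, multiscale bootstrap resampling) is substantially richer.

```python
import math
from collections import Counter

def preprocess(tweet, stopwords=frozenset({"the", "a", "is", "at", "of"})):
    """Step (ii) sketch: drop @mentions, strip '#', lowercase,
    and remove stopwords."""
    tokens = [t.strip("#").lower() for t in tweet.split()
              if not t.startswith("@")]
    return [t for t in tokens if t and t not in stopwords]

def cosine(a, b):
    """Cosine similarity between term-frequency vectors, a common
    distance basis for hierarchical clustering of texts."""
    va, vb = Counter(a), Counter(b)
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical customer-feedback tweets about beef quality.
t1 = preprocess("@user The beef quality at the store is poor #beef")
t2 = preprocess("Poor beef quality again #supplychain")
sim = cosine(t1, t2)
```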
Dharmapriya, USS, Kiridena, SB & Shukla, N 2016, 'A review of supply network configuration literature and decision support tools', 2016 International Conference on Industrial Engineering and Engineering Management, Danvers, United States, pp. 149-153.
The supply chain literature highlights the increasing importance of effective supply network configuration decisions that take into account such realities as market turbulence and demand volatility, as well as ever-expanding global production networks. Supply network configuration decisions that account for these contingencies are expected to meet the evolving needs of customers while delivering better outcomes for all parties involved. This paper presents the findings of a structured review of the supply network configuration literature, synthesized under two categories: the drivers of supply network configuration decisions and the key parameters considered in developing decision support tools. The review also includes an evaluation of the tools used to support supply network configuration decisions. The paper identifies areas for future research, as well as the decision support tools required for building supply network capacity to meet the challenges brought about by the changing business environment.
Patil, R, Shukla, N & Kiridena, S 2016, 'Simulation-based evaluation of an integrated planning and scheduling algorithm for maintenance projects', 23rd EurOMA Conference, pp. 1-10.
The field of maintenance project planning and scheduling is attracting increasing attention due to ever-growing competition among manufacturing organisations. Few studies have tackled all the aspects of maintenance project implementation, such as costs, resources, downtimes, uncertainties, and operational constraints. Therefore, an approach is proposed that uses a unitary structuring method and discrete event simulation to integrate the relevant data about maintenance projects. The results of an evaluation on a case from the paper-pulp industry show that the proposed approach is able to overcome most of the issues of maintenance planning and scheduling.
Shukla, N, Cao, VL, Phuong, VH, Shanahan, M, Ritter, A & Perez, P 2016, 'Individual-level simulation model for cost benefit analysis in healthcare', Proceedings, 30th European Conference on Modelling and Simulation, ECMS 2016, United States, pp. 138-144.
Illicit drug use creates a significant burden at the societal, family and personal levels. Every year, substantial resources are allocated to treatment and to the consequences of illicit drug use in Australia and around the world. Heroin is one of the major forms of illicit drugs. Several independent heroin treatment strategies or interventions exist, and state-of-the-art research demonstrates their efficacy and relative cost-effectiveness. However, assessing the total potential gains and burden from providing all treatment interventions, or from varying the mix of heroin treatments, has never been attempted. This paper proposes an individual-level simulation model (ISM) that addresses net social benefit over a lifetime and can accommodate the complexity of individuals going in and out of multiple treatments, together with the corresponding costs and benefits arising from different treatments during the life-course of heroin users, in the context of New South Wales (NSW), Australia. This model is intended to serve as an effective tool for economic evaluation and policy making in the illicit drug area in Australia. The validity of the model has been assessed by comparing short-term outcomes, and the status of participants at various points in time predicted by the model, with data sets that were not used to parameterise the model. Initial model results are also presented to highlight the different types of scenario analysis that can be conducted in future.
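The core loop of an individual-level simulation model of this kind can be sketched as a per-person state-transition simulation. The state names and monthly transition probabilities below are invented for illustration; the NSW model is calibrated from data and attaches costs and benefits to each state and treatment episode.

```python
import random

STATES = ("using", "treatment", "abstinent")
# Hypothetical monthly transition probabilities (rows sum to 1.0);
# NOT the calibrated values from the NSW model.
TRANS = {
    "using":     {"using": 0.85, "treatment": 0.10, "abstinent": 0.05},
    "treatment": {"using": 0.30, "treatment": 0.55, "abstinent": 0.15},
    "abstinent": {"using": 0.10, "treatment": 0.05, "abstinent": 0.85},
}

def simulate_individual(months=120, seed=0):
    """Simulate one individual's monthly state trajectory. In the full
    model, per-state costs and benefits would be accumulated along the
    path to estimate lifetime net social benefit."""
    rng = random.Random(seed)
    state, path = "using", []
    for _ in range(months):
        r, acc = rng.random(), 0.0
        for nxt, p in TRANS[state].items():
            acc += p
            if r < acc:
                state = nxt
                break
        path.append(state)
    return path

# Usage: a ten-year monthly trajectory for one simulated person.
path = simulate_individual()
```

Running many such trajectories with different seeds, and varying the treatment mix encoded in the transition structure, is what enables the scenario analysis the abstract describes.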
Shukla, N, Ma, J, Wickramasuriya Denagamage, R, Huynh, NN & Perez, P 2015, 'Tour-based travel mode choice estimation based on data mining and fuzzy techniques', International Symposium for Next Generation Infrastructure (ISNGI 2014), United Kingdom, pp. 215-220.
Shukla, N, Wickramasuriya, R, Miller, AA & Perez, P 2015, 'Population accessibility to radiotherapy services in New South Wales Region of Australia: a methodological contribution', Journal of Physics: Conference Series, Euro Mini conference: Improving Healthcare: New Challenges, New Approaches, United Kingdom, pp. 012004-1-012004-9.
Huynh, NN, Shukla, N, Munoz Aneiros, A, Cao, VL & Perez, P 2014, 'A semi-deterministic approach for modelling of urban travel demand', International Symposium for Next Generation Infrastructure (ISNGI 2013), Australia, pp. 191-199.
This paper presents a methodology to construct travel-related activity schedules for individuals in a synthetic population. The resulting list of activity schedules is designed as an input to a micro-simulator for urban transport dynamics analysis. The methodology involves two main steps. The first step generates a synthetic population based on census data sourced from the Australian Bureau of Statistics (ABS). The second step assigns individuals in the synthetic population activity schedules using Household Travel Survey (HTS) data for the geographical area of interest (in this case, the Sydney Greater Metropolitan area). Each individual is assigned an ordered set of trips, travel purpose, travel mode, departure time and estimated trip time. The significance of the methodology is twofold: it generates a synthetic population aligned with area demographics, and it generates activity schedules that realistically represent how the population uses existing transport infrastructure. The methodology also preserves the inter-dependencies (in terms of the sequence, travel times and purposes of trips) of individuals' daily trips, in contrast to many trip generators for transport micro-simulation purposes. A case study of the Randwick area in southern Sydney is presented in which the proposed methodology is applied. The case study data is validated against real-world results, and the scalability and applicability to other urban areas are discussed.
Namazi-Rad, M, Shukla, N, Munoz, A, Mokhtarian, P & Ma, J 2014, 'A probabilistic predictive model for residential mobility in Australia', International Symposium for Next Generation Infrastructure (ISNGI 2013), Australia, pp. 400-406.
Household relocation modelling is an integral part of the planning process, as residential movements influence the demand for community facilities and services. The Department of Families, Housing, Community Services and Indigenous Affairs (FaHCSIA) created the Household, Income and Labour Dynamics in Australia (HILDA) program to collect reliable longitudinal data on family and household dynamics. Socio-demographic information (such as general health and well-being, lifestyle changes, residential mobility, income and welfare dynamics, and labour market dynamics) is collected from the sampled individuals and households. The data show that approximately 17% of Australian households and 13% of couple families in the HILDA sample relocate residence each year. Yet little is known about how this information can be utilised to develop a predictive model of household relocation. This study links changes in employment status and household type to a reliable estimate of residential relocation probability by developing a logit model to explain residential relocation in the Sydney metropolitan area using the HILDA dataset.
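The form of such a logit model can be sketched in a few lines: the relocation probability is the logistic function of a linear combination of household attributes. The coefficient values and feature names below are invented for illustration; the study estimates its coefficients from the HILDA panel.

```python
import math

def relocation_probability(coeffs, intercept, features):
    """Logit model sketch: P(move) = 1 / (1 + exp(-(b0 + b·x)))."""
    z = intercept + sum(coeffs[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients: a job change and renting raise the odds
# of relocating; having children lowers them.
coeffs = {"changed_job": 1.2, "renting": 0.9, "has_children": -0.4}
p = relocation_probability(
    coeffs, intercept=-2.0,
    features={"changed_job": 1, "renting": 1, "has_children": 0})
# A renting household whose head just changed jobs: P(move) ≈ 0.52.
```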
Shukla, N, Hoang, V, Shanahan, M, Ritter, A, Cao, VL & Perez, P 2014, 'A lifetime individual sampling model (ISM) for heroin use and treatment evaluation in Australia', 3rd International Workshop on Innovative Simulation for Health Care, IWISH 2014, pp. 6-13.
Illicit drug use has created an enormous burden at the societal, family and personal levels. Every year, a significant amount of resources is allocated to treatment and to the consequences of illicit drug use in Australia and around the world. Heroin is one of the major forms of illicit drugs. Several independent heroin treatment strategies or interventions exist, and state-of-the-art research demonstrates their efficacy and relative cost-effectiveness. However, assessing the total potential gains and burden from providing all treatment interventions, or from varying the mix of heroin treatments, has never been attempted. Furthermore, the need to include multiple treatments and multiple important outcomes, and the chaotic nature of drug dependence, mean that cost-effectiveness studies are not able to provide evidence on the net benefit of providing heroin treatments over the lifetime. Evaluations of the current mix of treatment provision remain very limited. Thus, this paper discusses an individual-level model that addresses net social benefit over a lifetime, also known as an individual sampling model (ISM), which can accommodate the complexity of individuals going in and out of multiple treatments together with the corresponding costs and benefits arising from different treatments during the life-course of heroin users, in the context of New South Wales (NSW), Australia. This model is intended to serve as an effective tool for economic evaluation and policy making in the illicit drug area in Australia.
Shukla, N, Hoang, V, Shanahan, M, Ritter, A, Cao, V & Perez, P 2014, 'A lifetime individual sampling model (ISM) for heroin use and treatment evaluation in Australia', 11th International Multidisciplinary Modelling & Simulation Multiconference, Italy, pp. 1-8.
Illicit drug use has created an enormous burden at the societal, family and personal levels. Every year, a significant amount of resources is allocated to treatment and to the consequences of illicit drug use in Australia and around the world. Heroin is one of the major forms of illicit drugs. Several independent heroin treatment strategies or interventions exist, and state-of-the-art research demonstrates their efficacy and relative cost-effectiveness. However, assessing the total potential gains and burden from providing all treatment interventions, or from varying the mix of heroin treatments, has never been attempted. Furthermore, the need to include multiple treatments and multiple important outcomes, and the chaotic nature of drug dependence, mean that cost-effectiveness studies are not able to provide evidence on the net benefit of providing heroin treatments over the lifetime. Evaluations of the current mix of treatment provision remain very limited. Thus, this paper discusses an individual-level model that addresses net social benefit over a lifetime, also known as an individual sampling model (ISM), which can accommodate the complexity of individuals going in and out of multiple treatments together with the corresponding costs and benefits arising from different treatments during the life-course of heroin users, in the context of New South Wales (NSW), Australia. This model is intended to serve as an effective tool for economic evaluation and policy making in the illicit drug area in Australia.
Shukla, N, Ma, J, Wickramasuriya, R & Huynh, N 2013, 'Data-driven modeling and analysis of household travel mode choice', 20th International Congress on Modelling and Simulation, Australia, pp. 92-98.
One of the important problems studied in travel behaviour analysis is travel mode choice, one of the four crucial steps in transportation demand estimation for urban planning. State-of-the-art models in travel demand modelling can be classified as trip-based, tour-based, and activity-based. In the trip-based approach, each individual trip is modelled as independent and isolated, i.e. with no connections between different trips. In the tour-based approach, trips start and end at the same location (home, work, etc.), and trips within a tour are dependent on each other. In the past two decades, researchers have focused on activity-based modelling, where travel demand is derived from the activities that individuals need or wish to perform. In this approach, spatial, temporal, transportation and interpersonal interdependencies (within a household) constrain activity and travel behaviour. This paper extends the tour-based mode choice model, which mainly includes individual trip-level interactions, to include the linked travel modes of an individual's consecutive trips. The travel modes of consecutive trips made by an individual in a household have a strong dependency or correlation, because individuals try to maintain their travel modes or use a few combinations of modes for current and subsequent trips. Traditionally, tour-based mode choice models have involved nested logit models derived from expert knowledge. There are limitations associated with this approach: logit models assume (i) a specific model structure (a linear utility model) in advance; and (ii) that it holds across the entire set of historical observations. These assumptions may be representative of reality; however, the rules or heuristics for tour-based mode choice should ideally be derived from the survey data rather than based on expert knowledge or judgment. Therefore, in this paper, we propose a novel data-driven methodology to address the issues identified in tour-based mode choice. The proposed methodolog...
Shukla, N, Munoz, A, Ma, J & Huynh, N 2013, 'Hybrid agent based simulation with adaptive learning of travel mode choices for University commuters (WIP)', Proceedings of the Symposium on Theory of Modeling & Simulation - DEVS Integrative M&S Symposium, United States, pp. 1-6.
This paper presents a methodology for developing a hybrid agent-based micro-simulation model to capture the impacts of commuter travel mode choices on a university campus transport network. The proposed methodology involves: (i) developing a realistic population of commuter agents (students and staff); (ii) assigning activity lists and travel mode choices to agents using machine learning methods; and (iii) traffic micro-simulation of the study area's transport network. This furthers the understanding of current transport modal distributions, the factors affecting travel mode choice decisions, and network performance through a number of hypothetical travel scenarios.
Shukla, N, Munoz, A, Ma, J & Huynh, N 2013, 'Hybrid agent based simulation with adaptive learning of travel mode choices for university commuters (WIP)', Simulation Series, pp. 9-14.
This paper presents a methodology for developing a hybrid agent-based micro-simulation model to capture the impacts of commuter travel mode choices on a university campus transport network. The proposed methodology involves: (i) developing a realistic population of commuter agents (students and staff); (ii) assigning activity lists and travel mode choices to agents using machine learning methods; and (iii) traffic micro-simulation of the study area's transport network. This furthers the understanding of current transport modal distributions, the factors affecting travel mode choice decisions, and network performance through a number of hypothetical travel scenarios.
Shukla, N, Kiridena, S & Mishra, N 2012, 'Reducing unwarranted variation in healthcare service delivery systems: key issues, research challenges and potential solutions', 26th ANZAM Conference 2012.
There is a growing need worldwide to increase the quality and productivity of healthcare services delivery. To this end, analysing and reducing unwarranted variations in healthcare has attracted much attention in recent times. However, current modelling and simulation approaches to reduce unwarranted variations suffer from numerous limitations. Consequently, service improvement efforts have often failed to deliver expected results. This paper discusses the key issues associated with reducing unwarranted variations in hospital service delivery systems, and proposes a research framework that aims at overcoming these issues. In doing so, it highlights the need for: accurately and efficiently modelling complex service delivery systems; developing systematic knowledge acquisition approaches; and developing scalable simulation models to analyse unwarranted variations on and from care pathways.
Shukla, N, Tiwari, M & Ceglarek, D 2009, 'Feature-based Optimal Sensor Distribution for Six-sigma Variation Diagnosis in Multi-Station Assembly Processes', 7th International Conference on Manufacturing Research (ICMR).
Shukla, N, Dashora, Y, Tiwari, MK, Chan, FTS & Wong, TC 2008, 'Introducing algorithm portfolios to a class of vehicle routing and scheduling problem', Proceedings of The 2nd International Conference on Operations and Supply Chain Management, pp. 1-10.
This paper presents a comprehensive foundation and implementation of algorithm portfolios to solve Theater Distribution Vehicle Routing and Scheduling Problems (TDVRSP). To evaluate the performance of the proposed approach, it has been applied to theater distribution problems of varying dimensions. In particular, eight random-search metaheuristics embedded in four processors were packed to form different portfolios. Four basic algorithms, genetic algorithm (GA), simulated annealing (SA), tabu search (TS) and artificial immune system (AIS), as well as their group-theoretic counterparts, have been utilized. The proposed approach also accounts for platform dependence and helps evolve a robust solution pack. The portfolio concept is shown to be computationally advantageous and qualitatively competitive on the benchmark set of problems. The paper not only provides a model for the TDVRSP but also develops a generic solution framework for other problems of its kind.
Shukla, N, Tiwari, M & Ceglarek, D 2007, 'Agent-based simulation model for fault diagnosis in multi-station manufacturing systems', Proceedings of 2nd International Conference on Changeable, Agile, Reconfigurable and Virtual Production.
Shukla, N, Tiwari, M & Shankar, R 2006, 'Multi station assembly process and determining the optimal sensor placement using chaos embedded fast simulated annealing', 11th Design for Manufacturing and the Lifecycle Conference ASME 2006 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference (IDETC/CIE2006).
This paper presents a new methodology for the allocation of sensors in multi-station assembly processes. It resolves two core issues, i.e. determining the optimal number of sensors to be used and the best location for each sensor. The effect of noise on sensor placement is minimized by maximizing the determinant of the Fisher information matrix. The paper conceives an objective function that improves on other contributions by incorporating the effect of noise coupled with the sensor data. To optimize the proposed objective function, a new algorithm is developed that combines chaotic sequences with traditional evolutionary fast simulated annealing (EFSA), and is therefore termed chaos-embedded fast simulated annealing (CEFSA). The proposed algorithm finds the optimal sensor distribution and allocation with the minimum noise term in the sensor data. The paper also reports the details of a numerical example carried out in an industrial context to test the efficacy of the proposed algorithm. Further analysis reveals that the proposed approach to obtaining the optimal number of sensors and selecting the best locations offers more generic results than previously published analyses.
McCusker, A & Shukla, N 2012, Short report on: Inquiry into the utilisation of rail and infrastructure corridors. Prepared for the NSW Legislative Assembly, Committee on Transport and Infrastructure, Australia.
This report to the parliamentary inquiry addresses the use of land development for integrated infrastructure corridors and considers improvements to policy development, planning and strategies to achieve greater productivity, enhanced liveability and improved economic benefit through informed decision making. In considering long-term rail corridor usage, it is essential to support the needs of society through decision making underpinned by proper analysis and longer-term modelling, so as to ensure that any proposed interventions represent a step forward with respect to the competitiveness of the transport system, the liveability for affected residents and customers, and the overall resilience to unexpected events. The report cites a number of case studies from Europe, Asia and the Americas where an informed and structured approach has yielded positive results.
- WMG (Uni-Warwick, UK)
- National Drugs and Alcohol Research Centre (Uni-NSW)
- Cardiff University (UK)
- University of Liverpool (UK)
- University of Hull (UK)
- SMART Infrastructure Facility (Uni-Wollongong)
- Indian Institute of Technology (Delhi, Kharagpur)
- National University of Mongolia
- Illawarra Cancer Care Centre (ICCC, Wollongong Hospital)