Research Projects

If you are a student interested in doing any of the following research projects under my supervision, please do not hesitate to contact me.

1. Solving real-world large-scale black-box optimization problems

Many real-world optimization problems are large-scale, expensive to evaluate, and difficult to formulate, involving thousands of variables and constraints. Existing optimization methods are ill-equipped to deal with large-scale problems, even though they may perform well on smaller ones. This project aims to develop practical solutions for real-world large-scale black-box optimization. It will take an interdisciplinary approach, integrating ideas from mathematical programming, meta-heuristics, and operations research. Our aim is to develop effective optimization techniques for solving real-world large-scale optimization problems that really matter to practitioners. This project will build on our research strength in large-scale optimization using meta-heuristics: https://titan.csit.rmit.edu.au/~e46507/ieee-lsgo/

The research findings will help bridge the gap between mathematical programming and meta-heuristics and overcome the limitations of existing large-scale optimization techniques. We expect candidates to have a strong mathematical and computer science background, in particular some knowledge of Artificial Intelligence and optimization methods.

2. Data-driven machine learning and optimization techniques

In the era of the internet and big data, an abundance of data is readily available for businesses to make sense of. These data contain rich information that can be extracted and learnt using machine learning techniques to facilitate better decision making. In this project, you will be expected to explore and investigate the interesting interplay between machine learning and optimization for more effective real-world problem solving. We are particularly interested in studying how machine learning can be employed to improve the efficiency of an optimization method, or vice versa, i.e., how optimization techniques can be adopted to improve the quality and/or efficiency of a machine learning technique (e.g., a deep learning model tends to have a huge parameter space, and the question is how to carry out effective optimization over such a large search space).
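As a small, purely illustrative example of one such interplay, the sketch below uses a simple machine-learned surrogate (a hand-rolled k-nearest-neighbour regressor) to pre-screen candidate solutions so that only the most promising one per iteration is passed to an expensive objective. The objective `expensive_f`, the budget, and all parameter values are assumptions made for illustration only.

```python
import numpy as np

def expensive_f(x):
    # Placeholder for an expensive black-box objective (assumption).
    return np.sum(x ** 2)

def knn_predict(X_seen, y_seen, x, k=5):
    # Hand-rolled k-NN surrogate: average fitness of the k nearest evaluated points.
    d = np.linalg.norm(X_seen - x, axis=1)
    return y_seen[np.argsort(d)[:k]].mean()

rng = np.random.default_rng(0)
dim, archive_X, archive_y = 10, [], []

# Warm up the surrogate with a few true evaluations.
for _ in range(20):
    x = rng.uniform(-5, 5, dim)
    archive_X.append(x); archive_y.append(expensive_f(x))

for gen in range(50):
    X_seen, y_seen = np.array(archive_X), np.array(archive_y)
    candidates = rng.uniform(-5, 5, (100, dim))           # cheap candidate pool
    scores = [knn_predict(X_seen, y_seen, c) for c in candidates]
    best = candidates[int(np.argmin(scores))]             # surrogate pre-screening
    archive_X.append(best); archive_y.append(expensive_f(best))   # one true evaluation

print("best true fitness found:", min(archive_y))
```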

3. Improving multimodal optimization through effective niching techniques

Population-based meta-heuristic algorithms such as Evolutionary Algorithms (EAs) in their original forms are usually designed for locating a single global solution. These algorithms typically converge to a single solution because of the global selection scheme used. Nevertheless, many real-world problems are "multimodal" by nature, i.e., multiple satisfactory solutions exist. It may be desirable to locate many such solutions so that a decision maker can choose the one most appropriate to their problem domain. Numerous techniques have been developed for locating multiple optima (global or local); these are commonly referred to as "niching" methods. A niching method can be incorporated into a standard EA to promote and maintain the formation of multiple stable subpopulations within a single population, with the aim of locating multiple globally optimal or suboptimal solutions. Many niching methods have been developed, including crowding, fitness sharing, derating, restricted tournament selection, clearing, and speciation. In this project, you are expected to study and design more effective niching methods that overcome some of the issues with existing ones. For a recent survey, please read the following TEVC paper:

Li, X., Epitropakis, M.G., Deb, K., and Engelbrecht, A. (2017), "Seeking Multiple Solutions: An Updated Survey on Niching Methods and Their Applications", IEEE Transactions on Evolutionary Computation, 21(4):518-538, August 2017.
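To make the niching idea concrete, here is a minimal sketch of one classic mechanism, fitness sharing, which down-weights an individual's fitness by the number of neighbours within a sharing radius. The sharing radius, the multimodal test function, and all settings are illustrative assumptions, not taken from the survey above.

```python
import numpy as np

def shared_fitness(pop, raw_fitness, sigma_share=0.2, alpha=1.0):
    """Classic fitness sharing: divide raw fitness by each individual's niche count."""
    dist = np.linalg.norm(pop[:, None, :] - pop[None, :, :], axis=-1)   # pairwise distances
    sh = np.where(dist < sigma_share, 1.0 - (dist / sigma_share) ** alpha, 0.0)
    niche_counts = sh.sum(axis=1)            # includes self (distance 0 contributes 1)
    return raw_fitness / niche_counts

# Example: a multimodal 1-D function with peaks at x = 0.1, 0.3, 0.5, 0.7, 0.9.
f = lambda x: np.sin(5 * np.pi * x) ** 6
rng = np.random.default_rng(1)
pop = rng.uniform(0, 1, (50, 1))
fit = shared_fitness(pop, f(pop[:, 0]))
# Selection based on `fit` now favours spreading individuals across all five peaks.
```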

4. More efficient evolutionary multi-objective optimization algorithms

Many real-world problems involve multiple measures of performance, or objectives, which should be optimized simultaneously; however, optimal performance according to one objective often implies unacceptably low performance in one or more of the other objectives, so a compromise must be reached. Evolutionary algorithms are naturally suited to this type of problem because they are population-based and can generate multiple trade-off solutions in a single run. However, using EAs can also be computationally expensive. See the EMOO website for more information on this topic: http://www.lania.mx/~ccoello/EMOO/

In this project, you are expected to study methods that allow for more efficient computation in evolutionary multi-objective optimization (EMO). Standard test functions will be used for conducting experiments and analysing the results. A comparison with at least one existing multi-objective algorithm should be provided.
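As a starting point, the sketch below shows the core Pareto-dominance test and a naive non-dominated filter (assuming all objectives are minimized); any efficiency improvements studied in the project would be measured against this kind of baseline. The sample points are made up for illustration.

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return np.all(a <= b) and np.any(a < b)

def non_dominated(points):
    """Naive O(n^2) filter returning the non-dominated subset of `points`."""
    points = np.asarray(points)
    keep = [i for i, p in enumerate(points)
            if not any(dominates(q, p) for j, q in enumerate(points) if j != i)]
    return points[keep]

# Example with two objectives: the first three points form the Pareto front.
front = non_dominated([[1, 5], [2, 3], [4, 1], [3, 4], [5, 5]])
print(front)
```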

5. Optimization in dynamic environments

Traditionally, optimization is carried out towards a single static objective, which does not change during the course of the optimization. In recent years, there has been increasing interest in using decentralized and spatially distributed evolutionary algorithms to handle optimization tasks whose optima change over time. We could take advantage of the parallel and distributed structure of a parallel evolutionary algorithm to deal with this kind of task. Evolutionary Algorithms and their variants (e.g., Particle Swarm Optimization and Differential Evolution), particularly those exhibiting self-adaptive behaviours, can be investigated for their effectiveness on such tasks. Very little research has been done on using adaptive or self-adaptive techniques for tracking multiple optima in a dynamic environment. Juergen Branke's Moving Peaks benchmark generator can be used to generate multiple moving peaks (see this paper).
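The sketch below is not Branke's actual generator; it is a simplified, hypothetical stand-in with a few cone-shaped peaks whose centres drift at every environment change, included only to illustrate the kind of landscape a tracking algorithm must follow.

```python
import numpy as np

class SimpleMovingPeaks:
    """Simplified, illustrative stand-in for a moving-peaks landscape:
    a few cone-shaped peaks whose centres drift a little at every change."""
    def __init__(self, n_peaks=5, dim=2, seed=0):
        self.rng = np.random.default_rng(seed)
        self.centres = self.rng.uniform(0, 100, (n_peaks, dim))
        self.heights = self.rng.uniform(30, 70, n_peaks)

    def __call__(self, x):
        d = np.linalg.norm(self.centres - x, axis=1)
        return np.max(self.heights - d)        # value of the tallest covering cone

    def change(self, severity=1.0):
        self.centres += self.rng.normal(0, severity, self.centres.shape)

landscape = SimpleMovingPeaks()
x = np.array([50.0, 50.0])
print(landscape(x))
landscape.change()     # the optima have now moved; evaluating x again gives a different value
```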

6. Multiobjective approaches applied to single objective optimization

Multiobjective optimization can be considered a more general case than single-objective optimization; that is, any single-objective optimization problem can be treated in some way as a multiobjective one. It has been shown that adopting such a "multiobjectivity" viewpoint in single-objective optimization can actually help to further improve performance. In this project, your task is to investigate a number of different ways of employing a secondary (so-called helper) objective alongside the primary objective, in an effort to improve performance over the original algorithm that uses only the primary objective. Some comparative studies are necessary.
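One simple way to construct a helper objective is to pair the primary fitness with a diversity measure, e.g. the distance to an individual's nearest neighbour in the population, and then apply dominance-based selection over the two objectives. The choice of helper below is only an illustrative assumption, not a prescription for the project.

```python
import numpy as np

def two_objectives(pop, primary_f):
    """Primary objective plus an illustrative helper objective (negated diversity,
    so both are minimized): distance to the nearest other individual."""
    f1 = np.array([primary_f(x) for x in pop])
    dist = np.linalg.norm(pop[:, None, :] - pop[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)
    f2 = -dist.min(axis=1)                    # more isolated individuals score better
    return np.stack([f1, f2], axis=1)

rng = np.random.default_rng(2)
pop = rng.uniform(-5, 5, (30, 3))
objs = two_objectives(pop, lambda x: np.sum(x ** 2))
# A dominance-based selection over `objs` rewards both good fitness and diversity.
```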

7. Multi-objective Scheduling using Evolutionary Algorithms

Scheduling is among the hardest combinatorial optimization problems. From a computational complexity standpoint, most scheduling problems are known to be NP-complete. In real-world scheduling applications, problems are usually subject to many constraints, which makes finding optimal solutions particularly hard. A scheduling problem often involves multiple objectives, which ideally need to be optimized simultaneously. For example, in a flowshop scheduling problem, a schedule can be measured by completion time, tardiness, and mean flow time. These objectives are often non-commensurable: improving one objective often degrades performance on the others. Using the concept of dominance (in order to produce non-dominated solutions, the so-called Pareto front) in conjunction with an Evolutionary Algorithm has proved to be very effective for multiobjective optimization. In this project you will need to look at how to accommodate scheduling problems in an EA model based on NSGA-II, developed by Kalyanmoy Deb (2002). Some artificially constructed test cases will first be used for preliminary experimentation, before the model is applied to a complex real-world scheduling problem.
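To ground the flowshop example, the sketch below computes two of the objectives mentioned above, makespan (maximum completion time) and total tardiness, for a permutation schedule. The processing times and due dates are made-up test data, not from any benchmark.

```python
import numpy as np

def flowshop_objectives(perm, proc, due):
    """Completion-time recursion for a permutation flowshop.
    proc[j, m] = processing time of job j on machine m; `perm` is the job order."""
    n_jobs, n_mach = proc.shape
    C = np.zeros((n_jobs, n_mach))
    for i, job in enumerate(perm):
        for m in range(n_mach):
            prev_job = C[i - 1, m] if i > 0 else 0.0
            prev_mach = C[i, m - 1] if m > 0 else 0.0
            C[i, m] = max(prev_job, prev_mach) + proc[job, m]
    makespan = C[-1, -1]
    tardiness = np.maximum(C[:, -1] - due[list(perm)], 0).sum()
    return makespan, tardiness                # two objectives to be minimized jointly

proc = np.array([[3, 2, 4], [1, 5, 2], [4, 1, 3]])   # made-up 3 jobs x 3 machines
due = np.array([8, 9, 10])
print(flowshop_objectives((1, 0, 2), proc, due))     # (15.0, 9.0)
```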

8. Combinatorial problem solving using a binary/discrete Particle Swarm Optimizer

Particle Swarm Optimization (PSO) is a popular population-based stochastic optimization method, inspired by the social interactions of animals or insects. The majority of PSO applications solve continuous problems; very few address combinatorial (i.e., binary or discrete) problems. Kennedy and Eberhart (1997) proposed a simple binary PSO, which optimizes binary-valued parameters. Here particle position vectors are binary vectors, while the velocity vectors are still floating-point vectors; the velocities are used to determine the probability that an element of the position vector takes the value 0 or 1. In this project, you are expected to first implement the original binary PSO, then investigate its applicability to one or two other combinatorial problems, such as scheduling, knapsack, or the travelling salesman problem.
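Below is a minimal sketch of that binary PSO update in the spirit of Kennedy and Eberhart (1997): velocities remain real-valued, and a sigmoid of each velocity component gives the probability of the corresponding bit being 1. The onemax objective and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_particles, n_bits, w, c1, c2 = 20, 30, 0.7, 1.5, 1.5
fitness = lambda bits: bits.sum()                      # illustrative "onemax" objective

X = rng.integers(0, 2, (n_particles, n_bits))          # binary positions
V = np.zeros((n_particles, n_bits))                    # real-valued velocities
pbest, pbest_f = X.copy(), np.array([fitness(x) for x in X])
gbest = pbest[pbest_f.argmax()].copy()

for it in range(100):
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
    prob_one = 1.0 / (1.0 + np.exp(-V))                # sigmoid: P(bit = 1)
    X = (rng.random(X.shape) < prob_one).astype(int)
    f = np.array([fitness(x) for x in X])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = X[improved], f[improved]
    gbest = pbest[pbest_f.argmax()].copy()

print("best onemax value:", pbest_f.max(), "of", n_bits)
```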

9. Evolutionary techniques for solving Sudoku puzzles

Sudoku is a puzzle that has become very popular in recent years; its generalized form is NP-complete. Some stochastic search algorithms (such as simulated annealing and evolutionary algorithms) have been developed to solve it, but only with limited success. In this project, you will first implement a prototype model for representing Sudoku, and then develop an Evolutionary Algorithm that can evolve candidate solutions. Discussions of representation issues and of how to apply EAs to Sudoku puzzles can be found in the following papers:

Lewis, R. (2007), "Metaheuristics Can Solve Sudoku Puzzles", Journal of Heuristics, 13:387-401.

Moraglio, A., Togelius, J., and Lucas, S. (2007), "Geometric Particle Swarm Optimization for the Sudoku Puzzle", Genetic and Evolutionary Computation Conference (GECCO 2007), pp. 118-125.
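Whatever representation is chosen, the EA needs a fitness function. The sketch below scores a completely filled 9x9 grid by counting duplicated digits in rows, columns, and 3x3 boxes (zero means a valid solution); it is one possible choice made for illustration and is not taken from the papers above.

```python
import numpy as np

def sudoku_violations(grid):
    """Number of duplicated digits over all rows, columns and 3x3 boxes of a
    completely filled 9x9 grid; a value of 0 means the grid is a valid solution."""
    grid = np.asarray(grid)
    units = []
    units.extend(grid[r, :] for r in range(9))                  # rows
    units.extend(grid[:, c] for c in range(9))                  # columns
    units.extend(grid[r:r + 3, c:c + 3].ravel()                 # 3x3 boxes
                 for r in (0, 3, 6) for c in (0, 3, 6))
    return sum(9 - len(set(u.tolist())) for u in units)

# An EA would minimize this count, mutating only the non-clue cells of the puzzle.
```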

10. Evolutionary multiobjective approaches to constraint handling

Constrained optimization is the optimization of an objective function subject to constraints on the possible values of the domain variables. Constraints can be either equality or inequality constraints. Many real-world problems must be handled with careful consideration of the constraints on their variables (e.g., timetable scheduling for classes/tutes/labs at RMIT). Evolutionary Multiobjective Optimization (EMO) shows great promise in providing efficient alternatives to traditional constraint handling techniques such as penalty functions or weighted-sum approaches. These traditional methods require a user to specify coefficients for the penalty function or assign weights to variables; the difficulty is that this knowledge is often unavailable to the user. In EMO, constraints can be treated as secondary objectives (hard or soft constraints). By using the Pareto approach (dominance comparison), it is possible to avoid penalty functions or weighted-sum methods altogether. Some existing works, such as NSGA-II (Deb, 2002), have shown competitive performance in comparison with traditional constraint handling methods.

In this project, you are expected to investigate how to effectively apply one such EMO technique, for example NSGA-II (Deb, 2002) or a Multiobjective Particle Swarm Optimization algorithm, to a real-world problem such as timetable scheduling, document page layout, or network routing. Some benchmark test functions and simple test cases can be used first to evaluate the prototype EMO algorithm, and subsequently more complex and realistic test cases can be employed to assess its effectiveness and usefulness.
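A core ingredient here is the constrained-domination rule used in NSGA-II (Deb, 2002): feasible solutions beat infeasible ones, two infeasible solutions are compared by total constraint violation, and two feasible solutions are compared by ordinary Pareto dominance. A minimal sketch, assuming minimization and violations expressed as non-negative totals, follows.

```python
import numpy as np

def pareto_dominates(a, b):
    return np.all(a <= b) and np.any(a < b)          # minimization assumed

def constrained_dominates(obj_a, viol_a, obj_b, viol_b):
    """Constrained-domination in the spirit of NSGA-II (Deb, 2002):
    viol_* is the total constraint violation (0 means feasible)."""
    if viol_a == 0 and viol_b > 0:
        return True                                   # feasible beats infeasible
    if viol_a > 0 and viol_b > 0:
        return viol_a < viol_b                        # less violation wins
    if viol_a == 0 and viol_b == 0:
        return pareto_dominates(np.asarray(obj_a), np.asarray(obj_b))
    return False

print(constrained_dominates([1.0, 2.0], 0.0, [0.5, 0.5], 3.0))   # True: feasibility first
```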

11. Co-evolution algorithms for optimization

Evolutionary algorithms such as Genetic Algorithms traditionally use only one type of individual in the population, yet in nature we often see an abundance of different species living side by side. This research will investigate how to effectively use the idea of species coevolution for function optimization. Issues of particular concern are the effects of cooperation and competition, fitness sharing, and fitness landscape deformation as species change over time. A good example of coevolution in nature is the population dynamics that emerge from predator-prey interaction (e.g., the fluctuation of fox and rabbit population sizes). This project will aim at developing a model that allows two or more species to co-exist in the same environment. In contrast to the traditional reductionist paradigm, this kind of bottom-up approach is well represented in the Artificial Life research area. The project will study the dynamics produced by the interaction among different species, and investigate the possibility of using such dynamics for the purpose of function optimization. You are expected to conduct experiments with the developed model on a number of standard test functions.

12. Coevolving team behaviours using neuro-evolution techniques in multi-agent environment

Neuro-evolution is a technique proven to be very successful for evolving cooperative team behaviours (Gomez and Miikkulainen, 1997). It is a machine learning technique that employs evolutionary algorithms to train neural networks. In its simplest form, neuro-evolution assembles the connection weights of a neural network into an individual genome, and then evolves a population of such genomes by evaluating each genome on the assigned task and reproducing the fitter individuals through crossover and mutation over many generations. A series of neuro-evolution techniques have been developed, such as SANE, ESP, and NEAT. The potential application of these techniques to evolving cooperative behaviours for specific team-oriented tasks is yet to be fully explored. Suitable benchmark problems include predator-prey pursuit-and-evasion and the 'keepaway' task, which can be incorporated into the RoboCup simulation (Pietro et al., 2002).
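The sketch below illustrates the simplest form described above: the genome is the flattened weight vector of a tiny feedforward network, and a (1+lambda) evolution strategy mutates it to solve XOR. The network size, the task, and the ES settings are assumptions chosen only to keep the example small.

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)    # XOR task (illustrative)
y = np.array([0, 1, 1, 0], dtype=float)

def forward(genome, x):
    """2-4-1 feedforward net whose weights are read from the flat genome."""
    W1, b1 = genome[:8].reshape(2, 4), genome[8:12]
    W2, b2 = genome[12:16], genome[16]
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def fitness(genome):
    return -np.mean((forward(genome, X) - y) ** 2)              # higher is better

rng = np.random.default_rng(0)
parent = rng.normal(0, 1, 17)                                   # the genome: 17 weights
for gen in range(500):                                          # simple (1+10) ES
    offspring = parent + rng.normal(0, 0.3, (10, 17))
    parent = max(list(offspring) + [parent], key=fitness)

print(np.round(forward(parent, X), 2))                          # should approach [0, 1, 1, 0]
```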

13. Optimization in noisy environments

Many real-world problems are noisy by nature. For an EA to handle a noisy problem effectively, we often have to use sampling techniques. However, some of these sampling techniques can be very expensive, since they require many additional evaluations. More effective and adaptive sampling methods are required so that computational resources can be used more effectively.
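As a baseline for comparison, the sketch below shows the naive resampling approach: each candidate's noisy fitness is averaged over k repeated evaluations, trading evaluation budget for estimate accuracy. An adaptive scheme of the kind this project targets would choose k per individual instead of fixing it; the noisy objective here is a made-up example.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_f(x, sigma=0.5):
    # Made-up noisy objective: sphere function plus Gaussian observation noise.
    return np.sum(x ** 2) + rng.normal(0, sigma)

def resampled_fitness(x, k):
    """Naive explicit averaging: k evaluations per individual.
    The standard error shrinks as 1/sqrt(k), but the cost is k evaluations."""
    return np.mean([noisy_f(x) for _ in range(k)])

x = np.array([0.1, -0.2, 0.3])
for k in (1, 5, 25):
    print(k, round(resampled_fitness(x, k), 3))   # estimates tighten around the true 0.14
```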

14. Constraint handling using an evolutionary algorithm solver

Many real-world problems are highly constrained. For example, RMIT classroom scheduling, which is currently tuned manually, is a daunting task to automate using an optimization algorithm. An example constraint is: if a room is occupied by one lecture, that room cannot be occupied by any other lecture in the same timeslot. Numerous constraints of this kind must be enforced in order to produce a valid classroom schedule.

MiniZinc is a constraint modelling language that allows most common constraint problems (e.g., timetabling, vehicle routing, Sudoku, the travelling salesman problem, etc.) to be easily represented and solved. More information on MiniZinc can be found on the following website: http://www.g12.csse.unimelb.edu.au/minizinc/

Your task is to develop a MiniZinc solver using one of the state-of-the-art evolutionary algorithms.
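For intuition, the sketch below encodes just the room-clash constraint mentioned earlier as a penalty that an evolutionary solver could minimize. A candidate timetable is represented simply as a (room, timeslot) assignment per lecture; this representation and the example data are assumptions for illustration only (a real MiniZinc model would express the constraint declaratively).

```python
from collections import Counter

def room_clash_penalty(assignment):
    """assignment: {lecture: (room, timeslot)}.
    Returns the number of extra lectures sharing an already-occupied (room, timeslot)."""
    counts = Counter(assignment.values())
    return sum(c - 1 for c in counts.values() if c > 1)

timetable = {"CS101": ("R1", 9), "CS102": ("R1", 9), "CS103": ("R2", 9)}
print(room_clash_penalty(timetable))   # 1: room R1 at 9am is double-booked
```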

15. Developing a recommender system for movie goers

One of the classic examples of recommender systems is Amazon.com, where, by collecting and analysing user data (e.g., purchase or sales data), the system can make appropriate recommendations to prospective buyers. In this project, you are expected to develop a recommender system for movie goers using some existing machine learning techniques. You will be using the KDD-CUP 2007 competition data sets and rules for this project: http://www.sigkdd.org/kddcup/index.php?section=2007&method=info. The task is very similar to the famous Netflix Prize competition (unfortunately discontinued in 2009).
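As a first baseline before trying more sophisticated machine learning techniques, the sketch below implements user-based collaborative filtering on a tiny made-up rating matrix: cosine similarity between users weights their ratings when predicting an unseen movie. The matrix and all numbers are fabricated for illustration and are not from the KDD-CUP data.

```python
import numpy as np

# Made-up ratings matrix: rows = users, columns = movies, 0 = not rated.
R = np.array([[5, 4, 0, 1],
              [4, 5, 1, 0],
              [1, 0, 5, 4],
              [0, 1, 4, 5]], dtype=float)

def predict(R, user, movie):
    """User-based collaborative filtering with cosine similarity."""
    sims = []
    for other in range(R.shape[0]):
        if other == user or R[other, movie] == 0:
            continue
        a, b = R[user], R[other]
        sims.append((a @ b / (np.linalg.norm(a) * np.linalg.norm(b)), R[other, movie]))
    if not sims:
        return 0.0
    return sum(s * r for s, r in sims) / sum(s for s, r in sims)

# Low prediction: user 0 most resembles user 1, who rated movie 2 poorly.
print(round(predict(R, user=0, movie=2), 2))
```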

16. Applying Monte Carlo Tree Search (MCTS) to new optimization problems

Monte Carlo Tree Search (MCTS) is a very successful tree search algorithm that uses random sampling to distinguish good and bad moves in the context of a two-player game, e.g., chess and checkers. In recent years, MCTS and its variants have shown remarkable success in playing Go, where many existing algorithms fail miserably. It is claimed that an MCTS algorithm can now play Go at a top professional level (see the reference below).

Gelly, S., Kocsis, L., Schoenauer, M., Sebag, M., Silver, D., Szepesvári, C., and Teytaud, O. (2012), "The Grand Challenge of Computer Go: Monte Carlo Tree Search and Extensions", Communications of the ACM, 55(3):106-113, March 2012 (available online: http://cacm.acm.org/magazines/2012/3/146245-the-grand-challenge-of-computer-go/fulltext).

It would be very interesting to investigate how to extend the use of MCTS to other optimization problems such as scheduling. It would also be interesting to explore how existing machine learning techniques can be improved if some MCTS ideas are adopted. More research ideas on MCTS can be found at: http://www.mcts.ai/
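To give a flavour of the algorithm, the sketch below shows the UCB1 selection rule at the heart of the common UCT variant of MCTS: it balances exploitation of a child node's average reward against exploration of rarely visited children. The exploration constant and the toy child statistics are illustrative assumptions.

```python
import math

def ucb1(child_value_sum, child_visits, parent_visits, c=1.4142):
    """UCB1 score used by UCT to select which child node to descend into."""
    if child_visits == 0:
        return float("inf")                  # always try unvisited children first
    exploit = child_value_sum / child_visits
    explore = c * math.sqrt(math.log(parent_visits) / child_visits)
    return exploit + explore

# Toy statistics for three children of a node visited 100 times: (total reward, visit count).
children = [(30.0, 50), (10.0, 12), (2.0, 2)]
scores = [ucb1(v, n, 100) for v, n in children]
best = max(range(len(children)), key=lambda i: scores[i])
print([round(s, 2) for s in scores], "-> select child", best)   # the rarely visited child wins
```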