Fast Solution Techniques for Energy Management in Smart Homes
Access status:
Open Access
Type
Thesis
Thesis type
Doctor of Philosophy
Author/s
Keerthisinghe, Chanaka
Abstract
In the future, residential energy users will seize the full potential of demand response schemes by using an automated smart home energy management system (SHEMS) to schedule their distributed energy resources. The underlying optimisation problem facing a SHEMS is a sequential decision-making problem under uncertainty, because the state of each device depends on its past states. There are two major challenges to optimisation in this domain: handling uncertainty, and planning over suitably long decision horizons. In more detail, to generate high-quality schedules a SHEMS should consider the stochastic nature of photovoltaic (PV) generation and energy consumption, and it should accommodate predictable inter-daily variations over several days. Ideally, the SHEMS should also integrate into an existing smart meter or a similar device with low computational power. However, extending the decision horizon of existing solution techniques for sequential stochastic decision-making problems is computationally difficult, and these approaches are only computationally feasible with a limited number of storage devices and a daily decision horizon. Given this, the research investigates, proposes and develops fast solution techniques for implementing efficient SHEMSs. Specifically, three novel methods for overcoming these challenges are presented: a two-stage lookahead stochastic optimisation framework; an approximate dynamic programming (ADP) approach with temporal difference learning; and a policy function approximation (PFA) algorithm using extreme learning machines (ELMs).

Throughout the thesis, the performance of these solution techniques is benchmarked against dynamic programming (DP) and stochastic mixed-integer linear programming (MILP) using a range of residential PV-storage (thermal and battery) systems. We use empirical data collected during the Smart Grid Smart City project in New South Wales, Australia, to estimate the parameters of a Markov chain model of PV output and electrical demand using a hierarchical approach, which first clusters the empirical data and then learns probability density functions using kernel regression (Chapter 2).

The two-stage lookahead method uses deterministic MILP to solve over a longer decision horizon, and the resulting end-of-day battery state of charge is used as a constraint for a daily DP approach (Chapter 4). DP is used for the daily horizon because it is shown to provide close-to-optimal solutions when the state, decision and outcome spaces are finely discretised (Chapter 3); however, DP becomes computationally difficult as the dimensionality of these spaces grows, so we resort to MILP for the longer decision horizon. The two-stage lookahead yields significant financial benefits compared with daily DP and stochastic MILP approaches (8.54% electricity cost savings for a very suitable house); however, the benefits decrease as the actual PV output and demand deviate from their forecast values.
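To make the daily DP benchmark concrete, the following is a minimal sketch of backward-induction dynamic programming for a battery-only home. The half-hourly tariff, PV and demand profiles, battery limits and discretisation below are hypothetical placeholders rather than values from the thesis, and the model ignores export revenue, losses and stochastic outcomes.

```python
import numpy as np

# Minimal backward-induction DP for a battery-only home, illustrating the
# daily DP benchmark. Tariff, PV and demand profiles and battery limits
# are hypothetical placeholders, not values from the thesis.

T = 48                                        # half-hourly stages over one day
soc_grid = np.linspace(0.0, 10.0, 21)         # battery state of charge [kWh], 0.5 kWh steps
actions = np.linspace(-2.0, 2.0, 9)           # energy charged (+) / discharged (-) per step [kWh]

t_axis = np.linspace(0.0, 2.0 * np.pi, T)
price = 0.20 + 0.10 * np.sin(t_axis)                          # $/kWh (hypothetical tariff)
pv = 3.0 * np.clip(np.sin(t_axis - np.pi / 2.0), 0.0, None)   # PV energy per step [kWh]
demand = 0.5 + 0.3 * np.cos(t_axis) ** 2                      # demand per step [kWh]

value = np.zeros((T + 1, soc_grid.size))      # terminal value is zero
policy = np.zeros((T, soc_grid.size))

for t in range(T - 1, -1, -1):                # backward induction over stages
    for i, soc in enumerate(soc_grid):
        best_cost, best_action = np.inf, 0.0
        for a in actions:
            soc_next = soc + a
            if soc_next < soc_grid[0] - 1e-9 or soc_next > soc_grid[-1] + 1e-9:
                continue                      # would violate battery limits
            # Simplification: no export revenue, perfect round-trip efficiency.
            grid_import = max(demand[t] - pv[t] + a, 0.0)
            j = int(np.argmin(np.abs(soc_grid - soc_next)))
            cost = price[t] * grid_import + value[t + 1, j]
            if cost < best_cost:
                best_cost, best_action = cost, a
        value[t, i], policy[t, i] = best_cost, best_action

# First decision for a half-charged battery:
print(policy[0, int(np.argmin(np.abs(soc_grid - 5.0)))])
```

The thesis formulation uses far finer discretisation, additional storage devices and stochastic PV and demand outcomes, which is precisely what makes exact DP computationally demanding.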
Building on this, ADP is proposed in Chapter 5 to implement a computationally efficient SHEMS. Here we obtain policies from value function approximations (VFAs) by stepping forward in time, in contrast to the value functions obtained by backward induction in DP. Similar to DP, we can use the VFAs generated during the offline planning phase to produce fast real-time solutions via the Bellman optimality condition, which is computationally efficient compared with solving the entire stochastic MILP problem, and the decisions obtained from the VFAs at a given time-step are optimal regardless of what happened in previous time-steps. Our results show that ADP computes a solution much faster than both DP and stochastic MILP, with only a slight reduction in quality compared with the optimal DP solution. In addition, incorporating a thermal energy storage unit using the proposed ADP-based SHEMS reduces the daily electricity cost by up to 57.27% for the most suitable home, with a low computational burden. Moreover, ADP with a two-day decision horizon reduces the average yearly electricity cost by 4.6% compared with a daily DP method, yet requires less than half the computational effort.

However, ADP still takes a considerable amount of time to generate the VFAs in the offline planning phase and requires us to estimate PV and demand models. Given this, a PFA algorithm that uses ELMs is proposed in Chapter 6 to overcome these difficulties. Here ELMs are used to learn models that map input states to output decisions within seconds, without solving an optimisation problem. This offline planning process requires a training data set, which has to be generated by solving the deterministic SHEMS problem over a couple of years of data; here we can use a powerful cloud or home computer, as this step is only needed once. The resulting PFA models can be used to make fast real-time decisions and can easily be embedded in an existing smart meter or a similar low-power device. Moreover, we can use PFA models over a long period of time without updating them and still obtain solutions of similar quality.

Collectively, ADP and PFA using ELMs can overcome the challenges of handling stochastic variables, extending the decision horizon and integrating multiple controllable devices using existing smart meters or other devices with low computational power, and they represent a significant advancement of the state of the art in this domain.
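To illustrate the ELM-based policy function approximation described above, the following is a minimal sketch: a single hidden layer with random, untrained weights is combined with output weights fitted in closed form by regularised least squares, mapping a state vector (time of day, PV output, demand and battery state of charge) to a charge/discharge decision. The training pairs, feature scaling and network size are hypothetical placeholders for the data that would, in practice, come from solving the deterministic SHEMS problem offline.

```python
import numpy as np

# Minimal extreme learning machine (ELM) used as a policy function
# approximation: states -> decisions. The training pairs here are random
# placeholders; in practice they would come from deterministic SHEMS
# solutions computed offline over historical data.

rng = np.random.default_rng(0)

n_train, n_features, n_hidden = 5000, 4, 200
X = rng.random((n_train, n_features))     # [time, PV, demand, state of charge], scaled to [0, 1]
y = rng.uniform(-1.0, 1.0, n_train)       # battery decision from the offline optimiser

# Hidden layer: random weights that are never trained (the defining trait of an ELM).
W = rng.standard_normal((n_features, n_hidden))
b = rng.standard_normal(n_hidden)
H = np.tanh(X @ W + b)                    # hidden-layer activations

# Output weights: closed-form regularised least squares, computed in seconds.
lam = 1e-3
beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)

def decide(state):
    """Map a (scaled) state vector to a charge/discharge decision."""
    h = np.tanh(np.asarray(state) @ W + b)
    return float(h @ beta)

# Real-time use: a single matrix-vector product per decision.
print(decide([0.5, 0.3, 0.4, 0.6]))
```

Because a real-time decision then reduces to a single matrix-vector product, a model of this kind is cheap enough to evaluate on a smart meter or a similar low-power device.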
Date
2016-08-31
Faculty/School
Faculty of Engineering and Information Technologies, School of Electrical and Information Engineering
Awarding institution
The University of Sydney