Predicting solar power output with limited data sizes

Scientists at the National Technical University of Athens have used a machine-learning method known as transfer learning (TL) to develop a new solar power forecasting model intended to help developers with limited amounts of data.

The TL method repurposes a model trained on one task for a second, related task. The researchers used three TL strategies in combination with a stacked long short-term memory (LSTM) model, a kind of recurrent neural network capable of learning order dependence in sequence prediction problems. The TL technique takes the relevant parts of a pre-trained machine-learning model and applies them to a new but similar problem.
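The paper's exact architecture is not reproduced in this article, so the following Keras sketch is purely illustrative: the layer sizes, look-back window and feature count are assumptions, but it shows roughly what a stacked LSTM pre-trained on a baseline PV plant looks like.

```python
import tensorflow as tf

LOOKBACK = 24    # assumed look-back window in hours (not stated in the article)
N_FEATURES = 18  # assumed number of input features after encoding (illustrative)

def build_stacked_lstm() -> tf.keras.Sequential:
    """Two stacked LSTM layers followed by a dense head that outputs
    the next hour's PV production."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(LOOKBACK, N_FEATURES)),
        tf.keras.layers.LSTM(64, return_sequences=True),  # first LSTM passes the full sequence on
        tf.keras.layers.LSTM(32),                          # second LSTM returns only its last state
        tf.keras.layers.Dense(1),                          # hourly PV power forecast
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

base_model = build_stacked_lstm()
# base_model.fit(X_source, y_source, ...)  # pre-train on the baseline (source) PV plant
```

Pre-training such a model on the baseline plant produces the saved weights that the three TL strategies described below reuse in different ways.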

“TL is exploited both for weight initialization of the LSTM model and for feature extraction, using different freezing approaches,” they explained. “LSTM depends on weight updating between the neurons of the deep learning model, allowing the creation of pre-trained models. Thus, it facilitates pre-training the model on the baseline PV in order to utilize the saved weights of the pre-trained model and apply TL on the target PV.”

The stacked LSTM model considers temperature, humidity, solar irradiance, PV production, a one-hot encoding of the month of the year, and a sine/cosine transformation of the hour of day. The three strategies differed in how they treated the pre-trained layers: keeping a layer's weights fixed, fine-tuning them on the target domain data, or training them from scratch on the target domain data.
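As a rough illustration of the calendar features mentioned above (the exact preprocessing pipeline is not given in the article, so this is an assumption), the month can be one-hot encoded and the hour of day mapped onto the unit circle so that hour 23 and hour 0 end up close together:

```python
import numpy as np
import pandas as pd

def encode_calendar_features(timestamps: pd.DatetimeIndex) -> pd.DataFrame:
    """One-hot encode the month and apply a sine/cosine transform to the hour of day."""
    df = pd.DataFrame(index=timestamps)
    # 12 binary month columns (one-hot encoding of the month of the year)
    for m in range(1, 13):
        df[f"month_{m}"] = (timestamps.month == m).astype(int)
    # cyclic encoding of the hour of day
    hour = timestamps.hour.to_numpy()
    df["hour_sin"] = np.sin(2 * np.pi * hour / 24)
    df["hour_cos"] = np.cos(2 * np.pi * hour / 24)
    return df

# Example: encode_calendar_features(pd.date_range("2020-01-01", periods=48, freq="h"))
```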

The “TL Strategy 1” approach is reportedly able to extract features from the source domain and carry them to the target domain.

“This is a widely used scheme when treating images, where the first layers are used as feature extraction layers and the last layers are used to adapt to new data,” the researchers explained.
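A minimal Keras sketch of that idea, reusing the hypothetical base_model from the earlier snippet (the split between frozen and fine-tuned layers is an assumption): the LSTM layers are frozen as feature extractors, while the last layer keeps its source-domain weights and is fine-tuned on the target plant's data.

```python
import tensorflow as tf

def apply_strategy_1(base_model: tf.keras.Model) -> tf.keras.Model:
    """TL Strategy 1 (sketch): frozen feature-extraction layers, last layer
    initialized from the source domain and fine-tuned on the target domain."""
    for layer in base_model.layers[:-1]:
        layer.trainable = False               # keep the pre-trained weights fixed
    base_model.layers[-1].trainable = True    # last layer adapts to the new plant
    base_model.compile(optimizer="adam", loss="mse")  # re-compile after changing trainability
    return base_model

# tl_model = apply_strategy_1(base_model)
# tl_model.fit(X_target, y_target, ...)  # fine-tune on the target PV plant's limited data
```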

With “TL Strategy 2,” the weights of all layers of the TL model are initialized based on data from the source domain and then fine-tuned on data from the target domain.

“This approach is extensively used with problems where there is an abundance of data in the source domain, but a scarcity of data in the target domain,” the group said.
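In Keras terms, one way to express this (again a sketch, assuming the pre-trained base_model from the earlier snippet) is to clone the architecture, copy over all source-domain weights and leave every layer trainable:

```python
import tensorflow as tf

def apply_strategy_2(base_model: tf.keras.Model) -> tf.keras.Model:
    """TL Strategy 2 (sketch): every layer is initialized with the source-domain
    weights and all weights are fine-tuned on the target-domain data."""
    tl_model = tf.keras.models.clone_model(base_model)  # same architecture, fresh variables
    tl_model.set_weights(base_model.get_weights())      # weight initialization from the source PV
    for layer in tl_model.layers:
        layer.trainable = True                          # nothing is frozen
    tl_model.compile(optimizer="adam", loss="mse")
    return tl_model

# tl_model = apply_strategy_2(base_model)
# tl_model.fit(X_target, y_target, ...)
```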

In “TL Strategy 3,” the initial layers of the TL model are frozen and the last layer is trained from scratch: the last layer of the base model is popped and a new layer is added after the frozen layers.

“This approach is similar to the first one, but it differs in the fact that the weights of the last layer are not initialized based on data from the source domain,” the academics said.
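A corresponding Keras sketch (again assuming the hypothetical base_model from above): freeze everything carried over from the source plant, pop the pre-trained output layer and train a freshly initialized one on the target data.

```python
import tensorflow as tf

def apply_strategy_3(base_model: tf.keras.Sequential) -> tf.keras.Sequential:
    """TL Strategy 3 (sketch): initial layers frozen, last layer replaced by a
    new one whose weights are trained from scratch on the target domain."""
    for layer in base_model.layers:
        layer.trainable = False               # freeze all layers carried over from the source PV
    base_model.pop()                          # drop the pre-trained output layer
    base_model.add(tf.keras.layers.Dense(1))  # new output layer, randomly initialized
    base_model.compile(optimizer="adam", loss="mse")
    return base_model

# tl_model = apply_strategy_3(base_model)
# tl_model.fit(X_target, y_target, ...)  # only the new last layer is trained
```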

The researchers used the three strategies to forecast the hourly production of six solar plants at different locations in Portugal, and compared their effectiveness to that of conventional non-TL models.

“The findings of the experimental application indicate that all three TL strategies significantly outperform the non-TL approach in terms of forecasting accuracy, evaluated by several error indexes,” the scientists said. “Results indicate that TL models significantly outperform the conventional one, achieving 12.6% accuracy improvement in terms of root-mean-square error (RMSE) and 16.3% in terms of forecast skill index with one year of training data.”
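For reference, the two headline metrics can be computed as below. The reference model behind the paper's forecast skill index is not specified in this article, so the snippet simply takes a reference forecast (often a persistence model) as an argument.

```python
import numpy as np

def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Root-mean-square error of the forecasts."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def forecast_skill(y_true: np.ndarray, y_pred: np.ndarray, y_ref: np.ndarray) -> float:
    """Forecast skill index: 1 - RMSE_model / RMSE_reference, i.e. the relative
    improvement over the reference forecast."""
    return 1.0 - rmse(y_true, y_pred) / rmse(y_true, y_ref)
```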

They introduced the model in “Transfer learning strategies for solar power forecasting under data scarcity,” which was recently published in Scientific Reports.

“This study is the first step towards enhancing our understanding of the impact of TL on solar plant power prediction,” they concluded. “Future work will concentrate on assessing the impact of the base model’s training data volume, investigating whether training base models with more data or with data from different solar plants could further improve forecasting accuracy.”
