LIU Ming, DONG Minggang, JING Chao
CLC number: TP183; TP301.6
Document code: A
Abstract: In order to improve the diversity of the population and the convergence performance of the algorithm, a Scheduled competition learning based Multi-Objective Particle Swarm Optimization (SMOPSO) algorithm was proposed. The multi-objective particle swarm optimization algorithm and the competition learning mechanism were combined, and the competition learning mechanism was applied every certain number of iterations to maintain the diversity of the population. Meanwhile, to improve the convergence of the algorithm without using a global best external archive, elite particles were selected from the current swarm, and then a global best particle was randomly selected from these elite particles. The performance of the proposed algorithm was verified on 21 benchmarks and compared with 8 algorithms, such as the Multi-objective Particle Swarm Optimization algorithm based on Decomposition (MPSOD), the Competitive Mechanism based multi-Objective Particle Swarm Optimizer (CMOPSO) and the Reference Vector guided Evolutionary Algorithm (RVEA). The experimental results show that the proposed algorithm can obtain a more uniform Pareto front and a smaller Inverted Generational Distance (IGD).
Key words: multi-objective optimization; Particle Swarm Optimization (PSO); scheduled competition; competitive learning mechanism; global best selection strategy
0 Introduction
Optimization problems are ubiquitous in real life and can often be formulated as multi-objective optimization problems, for which evolutionary algorithms are an effective solution approach. Well-known multi-objective evolutionary algorithms include the fast elitist Non-dominated Sorting Genetic Algorithm II (NSGA-II) [1], the improved Strength Pareto Evolutionary Algorithm (SPEA2) [2], and the Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D) [3]. Particle Swarm Optimization (PSO) [4] also belongs to the family of evolutionary algorithms. Originally designed for single-objective optimization, PSO is simple to implement, computationally cheap, and efficient; it has therefore been widely applied to multi-objective optimization, and a large number of multi-objective PSO algorithms have been proposed.
在多目標(biāo)粒子群算法中,種群的多樣性和算法的收斂性是影響算法性能的兩個關(guān)鍵因素[5]。為了提高算法的收斂性,Hu等[5]提出了基于并行網(wǎng)格坐標(biāo)系的自適應(yīng)多目標(biāo)粒子群算法,通過平行單元坐標(biāo)系統(tǒng)反饋回來的信息來進(jìn)行動態(tài)調(diào)整,很好地提升了算法的收斂性。Zhang等[6]提出了一種基于競爭機制[7]的多目標(biāo)粒子群(Competitive Mechanism based multi-Objective Particle Swarm Optimizer, CMOPSO)算法,能夠很好地提升算法的收斂性,并且不需要外部存檔,而是在每一代種群中選取10個精英粒子作為全局最優(yōu)粒子集合,再從10個精英粒子中隨機選取2個精英粒子,運用競爭機制對2個粒子進(jìn)行比較,選取優(yōu)勝的粒子作為全局最優(yōu)粒子來引導(dǎo)種群的進(jìn)化。韓敏等[8]提出了基于高斯混沌變異和精英學(xué)習(xí)的自適應(yīng)多目標(biāo)粒子群算法,提出了收斂性貢獻(xiàn)這一概念作為自適應(yīng)參數(shù)的依據(jù)和精英學(xué)習(xí)方法,很好地提升了算法的收斂性。為了提升種群的多樣性,Cheng等[9]提出了一個有效的多目標(biāo)粒子群教與學(xué)優(yōu)化(Particle Swarm Optimization and Teaching-Learning-Based Optimization, PSO-TLBO)算法, 即將粒子群算法和教與學(xué)算法結(jié)合,主要應(yīng)用粒子群算法來更新種群,同時每隔一定代數(shù)便運用一次教與學(xué)算法,能很好地保持種群的多樣性。Balling[10]提出了Maximin策略,該策略能夠自動地“獎勵”分散的解,“懲罰”聚集的解,整體偏向于分布分散的解,能很好地提升種群的多樣性。
In summary, researchers have achieved many excellent results on multi-objective PSO. However, the tendency of multi-objective PSO to converge prematurely or become trapped in local optima, caused by its fast convergence, remains unresolved, so improving population diversity and algorithm convergence is still a field worth studying. This paper therefore proposes a Scheduled competition learning based Multi-Objective Particle Swarm Optimization (SMOPSO) algorithm.
The main contributions of this paper are as follows:
1) A scheduled competition learning mechanism is proposed, which improves population diversity well. The mechanism combines multi-objective PSO with competition learning: every certain number of iterations [10], the swarm is updated once by the competition learning mechanism, i.e., particles are randomly paired and compared; the loser of each pair learns from the winner, while the winner learns from its stored personal best. In this way every particle has a chance to become the global best, which improves population diversity well.
2) A new global-best selection strategy is proposed, which improves convergence well. Particles are jointly ranked by non-dominated sorting [17] and crowding distance [1], the top 10 are taken as elite particles [9], and one of them is chosen at random as the global best to guide the update of the other particles. Moreover, this strategy needs no external archive of global-best particles, which greatly reduces the time complexity of the algorithm.
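The two contributions can be sketched as follows, under illustrative assumptions (a scalar position and a scalar fitness stand in for the multi-objective update in the competition step, and domination count stands in for full non-dominated sorting with a crowding-distance tie-break; all names are illustrative, not the paper's exact formulas):

```python
import random

def dominates(f, g):
    """True if objective vector f Pareto-dominates g (minimization)."""
    return all(a <= b for a, b in zip(f, g)) and any(a < b for a, b in zip(f, g))

def select_gbest(objs, n_elite=10):
    """Global-best selection of contribution 2): rank particles (here
    simply by domination count), keep the n_elite best as elites, and
    return the index of one elite chosen uniformly at random."""
    dom_count = [sum(dominates(g, f) for g in objs) for f in objs]
    order = sorted(range(len(objs)), key=lambda i: dom_count[i])
    return random.choice(order[:n_elite])

def competition_learning_step(swarm, fitness, step=0.5):
    """Competition-learning update of contribution 1): particles (dicts
    with a scalar position 'x' and personal best 'pbest') are randomly
    paired; the loser of each pair moves toward the winner, and the
    winner moves toward its own personal best (in-place update)."""
    random.shuffle(swarm)
    for i in range(0, len(swarm) - 1, 2):
        a, b = swarm[i], swarm[i + 1]
        winner, loser = (a, b) if fitness(a['x']) <= fitness(b['x']) else (b, a)
        loser['x'] += step * (winner['x'] - loser['x'])        # loser learns from winner
        winner['x'] += step * (winner['pbest'] - winner['x'])  # winner learns from pbest
        for p in (a, b):                                       # refresh personal bests
            if fitness(p['x']) < fitness(p['pbest']):
                p['pbest'] = p['x']
    return swarm
```

Because personal bests are only replaced when they improve, the best personal best in the swarm never worsens across competition-learning steps, while the random pairing lets any particle act as an exemplar, which is the source of the diversity gain.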
1 相關(guān)工作
1.1 PSO算法
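The canonical PSO update reviewed in this section can be sketched as below (the inertia weight w and acceleration coefficients c1, c2 are the usual parameters; the default values are common illustrative settings, not prescribed by the paper):

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One canonical PSO velocity/position update for a particle with
    position x, velocity v, personal best pbest and global best gbest
    (all lists of floats)."""
    r1, r2 = random.random(), random.random()   # fresh random factors
    new_v = [w * vi + c1 * r1 * (pi - xi) + c2 * r2 * (gi - xi)
             for xi, vi, pi, gi in zip(x, v, pbest, gbest)]
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v
```

In a multi-objective setting, the choice of gbest is exactly what the leader-selection strategies discussed in this paper replace.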
4 結(jié)語
To improve population diversity and algorithm convergence, this paper proposed a multi-objective PSO algorithm based on a scheduled competition learning mechanism, which combines multi-objective PSO with competition learning and accounts for both diversity and convergence. First, the proposed global-best selection strategy improves convergence well; the periodic use of the competition learning mechanism then improves population diversity. Moreover, the algorithm needs no external archive of global-best particles, which greatly reduces its time complexity. Experiments show that the proposed algorithm strikes a good balance between convergence and diversity and clearly improves performance.
Future research directions for SMOPSO are as follows: 1) as real-world multi-objective optimization problems grow ever more complex, extending the algorithm toward many-objective optimization is a direction well worth studying; 2) a local search strategy could be introduced to raise the convergence accuracy and thereby further improve the performance of the algorithm.
Acknowledgements: The implementation of the proposed algorithm uses PlatEMO, the open-source evolutionary multi-objective optimization platform developed by the BIMK group at Anhui University. The authors express their sincere thanks to the BIMK group for its help.
參考文獻(xiàn):
[1] DEB K, PRATAP A, AGARWAL S, et al. A fast and elitist multi-objective genetic algorithm: NSGA-II [J]. IEEE Transactions on Evolutionary Computation, 2002, 6(2):182-197.
[2] ZITZLER E, LAUMANNS M, THIELE L. SPEA2: improving the strength Pareto evolutionary algorithm [R/OL]. [2018-03-09]. https://ci.nii.ac.jp/naid/10017663175.
[3] ZHANG Q, LI H. MOEA/D: a multiobjective evolutionary algorithm based on decomposition [J]. IEEE Transactions on Evolutionary Computation, 2007, 11(6): 712-731.
[4] KENNEDY J, EBERHART R C. Particle swarm optimization [C]// Proceedings of the 1995 IEEE International Conference on Neural Networks. Piscataway: IEEE, 1995: 1942-1948.
[5] HU W, YEN G G. Adaptive multiobjective particle swarm optimization based on parallel cell coordinate system [J]. IEEE Transactions on Evolutionary Computation, 2015, 19(1): 1-18.
[6] ZHANG X, ZHENG X, CHENG R, et al. A competitive mechanism based multi-objective particle swarm optimizer with fast convergence [J]. Information Sciences, 2018, 427: 63-76.
[7] CHENG R, JIN Y. A competitive swarm optimizer for large scale optimization [J]. IEEE Transactions on Cybernetics, 2015, 45(2): 191-204.
[8] HAN M, HE Y. Adaptive multi-objective particle swarm optimization with Gaussian chaotic mutation and elite learning [J]. Control and Decision, 2016, 31(8): 1372-1378. (in Chinese)
[9] CHENG T, CHEN M, FLEMING P J, et al. An effective PSO-TLBO algorithm for multi-objective optimization [C]// Proceedings of the 2016 IEEE Congress on Evolutionary Computation. Piscataway: IEEE, 2016: 3977-3982.
[10] BALLING R. The maximin fitness function, multi-objective city and regional planning [C]// EMO 2003: Proceedings of the 2003 International Conference on Evolutionary Multi-Criterion Optimization, LNCS 2632. Berlin: Springer, 2003: 1-15.
[11] ZHANG X, TIAN Y, CHENG R, et al. An efficient approach to nondominated sorting for evolutionary multiobjective optimization [J]. IEEE Transactions on Evolutionary Computation, 2015, 19(2): 201-213.
[12] LIN Q, LI J, DU Z, et al. A novel multi-objective particle swarm optimization with multiple search strategies [J]. European Journal of Operational Research, 2015, 247(3): 732-744.
[13] TIAN Y, CHENG R, ZHANG X, et al. PlatEMO: a Matlab platform for evolutionary multi-objective optimization [Educational Forum][J]. IEEE Computational Intelligence Magazine, 2017, 12(4): 73-87.
[14] DAI C, WANG Y, YE M. A new multi-objective particle swarm optimization algorithm based on decomposition [J]. Information Sciences, 2015, 325: 541-557.
[15] LIN Q, LIU S, ZHU Q, et al. Particle swarm optimization with a balanceable fitness estimation for many-objective optimization problems [J]. IEEE Transactions on Evolutionary Computation, 2018, 22(1): 32-46.
[16] LI M, YANG S, LIU X. Bi-goal evolution for many-objective optimization problems [J]. Artificial Intelligence, 2015, 228: 45-65.
[17] CHENG R, JIN Y, OLHOFER M, et al. A reference vector guided evolutionary algorithm for many-objective optimization [J]. IEEE Transactions on Evolutionary Computation, 2016, 20(5): 773-791.