Zheng Xiulian
Abstract: Machine learning is essentially a multi-objective optimization problem. Three multi-objective optimization approaches are used to solve machine learning problems: the weighted-formula (scalarization) approach, the lexicographic approach, and the Pareto-based approach, of which the Pareto-based approach is currently the most widely used and the most actively studied. This article surveys the strengths and weaknesses of the multi-objective optimization algorithms used in machine learning, and concludes that the Pareto-based approach effectively overcomes the drawbacks of the first two approaches and is the best of the three.
Keywords: multi-objective optimization; Pareto; machine learning
CLC number: TP18    Document code: A    Article ID: 1009-3044(2012)15-3689-02
Multi-Objective Optimization Algorithms in Machine Learning
ZHENG Xiu-lian
(Taizhou Vocational School of Mechanical and Electrical Technology, Taizhou 225300, China)
Abstract: Machine learning is essentially a multi-objective optimization problem. Three very different approaches can be used to cope with it: the conventional weighted-formula approach, the lexicographic approach, and the Pareto approach; at present the Pareto approach is the most widely used. This article provides an overview of the multi-objective optimization algorithms used in machine learning and shows that the Pareto-based method effectively overcomes the drawbacks of the first two approaches and is the best of the three.
Key words: Multi-objective optimization; Pareto; Machine learning
Machine learning is essentially a multi-objective optimization problem. Every machine learning method involves two steps: constructing a candidate model, and then selecting a learning algorithm to estimate the model's parameters from a sample data set. Usually, model construction and parameter estimation are carried out together in an iterative process, but in many cases the model is constructed only once, based on intuition and experience. In other words, the user builds the model from experience and then applies some learning algorithm to estimate its parameters.
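To make the two steps concrete, the following minimal Python sketch (illustrative only, not taken from the paper; the data set and the polynomial model are assumptions chosen for the example) first fixes a candidate model by hand and then lets a learning algorithm, here ordinary least squares, estimate its parameters from the sample data.

import numpy as np

# Sample data set: y = 2x + 1 plus a little noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.05, size=x.size)

# Step 1: model construction -- the user fixes the model family by intuition
# and experience (here a degree-1 polynomial, i.e. a straight line).
degree = 1

# Step 2: parameter estimation -- a learning algorithm (ordinary least squares)
# fits the model's parameters to the sample data.
coeffs = np.polyfit(x, y, degree)
print("estimated parameters (slope, intercept):", coeffs)  # close to [2.0, 1.0]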
At present, three multi-objective optimization approaches are mainly used to solve machine learning problems: the weighted-formula (scalarization) approach [2], the lexicographic approach [3], and the Pareto-based approach [4-7]. The weighted-formula approach converts the multi-objective problem into a single-objective one. It is simple to implement but has many drawbacks: the conversion requires assigning weights to the different objectives, and these weights are usually set by experience or by repeated trials, which makes them subjective; moreover, the conversion merges objectives with different meanings into one function, so the final objective function is meaningless to the user. The lexicographic approach assigns priorities to the objectives, and the priorities determine the order in which the objectives are optimized. It avoids merging objectives with different meanings, but it introduces new parameters, and how to determine the priority of each objective remains an open problem. The Pareto-based approach uses multi-objective evolutionary algorithms to solve the multi-objective optimization problem: each objective is treated separately, and the algorithm searches for the set of non-dominated solutions that optimize all objectives simultaneously. The user can then select the solutions he or she needs from this set, gaining a deeper understanding of the problem.
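The following minimal Python sketch (illustrative only; the candidate models, their objective values, and the weights are assumptions made up for the example) contrasts the three approaches on the same candidates, with two objectives, error and model complexity, both to be minimized.

candidates = {                      # model name -> (error, complexity)
    "A": (0.10, 8.0),
    "B": (0.15, 4.0),
    "C": (0.30, 2.0),
    "D": (0.32, 7.0),               # dominated by B: worse error and worse complexity
}

# Weighted-formula approach: the user must choose weights, and the combined
# score no longer has a direct meaning in terms of either objective.
w_error, w_complexity = 0.7, 0.3
best_weighted = min(candidates,
                    key=lambda k: w_error * candidates[k][0] + w_complexity * candidates[k][1])

# Lexicographic approach: error is given strictly higher priority than
# complexity (Python compares the objective tuples lexicographically).
best_lex = min(candidates, key=lambda k: candidates[k])

# Pareto approach: keep every candidate that no other candidate dominates,
# i.e. no other candidate is at least as good in both objectives and strictly
# better in at least one.
def dominates(p, q):
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

pareto_set = [k for k in candidates
              if not any(dominates(candidates[j], candidates[k])
                         for j in candidates if j != k)]

print("weighted-formula choice:", best_weighted)   # C with these particular weights
print("lexicographic choice:", best_lex)           # A (lowest error)
print("Pareto non-dominated set:", pareto_set)     # ['A', 'B', 'C']

In a real learning task the candidate solutions would be generated by a multi-objective evolutionary algorithm rather than enumerated by hand, and the user would pick a final model from the non-dominated set according to his or her own trade-off between the objectives.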
[3] Kaufmann K A, Michalski R S. Learning from inconsistent and noisy data: the AQ18 approach[C]//Foundations of Intelligent Systems (Proc. ISMIS-99), LNAI 1609. Springer, 1999: 411-419.
[4] Kim Y, Street W N, Menczer F. Feature selection in unsupervised learning via evolutionary search[C]//Proc. 6th ACM SIGKDD Int. Conf. on Knowledge Discovery and Data Mining (KDD-2000). ACM, 2000: 365-369.
[5] de la Iglesia B, Philpott M S, Bagnall A J, Rayward-Smith V J. Data mining rules using multi-objective evolutionary algorithms[C]. IEEE, 2003.
[6] Ishibuchi H, Yamamoto T. Fuzzy rule selection by multi-objective genetic local search algorithms and rule evaluation measures in data mining[D]. Department of Industrial Engineering, Osaka Prefecture University, 2004.
[7] Abbass H A. A memetic Pareto approach to artificial neural networks[C]//Proc. 14th Australian Joint Conference on Artificial Intelligence, 2001: 1-12.