<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "JATS-journalpublishing1.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">JDS</journal-id>
<journal-title-group><journal-title>Journal of Data Science</journal-title></journal-title-group>
<issn pub-type="epub">1683-8602</issn><issn pub-type="ppub">1680-743X</issn><issn-l>1680-743X</issn-l>
<publisher>
<publisher-name>School of Statistics, Renmin University of China</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">JDS1106</article-id>
<article-id pub-id-type="doi">10.6339/23-JDS1106</article-id>
<article-categories><subj-group subj-group-type="heading">
<subject>Computing in Data Science</subject></subj-group></article-categories>
<title-group>
<article-title>Tuning Support Vector Machines and Boosted Trees Using Optimization Algorithms</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<contrib-id contrib-id-type="orcid">https://orcid.org/0000-0002-6048-4700</contrib-id>
<name><surname>Lundell</surname><given-names>Jill F.</given-names></name><email xlink:href="mailto:jlundell@ds.dfci.harvard.edu">jlundell@ds.dfci.harvard.edu</email><xref ref-type="aff" rid="j_jds1106_aff_001">1</xref><xref ref-type="fn" rid="cor1">∗</xref>
</contrib>
<aff id="j_jds1106_aff_001"><label>1</label><institution>Department of Data Science, Dana-Farber Cancer Institute, Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA</institution>, <country>USA</country></aff>
</contrib-group>
<author-notes>
<corresp id="cor1"><label>∗</label>Email: <ext-link ext-link-type="uri" xlink:href="mailto:jlundell@ds.dfci.harvard.edu">jlundell@ds.dfci.harvard.edu</ext-link>.</corresp>
</author-notes>
<pub-date pub-type="ppub"><year>2024</year></pub-date><pub-date pub-type="epub"><day>5</day><month>7</month><year>2023</year></pub-date><volume>22</volume><issue>4</issue><fpage>575</fpage><lpage>590</lpage><supplementary-material id="S1" content-type="document" xlink:href="jds1106_s001.pdf" mimetype="application" mime-subtype="pdf">
<caption>
<title>Supplementary Material</title>
<p>The following supplementary material is available:</p>
<p><bold>Appendixes</bold></p>
<p><bold>A:</bold> Description of optimization algorithms</p>
<p><bold>B:</bold> Performance tables</p>
<p><bold>R-package for EZtune:</bold> The R package EZtune implements automatic tuning of SVMs, GBMs, and AdaBoost using the Hooke-Jeeves and genetic algorithms. The package also contains the Lichen and Mullein datasets used in the examples in the article. The package is available on CRAN, and updates are available at <uri>https://github.com/jillbo1000/EZtune</uri> (GNU zipped tar file).</p>
<p><bold>Code and data for creating grids and performing optimization tests:</bold> The code and data used to create the error and time response surfaces, along with the code for testing the optimization algorithms, are available at <uri>https://github.com/jillbo1000/autotune</uri>.</p>
</caption>
</supplementary-material><history><date date-type="received"><day>17</day><month>3</month><year>2023</year></date><date date-type="accepted"><day>29</day><month>5</month><year>2023</year></date></history>
<permissions><copyright-statement>2024 The Author(s). Published by the School of Statistics and the Center for Applied Statistics, Renmin University of China.</copyright-statement><copyright-year>2024</copyright-year>
<license license-type="open-access" xlink:href="https://creativecommons.org/licenses/by/4.0/">
<license-p>Open access article under the <ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">CC BY</ext-link> license.</license-p></license></permissions>
<abstract>
<p>Statistical learning methods have grown in popularity in recent years. Many of these procedures have tuning parameters that must be selected carefully for models to perform well. Tuning research has been extensive for neural networks but remains sparse for many other learning methods. We examined the behavior of the tuning parameters for support vector machines, gradient boosting machines, and AdaBoost in both classification and regression settings. We used grid search to identify ranges of the tuning parameters where good models can be found across many different datasets. We then explored different optimization algorithms for selecting a model over the tuning parameter space. Models selected by each optimization algorithm were compared to the best models obtained through grid search in order to identify well-performing algorithms. This information was used to create an R package, <monospace>EZtune</monospace>, that automatically tunes support vector machines and boosted trees.</p>
</abstract>
<kwd-group>
<label>Keywords</label>
<kwd>machine learning</kwd>
<kwd>optimization</kwd>
<kwd>R programming</kwd>
</kwd-group>
</article-meta>
</front>
</article>
