<?xml version='1.0' encoding='UTF-8'?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/ http://www.openarchives.org/OAI/2.0/OAI-PMH.xsd">
  <responseDate>2026-03-14T02:10:45Z</responseDate>
  <request metadataPrefix="oai_dc" identifier="oai:kanazawa-u.repo.nii.ac.jp:00007570" verb="GetRecord">https://kanazawa-u.repo.nii.ac.jp/oai</request>
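  <!-- Illustrative aside, not part of the harvested record: a minimal
       Python sketch showing how the GetRecord request above could be
       issued and its Dublin Core payload read. The endpoint, verb, and
       identifier are taken from the request element; everything else
       (variable names, the printed fields) is assumption.

       import urllib.parse
       import urllib.request
       import xml.etree.ElementTree as ET

       DC = "{http://purl.org/dc/elements/1.1/}"

       params = {
           "verb": "GetRecord",
           "metadataPrefix": "oai_dc",
           "identifier": "oai:kanazawa-u.repo.nii.ac.jp:00007570",
       }
       url = "https://kanazawa-u.repo.nii.ac.jp/oai?" + urllib.parse.urlencode(params)

       with urllib.request.urlopen(url) as resp:
           root = ET.fromstring(resp.read())

       # Print every Dublin Core element found in the record's metadata block.
       for elem in root.iter():
           if elem.tag.startswith(DC):
               print(elem.tag[len(DC):], elem.text)
  -->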
  <GetRecord>
    <record>
      <header>
        <identifier>oai:kanazawa-u.repo.nii.ac.jp:00007570</identifier>
        <datestamp>2024-06-20T06:18:45Z</datestamp>
        <setSpec>934:935:936</setSpec>
      </header>
      <metadata>
        <oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
          <dc:title>A selective learning algorithm for nonlinear synapses in multilayer neural networks</dc:title>
          <dc:creator>Nakayama, Kenji</dc:creator>
          <dc:creator>Hirano, Akihiro</dc:creator>
          <dc:creator>Fusakawa, M.</dc:creator>
          <dc:description>In multilayer neural networks, network size reduction and fast convergence are important. For this purpose, trainable activation functions and nonlinear synapses have been proposed. When high-order polynomials are used for nonlinearity, the number of terms in the polynomial becomes very large for a high-dimensional input. This causes very complicated networks and slow convergence. In this paper, a method to select the useful terms in the polynomial during the learning process is proposed. The method is based on the genetic algorithm (GA) and combines internal information, the magnitude of the connection weights, to select the genes for the next generation. A mechanism for pruning the terms is inherently included. Many examples demonstrate the usefulness of the proposed method compared with the ordinary GA method. Convergence is stable, and the number of selected terms is greatly reduced.</dc:description>
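          <!-- Illustrative aside, not part of the record: the abstract
               describes GA-based term selection guided by connection
               weight magnitudes. A minimal Python sketch of that idea
               follows; all names, operators, and the pruning threshold
               are assumptions, not the authors' published procedure.

               import random

               def next_generation(population, crossover_p=0.5, prune_eps=1e-3):
                   """population: list of (mask, weight_mags, error) tuples.
                   mask is a 0/1 list over candidate polynomial terms,
                   weight_mags holds the trained |w| of each term, and
                   error is that network's training error."""
                   # Rank by training error; keep the better half as parents.
                   ranked = sorted(population, key=lambda ind: ind[2])
                   parents = ranked[: max(2, len(ranked) // 2)]
                   children = []
                   while len(children) < len(population):
                       (ma, wa, _), (mb, wb, _) = random.sample(parents, 2)
                       child = []
                       for j in range(len(ma)):
                           gene, mag = (ma[j], wa[j]) if random.random() < crossover_p else (mb[j], wb[j])
                           # "Internal information": a selected term whose
                           # trained weight stayed tiny is dropped, which
                           # inherently prunes useless polynomial terms.
                           child.append(0 if gene and mag < prune_eps else gene)
                       children.append(child)
                   return children
          -->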
          <dc:type>conference paper</dc:type>
          <dc:publisher>IEEE (Institute of Electrical and Electronics Engineers)</dc:publisher>
          <dc:date>2001-07-01</dc:date>
          <dc:description>Version of Record (VoR)</dc:description>
          <dc:format>application/pdf</dc:format>
          <dc:source>Proceedings of the International Joint Conference on Neural Networks, vol. 3, pp. 1704-1709</dc:source>
          <dc:identifier>https://kanazawa-u.repo.nii.ac.jp/record/7570/files/TE-PR-NAKAYAMA-K-1704.pdf</dc:identifier>
          <dc:identifier>http://hdl.handle.net/2297/6829</dc:identifier>
          <dc:identifier>https://kanazawa-u.repo.nii.ac.jp/records/7570</dc:identifier>
          <dc:language>eng</dc:language>
        </oai_dc:dc>
      </metadata>
    </record>
  </GetRecord>
</OAI-PMH>
