RISS Academic Research Information Service

      • A Study on Associative Memory in Nonmonotone Neuron Models

        박철영, 中島康治 · Daegu University Institute of Science and Technology · 1998 · 科學技術硏究 Vol.5 No.1

        To improve the performance of a neural network as an associative memory or as an optimization problem solver, we propose two models using a nonmonotone analog neuron model that differs from traditional ones. Using the proposed model, we construct an energy function that can have two minima. It is shown that our model can recall embedded patterns successfully. We also discuss the simulation method and evaluate the performance of the model by numerical simulations. The memory capacity depends strongly on the shape of the input-output function as well as its sharpness. This model should be useful for devising a class of models for associative memory of temporal patterns.
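The recall behavior described in the abstract can be sketched with a Hopfield-style network whose activation is nonmonotone (its output reverses sign once the input magnitude exceeds a threshold). This is a minimal illustrative sketch, not the paper's exact analog model: the piecewise form of the activation, the threshold `h=1.5`, and the network size are all assumptions.

```python
import numpy as np

def nonmonotone(u, h=1.5):
    """Nonmonotone activation: sign(u) while |u| <= h, reversed sign beyond h.
    (Simplified discrete stand-in for the paper's analog input-output function;
    the threshold h is an illustrative assumption.)"""
    return np.where(np.abs(u) <= h, np.sign(u), -np.sign(u))

def store(patterns):
    """Hebbian weight matrix over bipolar patterns, zero self-connections."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, x, steps=10, h=1.5):
    """Iterate the network dynamics from a probe state x."""
    for _ in range(steps):
        x = nonmonotone(W @ x, h)
    return x

patterns = np.array([[1, -1, 1, 1, -1, -1, 1, -1]], dtype=float)
W = store(patterns)
noisy = patterns[0].copy()
noisy[0] *= -1          # corrupt one component of the stored pattern
print(recall(W, noisy))  # converges back to the embedded pattern
```

With one stored pattern all local fields stay below the threshold, so recall reduces to ordinary sign dynamics and the corrupted probe is restored in a single step; the nonmonotone branch matters only in the multi-pattern regime the paper studies.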

      • Construction of Logic Functions Using Quantized Connection Neural Networks

        박철영, 中島康治 · Daegu University Institute of Science and Technology · 1998 · 科學技術硏究 Vol.5 No.2

        We discuss the performance of neural networks with interconnections quantized to +1, -1, and 0 (Quantized Connection Neural Networks: QCNN), and how to choose the connection weights for these networks from a training set of examples. The basic characteristics of the networks and an algorithm for deciding the connection weights are presented. Using the algorithm, we obtain a layered QCNN that solves the parity problem for an arbitrary number N of inputs. The layered QCNN has a single hidden layer and no bias input when N is odd; when N is even, the network requires only one additional input as a bias. Networks that perform arbitrary logic functions can be designed on the basis of the algorithm, in a way slightly different from that used for N-parity problems. The network may be expected to generalize as well as a network trained with learning rules, because the connection weights can be decided even when the given training set is small. Learning the connection weights would otherwise take a rather long time, but in our case they can be decided without learning; hence we may expect applications of QCNN to real-time processing.
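The single-hidden-layer parity network mentioned in the abstract can be illustrated with the classic hand-built construction below, where every input-to-hidden and hidden-to-output weight lies in {+1, -1}. This is an illustrative construction, not the paper's weight-decision algorithm: the half-integer thresholds used here are a simplification of whatever bias scheme the paper's quantized networks employ.

```python
import itertools

def step(u):
    """Hard threshold unit."""
    return 1 if u > 0 else 0

def parity_net(x):
    """N-input parity via one hidden layer with weights in {+1, -1}.
    Hidden unit k fires iff at least k inputs are on (all input weights +1);
    hidden-to-output weights alternate +1, -1, so the output counts the
    active hidden units modulo 2. Thresholds are half-integers here for
    clarity (an assumption, not the paper's bias scheme)."""
    n = len(x)
    s = sum(x)  # every input->hidden weight is +1
    hidden = [step(s - k + 0.5) for k in range(1, n + 1)]
    out = sum(((-1) ** k) * h for k, h in enumerate(hidden))  # +1, -1, +1, ...
    return step(out - 0.5)

# Check the construction against true parity for all 3-bit inputs.
for bits in itertools.product([0, 1], repeat=3):
    assert parity_net(list(bits)) == sum(bits) % 2
print("3-input parity OK")
```

If m inputs are on, exactly the first m hidden units fire and the alternating output sum telescopes to 1 when m is odd and 0 when m is even, which is precisely the parity function; the construction works unchanged for any N.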
