FANN: Fast Artificial Neural Network Library by 바죠

FANN      Fast Artificial Neural Network Library
http://leenissen.dk/fann/wp/language-bindings/
https://en.wikipedia.org/wiki/Fast_Artificial_Neural_Network
http://fann.sourceforge.net/fann_en.pdf
https://jaist.dl.sourceforge.net/project/fann/fann_doc/1.0/fann_doc_complete_1.0.pdf

FANN

Fast Artificial Neural Network Library (FANN) is a free, open-source neural network library that implements multilayer artificial neural networks in C, with support for both fully connected and sparsely connected networks. Cross-platform execution in both fixed and floating point is supported, and the library includes a framework for easy handling of training data sets. It is easy to use, versatile, well documented, and fast. Bindings to more than 20 programming languages are available, and an easy-to-read introduction article and a reference manual accompany the library, with examples and recommendations on how to use it. Several graphical user interfaces are also available.

FANN Features:

  • Multilayer Artificial Neural Network Library in C
  • Backpropagation training (RPROP, Quickprop, Batch, Incremental)
  • Evolving-topology training that dynamically builds and trains the ANN (Cascade2)
  • Easy to use (create, train and run an ANN with just three function calls; see the minimal C sketch after the bindings table below)
  • Fast (up to 150 times faster execution than other libraries)
  • Versatile (possible to adjust many parameters and features on the fly)
  • Well documented (an easy-to-read introduction article, a thorough reference manual, and a 50+ page university report describing the implementation considerations, etc.)
  • Cross-platform (configure script for Linux and UNIX; DLL files for Windows; project files for MSVC++ and Borland compilers are also reported to work)
  • Several different activation functions implemented (including stepwise linear functions for that extra bit of speed)
  • Easy to save and load entire ANNs
  • Several easy-to-use examples
  • Can use both floating-point and fixed-point numbers (float, double and int are all available)
  • Cache optimized (for that extra bit of speed)
  • Open source, but can still be used in commercial applications (licensed under the LGPL)
  • Framework for easy handling of training data sets
  • Graphical interfaces
  • Language bindings to a large number of different programming languages (see the table below)
  • Widely used (approximately 100 downloads a day)
    Name                              Programming Language   FANN version   Comments
    FannCSharp                        C#                     2.2
    fannj                             Java                   2.1
    FANN Wrapper for C++              C++                    2.1            Header file; part of the standard FANN download.
    node-fann                         node.js                2.1
    fann.js                           JavaScript             2.2
    PHP FANN                          PHP                    2.2            News, Reference Manual
    Fortran FANN                      Fortran                2.1
    Rust FANN                         Rust                   2.1
    fannerl                           Erlang                 2.2
    Python FANN                       Python                 2.2
    DerelictFANN                      D                      2.2
    Fann2Mql                          MetaTrader 4 (MQL4)    2.1
    AI-FANN                           Perl                   2.1
    ruby-fann                         Ruby                   2.1
    hrb4fann                          Harbour                2.2
    Delphi FANN                       Delphi                 2.1            Download here.
    Tcl Artificial Neural Networks    Tcl                    2.1            Download here.
    lfann                             Lua                    2.2
    Prolog FANN                       Visual Prolog 7        2.1
    plfann                            SWI-Prolog             2.1            Download here.
    go-fann                           Go                     2.1
    FANN Kernel                       SOAP / web service     2.1            Examples available for .NET and Mathematica
    Matlab FANN                       Matlab                 2.1
    R-binding libfann                 R                      2.1
    FannAda                           Ada                    2.0
    hfann                             Haskell                2.0
    ann.*                             GRASS                  2.0
    octave-fann                       Octave                 2.0
    Smalltalk FANN                    Squeak Smalltalk       2.0            Download for Windows and Linux.
    PD ANN                            Pure Data              Unknown
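
The underlying C API is compact. Here is a minimal sketch in the style of the library's own XOR example; xor.data is the training file shipped with the FANN sources, and everything else follows the documented C API:

    #include <stdio.h>
    #include "fann.h"

    int main(void)
    {
        /* 3 layers: 2 inputs, 3 hidden neurons, 1 output */
        struct fann *ann = fann_create_standard(3, 2, 3, 1);
        fann_type input[2] = {-1.0f, 1.0f};
        fann_type *output;

        fann_set_activation_function_hidden(ann, FANN_SIGMOID_SYMMETRIC);
        fann_set_activation_function_output(ann, FANN_SIGMOID_SYMMETRIC);

        /* train until 500000 epochs or until the MSE drops below 0.001,
           reporting progress every 1000 epochs */
        fann_train_on_file(ann, "xor.data", 500000, 1000, 0.001f);

        output = fann_run(ann, input);
        printf("xor(%f, %f) = %f\n", input[0], input[1], output[0]);

        fann_destroy(ann);
        return 0;
    }

Once the library is installed (next step), this builds with something like gcc xor.c -o xor -lfann.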

Build and install from the FANN source directory:

    cmake .
    sudo make install

The library lands in /usr/local/lib by default; if the runtime linker does not search there, add a symlink and refresh the linker cache (both as root):

    ln -s /usr/local/lib/*fann* /usr/lib/
    ldconfig
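
To confirm that the runtime linker now sees the library:

    ldconfig -p | grep fann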


    --------------------------------------------------------------------------------------------------------------------
    program example
      use, intrinsic :: iso_c_binding   !provides C_PTR, C_FLOAT, C_INT, C_FUNLOC
      use fann
      implicit none

  type(C_PTR) :: ann
  type(C_PTR) :: train

  integer, parameter :: sp = C_FLOAT
  integer, parameter :: ft = FANN_TYPE

  integer, parameter :: num_layer = 3
  integer, parameter :: nin = 3
  integer, parameter :: nout= 1
  integer, dimension(num_layer) :: layers

  integer, parameter :: ndata = 100
  real, dimension(nin,ndata) :: inTrainData
  real, dimension(nout,ndata) :: outTrainData

  integer :: max_epochs, epochs_between_reports
  real(sp) ::  desired_error

  real(ft), dimension(nin) :: x

!input
  layers(1) = nin
!hidden
  layers(2) = 20
!output
  layers(3) = nout

!the net, with SIGMOID
  ann = fann_create_standard_array(num_layer,layers)
  call fann_set_activation_function_hidden(ann,enum_activation_function('FANN_SIGMOID'))
  call fann_set_activation_function_output(ann,enum_activation_function('FANN_SIGMOID'))
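!(FANN_SIGMOID squashes each output into (0,1); the library also offers
! FANN_SIGMOID_SYMMETRIC with range (-1,1) and faster stepwise-linear variants)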

  call fann_print_connections(ann)

!my training data: let's train a neural net that mimics a random generator :)
  call random_number(inTrainData)
  call random_number(outTrainData)

  train = fann_create_train_from_callback(ndata,nin,nout,C_FUNLOC(mytrain_callback))
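!the training set is built by invoking mytrain_callback (defined below) once
!per pattern, with num running from 0 to ndata-1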

!training
  call fann_set_training_algorithm(ann,enum_training_algorithm('FANN_TRAIN_RPROP'))

  max_epochs = 10000
  epochs_between_reports = 1000
  desired_error = 0.001
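!training stops at max_epochs or as soon as the MSE drops below desired_error,
!whichever comes first; progress is printed every epochs_between_reports epochs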
  call fann_train_on_data(ann,train,max_epochs,epochs_between_reports,desired_error)
  call fann_print_connections(ann)

!testing

  x = (/0.1_ft,0.5_ft,1._ft/)

!running
  print *, 'ann(x)= ',f_fann_run(ann,x)

!saving
  print *,'saving...', fann_save(ann,f_c_string('arg.dat'))
  call fann_destroy(ann)

!loading
  print *,'loading...'
  ann = fann_create_from_file(f_c_string('arg.dat'))
  print *, 'loaded ann(x)= ',f_fann_run(ann,x)

contains


 subroutine mytrain_callback(num, num_input, num_output, input, output) bind(C)
    implicit none

    integer(C_INT), value :: num, num_input, num_output
#ifdef FIXEDFANN
    integer(FANN_TYPE), dimension(0:num_input-1) :: input
    integer(FANN_TYPE), dimension(0:num_output-1) :: output

    input(0:num_input-1) = int(inTrainData(1:num_input,num+1),FANN_TYPE)
    output(0:num_output-1) = int(outTrainData(1:num_output,num+1),FANN_TYPE)

#else
    real(FANN_TYPE), dimension(0:num_input-1) :: input
    real(FANN_TYPE), dimension(0:num_output-1) :: output

    input(0:num_input-1) = real(inTrainData(1:num_input,num+1),FANN_TYPE)
    output(0:num_output-1) = real(outTrainData(1:num_output,num+1),FANN_TYPE)

#endif

  end subroutine  mytrain_callback

 

end program example
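
A sample session follows (exact numbers will differ from run to run, since the training data is random). In the fann_print_connections matrices, '.' marks a missing connection while a letter encodes the strength of a connection on an a-z scale (per the FANN documentation); the first matrix shows the freshly initialized network, the second the trained one.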

 ./example
Layer / Neuron 0123456789012345678901234
L   1 / N    4 aaAa.....................
L   1 / N    5 aaaa.....................
L   1 / N    6 aAAa.....................
L   1 / N    7 AAAa.....................
L   1 / N    8 AAaa.....................
L   1 / N    9 aAAa.....................
L   1 / N   10 AAAa.....................
L   1 / N   11 AAaA.....................
L   1 / N   12 AAaA.....................
L   1 / N   13 Aaaa.....................
L   1 / N   14 aAAA.....................
L   1 / N   15 AAAA.....................
L   1 / N   16 AaAA.....................
L   1 / N   17 AAAA.....................
L   1 / N   18 aAaa.....................
L   1 / N   19 aaAa.....................
L   1 / N   20 AAAA.....................
L   1 / N   21 aaaa.....................
L   1 / N   22 aAAA.....................
L   1 / N   23 AAaa.....................
L   1 / N   24 .........................
L   2 / N   25 ....AaAAAAaaaAaAAaAaAAaaA
L   2 / N   26 .........................
Max epochs    10000. Desired error: 0.0010000000.
Epochs            1. Current error: 0.0681413040. Bit fail 24.
Epochs         1000. Current error: 0.0270192791. Bit fail 4.
Epochs         2000. Current error: 0.0159599949. Bit fail 2.
Epochs         3000. Current error: 0.0132796671. Bit fail 1.
Epochs         4000. Current error: 0.0120764365. Bit fail 1.
Epochs         5000. Current error: 0.0113466335. Bit fail 1.
Epochs         6000. Current error: 0.0109015331. Bit fail 1.
Epochs         7000. Current error: 0.0105291791. Bit fail 1.
Epochs         8000. Current error: 0.0102274008. Bit fail 0.
Epochs         9000. Current error: 0.0099762194. Bit fail 0.
Epochs        10000. Current error: 0.0097725037. Bit fail 0.
Layer / Neuron 0123456789012345678901234
L   1 / N    4 hBCa.....................
L   1 / N    5 mZrB.....................
L   1 / N    6 fdFb.....................
L   1 / N    7 CZlo.....................
L   1 / N    8 ahiI.....................
L   1 / N    9 ubzN.....................
L   1 / N   10 xBOC.....................
L   1 / N   11 zZzP.....................
L   1 / N   12 mqKF.....................
L   1 / N   13 kGmD.....................
L   1 / N   14 eABC.....................
L   1 / N   15 jZzU.....................
L   1 / N   16 FegB.....................
L   1 / N   17 hFeB.....................
L   1 / N   18 gFAb.....................
L   1 / N   19 ZwzS.....................
L   1 / N   20 pIFB.....................
L   1 / N   21 zzZP.....................
L   1 / N   22 mdFG.....................
L   1 / N   23 zZkf.....................
L   1 / N   24 .........................
L   2 / N   25 ....BEGgggfDgMEEIfkcFEDCe
L   2 / N   26 .........................
 ann(x)=   0.448831797   
 saving...           0
 loading...
 loaded ann(x)=   0.448831797   

--------------------------------------------------------------------------------------------------------------------


https://towardsdatascience.com/meet-artificial-neural-networks-ae5939b1dd3a
https://blog.ttro.com/wp-content/uploads/2017/01/TB010-Deep-Neural-Network.jpg
--------------------------------------------------------------------------------------------------------------------
Resilient backpropagation (Rprop), the training algorithm selected in the example above:
https://en.wikipedia.org/wiki/Rprop
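
As a quick reference (standard Rprop formulation, not FANN-specific notation), each weight $w_{ij}$ gets its own step size $\Delta_{ij}$, adapted from the sign of successive partial derivatives of the error $E$:

$$
\Delta_{ij}^{(t)} =
\begin{cases}
\eta^{+}\,\Delta_{ij}^{(t-1)}, & \dfrac{\partial E^{(t-1)}}{\partial w_{ij}} \cdot \dfrac{\partial E^{(t)}}{\partial w_{ij}} > 0 \\
\eta^{-}\,\Delta_{ij}^{(t-1)}, & \dfrac{\partial E^{(t-1)}}{\partial w_{ij}} \cdot \dfrac{\partial E^{(t)}}{\partial w_{ij}} < 0 \\
\Delta_{ij}^{(t-1)}, & \text{otherwise}
\end{cases}
\qquad
w_{ij}^{(t+1)} = w_{ij}^{(t)} - \operatorname{sign}\!\left(\dfrac{\partial E^{(t)}}{\partial w_{ij}}\right)\Delta_{ij}^{(t)}
$$

with $0 < \eta^{-} < 1 < \eta^{+}$; the usual constants are $\eta^{+} = 1.2$ and $\eta^{-} = 0.5$.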


--------------------------------------------------------------------------------------------------------------------
FC=gfortran
FCFLAGS=-g
# add -DDOUBLEFANN or -DFIXEDFANN to FCFLAGS to build the double or fixed-point variant
LFLAGS= -lfann
# and then link against -ldoublefann or -lfixedfann instead of -lfann

example: fann.o example.o
        $(FC) $(FCFLAGS) fann.o example.o -o $@ $(LFLAGS)

%.o: %.f90
        $(FC) $(FCFLAGS) -c $<

%.o: %.F90
        $(FC) $(FCFLAGS) -c $<

%.o: %.f03
        $(FC) $(FCFLAGS) -c $<

%.o: %.F03
        $(FC) $(FCFLAGS) -c $<

clean:
        rm -f *.o *.mod example
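
Assuming the Fortran interface module is saved as fann.f90 and the program above as example.F90 (the uppercase .F90 suffix so the preprocessor handles the #ifdef FIXEDFANN block), make example followed by ./example reproduces a session like the one shown above.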


--------------------------------------------------------------------------------------------------------------------





                 

