Writing an Op to work on an ndarray in C
The following code works, but important error-checking has been omitted for clarity. For example, when you write C code that assumes memory is contiguous, you should check the strides and alignment.
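Below is a minimal sketch of such a c_code method for the fibby Op discussed in this chapter. It is not a verbatim copy of the original listing: the class skeleton, the exact parameter names, and the recurrence computed in the inner loop are assumptions made for illustration; only the use of %(x)s, %(y)s, and dtype_%(x)s follows the description in this section.

    from theano import gof

    class Fibby(gof.Op):
        """Sketch of the Op discussed in this chapter (other methods omitted)."""

        # make_node, perform, __eq__, __hash__, etc. would go here.

        def c_code(self, node, name, inames, onames, sub):
            x, = inames          # C-level name of the input PyArrayObject*
            y, = onames          # C-level name of the output PyArrayObject*
            fail = sub['fail']   # error-handling code substituted by Theano
            return """
            // Drop any previous output and make %(y)s a fresh copy of %(x)s.
            // This simulates y = x.copy().
            Py_XDECREF(%(y)s);
            %(y)s = (PyArrayObject*)PyArray_FromArray(
                        %(x)s, NULL, NPY_ARRAY_ENSURECOPY);
            if (!%(y)s)
                %(fail)s;
            { // new scope so we can declare C variables
                // dtype_%(x)s / dtype_%(y)s are the typedefs described below.
                dtype_%(y)s *y = (dtype_%(y)s *)PyArray_DATA(%(y)s);
                dtype_%(x)s *x = (dtype_%(x)s *)PyArray_DATA(%(x)s);
                npy_intp size = PyArray_SIZE(%(x)s);
                for (npy_intp i = 2; i < size; ++i)
                    y[i] = y[i - 1] * y[i - 2] + x[i];
            }
            """ % locals()

    # The instance the rest of the chapter refers to as fibby.
    fibby = Fibby()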
In the first two lines of the C function, we make y point to a new array with the correct size for the output. This is essentially simulating the line y = x.copy(). The variables %(x)s and %(y)s are set up by the TensorType to be PyArrayObject pointers. TensorType also sets up dtype_%(x)s to be a typedef to the C type for x.
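The error checking mentioned earlier could take roughly the following shape; this is a sketch only, prepended to the string returned by c_code. PyArray_ISCONTIGUOUS and PyArray_ISALIGNED are standard NumPy C API macros, and %(fail)s is assumed to be the error-handling hook Theano substitutes into the template (sub['fail'] above).

    # Sketch only: a guard that could be prepended to the generated C code.
    contiguity_check = """
    if (!PyArray_ISCONTIGUOUS(%(x)s) || !PyArray_ISALIGNED(%(x)s))
    {
        PyErr_SetString(PyExc_ValueError,
                        "input must be C-contiguous and aligned");
        %(fail)s;
    }
    """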
In C code for a Theano op, numpy arrays are represented as C structs. This is part of the numpy/scipy C API documented at http://docs.scipy.org/doc/numpy/reference/c-api.types-and-structures.html
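For instance (a sketch, not part of the original listing), the documented accessor macros expose the interesting fields of that struct from inside the generated C code:

    # Sketch: reading the PyArrayObject behind %(x)s through NumPy C API macros.
    inspect_input = """
    int nd            = PyArray_NDIM(%(x)s);           // number of dimensions
    npy_intp *dims    = PyArray_DIMS(%(x)s);           // shape
    npy_intp *strides = PyArray_STRIDES(%(x)s);        // strides, in bytes
    char *data        = (char *)PyArray_DATA(%(x)s);   // pointer to the raw data
    """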
TODO: NEEDS MORE EXPLANATION.
TODO: talk about OPTIMIZATION STAGES
The register_specialize decorator is what activates our optimization, and tells Theano to use it in the specialization stage. The local_optimizer decorator builds a class instance around our global function. The argument is a hint that our optimizer works on nodes whose .op attribute equals fibby. The function here (fibby_of_zero) expects an Apply instance as an argument for parameter node. It tests using the function get_scalar_constant_value, which determines whether a Variable is guaranteed to be a constant, and if so, what constant.
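Putting the pieces together, a sketch of the optimization described above might look like the following. The import paths are assumptions about the Theano version this chapter targets, fibby is assumed to be the Op instance defined earlier in the chapter, and the exact function body may differ from the original example.

    import numpy

    import theano.tensor as T
    from theano.gof.opt import local_optimizer
    from theano.tensor.basic import (NotScalarConstantError,
                                     get_scalar_constant_value)
    from theano.tensor.opt import register_specialize

    @register_specialize
    @local_optimizer([fibby])   # hint: only visit nodes whose .op is fibby
    def fibby_of_zero(node):
        if node.op == fibby:
            x = node.inputs[0]
            try:
                # If x is guaranteed to be the constant 0, then fibby(x)
                # is just a tensor of zeros with the same shape and dtype.
                if numpy.all(0 == get_scalar_constant_value(x)):
                    return [T.zeros_like(x)]
            except NotScalarConstantError:
                pass
        # Returning False (or None) tells Theano the rewrite does not apply here.
        return False

Once registered this way, compiling a function whose graph applies fibby to a tensor known at compile time to be zero should leave a graph built from zeros_like rather than fibby, which is exactly what the specialization stage is for.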