PROGRAMMING: Sigmoid function and its gradient
For convenience of notation, unless otherwise stated in this text, an activation function applied to a matrix acts on each element independently, that is, $$f(X)_{i,j}=f(X_{i,j})$$.
Without a nonlinear activation function, an MLP effectively has no hidden layers, no matter how many layers are stacked. Consider a multilayer perceptron with one hidden layer, $$y = f_2(f_1(X \times A) \times B)$$. If $$f_1(x) = kx + b$$ is linear, substituting it into the formula gives $$y = f_2((kX\times A + b\times \mathbf{1})\times B) = f_2(kX\times A\times B + b\times \mathbf{1}\times B)$$, where $$\mathbf{1}$$ denotes a matrix whose elements are all $$1$$. Let $$C = kA\times B$$ and $$\bar{C} = b\times \mathbf{1}\times B$$; then $$y = f_2(X\times C + \bar{C})$$. The network has collapsed into a single-layer network, and its expressive power is greatly weakened. This is why the nonlinear activation function is important for neural networks.
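To make the collapse concrete, here is a minimal numpy sketch (the shapes and the constants `k`, `b` are arbitrary choices for illustration, not part of the original statement) checking that the two-layer network with a linear $$f_1$$ equals the single-layer form above, up to the shared outer $$f_2$$:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 5))   # batch of 4 inputs
A = rng.normal(size=(5, 3))   # first-layer weights
B = rng.normal(size=(3, 2))   # second-layer weights
k, b = 2.0, 0.5               # linear "activation" f1(x) = k*x + b

# Two-layer forward pass with the linear activation between the layers.
hidden = k * (X @ A) + b          # f1(X @ A), applied elementwise
two_layer = hidden @ B

# Equivalent single layer: C = k * A @ B,  C_bar = b * ones @ B.
C = k * (A @ B)
C_bar = b * np.ones((4, 3)) @ B
one_layer = X @ C + C_bar

print(np.allclose(two_layer, one_layer))  # True
```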
The most classic activation function is the sigmoid function $$\sigma(x) = \frac{1}{1+\exp(-x)}$$. It transitions sharply from $$0$$ to $$1$$ near $$x = 0$$ and is commonly used for binary (0-1) classification tasks. Its derivative is also very easy to obtain; try deducing it yourself.
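For reference, the derivation the text invites you to carry out is a standard calculus exercise:

```latex
\sigma'(x)
  = \frac{d}{dx}\left(1+e^{-x}\right)^{-1}
  = \frac{e^{-x}}{\left(1+e^{-x}\right)^{2}}
  = \frac{1}{1+e^{-x}} \cdot \frac{e^{-x}}{1+e^{-x}}
  = \sigma(x)\,\bigl(1-\sigma(x)\bigr)
```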

*Sigmoid function graph, from [Wikipedia - sigmoid function](https://en.wikipedia.org/wiki/Sigmoid_function)*
Now, given matrices $$A$$ and $$B$$, let $$C = A \times B$$. Compute $$\sigma(C)$$ along with the gradients $$\nabla_C \sigma$$, $$\nabla_A \sigma$$, and $$\nabla_B \sigma$$.
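The statement does not restate the gradient convention (it presumably follows the earlier matrix-multiplication exercises in this series); the convention consistent with the sample output below is the chain rule for the scalar $$L = \sum_{i,j} \sigma(C)_{i,j}$$, where $$\odot$$ denotes elementwise multiplication:

```latex
\nabla_C = \sigma'(C) = \sigma(C)\odot\bigl(1-\sigma(C)\bigr), \qquad
\nabla_A = \nabla_C \times B^{\mathsf{T}}, \qquad
\nabla_B = A^{\mathsf{T}} \times \nabla_C
```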
### Input format:
The first line gives three integers $$m$$, $$n$$, and $$p$$, each not greater than $$10^2$$, followed by a blank line.
Then the elements of matrix $$A$$ are given in row-major order: $$m$$ rows, each containing $$n$$ elements separated by spaces, followed by a blank line.
Then the elements of matrix $$B$$ are given in row-major order: $$n$$ rows, each containing $$p$$ elements separated by spaces.
### Output format:
First, output $$\sigma(C)$$, the product matrix after applying the sigmoid function: $$m$$ lines, each containing $$p$$ elements, each element followed by a space and each line ending with a newline, then one extra blank line.
Then output matrix $$\nabla_C$$: $$m$$ lines of $$p$$ elements in the same format, then one extra blank line.
Then output matrix $$\nabla_A$$: $$m$$ lines of $$n$$ elements in the same format, then one extra blank line.
Finally, output matrix $$\nabla_B$$: $$n$$ lines of $$p$$ elements in the same format, then one extra blank line.
Every output element is printed with 2 decimal places.
### Input example:
```in
2 2 3

0.1 0.2
-0.3 0.4

1 2 3
3 2 1
```
### Output example:
```out
0.67 0.65 0.62
0.71 0.55 0.38

0.22 0.23 0.24
0.21 0.25 0.24

1.38 1.36
1.41 1.35

-0.04 -0.05 -0.05
0.13 0.14 0.14

```
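A minimal reference sketch in Python with numpy, assuming the gradient convention stated above ($$\nabla_C = \sigma'(C)$$, $$\nabla_A = \nabla_C B^{\mathsf{T}}$$, $$\nabla_B = A^{\mathsf{T}} \nabla_C$$); it reproduces the sample output:

```python
import sys
import numpy as np

def main():
    # Read all whitespace-separated tokens; blank lines are skipped naturally.
    data = sys.stdin.read().split()
    m, n, p = map(int, data[:3])
    vals = list(map(float, data[3:]))
    A = np.array(vals[:m * n]).reshape(m, n)
    B = np.array(vals[m * n:m * n + n * p]).reshape(n, p)

    C = A @ B
    S = 1.0 / (1.0 + np.exp(-C))      # sigmoid(C), applied elementwise

    grad_C = S * (1.0 - S)            # sigma'(C) = sigma(C) * (1 - sigma(C))
    grad_A = grad_C @ B.T             # chain rule through C = A @ B
    grad_B = A.T @ grad_C

    for M in (S, grad_C, grad_A, grad_B):
        for row in M:
            # Each element followed by a space, each line by a newline.
            print("".join(f"{x:.2f} " for x in row))
        print()                       # extra blank line after each matrix

if __name__ == "__main__":
    main()
```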