{"id":659,"date":"2025-05-18T12:23:19","date_gmt":"2025-05-18T10:23:19","guid":{"rendered":"https:\/\/renor.it\/neural-networks-in-php-yes-it-can-be-done\/"},"modified":"2025-12-20T15:53:59","modified_gmt":"2025-12-20T14:53:59","slug":"neural-networks-in-php-yes-it-can-be-done","status":"publish","type":"post","link":"https:\/\/renor.it\/en\/blog\/intelligenza-artificiale-algoritmi\/neural-networks-in-php-yes-it-can-be-done\/","title":{"rendered":"Neural Networks in PHP? Yes, It Can Be Done!"},"content":{"rendered":"\n<p>I\u2019ll begin this article with a premise\u2026 PHP is certainly not the ideal language when it comes to artificial intelligence. Neural networks are typically the domain of more scientific languages like Python, which offers optimized libraries for this purpose such as PyTorch and NumPy\u2014and that\u2019s usually the language I rely on when working in this kind of context.   <\/p>\n\n<p>However, a few months ago I was chatting with a friend over an aperitif, and as usual, we started throwing around a bunch of silly ideas about AI (I\u2019ll spare you the wild ones we came up with). But then we focused on one specific topic: \u201cWould it be possible to create a neural network in PHP?\u201d<br \/>The short answer is yes\u2014albeit with some limitations.  <\/p>\n\n<p>To understand this article, we first need to make a few preliminary remarks\u2026<\/p>\n\n<h2 class=\"wp-block-heading\">Introduction<\/h2>\n\n<p>In today\u2019s technological landscape, Artificial Intelligence (AI in English, or IA in Italian) is no longer a niche topic, but a strategic pillar for nearly any company aiming to extract value from its data, optimize processes, and offer truly competitive products.<br \/>In the field of HR &amp; Workforce Management, for instance, predictive models make it possible to anticipate sudden absences, dynamically calibrate shifts, and, ultimately, save time and resources\u2014precisely the \u201cspeed\/quality\u201d combination we discussed in this article: <a href=\"https:\/\/renor.it\/speed-and-quality-in-projects\/?lang=en\">https:\/\/renor.it\/velocita-e-qualita-nei-progetti\/<\/a>.<br \/><br \/><br \/>But how many of you actually know what artificial intelligence really is? <\/p>\n\n<h2 class=\"wp-block-heading\">AI \u2013 The Famed Yet Unknown<\/h2>\n\n<p>The term Artificial Intelligence refers to the set of techniques that allow a computer system to exhibit abilities normally attributed to human intelligence: reasoning, learning, decision-making, and recognizing complex patterns.<br \/>Within this broad field, Machine Learning represents the approach in which learning takes place through statistical analysis of data, without having to manually code every rule.<br \/>In recent years, the rise of Deep Learning has pushed evolution even further: very deep neural networks, composed of dozens or even hundreds of processing layers, are able to detect structures in data that traditional models fail to capture.  <\/p>\n\n<h2 class=\"wp-block-heading\">What are neural networks used for?<\/h2>\n\n<p>An artificial neural network is a mathematical model inspired by the structure of the neurons in our brain. After all, almost all human discoveries are based on observing what already exists in nature.  <br \/>The neural structure of the brain allows us to solve problems where the relationship between input (what we perceive from the outside) and output (what we derive from it) is non-linear or difficult to formalize. <\/p>\n\n<p>Let\u2019s imagine, for example, that we want to predict the likelihood of an employee being late based on variables such as traffic, weather conditions, personal history of delays and absences, and public transportation schedules. The relationship between these factors is far too complex to be described with a few conditional statements\u2014but a well-trained neural network can learn it by analyzing a large amount of historical data.  <\/p>\n\n<p>The reason for this ability lies in the fact that each connection between neurons is associated with a weight and a bias term. We will later understand what these are.  <\/p>\n\n<p>During the training phase, the network adjusts these parameters to minimize a loss function that measures the discrepancy between its prediction and the actual value. This refinement process occurs through the backpropagation algorithm, which computes the gradients, and an optimization method such as stochastic gradient descent, which iteratively updates the weights.<br \/>At the end of the process, the network does not contain a set of rules written by the programmer, but rather a collection of numerical coefficients that encode, in a distributed manner, the knowledge extracted from the data.   <\/p>\n\n<p>Now that we have a general overview, we can understand the roles and responsibilities of weights and biases.<\/p>\n\n<p>The weight is the numerical coefficient that modulates the intensity with which an input signal contributes to the activation of the next neuron. We can think of it like a volume knob: turning it up amplifies the contribution of that specific feature, while turning it down reduces it, even to the point of inverting its effect.<br \/>From a mathematical standpoint, the weight multiplies the input value and determines the slope of the function the network is learning; a high weight indicates that the input is strongly correlated with the output, while a weight close to zero makes it practically irrelevant.   <\/p>\n\n<p>The bias, on the other hand, acts as a translator: it is added to the product of input and weight, shifting the overall result up or down before the activation function is applied.<br \/>Therefore, if the weights\u2014as we\u2019ve seen\u2014represent the slope of a line, the bias represents the y-intercept, allowing the network to model functions that do not necessarily pass through the origin.<br \/>In practice, the bias allows a neuron to activate even when all inputs are zero, introducing that flexibility which makes neural models true function approximators.   <\/p>\n\n<p>During training, weights and biases are updated using the backpropagation algorithm: the gradient of the loss function indicates in which direction and by how much each parameter should be adjusted to reduce the gap between the network\u2019s prediction and the actual value.<br \/>Iteration after iteration, the network adjusts these two types of parameters in a coordinated manner, refining both the slope and the position of its decision curves, until it captures the complexity of the phenomenon we aim to model.  <\/p>\n\n<p>In summary, weights and biases are the fundamental building blocks of the network\u2019s adaptive intelligence: the former controls the relative importance of the inputs, while the latter provides the freedom to move within the solution space without predefined geometric constraints.<\/p>\n\n<h2 class=\"wp-block-heading\">Why is it worth implementing it in PHP?<\/h2>\n\n<p>One might wonder whether it makes sense to build a neural network in a language traditionally considered for web backend development. The answer is yes\u2014for certain well-defined scenarios.<br \/>First of all, a native implementation avoids introducing an additional runtime\u2014typically Python\u2014thereby simplifying the build, test, and deploy cycle when the entire application stack is already in PHP. Furthermore, for microservices that require lightweight models and inference times in the order of a few milliseconds, a self-contained solution is more than adequate.<br \/>There\u2019s also the good old educational aspect, which should not be underestimated: writing the network line by line dismantles the \u201cblack box\u201d aura that surrounds many deep learning frameworks, and puts developers in a position to understand, optimize, and\u2014most importantly\u2014debug every step of the computation. This offers a detailed overview of how a neural network works, leading to a deeper and more comprehensive understanding.    <\/p>\n\n<h2 class=\"wp-block-heading\">Anatomy of a Minimal Feed-Forward Neural Network in PHP<\/h2>\n\n<p>To move forward, we need to define the backbone of a \u201cbare-metal\u201d neural network that we can build using only the core PHP engine\u2014without relying on C extensions or external libraries.<br \/>In practice, this means modeling, using native data structures, the following three fundamental elements:  <\/p>\n\n<ol class=\"wp-block-list\">\n<li>The Layers \u2013 or Processing Elements<\/li>\n\n\n\n<li>weight matrices<\/li>\n\n\n\n<li>bias vectors<\/li>\n<\/ol>\n\n<p>Each layer will be represented by a simple two-dimensional array of weights (<code>$weights<\/code>) and a one-dimensional array of biases (<code>$biases<\/code>). The transfer of activations from one layer to the next will occur through standard matrix-vector multiplication, followed by the application of an activation function (sigmoid, ReLU, or tanh, depending on the use case).<br \/>This minimalist scheme has the advantage of remaining readable and facilitating step-by-step debugging, but it imposes certain design choices: no automatic parallelization, no SIMD optimizations, and a need for extreme attention to computational complexity, since the unrestrained use of nested foreach loops in PHP can cause inference times to spike.   <\/p>\n\n<p>Nevertheless, for networks with one or two hidden layers and a number of neurons in the order of hundreds, performance remains surprisingly decent\u2014provided that op-caching is leveraged and redundant memory allocations are avoided.<br \/>In essence, before diving into the actual code, it\u2019s important to understand that in PHP, neurons are nothing more than rows in arrays, and gradients are float values updated within a loop. The simplicity of the implementation makes the arithmetic of the network easy to grasp, making each step of the learning process clearly visible.   <\/p>\n\n<h2 class=\"wp-block-heading\">Code Implementation: The Basic Structure of the Neural Network<\/h2>\n\n<p>At this point in the article, it\u2019s appropriate to present in detail the bare-metal implementation of a two-layer feed-forward neural network, written entirely in PHP 8.1.<br \/>The following code maintains maximum transparency: every mathematical operation is explicitly expressed using simple for loops, each intermediate variable is stored so it can be inspected during debugging, and the only prerequisites are the PHP engine and opcache enabled in production.  <\/p>\n\n<p>To make the project easily reusable, I have divided the code into two separate files.<br \/>The first, NeuralNetwork.php, contains all the neural network logic, complete with classes, activation functions, forward-pass, backpropagation, and training routines.<br \/>The second, demo_xor.php, is a simple execution script that imports the class, instantiates the network, trains it on the classic XOR problem, and prints the results to the screen.   <\/p>\n\n<h3 class=\"wp-block-heading\">NeuralNetwork.php<\/h3>\n\n<div class=\"wp-block-kevinbatdorf-code-block-pro\" data-code-block-pro-font-family=\"Code-Pro-JetBrains-Mono\" style=\"font-size:.875rem;font-family:Code-Pro-JetBrains-Mono,ui-monospace,SFMono-Regular,Menlo,Monaco,Consolas,monospace;line-height:1.25rem;--cbp-tab-width:2\"><span style=\"display:block;padding:16px 0 0 16px;margin-bottom:-1px;width:100%;text-align:left;background-color:#1E1E1E\"><svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"54\" height=\"14\" viewbox=\"0 0 54 14\"><g fill=\"none\"><\/g><\/svg><\/span><span role=\"button\" style=\"color:#D4D4D4;display:none\" aria-label=\"Copy\" class=\"code-block-pro-copy-button\"><svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" style=\"width:24px;height:24px\" viewbox=\"0 0 24 24\"><path d=\"M9 5H7a2 2 0 00-2 2v12a2 2 0 002 2h10a2 2 0 002-2V7a2 2 0 00-2-2h-2M9 5a2 2 0 002 2h2a2 2 0 002-2M9 5a2 2 0 012-2h2a2 2 0 012 2m-6 9l2 2 4-4\"><\/path><path d=\"M9 5H7a2 2 0 00-2 2v12a2 2 0 002 2h10a2 2 0 002-2V7a2 2 0 00-2-2h-2M9 5a2 2 0 002 2h2a2 2 0 002-2M9 5a2 2 0 012-2h2a2 2 0 012 2\"><\/path><\/svg><\/span><pre class=\"shiki dark-plus\" style=\"background-color: #1E1E1E\"><code><span class=\"line\"><span style=\"color: #D4D4D4\">&lt;?php<\/span><\/span>\n<span class=\"line\"><span style=\"color: #C586C0\">declare<\/span><span style=\"color: #D4D4D4\">(strict_types=<\/span><span style=\"color: #B5CEA8\">1<\/span><span style=\"color: #D4D4D4\">);<\/span><\/span>\n<span class=\"line\" \/>\n<span class=\"line\"><span style=\"color: #6A9955\">\/**<\/span><\/span>\n<span class=\"line\"><span style=\"color: #6A9955\"> * Minimal feed-forward neural network in pure PHP<\/span><\/span>\n<span class=\"line\"><span style=\"color: #6A9955\"> * MIT License \u2013 (c) 2025<\/span><\/span>\n<span class=\"line\"><span style=\"color: #6A9955\"> *\/<\/span><\/span>\n<span class=\"line\" \/>\n<span class=\"line\"><span style=\"color: #6A9955\">\/* ---------- Activation functions ---------- *\/<\/span><\/span>\n<span class=\"line\" \/>\n<span class=\"line\"><span style=\"color: #6A9955\">\/** Sigmoid activation *\/<\/span><\/span>\n<span class=\"line\"><span style=\"color: #569CD6\">function<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #DCDCAA\">sigmoid<\/span><span style=\"color: #D4D4D4\">(<\/span><span style=\"color: #569CD6\">float<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #9CDCFE\">$x<\/span><span style=\"color: #D4D4D4\">): <\/span><span style=\"color: #569CD6\">float<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">{<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    <\/span><span style=\"color: #C586C0\">return<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #B5CEA8\">1.0<\/span><span style=\"color: #D4D4D4\"> \/ (<\/span><span style=\"color: #B5CEA8\">1.0<\/span><span style=\"color: #D4D4D4\"> + <\/span><span style=\"color: #DCDCAA\">exp<\/span><span style=\"color: #D4D4D4\">(-<\/span><span style=\"color: #9CDCFE\">$x<\/span><span style=\"color: #D4D4D4\">));<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">}<\/span><\/span>\n<span class=\"line\" \/>\n<span class=\"line\"><span style=\"color: #6A9955\">\/** Derivative of the sigmoid *\/<\/span><\/span>\n<span class=\"line\"><span style=\"color: #569CD6\">function<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #DCDCAA\">sigmoid_derivative<\/span><span style=\"color: #D4D4D4\">(<\/span><span style=\"color: #569CD6\">float<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #9CDCFE\">$x<\/span><span style=\"color: #D4D4D4\">): <\/span><span style=\"color: #569CD6\">float<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">{<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    <\/span><span style=\"color: #9CDCFE\">$s<\/span><span style=\"color: #D4D4D4\"> = <\/span><span style=\"color: #DCDCAA\">sigmoid<\/span><span style=\"color: #D4D4D4\">(<\/span><span style=\"color: #9CDCFE\">$x<\/span><span style=\"color: #D4D4D4\">);<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    <\/span><span style=\"color: #C586C0\">return<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #9CDCFE\">$s<\/span><span style=\"color: #D4D4D4\"> * (<\/span><span style=\"color: #B5CEA8\">1<\/span><span style=\"color: #D4D4D4\"> - <\/span><span style=\"color: #9CDCFE\">$s<\/span><span style=\"color: #D4D4D4\">);<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">}<\/span><\/span>\n<span class=\"line\" \/>\n<span class=\"line\"><span style=\"color: #6A9955\">\/** ReLU activation *\/<\/span><\/span>\n<span class=\"line\"><span style=\"color: #569CD6\">function<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #DCDCAA\">relu<\/span><span style=\"color: #D4D4D4\">(<\/span><span style=\"color: #569CD6\">float<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #9CDCFE\">$x<\/span><span style=\"color: #D4D4D4\">): <\/span><span style=\"color: #569CD6\">float<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">{<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    <\/span><span style=\"color: #C586C0\">return<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #DCDCAA\">max<\/span><span style=\"color: #D4D4D4\">(<\/span><span style=\"color: #B5CEA8\">0.0<\/span><span style=\"color: #D4D4D4\">, <\/span><span style=\"color: #9CDCFE\">$x<\/span><span style=\"color: #D4D4D4\">);<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">}<\/span><\/span>\n<span class=\"line\" \/>\n<span class=\"line\"><span style=\"color: #6A9955\">\/** Derivative of ReLU *\/<\/span><\/span>\n<span class=\"line\"><span style=\"color: #569CD6\">function<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #DCDCAA\">relu_derivative<\/span><span style=\"color: #D4D4D4\">(<\/span><span style=\"color: #569CD6\">float<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #9CDCFE\">$x<\/span><span style=\"color: #D4D4D4\">): <\/span><span style=\"color: #569CD6\">float<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">{<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    <\/span><span style=\"color: #C586C0\">return<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #9CDCFE\">$x<\/span><span style=\"color: #D4D4D4\"> &gt; <\/span><span style=\"color: #B5CEA8\">0<\/span><span style=\"color: #D4D4D4\"> ? <\/span><span style=\"color: #B5CEA8\">1.0<\/span><span style=\"color: #D4D4D4\"> : <\/span><span style=\"color: #B5CEA8\">0.0<\/span><span style=\"color: #D4D4D4\">;<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">}<\/span><\/span>\n<span class=\"line\" \/>\n<span class=\"line\"><span style=\"color: #6A9955\">\/* ---------- Layer class ---------- *\/<\/span><\/span>\n<span class=\"line\" \/>\n<span class=\"line\"><span style=\"color: #569CD6\">final<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #569CD6\">class<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #4EC9B0\">Layer<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">{<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    <\/span><span style=\"color: #569CD6\">public<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #569CD6\">readonly<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #569CD6\">int<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #9CDCFE\">$in<\/span><span style=\"color: #D4D4D4\">;   <\/span><span style=\"color: #6A9955\">\/\/ number of input neurons<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    <\/span><span style=\"color: #569CD6\">public<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #569CD6\">readonly<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #569CD6\">int<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #9CDCFE\">$out<\/span><span style=\"color: #D4D4D4\">;  <\/span><span style=\"color: #6A9955\">\/\/ number of output neurons<\/span><\/span>\n<span class=\"line\" \/>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    <\/span><span style=\"color: #6A9955\">\/** <\/span><span style=\"color: #569CD6\">@var<\/span><span style=\"color: #6A9955\"> <\/span><span style=\"color: #569CD6\">float[]<\/span><span style=\"color: #6A9955\">[] weight matrix [out][in] *\/<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    <\/span><span style=\"color: #569CD6\">public<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #569CD6\">array<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #9CDCFE\">$W<\/span><span style=\"color: #D4D4D4\"> = [];<\/span><\/span>\n<span class=\"line\" \/>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    <\/span><span style=\"color: #6A9955\">\/** <\/span><span style=\"color: #569CD6\">@var<\/span><span style=\"color: #6A9955\"> <\/span><span style=\"color: #569CD6\">float[]<\/span><span style=\"color: #6A9955\"> bias vector [out] *\/<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    <\/span><span style=\"color: #569CD6\">public<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #569CD6\">array<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #9CDCFE\">$b<\/span><span style=\"color: #D4D4D4\"> = [];<\/span><\/span>\n<span class=\"line\" \/>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    <\/span><span style=\"color: #569CD6\">private<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #569CD6\">array<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #9CDCFE\">$lastInput<\/span><span style=\"color: #D4D4D4\">  = [];<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    <\/span><span style=\"color: #569CD6\">private<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #569CD6\">array<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #9CDCFE\">$lastZ<\/span><span style=\"color: #D4D4D4\">      = [];<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    <\/span><span style=\"color: #569CD6\">private<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #569CD6\">array<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #9CDCFE\">$lastOutput<\/span><span style=\"color: #D4D4D4\"> = [];<\/span><\/span>\n<span class=\"line\" \/>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    <\/span><span style=\"color: #569CD6\">public<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #569CD6\">function<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #DCDCAA\">__construct<\/span><span style=\"color: #D4D4D4\">(<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        <\/span><span style=\"color: #569CD6\">int<\/span><span style=\"color: #D4D4D4\">      <\/span><span style=\"color: #9CDCFE\">$in<\/span><span style=\"color: #D4D4D4\">,<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        <\/span><span style=\"color: #569CD6\">int<\/span><span style=\"color: #D4D4D4\">      <\/span><span style=\"color: #9CDCFE\">$out<\/span><span style=\"color: #D4D4D4\">,<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        <\/span><span style=\"color: #569CD6\">callable<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #9CDCFE\">$act<\/span><span style=\"color: #D4D4D4\">,<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        <\/span><span style=\"color: #569CD6\">callable<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #9CDCFE\">$actDer<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    ) {<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        <\/span><span style=\"color: #569CD6\">$this<\/span><span style=\"color: #D4D4D4\">-&gt;<\/span><span style=\"color: #9CDCFE\">in<\/span><span style=\"color: #D4D4D4\">           = <\/span><span style=\"color: #9CDCFE\">$in<\/span><span style=\"color: #D4D4D4\">;<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        <\/span><span style=\"color: #569CD6\">$this<\/span><span style=\"color: #D4D4D4\">-&gt;<\/span><span style=\"color: #9CDCFE\">out<\/span><span style=\"color: #D4D4D4\">          = <\/span><span style=\"color: #9CDCFE\">$out<\/span><span style=\"color: #D4D4D4\">;<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        <\/span><span style=\"color: #569CD6\">$this<\/span><span style=\"color: #D4D4D4\">-&gt;<\/span><span style=\"color: #9CDCFE\">activation<\/span><span style=\"color: #D4D4D4\">   = <\/span><span style=\"color: #9CDCFE\">$act<\/span><span style=\"color: #D4D4D4\">;<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        <\/span><span style=\"color: #569CD6\">$this<\/span><span style=\"color: #D4D4D4\">-&gt;<\/span><span style=\"color: #9CDCFE\">activation_d<\/span><span style=\"color: #D4D4D4\"> = <\/span><span style=\"color: #9CDCFE\">$actDer<\/span><span style=\"color: #D4D4D4\">;<\/span><\/span>\n<span class=\"line\" \/>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        <\/span><span style=\"color: #6A9955\">\/\/ Xavier\/Glorot uniform initialization<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        <\/span><span style=\"color: #9CDCFE\">$limit<\/span><span style=\"color: #D4D4D4\"> = <\/span><span style=\"color: #DCDCAA\">sqrt<\/span><span style=\"color: #D4D4D4\">(<\/span><span style=\"color: #B5CEA8\">6<\/span><span style=\"color: #D4D4D4\"> \/ (<\/span><span style=\"color: #9CDCFE\">$in<\/span><span style=\"color: #D4D4D4\"> + <\/span><span style=\"color: #9CDCFE\">$out<\/span><span style=\"color: #D4D4D4\">));<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        <\/span><span style=\"color: #C586C0\">for<\/span><span style=\"color: #D4D4D4\"> (<\/span><span style=\"color: #9CDCFE\">$i<\/span><span style=\"color: #D4D4D4\"> = <\/span><span style=\"color: #B5CEA8\">0<\/span><span style=\"color: #D4D4D4\">; <\/span><span style=\"color: #9CDCFE\">$i<\/span><span style=\"color: #D4D4D4\"> &lt; <\/span><span style=\"color: #9CDCFE\">$out<\/span><span style=\"color: #D4D4D4\">; <\/span><span style=\"color: #9CDCFE\">$i<\/span><span style=\"color: #D4D4D4\">++) {<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">            <\/span><span style=\"color: #569CD6\">$this<\/span><span style=\"color: #D4D4D4\">-&gt;<\/span><span style=\"color: #9CDCFE\">b<\/span><span style=\"color: #D4D4D4\">[<\/span><span style=\"color: #9CDCFE\">$i<\/span><span style=\"color: #D4D4D4\">] = <\/span><span style=\"color: #B5CEA8\">0.0<\/span><span style=\"color: #D4D4D4\">;<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">            <\/span><span style=\"color: #C586C0\">for<\/span><span style=\"color: #D4D4D4\"> (<\/span><span style=\"color: #9CDCFE\">$j<\/span><span style=\"color: #D4D4D4\"> = <\/span><span style=\"color: #B5CEA8\">0<\/span><span style=\"color: #D4D4D4\">; <\/span><span style=\"color: #9CDCFE\">$j<\/span><span style=\"color: #D4D4D4\"> &lt; <\/span><span style=\"color: #9CDCFE\">$in<\/span><span style=\"color: #D4D4D4\">; <\/span><span style=\"color: #9CDCFE\">$j<\/span><span style=\"color: #D4D4D4\">++) {<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">                <\/span><span style=\"color: #569CD6\">$this<\/span><span style=\"color: #D4D4D4\">-&gt;<\/span><span style=\"color: #9CDCFE\">W<\/span><span style=\"color: #D4D4D4\">[<\/span><span style=\"color: #9CDCFE\">$i<\/span><span style=\"color: #D4D4D4\">][<\/span><span style=\"color: #9CDCFE\">$j<\/span><span style=\"color: #D4D4D4\">] = (<\/span><span style=\"color: #DCDCAA\">mt_rand<\/span><span style=\"color: #D4D4D4\">() \/ <\/span><span style=\"color: #DCDCAA\">mt_getrandmax<\/span><span style=\"color: #D4D4D4\">()) * <\/span><span style=\"color: #B5CEA8\">2<\/span><span style=\"color: #D4D4D4\"> * <\/span><span style=\"color: #9CDCFE\">$limit<\/span><span style=\"color: #D4D4D4\"> - <\/span><span style=\"color: #9CDCFE\">$limit<\/span><span style=\"color: #D4D4D4\">;<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">            }<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        }<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    }<\/span><\/span>\n<span class=\"line\" \/>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    <\/span><span style=\"color: #6A9955\">\/** Forward propagation *\/<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    <\/span><span style=\"color: #569CD6\">public<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #569CD6\">function<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #DCDCAA\">forward<\/span><span style=\"color: #D4D4D4\">(<\/span><span style=\"color: #569CD6\">array<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #9CDCFE\">$input<\/span><span style=\"color: #D4D4D4\">): <\/span><span style=\"color: #569CD6\">array<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    {<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        <\/span><span style=\"color: #569CD6\">$this<\/span><span style=\"color: #D4D4D4\">-&gt;<\/span><span style=\"color: #9CDCFE\">lastInput<\/span><span style=\"color: #D4D4D4\"> = <\/span><span style=\"color: #9CDCFE\">$input<\/span><span style=\"color: #D4D4D4\">;<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        <\/span><span style=\"color: #569CD6\">$this<\/span><span style=\"color: #D4D4D4\">-&gt;<\/span><span style=\"color: #9CDCFE\">lastZ<\/span><span style=\"color: #D4D4D4\">     = [];<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        <\/span><span style=\"color: #569CD6\">$this<\/span><span style=\"color: #D4D4D4\">-&gt;<\/span><span style=\"color: #9CDCFE\">lastOutput<\/span><span style=\"color: #D4D4D4\"> = [];<\/span><\/span>\n<span class=\"line\" \/>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        <\/span><span style=\"color: #C586C0\">for<\/span><span style=\"color: #D4D4D4\"> (<\/span><span style=\"color: #9CDCFE\">$i<\/span><span style=\"color: #D4D4D4\"> = <\/span><span style=\"color: #B5CEA8\">0<\/span><span style=\"color: #D4D4D4\">; <\/span><span style=\"color: #9CDCFE\">$i<\/span><span style=\"color: #D4D4D4\"> &lt; <\/span><span style=\"color: #569CD6\">$this<\/span><span style=\"color: #D4D4D4\">-&gt;<\/span><span style=\"color: #9CDCFE\">out<\/span><span style=\"color: #D4D4D4\">; <\/span><span style=\"color: #9CDCFE\">$i<\/span><span style=\"color: #D4D4D4\">++) {<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">            <\/span><span style=\"color: #9CDCFE\">$z<\/span><span style=\"color: #D4D4D4\"> = <\/span><span style=\"color: #569CD6\">$this<\/span><span style=\"color: #D4D4D4\">-&gt;<\/span><span style=\"color: #9CDCFE\">b<\/span><span style=\"color: #D4D4D4\">[<\/span><span style=\"color: #9CDCFE\">$i<\/span><span style=\"color: #D4D4D4\">];<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">            <\/span><span style=\"color: #C586C0\">for<\/span><span style=\"color: #D4D4D4\"> (<\/span><span style=\"color: #9CDCFE\">$j<\/span><span style=\"color: #D4D4D4\"> = <\/span><span style=\"color: #B5CEA8\">0<\/span><span style=\"color: #D4D4D4\">; <\/span><span style=\"color: #9CDCFE\">$j<\/span><span style=\"color: #D4D4D4\"> &lt; <\/span><span style=\"color: #569CD6\">$this<\/span><span style=\"color: #D4D4D4\">-&gt;<\/span><span style=\"color: #9CDCFE\">in<\/span><span style=\"color: #D4D4D4\">; <\/span><span style=\"color: #9CDCFE\">$j<\/span><span style=\"color: #D4D4D4\">++) {<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">                <\/span><span style=\"color: #9CDCFE\">$z<\/span><span style=\"color: #D4D4D4\"> += <\/span><span style=\"color: #569CD6\">$this<\/span><span style=\"color: #D4D4D4\">-&gt;<\/span><span style=\"color: #9CDCFE\">W<\/span><span style=\"color: #D4D4D4\">[<\/span><span style=\"color: #9CDCFE\">$i<\/span><span style=\"color: #D4D4D4\">][<\/span><span style=\"color: #9CDCFE\">$j<\/span><span style=\"color: #D4D4D4\">] * <\/span><span style=\"color: #9CDCFE\">$input<\/span><span style=\"color: #D4D4D4\">[<\/span><span style=\"color: #9CDCFE\">$j<\/span><span style=\"color: #D4D4D4\">];<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">            }<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">            <\/span><span style=\"color: #569CD6\">$this<\/span><span style=\"color: #D4D4D4\">-&gt;<\/span><span style=\"color: #9CDCFE\">lastZ<\/span><span style=\"color: #D4D4D4\">[<\/span><span style=\"color: #9CDCFE\">$i<\/span><span style=\"color: #D4D4D4\">]      = <\/span><span style=\"color: #9CDCFE\">$z<\/span><span style=\"color: #D4D4D4\">;<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">            <\/span><span style=\"color: #569CD6\">$this<\/span><span style=\"color: #D4D4D4\">-&gt;<\/span><span style=\"color: #9CDCFE\">lastOutput<\/span><span style=\"color: #D4D4D4\">[<\/span><span style=\"color: #9CDCFE\">$i<\/span><span style=\"color: #D4D4D4\">] = (<\/span><span style=\"color: #569CD6\">$this<\/span><span style=\"color: #D4D4D4\">-&gt;<\/span><span style=\"color: #9CDCFE\">activation<\/span><span style=\"color: #D4D4D4\">)(<\/span><span style=\"color: #9CDCFE\">$z<\/span><span style=\"color: #D4D4D4\">);<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        }<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        <\/span><span style=\"color: #C586C0\">return<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #569CD6\">$this<\/span><span style=\"color: #D4D4D4\">-&gt;<\/span><span style=\"color: #9CDCFE\">lastOutput<\/span><span style=\"color: #D4D4D4\">;<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    }<\/span><\/span>\n<span class=\"line\" \/>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    <\/span><span style=\"color: #6A9955\">\/** Back-propagation, returns gradient for previous layer *\/<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    <\/span><span style=\"color: #569CD6\">public<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #569CD6\">function<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #DCDCAA\">backward<\/span><span style=\"color: #D4D4D4\">(<\/span><span style=\"color: #569CD6\">array<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #9CDCFE\">$gradOutput<\/span><span style=\"color: #D4D4D4\">, <\/span><span style=\"color: #569CD6\">float<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #9CDCFE\">$lr<\/span><span style=\"color: #D4D4D4\">): <\/span><span style=\"color: #569CD6\">array<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    {<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        <\/span><span style=\"color: #9CDCFE\">$gradInput<\/span><span style=\"color: #D4D4D4\"> = <\/span><span style=\"color: #DCDCAA\">array_fill<\/span><span style=\"color: #D4D4D4\">(<\/span><span style=\"color: #B5CEA8\">0<\/span><span style=\"color: #D4D4D4\">, <\/span><span style=\"color: #569CD6\">$this<\/span><span style=\"color: #D4D4D4\">-&gt;<\/span><span style=\"color: #9CDCFE\">in<\/span><span style=\"color: #D4D4D4\">, <\/span><span style=\"color: #B5CEA8\">0.0<\/span><span style=\"color: #D4D4D4\">);<\/span><\/span>\n<span class=\"line\" \/>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        <\/span><span style=\"color: #C586C0\">for<\/span><span style=\"color: #D4D4D4\"> (<\/span><span style=\"color: #9CDCFE\">$i<\/span><span style=\"color: #D4D4D4\"> = <\/span><span style=\"color: #B5CEA8\">0<\/span><span style=\"color: #D4D4D4\">; <\/span><span style=\"color: #9CDCFE\">$i<\/span><span style=\"color: #D4D4D4\"> &lt; <\/span><span style=\"color: #569CD6\">$this<\/span><span style=\"color: #D4D4D4\">-&gt;<\/span><span style=\"color: #9CDCFE\">out<\/span><span style=\"color: #D4D4D4\">; <\/span><span style=\"color: #9CDCFE\">$i<\/span><span style=\"color: #D4D4D4\">++) {<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">            <\/span><span style=\"color: #9CDCFE\">$delta<\/span><span style=\"color: #D4D4D4\"> = <\/span><span style=\"color: #9CDCFE\">$gradOutput<\/span><span style=\"color: #D4D4D4\">[<\/span><span style=\"color: #9CDCFE\">$i<\/span><span style=\"color: #D4D4D4\">] * (<\/span><span style=\"color: #569CD6\">$this<\/span><span style=\"color: #D4D4D4\">-&gt;<\/span><span style=\"color: #9CDCFE\">activation_d<\/span><span style=\"color: #D4D4D4\">)(<\/span><span style=\"color: #569CD6\">$this<\/span><span style=\"color: #D4D4D4\">-&gt;<\/span><span style=\"color: #9CDCFE\">lastZ<\/span><span style=\"color: #D4D4D4\">[<\/span><span style=\"color: #9CDCFE\">$i<\/span><span style=\"color: #D4D4D4\">]);<\/span><\/span>\n<span class=\"line\" \/>\n<span class=\"line\"><span style=\"color: #D4D4D4\">            <\/span><span style=\"color: #6A9955\">\/\/ Propagate gradient and update weights<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">            <\/span><span style=\"color: #C586C0\">for<\/span><span style=\"color: #D4D4D4\"> (<\/span><span style=\"color: #9CDCFE\">$j<\/span><span style=\"color: #D4D4D4\"> = <\/span><span style=\"color: #B5CEA8\">0<\/span><span style=\"color: #D4D4D4\">; <\/span><span style=\"color: #9CDCFE\">$j<\/span><span style=\"color: #D4D4D4\"> &lt; <\/span><span style=\"color: #569CD6\">$this<\/span><span style=\"color: #D4D4D4\">-&gt;<\/span><span style=\"color: #9CDCFE\">in<\/span><span style=\"color: #D4D4D4\">; <\/span><span style=\"color: #9CDCFE\">$j<\/span><span style=\"color: #D4D4D4\">++) {<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">                <\/span><span style=\"color: #9CDCFE\">$gradInput<\/span><span style=\"color: #D4D4D4\">[<\/span><span style=\"color: #9CDCFE\">$j<\/span><span style=\"color: #D4D4D4\">] += <\/span><span style=\"color: #9CDCFE\">$delta<\/span><span style=\"color: #D4D4D4\"> * <\/span><span style=\"color: #569CD6\">$this<\/span><span style=\"color: #D4D4D4\">-&gt;<\/span><span style=\"color: #9CDCFE\">W<\/span><span style=\"color: #D4D4D4\">[<\/span><span style=\"color: #9CDCFE\">$i<\/span><span style=\"color: #D4D4D4\">][<\/span><span style=\"color: #9CDCFE\">$j<\/span><span style=\"color: #D4D4D4\">];<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">                <\/span><span style=\"color: #569CD6\">$this<\/span><span style=\"color: #D4D4D4\">-&gt;<\/span><span style=\"color: #9CDCFE\">W<\/span><span style=\"color: #D4D4D4\">[<\/span><span style=\"color: #9CDCFE\">$i<\/span><span style=\"color: #D4D4D4\">][<\/span><span style=\"color: #9CDCFE\">$j<\/span><span style=\"color: #D4D4D4\">] -= <\/span><span style=\"color: #9CDCFE\">$lr<\/span><span style=\"color: #D4D4D4\"> * <\/span><span style=\"color: #9CDCFE\">$delta<\/span><span style=\"color: #D4D4D4\"> * <\/span><span style=\"color: #569CD6\">$this<\/span><span style=\"color: #D4D4D4\">-&gt;<\/span><span style=\"color: #9CDCFE\">lastInput<\/span><span style=\"color: #D4D4D4\">[<\/span><span style=\"color: #9CDCFE\">$j<\/span><span style=\"color: #D4D4D4\">];<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">            }<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">            <\/span><span style=\"color: #6A9955\">\/\/ Update bias<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">            <\/span><span style=\"color: #569CD6\">$this<\/span><span style=\"color: #D4D4D4\">-&gt;<\/span><span style=\"color: #9CDCFE\">b<\/span><span style=\"color: #D4D4D4\">[<\/span><span style=\"color: #9CDCFE\">$i<\/span><span style=\"color: #D4D4D4\">] -= <\/span><span style=\"color: #9CDCFE\">$lr<\/span><span style=\"color: #D4D4D4\"> * <\/span><span style=\"color: #9CDCFE\">$delta<\/span><span style=\"color: #D4D4D4\">;<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        }<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        <\/span><span style=\"color: #C586C0\">return<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #9CDCFE\">$gradInput<\/span><span style=\"color: #D4D4D4\">;<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    }<\/span><\/span>\n<span class=\"line\" \/>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    <\/span><span style=\"color: #6A9955\">\/* callable *\/<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #569CD6\">private<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #9CDCFE\">$activation<\/span><span style=\"color: #D4D4D4\">;<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    <\/span><span style=\"color: #6A9955\">\/* callable *\/<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #569CD6\">private<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #9CDCFE\">$activation_d<\/span><span style=\"color: #D4D4D4\">;<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">}<\/span><\/span>\n<span class=\"line\" \/>\n<span class=\"line\"><span style=\"color: #6A9955\">\/* ---------- NeuralNetwork class ---------- *\/<\/span><\/span>\n<span class=\"line\" \/>\n<span class=\"line\"><span style=\"color: #569CD6\">final<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #569CD6\">class<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #4EC9B0\">NeuralNetwork<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">{<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    <\/span><span style=\"color: #6A9955\">\/** <\/span><span style=\"color: #569CD6\">@var<\/span><span style=\"color: #6A9955\"> <\/span><span style=\"color: #4EC9B0\">Layer<\/span><span style=\"color: #569CD6\">[]<\/span><span style=\"color: #6A9955\"> *\/<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    <\/span><span style=\"color: #569CD6\">private<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #569CD6\">array<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #9CDCFE\">$layers<\/span><span style=\"color: #D4D4D4\"> = [];<\/span><\/span>\n<span class=\"line\" \/>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    <\/span><span style=\"color: #6A9955\">\/** Add a layer to the network *\/<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    <\/span><span style=\"color: #569CD6\">public<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #569CD6\">function<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #DCDCAA\">addLayer<\/span><span style=\"color: #D4D4D4\">(<\/span><span style=\"color: #4EC9B0\">Layer<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #9CDCFE\">$layer<\/span><span style=\"color: #D4D4D4\">): <\/span><span style=\"color: #569CD6\">void<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    {<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        <\/span><span style=\"color: #569CD6\">$this<\/span><span style=\"color: #D4D4D4\">-&gt;<\/span><span style=\"color: #9CDCFE\">layers<\/span><span style=\"color: #D4D4D4\">[] = <\/span><span style=\"color: #9CDCFE\">$layer<\/span><span style=\"color: #D4D4D4\">;<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    }<\/span><\/span>\n<span class=\"line\" \/>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    <\/span><span style=\"color: #6A9955\">\/** Forward pass through all layers *\/<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    <\/span><span style=\"color: #569CD6\">public<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #569CD6\">function<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #DCDCAA\">predict<\/span><span style=\"color: #D4D4D4\">(<\/span><span style=\"color: #569CD6\">array<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #9CDCFE\">$x<\/span><span style=\"color: #D4D4D4\">): <\/span><span style=\"color: #569CD6\">array<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    {<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        <\/span><span style=\"color: #9CDCFE\">$out<\/span><span style=\"color: #D4D4D4\"> = <\/span><span style=\"color: #9CDCFE\">$x<\/span><span style=\"color: #D4D4D4\">;<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        <\/span><span style=\"color: #C586C0\">foreach<\/span><span style=\"color: #D4D4D4\"> (<\/span><span style=\"color: #569CD6\">$this<\/span><span style=\"color: #D4D4D4\">-&gt;<\/span><span style=\"color: #9CDCFE\">layers<\/span><span style=\"color: #D4D4D4\"> as <\/span><span style=\"color: #9CDCFE\">$layer<\/span><span style=\"color: #D4D4D4\">) {<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">            <\/span><span style=\"color: #9CDCFE\">$out<\/span><span style=\"color: #D4D4D4\"> = <\/span><span style=\"color: #9CDCFE\">$layer<\/span><span style=\"color: #D4D4D4\">-&gt;<\/span><span style=\"color: #DCDCAA\">forward<\/span><span style=\"color: #D4D4D4\">(<\/span><span style=\"color: #9CDCFE\">$out<\/span><span style=\"color: #D4D4D4\">);<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        }<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        <\/span><span style=\"color: #C586C0\">return<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #9CDCFE\">$out<\/span><span style=\"color: #D4D4D4\">;<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    }<\/span><\/span>\n<span class=\"line\" \/>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    <\/span><span style=\"color: #6A9955\">\/**<\/span><\/span>\n<span class=\"line\"><span style=\"color: #6A9955\">     * Train the network with SGD and mean squared error<\/span><\/span>\n<span class=\"line\"><span style=\"color: #6A9955\">     * <\/span><span style=\"color: #569CD6\">@param<\/span><span style=\"color: #6A9955\"> <\/span><span style=\"color: #569CD6\">float[]<\/span><span style=\"color: #6A9955\">[] $xTrain<\/span><\/span>\n<span class=\"line\"><span style=\"color: #6A9955\">     * <\/span><span style=\"color: #569CD6\">@param<\/span><span style=\"color: #6A9955\"> <\/span><span style=\"color: #569CD6\">float[]<\/span><span style=\"color: #6A9955\">[] $yTrain<\/span><\/span>\n<span class=\"line\"><span style=\"color: #6A9955\">     *\/<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    <\/span><span style=\"color: #569CD6\">public<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #569CD6\">function<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #DCDCAA\">train<\/span><span style=\"color: #D4D4D4\">(<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        <\/span><span style=\"color: #569CD6\">array<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #9CDCFE\">$xTrain<\/span><span style=\"color: #D4D4D4\">,<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        <\/span><span style=\"color: #569CD6\">array<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #9CDCFE\">$yTrain<\/span><span style=\"color: #D4D4D4\">,<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        <\/span><span style=\"color: #569CD6\">int<\/span><span style=\"color: #D4D4D4\">   <\/span><span style=\"color: #9CDCFE\">$epochs<\/span><span style=\"color: #D4D4D4\">     = <\/span><span style=\"color: #B5CEA8\">1000<\/span><span style=\"color: #D4D4D4\">,<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        <\/span><span style=\"color: #569CD6\">float<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #9CDCFE\">$lr<\/span><span style=\"color: #D4D4D4\">         = <\/span><span style=\"color: #B5CEA8\">0.1<\/span><span style=\"color: #D4D4D4\">,<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        <\/span><span style=\"color: #569CD6\">bool<\/span><span style=\"color: #D4D4D4\">  <\/span><span style=\"color: #9CDCFE\">$verbose<\/span><span style=\"color: #D4D4D4\">    = <\/span><span style=\"color: #569CD6\">true<\/span><span style=\"color: #D4D4D4\">,<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        <\/span><span style=\"color: #569CD6\">int<\/span><span style=\"color: #D4D4D4\">   <\/span><span style=\"color: #9CDCFE\">$logStride<\/span><span style=\"color: #D4D4D4\">  = <\/span><span style=\"color: #B5CEA8\">100<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    ): <\/span><span style=\"color: #569CD6\">void<\/span><span style=\"color: #D4D4D4\"> {<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        <\/span><span style=\"color: #9CDCFE\">$n<\/span><span style=\"color: #D4D4D4\"> = <\/span><span style=\"color: #DCDCAA\">count<\/span><span style=\"color: #D4D4D4\">(<\/span><span style=\"color: #9CDCFE\">$xTrain<\/span><span style=\"color: #D4D4D4\">);<\/span><\/span>\n<span class=\"line\" \/>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        <\/span><span style=\"color: #C586C0\">for<\/span><span style=\"color: #D4D4D4\"> (<\/span><span style=\"color: #9CDCFE\">$e<\/span><span style=\"color: #D4D4D4\"> = <\/span><span style=\"color: #B5CEA8\">1<\/span><span style=\"color: #D4D4D4\">; <\/span><span style=\"color: #9CDCFE\">$e<\/span><span style=\"color: #D4D4D4\"> &lt;= <\/span><span style=\"color: #9CDCFE\">$epochs<\/span><span style=\"color: #D4D4D4\">; <\/span><span style=\"color: #9CDCFE\">$e<\/span><span style=\"color: #D4D4D4\">++) {<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">            <\/span><span style=\"color: #9CDCFE\">$loss<\/span><span style=\"color: #D4D4D4\"> = <\/span><span style=\"color: #B5CEA8\">0.0<\/span><span style=\"color: #D4D4D4\">;<\/span><\/span>\n<span class=\"line\" \/>\n<span class=\"line\"><span style=\"color: #D4D4D4\">            <\/span><span style=\"color: #C586C0\">for<\/span><span style=\"color: #D4D4D4\"> (<\/span><span style=\"color: #9CDCFE\">$k<\/span><span style=\"color: #D4D4D4\"> = <\/span><span style=\"color: #B5CEA8\">0<\/span><span style=\"color: #D4D4D4\">; <\/span><span style=\"color: #9CDCFE\">$k<\/span><span style=\"color: #D4D4D4\"> &lt; <\/span><span style=\"color: #9CDCFE\">$n<\/span><span style=\"color: #D4D4D4\">; <\/span><span style=\"color: #9CDCFE\">$k<\/span><span style=\"color: #D4D4D4\">++) {<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">                <\/span><span style=\"color: #9CDCFE\">$out<\/span><span style=\"color: #D4D4D4\">   = <\/span><span style=\"color: #569CD6\">$this<\/span><span style=\"color: #D4D4D4\">-&gt;<\/span><span style=\"color: #DCDCAA\">predict<\/span><span style=\"color: #D4D4D4\">(<\/span><span style=\"color: #9CDCFE\">$xTrain<\/span><span style=\"color: #D4D4D4\">[<\/span><span style=\"color: #9CDCFE\">$k<\/span><span style=\"color: #D4D4D4\">]);<\/span><\/span>\n<span class=\"line\" \/>\n<span class=\"line\"><span style=\"color: #D4D4D4\">                <\/span><span style=\"color: #6A9955\">\/\/ MSE derivative: 2*(\u0177 - y)<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">                <\/span><span style=\"color: #9CDCFE\">$grad<\/span><span style=\"color: #D4D4D4\">  = [];<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">                <\/span><span style=\"color: #C586C0\">foreach<\/span><span style=\"color: #D4D4D4\"> (<\/span><span style=\"color: #9CDCFE\">$out<\/span><span style=\"color: #D4D4D4\"> as <\/span><span style=\"color: #9CDCFE\">$i<\/span><span style=\"color: #D4D4D4\"> =&gt; <\/span><span style=\"color: #9CDCFE\">$o<\/span><span style=\"color: #D4D4D4\">) {<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">                    <\/span><span style=\"color: #9CDCFE\">$diff<\/span><span style=\"color: #D4D4D4\">      = <\/span><span style=\"color: #9CDCFE\">$o<\/span><span style=\"color: #D4D4D4\"> - <\/span><span style=\"color: #9CDCFE\">$yTrain<\/span><span style=\"color: #D4D4D4\">[<\/span><span style=\"color: #9CDCFE\">$k<\/span><span style=\"color: #D4D4D4\">][<\/span><span style=\"color: #9CDCFE\">$i<\/span><span style=\"color: #D4D4D4\">];<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">                    <\/span><span style=\"color: #9CDCFE\">$grad<\/span><span style=\"color: #D4D4D4\">[<\/span><span style=\"color: #9CDCFE\">$i<\/span><span style=\"color: #D4D4D4\">]  = <\/span><span style=\"color: #B5CEA8\">2<\/span><span style=\"color: #D4D4D4\"> * <\/span><span style=\"color: #9CDCFE\">$diff<\/span><span style=\"color: #D4D4D4\">;<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">                    <\/span><span style=\"color: #9CDCFE\">$loss<\/span><span style=\"color: #D4D4D4\">     += <\/span><span style=\"color: #9CDCFE\">$diff<\/span><span style=\"color: #D4D4D4\"> ** <\/span><span style=\"color: #B5CEA8\">2<\/span><span style=\"color: #D4D4D4\">;<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">                }<\/span><\/span>\n<span class=\"line\" \/>\n<span class=\"line\"><span style=\"color: #D4D4D4\">                <\/span><span style=\"color: #6A9955\">\/\/ Backward pass<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">                <\/span><span style=\"color: #C586C0\">for<\/span><span style=\"color: #D4D4D4\"> (<\/span><span style=\"color: #9CDCFE\">$l<\/span><span style=\"color: #D4D4D4\"> = <\/span><span style=\"color: #DCDCAA\">count<\/span><span style=\"color: #D4D4D4\">(<\/span><span style=\"color: #569CD6\">$this<\/span><span style=\"color: #D4D4D4\">-&gt;<\/span><span style=\"color: #9CDCFE\">layers<\/span><span style=\"color: #D4D4D4\">) - <\/span><span style=\"color: #B5CEA8\">1<\/span><span style=\"color: #D4D4D4\">; <\/span><span style=\"color: #9CDCFE\">$l<\/span><span style=\"color: #D4D4D4\"> &gt;= <\/span><span style=\"color: #B5CEA8\">0<\/span><span style=\"color: #D4D4D4\">; <\/span><span style=\"color: #9CDCFE\">$l<\/span><span style=\"color: #D4D4D4\">--) {<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">                    <\/span><span style=\"color: #9CDCFE\">$grad<\/span><span style=\"color: #D4D4D4\"> = <\/span><span style=\"color: #569CD6\">$this<\/span><span style=\"color: #D4D4D4\">-&gt;<\/span><span style=\"color: #9CDCFE\">layers<\/span><span style=\"color: #D4D4D4\">[<\/span><span style=\"color: #9CDCFE\">$l<\/span><span style=\"color: #D4D4D4\">]-&gt;<\/span><span style=\"color: #DCDCAA\">backward<\/span><span style=\"color: #D4D4D4\">(<\/span><span style=\"color: #9CDCFE\">$grad<\/span><span style=\"color: #D4D4D4\">, <\/span><span style=\"color: #9CDCFE\">$lr<\/span><span style=\"color: #D4D4D4\">);<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">                }<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">            }<\/span><\/span>\n<span class=\"line\" \/>\n<span class=\"line\"><span style=\"color: #D4D4D4\">            <\/span><span style=\"color: #C586C0\">if<\/span><span style=\"color: #D4D4D4\"> (<\/span><span style=\"color: #9CDCFE\">$verbose<\/span><span style=\"color: #D4D4D4\"> &amp;&amp; <\/span><span style=\"color: #9CDCFE\">$e<\/span><span style=\"color: #D4D4D4\"> % <\/span><span style=\"color: #9CDCFE\">$logStride<\/span><span style=\"color: #D4D4D4\"> === <\/span><span style=\"color: #B5CEA8\">0<\/span><span style=\"color: #D4D4D4\">) {<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">                <\/span><span style=\"color: #DCDCAA\">printf<\/span><span style=\"color: #D4D4D4\">(<\/span><span style=\"color: #CE9178\">\"Epoch %d\/%d - loss: %.6f<\/span><span style=\"color: #D7BA7D\">\\n<\/span><span style=\"color: #CE9178\">\"<\/span><span style=\"color: #D4D4D4\">, <\/span><span style=\"color: #9CDCFE\">$e<\/span><span style=\"color: #D4D4D4\">, <\/span><span style=\"color: #9CDCFE\">$epochs<\/span><span style=\"color: #D4D4D4\">, <\/span><span style=\"color: #9CDCFE\">$loss<\/span><span style=\"color: #D4D4D4\"> \/ <\/span><span style=\"color: #9CDCFE\">$n<\/span><span style=\"color: #D4D4D4\">);<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">            }<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">        }<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    }<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">}<\/span><\/span><\/code><\/pre><\/div>\n\n<p>In this file, we find the activation functions\u2014sigmoid and ReLU\u2014along with their respective gradients. Keeping them at the global scope, rather than encapsulated within the class, reduces the overhead of method calls and allows them to be passed as callables directly to the layer constructor, maintaining flexibility without sacrificing performance.  <\/p>\n\n<p>The Layer class is declared as final to prevent unwanted extensions and represents the logical unit of computation. It contains the in and out integers marked as readonly, ensuring their integrity throughout the object\u2019s entire lifecycle.<br \/>The weight matrix and bias vector are initialized using the Xavier initialization technique, which distributes values over an interval proportional to the square root of the total number of input and output connections.<br \/>This mathematical strategy prevents the activations from saturating during the initial epochs\u2014a phenomenon that would otherwise compromise the learning process.   <\/p>\n\n<p><img wpfc-lazyload-disable=\"true\" loading=\"lazy\" decoding=\"async\" src=\"https:\/\/renor.it\/wp-content\/ql-cache\/quicklatex.com-898b1dbe08046c765142d1c89d619143_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#119;&#95;&#123;&#105;&#106;&#125;&#32;&#92;&#115;&#105;&#109;&#32;&#92;&#109;&#97;&#116;&#104;&#99;&#97;&#108;&#32;&#85;&#92;&#33;&#92;&#66;&#105;&#103;&#108;&#40;&#45;&#92;&#115;&#113;&#114;&#116;&#123;&#92;&#116;&#102;&#114;&#97;&#99;&#123;&#54;&#125;&#123;&#110;&#95;&#123;&#92;&#116;&#101;&#120;&#116;&#123;&#105;&#110;&#125;&#125;&#32;&#43;&#32;&#110;&#95;&#123;&#92;&#116;&#101;&#120;&#116;&#123;&#111;&#117;&#116;&#125;&#125;&#125;&#125;&#44;&#92;&#44;&#32;&#92;&#115;&#113;&#114;&#116;&#123;&#92;&#116;&#102;&#114;&#97;&#99;&#123;&#54;&#125;&#123;&#110;&#95;&#123;&#92;&#116;&#101;&#120;&#116;&#123;&#105;&#110;&#125;&#125;&#32;&#43;&#32;&#110;&#95;&#123;&#92;&#116;&#101;&#120;&#116;&#123;&#111;&#117;&#116;&#125;&#125;&#125;&#125;&#92;&#66;&#105;&#103;&#114;&#41;\" title=\"Rendered by QuickLaTeX.com\" height=\"32\" width=\"257\" style=\"vertical-align: -11px;\"\/><\/p>\n\n<p><br \/>where <img wpfc-lazyload-disable=\"true\" loading=\"lazy\" decoding=\"async\" src=\"https:\/\/renor.it\/wp-content\/ql-cache\/quicklatex.com-26e3fea958c22ae912c81071bb4dcf67_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#92;&#109;&#97;&#116;&#104;&#99;&#97;&#108;&#123;&#85;&#125;&#40;&#97;&#44;&#32;&#98;&#41;\" title=\"Rendered by QuickLaTeX.com\" height=\"19\" width=\"52\" style=\"vertical-align: -5px;\"\/> denotes the continuous uniform distribution between a and b; the term under the square root, <img wpfc-lazyload-disable=\"true\" loading=\"lazy\" decoding=\"async\" src=\"https:\/\/renor.it\/wp-content\/ql-cache\/quicklatex.com-3512350736c00e541a6d7cd1157791e1_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#92;&#115;&#113;&#114;&#116;&#123;&#54;&#32;&#47;&#32;&#40;&#110;&#95;&#123;&#92;&#116;&#101;&#120;&#116;&#123;&#105;&#110;&#125;&#125;&#32;&#43;&#32;&#110;&#95;&#123;&#92;&#116;&#101;&#120;&#116;&#123;&#111;&#117;&#116;&#125;&#125;&#41;&#125;\" title=\"Rendered by QuickLaTeX.com\" height=\"22\" width=\"125\" style=\"vertical-align: -6px;\"\/>, serves as the upper and lower bound of the sampling interval.<br \/>Alternatively, one could use a Gaussian distribution with zero mean and variance <img wpfc-lazyload-disable=\"true\" loading=\"lazy\" decoding=\"async\" src=\"https:\/\/renor.it\/wp-content\/ql-cache\/quicklatex.com-ecbd0db9b46ca68035ac9f0e178b5931_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#92;&#115;&#105;&#103;&#109;&#97;&#94;&#123;&#50;&#125;&#61;&#32;&#50;&#32;&#47;&#32;&#40;&#110;&#95;&#123;&#92;&#116;&#101;&#120;&#116;&#123;&#105;&#110;&#125;&#125;&#32;&#43;&#32;&#110;&#95;&#123;&#92;&#116;&#101;&#120;&#116;&#123;&#111;&#117;&#116;&#125;&#125;&#41;\" title=\"Rendered by QuickLaTeX.com\" height=\"20\" width=\"149\" style=\"vertical-align: -5px;\"\/>, but the expression above\u2014used in the code example\u2014is the original uniform form proposed by Glorot and Bengio. <\/p>\n\n<p>During forward propagation, each neuron sums the dot product and bias, then applies the chosen activation function. The intermediate results\u2014lastInput, lastZ, and lastOutput\u2014are stored for reuse during backpropagation, allowing the gradient computation to proceed without recalculating anything. This design is ideal for step-by-step debugging.<br \/><br \/><br \/>The backward method receives the error gradient from the next layer, combines it with the local derivative of the activation function, and updates weights and biases by subtracting a fraction proportional to the learning rate. At the same time, it returns the gradient to be propagated backward to the previous layer.  <\/p>\n\n<p>The inner loop is entirely manual\u2014a deliberate choice that highlights the underlying mathematics and makes the code perfectly transparent, even to those who have never used specialized libraries. <\/p>\n\n<p>The NeuralNetwork class\u2014also declared as final\u2014acts as the orchestrator: it maintains the array of layers and provides the predict method, which channels an input vector through each layer.<br \/><br \/><br \/>The train method implements stochastic gradient descent with mean squared error. It iterates over the training set for a specified number of epochs, computes for each example the difference between the predicted and actual output, doubles the value, and propagates the gradient backward, updating the layers in reverse order.<br \/><br \/><br \/>At each iteration, it accumulates the loss to provide a global indicator which\u2014if the verbose flag is enabled\u2014is printed at regular intervals, allowing real-time monitoring of the model\u2019s convergence.   <\/p>\n\n<p>The layer constructor accepts callables, so if in the future you wish to use different activation functions, you can simply pass their references without altering the architecture. <\/p>\n\n<p><\/p>\n\n<h3 class=\"wp-block-heading\">demo_xor.php<\/h3>\n\n<div class=\"wp-block-kevinbatdorf-code-block-pro\" data-code-block-pro-font-family=\"Code-Pro-JetBrains-Mono\" style=\"font-size:.875rem;font-family:Code-Pro-JetBrains-Mono,ui-monospace,SFMono-Regular,Menlo,Monaco,Consolas,monospace;line-height:1.25rem;--cbp-tab-width:2\"><span style=\"display:block;padding:16px 0 0 16px;margin-bottom:-1px;width:100%;text-align:left;background-color:#1E1E1E\"><svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"54\" height=\"14\" viewbox=\"0 0 54 14\"><g fill=\"none\"><\/g><\/svg><\/span><span role=\"button\" style=\"color:#D4D4D4;display:none\" aria-label=\"Copy\" class=\"code-block-pro-copy-button\"><svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" style=\"width:24px;height:24px\" viewbox=\"0 0 24 24\"><path d=\"M9 5H7a2 2 0 00-2 2v12a2 2 0 002 2h10a2 2 0 002-2V7a2 2 0 00-2-2h-2M9 5a2 2 0 002 2h2a2 2 0 002-2M9 5a2 2 0 012-2h2a2 2 0 012 2m-6 9l2 2 4-4\"><\/path><path d=\"M9 5H7a2 2 0 00-2 2v12a2 2 0 002 2h10a2 2 0 002-2V7a2 2 0 00-2-2h-2M9 5a2 2 0 002 2h2a2 2 0 002-2M9 5a2 2 0 012-2h2a2 2 0 012 2\"><\/path><\/svg><\/span><pre class=\"shiki dark-plus\" style=\"background-color: #1E1E1E\"><code><span class=\"line\"><span style=\"color: #D4D4D4\">&lt;?php<\/span><\/span>\n<span class=\"line\"><span style=\"color: #C586C0\">declare<\/span><span style=\"color: #D4D4D4\">(strict_types=<\/span><span style=\"color: #B5CEA8\">1<\/span><span style=\"color: #D4D4D4\">);<\/span><\/span>\n<span class=\"line\" \/>\n<span class=\"line\"><span style=\"color: #6A9955\">\/\/ Include the neural network implementation<\/span><\/span>\n<span class=\"line\"><span style=\"color: #C586C0\">require_once<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #569CD6\">__DIR__<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #D4D4D4\">.<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #CE9178\">'\/NeuralNetwork.php'<\/span><span style=\"color: #D4D4D4\">;<\/span><\/span>\n<span class=\"line\" \/>\n<span class=\"line\"><span style=\"color: #6A9955\">\/\/ Training data for XOR<\/span><\/span>\n<span class=\"line\"><span style=\"color: #9CDCFE\">$xTrain<\/span><span style=\"color: #D4D4D4\"> = [<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    [<\/span><span style=\"color: #B5CEA8\">0<\/span><span style=\"color: #D4D4D4\">, <\/span><span style=\"color: #B5CEA8\">0<\/span><span style=\"color: #D4D4D4\">],<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    [<\/span><span style=\"color: #B5CEA8\">0<\/span><span style=\"color: #D4D4D4\">, <\/span><span style=\"color: #B5CEA8\">1<\/span><span style=\"color: #D4D4D4\">],<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    [<\/span><span style=\"color: #B5CEA8\">1<\/span><span style=\"color: #D4D4D4\">, <\/span><span style=\"color: #B5CEA8\">0<\/span><span style=\"color: #D4D4D4\">],<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    [<\/span><span style=\"color: #B5CEA8\">1<\/span><span style=\"color: #D4D4D4\">, <\/span><span style=\"color: #B5CEA8\">1<\/span><span style=\"color: #D4D4D4\">],<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">];<\/span><\/span>\n<span class=\"line\"><span style=\"color: #9CDCFE\">$yTrain<\/span><span style=\"color: #D4D4D4\"> = [<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    [<\/span><span style=\"color: #B5CEA8\">0<\/span><span style=\"color: #D4D4D4\">],<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    [<\/span><span style=\"color: #B5CEA8\">1<\/span><span style=\"color: #D4D4D4\">],<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    [<\/span><span style=\"color: #B5CEA8\">1<\/span><span style=\"color: #D4D4D4\">],<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    [<\/span><span style=\"color: #B5CEA8\">0<\/span><span style=\"color: #D4D4D4\">],<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">];<\/span><\/span>\n<span class=\"line\" \/>\n<span class=\"line\"><span style=\"color: #6A9955\">\/\/ Build the network: 2-3-1 with sigmoid activations<\/span><\/span>\n<span class=\"line\"><span style=\"color: #9CDCFE\">$net<\/span><span style=\"color: #D4D4D4\"> = <\/span><span style=\"color: #569CD6\">new<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #4EC9B0\">NeuralNetwork<\/span><span style=\"color: #D4D4D4\">();<\/span><\/span>\n<span class=\"line\"><span style=\"color: #9CDCFE\">$net<\/span><span style=\"color: #D4D4D4\">-&gt;<\/span><span style=\"color: #DCDCAA\">addLayer<\/span><span style=\"color: #D4D4D4\">(<\/span><span style=\"color: #569CD6\">new<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #4EC9B0\">Layer<\/span><span style=\"color: #D4D4D4\">(<\/span><span style=\"color: #B5CEA8\">2<\/span><span style=\"color: #D4D4D4\">, <\/span><span style=\"color: #B5CEA8\">3<\/span><span style=\"color: #D4D4D4\">, <\/span><span style=\"color: #CE9178\">'sigmoid'<\/span><span style=\"color: #D4D4D4\">, <\/span><span style=\"color: #CE9178\">'sigmoid_derivative'<\/span><span style=\"color: #D4D4D4\">));<\/span><\/span>\n<span class=\"line\"><span style=\"color: #9CDCFE\">$net<\/span><span style=\"color: #D4D4D4\">-&gt;<\/span><span style=\"color: #DCDCAA\">addLayer<\/span><span style=\"color: #D4D4D4\">(<\/span><span style=\"color: #569CD6\">new<\/span><span style=\"color: #D4D4D4\"> <\/span><span style=\"color: #4EC9B0\">Layer<\/span><span style=\"color: #D4D4D4\">(<\/span><span style=\"color: #B5CEA8\">3<\/span><span style=\"color: #D4D4D4\">, <\/span><span style=\"color: #B5CEA8\">1<\/span><span style=\"color: #D4D4D4\">, <\/span><span style=\"color: #CE9178\">'sigmoid'<\/span><span style=\"color: #D4D4D4\">, <\/span><span style=\"color: #CE9178\">'sigmoid_derivative'<\/span><span style=\"color: #D4D4D4\">));<\/span><\/span>\n<span class=\"line\" \/>\n<span class=\"line\"><span style=\"color: #6A9955\">\/\/ Train the network<\/span><\/span>\n<span class=\"line\"><span style=\"color: #9CDCFE\">$net<\/span><span style=\"color: #D4D4D4\">-&gt;<\/span><span style=\"color: #DCDCAA\">train<\/span><span style=\"color: #D4D4D4\">(<\/span><span style=\"color: #9CDCFE\">$xTrain<\/span><span style=\"color: #D4D4D4\">, <\/span><span style=\"color: #9CDCFE\">$yTrain<\/span><span style=\"color: #D4D4D4\">, <\/span><span style=\"color: #9CDCFE\">epochs<\/span><span style=\"color: #D4D4D4\">: <\/span><span style=\"color: #B5CEA8\">5000<\/span><span style=\"color: #D4D4D4\">, <\/span><span style=\"color: #9CDCFE\">lr<\/span><span style=\"color: #D4D4D4\">: <\/span><span style=\"color: #B5CEA8\">0.5<\/span><span style=\"color: #D4D4D4\">, <\/span><span style=\"color: #9CDCFE\">logStride<\/span><span style=\"color: #D4D4D4\">: <\/span><span style=\"color: #B5CEA8\">500<\/span><span style=\"color: #D4D4D4\">);<\/span><\/span>\n<span class=\"line\" \/>\n<span class=\"line\"><span style=\"color: #6A9955\">\/\/ Test predictions<\/span><\/span>\n<span class=\"line\"><span style=\"color: #C586C0\">foreach<\/span><span style=\"color: #D4D4D4\"> (<\/span><span style=\"color: #9CDCFE\">$xTrain<\/span><span style=\"color: #D4D4D4\"> as <\/span><span style=\"color: #9CDCFE\">$sample<\/span><span style=\"color: #D4D4D4\">) {<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    <\/span><span style=\"color: #9CDCFE\">$out<\/span><span style=\"color: #D4D4D4\"> = <\/span><span style=\"color: #9CDCFE\">$net<\/span><span style=\"color: #D4D4D4\">-&gt;<\/span><span style=\"color: #DCDCAA\">predict<\/span><span style=\"color: #D4D4D4\">(<\/span><span style=\"color: #9CDCFE\">$sample<\/span><span style=\"color: #D4D4D4\">)[<\/span><span style=\"color: #B5CEA8\">0<\/span><span style=\"color: #D4D4D4\">];<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">    <\/span><span style=\"color: #DCDCAA\">printf<\/span><span style=\"color: #D4D4D4\">(<\/span><span style=\"color: #CE9178\">\"Input %s \u21d2 Output %.4f<\/span><span style=\"color: #D7BA7D\">\\n<\/span><span style=\"color: #CE9178\">\"<\/span><span style=\"color: #D4D4D4\">, <\/span><span style=\"color: #DCDCAA\">json_encode<\/span><span style=\"color: #D4D4D4\">(<\/span><span style=\"color: #9CDCFE\">$sample<\/span><span style=\"color: #D4D4D4\">), <\/span><span style=\"color: #9CDCFE\">$out<\/span><span style=\"color: #D4D4D4\">);<\/span><\/span>\n<span class=\"line\"><span style=\"color: #D4D4D4\">}<\/span><\/span><\/code><\/pre><\/div>\n\n<p>In this functional snippet, the training dataset and the corresponding ground truth for the classic XOR problem are declared: four pairs of binary inputs, each paired with its expected output. This setup allows us to test the model\u2019s ability to learn a non-linear function.<\/p>\n\n<p>The core logic begins with the instantiation of the NeuralNetwork object. A two-layer topology is then constructed: the first layer accepts the two input features and projects them onto three output neurons; the second layer receives those three intermediate activations and returns a single scalar value.<br \/>In both layers, the sigmoid activation function is used\u2014chosen for its didactic simplicity and for the ease with which its gradient is computed during the backpropagation phase.   <\/p>\n\n<p>Here is the sigmoid function, along with its derivative\u2014commonly used in neural networks for both activation and backpropagation:<\/p>\n\n<p><img wpfc-lazyload-disable=\"true\" loading=\"lazy\" decoding=\"async\" src=\"https:\/\/renor.it\/wp-content\/ql-cache\/quicklatex.com-954348c5113eda378379bc7343a8e5f8_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#92;&#115;&#105;&#103;&#109;&#97;&#40;&#120;&#41;&#61;&#92;&#102;&#114;&#97;&#99;&#123;&#49;&#125;&#123;&#49;&#43;&#92;&#109;&#97;&#116;&#104;&#114;&#109;&#32;&#101;&#94;&#123;&#45;&#120;&#125;&#125;\" title=\"Rendered by QuickLaTeX.com\" height=\"25\" width=\"101\" style=\"vertical-align: -9px;\"\/><\/p>\n\n<p><br \/>where<img wpfc-lazyload-disable=\"true\" loading=\"lazy\" decoding=\"async\" src=\"https:\/\/renor.it\/wp-content\/ql-cache\/quicklatex.com-14618925f387ca16527ce10c1b1d5121_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#32;&#92;&#109;&#97;&#116;&#104;&#114;&#109;&#123;&#101;&#125;\" title=\"Rendered by QuickLaTeX.com\" height=\"8\" width=\"8\" style=\"vertical-align: 0px;\"\/> is Euler\u2019s number (the base of natural logarithms), and <img wpfc-lazyload-disable=\"true\" loading=\"lazy\" decoding=\"async\" src=\"https:\/\/renor.it\/wp-content\/ql-cache\/quicklatex.com-ede05c264bba0eda080918aaa09c4658_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#120;\" title=\"Rendered by QuickLaTeX.com\" height=\"8\" width=\"10\" style=\"vertical-align: 0px;\"\/> represents the real-valued input to the neuron.<br \/>This expression guarantees a continuous output between 0 and 1, with an inflection point at the origin that defines its characteristic \u201cS\u201d shape: for very large negative values, the function tends asymptotically toward 0, while for very large positive values, it approaches 1.<br \/><br \/><br \/>In the context of machine learning, the derivative of the sigmoid is often used during backpropagation. Its compact form is:   <\/p>\n\n<p><img wpfc-lazyload-disable=\"true\" loading=\"lazy\" decoding=\"async\" src=\"https:\/\/renor.it\/wp-content\/ql-cache\/quicklatex.com-8f744449503600140163a99f16b8007c_l3.png\" class=\"ql-img-inline-formula quicklatex-auto-format\" alt=\"&#92;&#115;&#105;&#103;&#109;&#97;&#39;&#40;&#120;&#41;&#61;&#92;&#115;&#105;&#103;&#109;&#97;&#40;&#120;&#41;&#92;&#44;&#92;&#98;&#105;&#103;&#108;&#40;&#49;&#45;&#92;&#115;&#105;&#103;&#109;&#97;&#40;&#120;&#41;&#92;&#98;&#105;&#103;&#114;&#41;\" title=\"Rendered by QuickLaTeX.com\" height=\"22\" width=\"180\" style=\"vertical-align: -7px;\"\/><\/p>\n\n<p><br \/>This latter relationship derives directly from the original definition and allows the gradient to be computed without the need for additional exponential functions, thus optimizing the weight update phase.<\/p>\n\n<p>Continuing on, the train method initiates the actual training process, iterating over the same small dataset for five thousand epochs. With a learning rate set to 0.5 and a loss log printed every 500 iterations, the loop performs gradient descent on the mean squared error, updating the weights and biases of both layers at each observation.<br \/><br \/><br \/>At the end of training, a simple foreach loop iterates once more over the four input patterns, feeds them to the predict method, and prints the network\u2019s numerical outputs to the screen\u2014allowing for immediate comparison with the expected outputs and a quick evaluation of the model\u2019s accuracy.<br \/><br \/><br \/>In a production context, this same logic could easily be encapsulated in a REST endpoint or a command-line script, but in this minimalist form it already provides a complete demonstration of how PHP can manage the entire lifecycle of a small neural network\u2014from layer definition to final prediction.    <\/p>\n\n<h2 class=\"wp-block-heading\">Conclusions<\/h2>\n\n<p>The experiment demonstrates that, although PHP wasn\u2019t designed for numerical computing, it is possible to implement a basic yet functional neural network, train it in reasonable time on small-scale problems, and deploy it in production without introducing an additional runtime. <\/p>\n\n<p>Xavier initialization preserves signal stability from the very first epochs, the sigmoid ensures a well-defined gradient, and the fully transparent, non\u2013black-box approach makes the model an excellent didactic tool: every weight, every bias, and every step of backpropagation is under full control.<br \/>It\u2019s clear that this solution is not meant to compete with GPU-optimized frameworks\u2014but when the goal is to integrate lightweight inference into an already PHP-based stack, or simply to gain a deep, hands-on understanding of how a neural network works, the presented implementation offers an elegant and accessible path.  <\/p>\n\n<p>In conclusion, the most interesting aspect of this article was not to build a new ChatGPT, but rather to foster awareness and learn the mathematical principles behind the construction of a simple neural network\u2014line by line. <\/p>\n\n<p>In my opinion, the true power of artificial intelligence lies in understanding the scientific principles on which it is based, rather than in the specific programming languages we use to implement it. <\/p>\n\n<p>[starbox]<\/p>\n","protected":false},"excerpt":{"rendered":"<p>I\u2019ll begin this article with a premise\u2026 PHP is certainly not the ideal language when it comes to artificial intelligence. Neural networks are typically the domain of more scientific languages like Python, which offers optimized libraries for this purpose such as PyTorch and NumPy\u2014and that\u2019s usually the language I rely on when working in this [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":507,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_writerflow_disable_suggestions":false,"footnotes":""},"categories":[1976],"tags":[1383,1348,1384,1385,1386,1387,1388,1389,1390,1357,1391,1392,1393,1394,1395],"class_list":["post-659","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-intelligenza-artificiale-algoritmi","tag-ai-in-php-en","tag-artificial-intelligence","tag-back-propagation-en","tag-backend-programming","tag-deep-learning-en","tag-feed-forward-network","tag-machine-learning-en","tag-microservices","tag-model-serialization","tag-neural-networks","tag-php-en","tag-rest-api-en","tag-sigmoid-function","tag-xavier-initialization","tag-xor-example"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v27.4 (Yoast SEO v27.6) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>Neural Networks in PHP? Yes, It Can Be Done! | RENOR &amp; Partners S.r.l.<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/renor.it\/en\/blog\/intelligenza-artificiale-algoritmi\/neural-networks-in-php-yes-it-can-be-done\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Neural Networks in PHP? Yes, It Can Be Done!\" \/>\n<meta property=\"og:description\" content=\"I\u2019ll begin this article with a premise\u2026 PHP is certainly not the ideal language when it comes to artificial intelligence. Neural networks are typically the domain of more scientific languages like Python, which offers optimized libraries for this purpose such as PyTorch and NumPy\u2014and that\u2019s usually the language I rely on when working in this [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/renor.it\/en\/blog\/intelligenza-artificiale-algoritmi\/neural-networks-in-php-yes-it-can-be-done\/\" \/>\n<meta property=\"og:site_name\" content=\"RENOR &amp; Partners S.r.l.\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/renorsrl\" \/>\n<meta property=\"article:author\" content=\"https:\/\/www.facebook.com\/simone.renzi.3954\/\" \/>\n<meta property=\"article:published_time\" content=\"2025-05-18T10:23:19+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-12-20T14:53:59+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/renor.it\/wp-content\/uploads\/2025\/05\/neural-network.webp\" \/>\n\t<meta property=\"og:image:width\" content=\"1536\" \/>\n\t<meta property=\"og:image:height\" content=\"1024\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/webp\" \/>\n<meta name=\"author\" content=\"Simone Renzi\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Simone Renzi\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"16 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/renor.it\\\/en\\\/blog\\\/intelligenza-artificiale-algoritmi\\\/neural-networks-in-php-yes-it-can-be-done\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/renor.it\\\/en\\\/blog\\\/intelligenza-artificiale-algoritmi\\\/neural-networks-in-php-yes-it-can-be-done\\\/\"},\"author\":{\"name\":\"Simone Renzi\",\"@id\":\"https:\\\/\\\/renor.it\\\/en\\\/#\\\/schema\\\/person\\\/21343be04e5983a87f3a9a6182cf8795\"},\"headline\":\"Neural Networks in PHP? Yes, It Can Be Done!\",\"datePublished\":\"2025-05-18T10:23:19+00:00\",\"dateModified\":\"2025-12-20T14:53:59+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/renor.it\\\/en\\\/blog\\\/intelligenza-artificiale-algoritmi\\\/neural-networks-in-php-yes-it-can-be-done\\\/\"},\"wordCount\":2643,\"publisher\":{\"@id\":\"https:\\\/\\\/renor.it\\\/en\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/renor.it\\\/en\\\/blog\\\/intelligenza-artificiale-algoritmi\\\/neural-networks-in-php-yes-it-can-be-done\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/renor.it\\\/wp-content\\\/uploads\\\/2025\\\/05\\\/neural-network.webp\",\"keywords\":[\"AI in PHP\",\"artificial intelligence\",\"back-propagation\",\"backend programming\",\"Deep Learning\",\"feed-forward network\",\"Machine Learning\",\"microservices\",\"model serialization\",\"neural networks\",\"php\",\"REST API\",\"sigmoid function\",\"Xavier initialization\",\"XOR example\"],\"articleSection\":[\"Intelligenza Artificiale &amp; Algoritmi\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/renor.it\\\/en\\\/blog\\\/intelligenza-artificiale-algoritmi\\\/neural-networks-in-php-yes-it-can-be-done\\\/\",\"url\":\"https:\\\/\\\/renor.it\\\/en\\\/blog\\\/intelligenza-artificiale-algoritmi\\\/neural-networks-in-php-yes-it-can-be-done\\\/\",\"name\":\"Neural Networks in PHP? Yes, It Can Be Done! | RENOR &amp; Partners S.r.l.\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/renor.it\\\/en\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/renor.it\\\/en\\\/blog\\\/intelligenza-artificiale-algoritmi\\\/neural-networks-in-php-yes-it-can-be-done\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/renor.it\\\/en\\\/blog\\\/intelligenza-artificiale-algoritmi\\\/neural-networks-in-php-yes-it-can-be-done\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/renor.it\\\/wp-content\\\/uploads\\\/2025\\\/05\\\/neural-network.webp\",\"datePublished\":\"2025-05-18T10:23:19+00:00\",\"dateModified\":\"2025-12-20T14:53:59+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/renor.it\\\/en\\\/blog\\\/intelligenza-artificiale-algoritmi\\\/neural-networks-in-php-yes-it-can-be-done\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/renor.it\\\/en\\\/blog\\\/intelligenza-artificiale-algoritmi\\\/neural-networks-in-php-yes-it-can-be-done\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/renor.it\\\/en\\\/blog\\\/intelligenza-artificiale-algoritmi\\\/neural-networks-in-php-yes-it-can-be-done\\\/#primaryimage\",\"url\":\"https:\\\/\\\/renor.it\\\/wp-content\\\/uploads\\\/2025\\\/05\\\/neural-network.webp\",\"contentUrl\":\"https:\\\/\\\/renor.it\\\/wp-content\\\/uploads\\\/2025\\\/05\\\/neural-network.webp\",\"width\":1536,\"height\":1024,\"caption\":\"Una trama di neuroni luminescenti si intreccia su uno sfondo di snippet del linguaggio PHP, mentre il logo campeggia al centro a indicare l\u2019inusuale connubio fra programmazione web e reti neurali.\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/renor.it\\\/en\\\/blog\\\/intelligenza-artificiale-algoritmi\\\/neural-networks-in-php-yes-it-can-be-done\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/renor.it\\\/en\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Blog\",\"item\":\"https:\\\/\\\/renor.it\\\/en\\\/blog\\\/\"},{\"@type\":\"ListItem\",\"position\":3,\"name\":\"Artificial Intelligence &amp; Algorithms\",\"item\":\"https:\\\/\\\/renor.it\\\/en\\\/blog\\\/artificial-intelligence-algorithms\\\/\"},{\"@type\":\"ListItem\",\"position\":4,\"name\":\"Neural Networks in PHP? Yes, It Can Be Done!\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/renor.it\\\/en\\\/#website\",\"url\":\"https:\\\/\\\/renor.it\\\/en\\\/\",\"name\":\"RENOR & Partners S.r.l.\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\\\/\\\/renor.it\\\/en\\\/#organization\"},\"alternateName\":\"RENOR & Partners S.r.l.\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/renor.it\\\/en\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/renor.it\\\/en\\\/#organization\",\"name\":\"RENOR & Partners S.r.l.\",\"url\":\"https:\\\/\\\/renor.it\\\/en\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/renor.it\\\/en\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/renor.it\\\/wp-content\\\/uploads\\\/2025\\\/12\\\/logo-new-1.webp\",\"contentUrl\":\"https:\\\/\\\/renor.it\\\/wp-content\\\/uploads\\\/2025\\\/12\\\/logo-new-1.webp\",\"width\":432,\"height\":146,\"caption\":\"RENOR & Partners S.r.l.\"},\"image\":{\"@id\":\"https:\\\/\\\/renor.it\\\/en\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/www.facebook.com\\\/renorsrl\",\"https:\\\/\\\/www.instagram.com\\\/renorpartners\\\/\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/renor-partners\\\/posts\\\/?feedView=all\"],\"description\":\"RENOR & Partners Srl \u00e8 una societ\u00e0 di consulenza tecnologica e ingegneristica specializzata in sviluppo software, cloud computing, integrazione di sistemi, intelligenza artificiale applicata e progettazione elettronica. L\u2019azienda supporta imprese e pubbliche amministrazioni nella realizzazione di soluzioni digitali affidabili, scalabili e orientate all\u2019efficienza, con un approccio pragmatico basato su competenze tecniche, progettazione su misura e innovazione concreta.\",\"email\":\"info@renor.it\",\"telephone\":\"3791489430\",\"legalName\":\"RENOR AND PARTNERS S.r.l.\",\"foundingDate\":\"2022-06-21\",\"vatID\":\"16768411007\",\"taxID\":\"16768411007\",\"numberOfEmployees\":{\"@type\":\"QuantitativeValue\",\"minValue\":\"1\",\"maxValue\":\"10\"}},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/renor.it\\\/en\\\/#\\\/schema\\\/person\\\/21343be04e5983a87f3a9a6182cf8795\",\"name\":\"Simone Renzi\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/54f81b51be6bda6d63a06a1cd6563d9b0d5778d7af4f0bda4e246fc3e5737e2e?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/54f81b51be6bda6d63a06a1cd6563d9b0d5778d7af4f0bda4e246fc3e5737e2e?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/54f81b51be6bda6d63a06a1cd6563d9b0d5778d7af4f0bda4e246fc3e5737e2e?s=96&d=mm&r=g\",\"caption\":\"Simone Renzi\"},\"description\":\"Senior full-stack web engineer with over 20 years of experience in cloud architectures, AI, and SaaS solutions; member of Mensa Italia. Creator of platforms such as HR24.ai and Paghe.ai, he oversaw the web development of FNS, a neural network simulator cited in Scientific Reports (Nature Portfolio), and has collaborated on research projects with INFN \u2013 Laboratori Nazionali di Frascati, Universit\u00e0 di Roma \u201cTor Vergata\u201d, Universidad Complutense, Universidad Polit\u00e9cnica and Centro de Tecnolog\u00eda Biom\u00e9dica in Madrid. A classical pianist, he combines musical creativity and technological rigor in every project.\",\"sameAs\":[\"https:\\\/\\\/renor.it\",\"https:\\\/\\\/www.facebook.com\\\/simone.renzi.3954\\\/\",\"https:\\\/\\\/www.linkedin.com\\\/in\\\/simone-renzi\"],\"url\":\"https:\\\/\\\/renor.it\\\/en\\\/author\\\/thesimon\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"Neural Networks in PHP? Yes, It Can Be Done! | RENOR &amp; Partners S.r.l.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/renor.it\/en\/blog\/intelligenza-artificiale-algoritmi\/neural-networks-in-php-yes-it-can-be-done\/","og_locale":"en_US","og_type":"article","og_title":"Neural Networks in PHP? Yes, It Can Be Done!","og_description":"I\u2019ll begin this article with a premise\u2026 PHP is certainly not the ideal language when it comes to artificial intelligence. Neural networks are typically the domain of more scientific languages like Python, which offers optimized libraries for this purpose such as PyTorch and NumPy\u2014and that\u2019s usually the language I rely on when working in this [&hellip;]","og_url":"https:\/\/renor.it\/en\/blog\/intelligenza-artificiale-algoritmi\/neural-networks-in-php-yes-it-can-be-done\/","og_site_name":"RENOR &amp; Partners S.r.l.","article_publisher":"https:\/\/www.facebook.com\/renorsrl","article_author":"https:\/\/www.facebook.com\/simone.renzi.3954\/","article_published_time":"2025-05-18T10:23:19+00:00","article_modified_time":"2025-12-20T14:53:59+00:00","og_image":[{"width":1536,"height":1024,"url":"https:\/\/renor.it\/wp-content\/uploads\/2025\/05\/neural-network.webp","type":"image\/webp"}],"author":"Simone Renzi","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Simone Renzi","Est. reading time":"16 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/renor.it\/en\/blog\/intelligenza-artificiale-algoritmi\/neural-networks-in-php-yes-it-can-be-done\/#article","isPartOf":{"@id":"https:\/\/renor.it\/en\/blog\/intelligenza-artificiale-algoritmi\/neural-networks-in-php-yes-it-can-be-done\/"},"author":{"name":"Simone Renzi","@id":"https:\/\/renor.it\/en\/#\/schema\/person\/21343be04e5983a87f3a9a6182cf8795"},"headline":"Neural Networks in PHP? Yes, It Can Be Done!","datePublished":"2025-05-18T10:23:19+00:00","dateModified":"2025-12-20T14:53:59+00:00","mainEntityOfPage":{"@id":"https:\/\/renor.it\/en\/blog\/intelligenza-artificiale-algoritmi\/neural-networks-in-php-yes-it-can-be-done\/"},"wordCount":2643,"publisher":{"@id":"https:\/\/renor.it\/en\/#organization"},"image":{"@id":"https:\/\/renor.it\/en\/blog\/intelligenza-artificiale-algoritmi\/neural-networks-in-php-yes-it-can-be-done\/#primaryimage"},"thumbnailUrl":"https:\/\/renor.it\/wp-content\/uploads\/2025\/05\/neural-network.webp","keywords":["AI in PHP","artificial intelligence","back-propagation","backend programming","Deep Learning","feed-forward network","Machine Learning","microservices","model serialization","neural networks","php","REST API","sigmoid function","Xavier initialization","XOR example"],"articleSection":["Intelligenza Artificiale &amp; Algoritmi"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/renor.it\/en\/blog\/intelligenza-artificiale-algoritmi\/neural-networks-in-php-yes-it-can-be-done\/","url":"https:\/\/renor.it\/en\/blog\/intelligenza-artificiale-algoritmi\/neural-networks-in-php-yes-it-can-be-done\/","name":"Neural Networks in PHP? Yes, It Can Be Done! | RENOR &amp; Partners S.r.l.","isPartOf":{"@id":"https:\/\/renor.it\/en\/#website"},"primaryImageOfPage":{"@id":"https:\/\/renor.it\/en\/blog\/intelligenza-artificiale-algoritmi\/neural-networks-in-php-yes-it-can-be-done\/#primaryimage"},"image":{"@id":"https:\/\/renor.it\/en\/blog\/intelligenza-artificiale-algoritmi\/neural-networks-in-php-yes-it-can-be-done\/#primaryimage"},"thumbnailUrl":"https:\/\/renor.it\/wp-content\/uploads\/2025\/05\/neural-network.webp","datePublished":"2025-05-18T10:23:19+00:00","dateModified":"2025-12-20T14:53:59+00:00","breadcrumb":{"@id":"https:\/\/renor.it\/en\/blog\/intelligenza-artificiale-algoritmi\/neural-networks-in-php-yes-it-can-be-done\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/renor.it\/en\/blog\/intelligenza-artificiale-algoritmi\/neural-networks-in-php-yes-it-can-be-done\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/renor.it\/en\/blog\/intelligenza-artificiale-algoritmi\/neural-networks-in-php-yes-it-can-be-done\/#primaryimage","url":"https:\/\/renor.it\/wp-content\/uploads\/2025\/05\/neural-network.webp","contentUrl":"https:\/\/renor.it\/wp-content\/uploads\/2025\/05\/neural-network.webp","width":1536,"height":1024,"caption":"Una trama di neuroni luminescenti si intreccia su uno sfondo di snippet del linguaggio PHP, mentre il logo campeggia al centro a indicare l\u2019inusuale connubio fra programmazione web e reti neurali."},{"@type":"BreadcrumbList","@id":"https:\/\/renor.it\/en\/blog\/intelligenza-artificiale-algoritmi\/neural-networks-in-php-yes-it-can-be-done\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/renor.it\/en\/"},{"@type":"ListItem","position":2,"name":"Blog","item":"https:\/\/renor.it\/en\/blog\/"},{"@type":"ListItem","position":3,"name":"Artificial Intelligence &amp; Algorithms","item":"https:\/\/renor.it\/en\/blog\/artificial-intelligence-algorithms\/"},{"@type":"ListItem","position":4,"name":"Neural Networks in PHP? Yes, It Can Be Done!"}]},{"@type":"WebSite","@id":"https:\/\/renor.it\/en\/#website","url":"https:\/\/renor.it\/en\/","name":"RENOR & Partners S.r.l.","description":"","publisher":{"@id":"https:\/\/renor.it\/en\/#organization"},"alternateName":"RENOR & Partners S.r.l.","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/renor.it\/en\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/renor.it\/en\/#organization","name":"RENOR & Partners S.r.l.","url":"https:\/\/renor.it\/en\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/renor.it\/en\/#\/schema\/logo\/image\/","url":"https:\/\/renor.it\/wp-content\/uploads\/2025\/12\/logo-new-1.webp","contentUrl":"https:\/\/renor.it\/wp-content\/uploads\/2025\/12\/logo-new-1.webp","width":432,"height":146,"caption":"RENOR & Partners S.r.l."},"image":{"@id":"https:\/\/renor.it\/en\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/renorsrl","https:\/\/www.instagram.com\/renorpartners\/","https:\/\/www.linkedin.com\/company\/renor-partners\/posts\/?feedView=all"],"description":"RENOR & Partners Srl \u00e8 una societ\u00e0 di consulenza tecnologica e ingegneristica specializzata in sviluppo software, cloud computing, integrazione di sistemi, intelligenza artificiale applicata e progettazione elettronica. L\u2019azienda supporta imprese e pubbliche amministrazioni nella realizzazione di soluzioni digitali affidabili, scalabili e orientate all\u2019efficienza, con un approccio pragmatico basato su competenze tecniche, progettazione su misura e innovazione concreta.","email":"info@renor.it","telephone":"3791489430","legalName":"RENOR AND PARTNERS S.r.l.","foundingDate":"2022-06-21","vatID":"16768411007","taxID":"16768411007","numberOfEmployees":{"@type":"QuantitativeValue","minValue":"1","maxValue":"10"}},{"@type":"Person","@id":"https:\/\/renor.it\/en\/#\/schema\/person\/21343be04e5983a87f3a9a6182cf8795","name":"Simone Renzi","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/54f81b51be6bda6d63a06a1cd6563d9b0d5778d7af4f0bda4e246fc3e5737e2e?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/54f81b51be6bda6d63a06a1cd6563d9b0d5778d7af4f0bda4e246fc3e5737e2e?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/54f81b51be6bda6d63a06a1cd6563d9b0d5778d7af4f0bda4e246fc3e5737e2e?s=96&d=mm&r=g","caption":"Simone Renzi"},"description":"Senior full-stack web engineer with over 20 years of experience in cloud architectures, AI, and SaaS solutions; member of Mensa Italia. Creator of platforms such as HR24.ai and Paghe.ai, he oversaw the web development of FNS, a neural network simulator cited in Scientific Reports (Nature Portfolio), and has collaborated on research projects with INFN \u2013 Laboratori Nazionali di Frascati, Universit\u00e0 di Roma \u201cTor Vergata\u201d, Universidad Complutense, Universidad Polit\u00e9cnica and Centro de Tecnolog\u00eda Biom\u00e9dica in Madrid. A classical pianist, he combines musical creativity and technological rigor in every project.","sameAs":["https:\/\/renor.it","https:\/\/www.facebook.com\/simone.renzi.3954\/","https:\/\/www.linkedin.com\/in\/simone-renzi"],"url":"https:\/\/renor.it\/en\/author\/thesimon\/"}]}},"_links":{"self":[{"href":"https:\/\/renor.it\/en\/wp-json\/wp\/v2\/posts\/659","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/renor.it\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/renor.it\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/renor.it\/en\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/renor.it\/en\/wp-json\/wp\/v2\/comments?post=659"}],"version-history":[{"count":0,"href":"https:\/\/renor.it\/en\/wp-json\/wp\/v2\/posts\/659\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/renor.it\/en\/wp-json\/wp\/v2\/media\/507"}],"wp:attachment":[{"href":"https:\/\/renor.it\/en\/wp-json\/wp\/v2\/media?parent=659"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/renor.it\/en\/wp-json\/wp\/v2\/categories?post=659"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/renor.it\/en\/wp-json\/wp\/v2\/tags?post=659"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}